We at Nonlinear would like to make it easier for people with ideas to find money and collaborators, so we're trying a simple experiment. Submit your idea to this form here. If you're interested in potentially collaborating with or funding anyone, reach out to them directly to arrange!

| I'm working on/I want to work on... | I could use help with... | I need this much funding... | Contact me at... | Date added | Links | Source |
|---|---|---|---|---|---|---|
| Preventing the long-term emergence of antibiotic-resistant bacteria. Antibiotic stewardship via regulation is intractable in poor countries. Perpetual development of new antibiotics, followed by the emergence of new resistant strains, could make bacterial infections intractable in the long-term future. Research already exists showing that targeting enzymes involved in DNA mutation repair in bacteria can drastically reduce mutation rates, which should prevent new resistant strains from emerging. I want to scale up this technology through a start-up. | I need a non-technical cofounder and funding for initial experiments with a contract research organisation. I don't yet know how much funding I'd need, but I will know by mid-July. | Not sure yet | sanjushdalmia@gmail.com | 5/14/22 | | |
| Make Canada a global champion on AGI governance. Canada can and should be a wise AGI governance champion on the world stage, leading initiatives at the UN, hosting international conferences, and much more. But to be effective, it needs a broad, well-organized and resilient home-grown constituency that can inform public opinion and keep the issue top of mind inside and outside of government. The goal here is to kick-start these efforts by connecting, expanding and enabling the AGI governance community in Canada. | 1–2 years' salary and costs so I can focus on this full-time | $87,000 USD (one year) – $174,000 USD (two years) | wyatt@wyatttessari.ca / WyattTessari.ca / www.linkedin.com/in/wyatttessari/ | 5/1/22 | Detailed plan and theory of change available (email me at wyatt@wyatttessari.ca) | |
| Language model-based research assistant. Elicit is an AI research assistant that uses language models to automate parts of researchers' workflows. The main workflow in Elicit is literature review, which helps researchers answer questions using the academic literature. Eventually, our goal is to make Elicit a platform housing the basic building blocks of reasoning that people use to automate cognitive tasks. | Hiring for these roles: ought.org/careers | > $500K | jungwon@ought.org | 4/20/22 | elicit.org, ought.org, https://ought.org/updates/2022-04-08-elicit-plan, https://ought.org/updates/2022-04-06-process | |
| Non-fiction book on the threat of AGI for a general audience. There are already many great books on AGI (e.g. Superintelligence, Human Compatible), but there is not yet a short book accessible enough to reach policy makers and a broader audience. It would have all the arguments/counterpoints in one place, presented in an accessible manner (e.g. not using the word 'orthogonal'). The benefit to the AI safety community would not be the novelty of the ideas presented, but the accessible reframing of the concepts and concerns. Additionally, the marketplace seeks new content, so having new books come out on the subject of AGI that really highlight the threat would be of value. The goal would be to have a publisher, but if not it could be self-published on Amazon or through other means. The funding request is both to let me focus my time on the writing and partially to act as a commitment device. Additionally, funds would be used to compensate those helping to edit/research/review the book. Thank you kindly, Darren | editing, research, review | $25,000–$75,000 | eunoia@gmail.com | 4/20/22 | | |
| AI safety research ideas platform. Apart Research is creating a web platform for shovel-ready AI safety research ideas that can help ML engineers enter the field, assist in idea generation for institutional and independent research, and support idea execution through advising and collaboration facilitation. The platform showcases ideas with their associated advisors and collaboration opportunities, along with active RFPs in AI safety research. We have talked extensively with researchers about the possibilities of such a platform; I can share several links if you write to me. | | $150,000 | esben@apartresearch.com | 04/19/22 | AI safety technical ideas platform: https://aisafetyideas.com | |
| Funding for YouTube channel. Hey, I am Chongtham. I make documentary-style YouTube videos on a variety of topics, from The East India Company to geopolitical explainers: https://youtu.be/6WXMbWpxGzw. I am planning to make a series of videos on AI, superintelligence and AI safety. Reach out to me if you're interested in helping. | Resources for production, like quality voiceovers, equipment, lighting, etc. | $5,000–$10,000 | Joshichan19@gmail.com | 04/18/22 | https://youtu.be/6WXMbWpxGzw | |
| An Organization To Promote Independent Research In AI Safety. We need more people working on AI safety research, but opportunities to do good work in this field are very limited, so excellent researchers often end up working in non-safety AI roles. EA grantmakers often fund independent researchers (IRs), and there are many open problems in AI safety which could be tackled by IRs. However, independent research lacks the institutional benefits of credibility, reliable income, motivation, collaboration and serendipity, especially compared to the jobs available to skilled AI researchers and engineers in industry. This could be fixed by creating an organisation that makes independent research in this field an attractive career path. It would provide an institutional umbrella for researchers to work under, engendering credibility; free workspace and food; accountability and productivity incentives; assistance in obtaining initial and ongoing funding; and collaboration opportunities between researchers and with other labs in industry and academia through talks, socials and workshops, generally making independent research in AI safety an appealing prospect for talented researchers whom we would otherwise lose to non-safety AI roles elsewhere. This will cost around £150,000 p.a.; I would like to raise this amount to run a one-year trial in central London to assess impact. Please contact me at jessicamarycooper@gmail.com if you are interested in making this happen! | | £150,000 | jessicamarycooper@gmail.com | 02/03/22 | | Astral Codex Ten |
| Long-Termism Advocacy Org In Israel. ALTER, the Association for Long Term Existence and Resilience, is an academic research and advocacy organization being started in Israel, which hopes to investigate, demonstrate, and foster useful ways to improve the future in the short term, and to safeguard and improve the long-term trajectory of humanity. The founder, David Manheim, has a PhD in public policy and a track record of research in effective altruist priority areas and risk reduction, and of policy engagement. The key goals of the organization are to foster academic and policy work in key areas in Israel via organizing conferences, academic engagement, and collaboration with international organizations in this space. If you have connections to interested Israeli academics, experience with making this type of academic outreach successful, or can provide funding for this work, please contact david@alter.org.il | | | david@alter.org.il | 02/03/22 | | Astral Codex Ten |
| Independent Research In Human-Machine Collaboration. The most pressing long-termist priorities (e.g. AI safety, climate change, global governance) require remarkable intellectual effort to tackle. In this context, I'd like to conduct independent research into human-machine collaboration, investigating avenues for augmenting human cognition using AI. Drawing on my background in machine learning and cognitive science, I'd explore tools for perceiving large amounts of information (e.g. user-centered recommendation systems, personalized summarization, artificial salience maps), navigating complex problem spaces (e.g. virtual assistants, intelligent tutors, conversational tree pruning), and debugging belief systems (e.g. ideological unit tests, liquid epistemics, version control for beliefs, constrained belief generation). Augmenting human intellect might empower knowledge workers across fields, including in cognitive enhancement itself, potentially leading to fruitful positive feedback loops. If you're interested in supporting this line of work, reach me via paulbricman.com/contact | | | paulbricman@protonmail.com | 02/03/22 | | Astral Codex Ten |
| Non-Fiction Book With Case Studies On Resilience And Design. I'm Nikhil Mulani, and I'm looking for connections and funding to support a non-fiction book project. "Patient Designs" is an exploration of case studies in organizational resilience, technological design, and investment management that could provide valuable guidance for building a society oriented around the benefit of future generations. Case studies include the successes and failures of centuries-old family-run businesses in Japan, governance frameworks for early Internet architecture and recent AI development, and ethical safeguards created for new and old public investment bodies such as Norway's sovereign wealth fund and the City of London's "City Cash" fund. My experience includes product management roles at large companies and startups, and management consulting engagements across a range of clients in the public and private sectors. My educational background includes a B.A. in Classics from Harvard and an M.B.A. from Wharton. If you can provide funding, connections, or advice, please email nikhilrmulani@gmail.com | | | nikhilrmulani@gmail.com | 02/03/22 | | Astral Codex Ten |
| Apply Constructor Theory To AI. Constructor theory is a framework developed by the physicist David Deutsch which seeks to express scientific theories as claims about which physical transformations are possible and which are impossible. This is in contrast to the standard framework, which describes physical systems in terms of their initial conditions and laws of evolution. It is hoped that this framework will solve fundamental problems in physics and other fields. I believe there is an analogy between the problems in the natural sciences which constructor theory was developed to solve and the AI alignment problem. I would like to spend a couple of months thinking about this, fleshing out my ideas as posts on LessWrong/The Alignment Forum, and opening them up for discussion. I am currently in the final few months of a PhD in theoretical physics, during which I have published two papers. After my PhD finishes, I would like to spend some time (two or three months) researching this problem and will need some funding to do this full-time during this period. If you would like to fund this work or discuss the idea further, please send an email to . | | | | 02/03/22 | | Astral Codex Ten |
| A Wiki For Rebuilding Civilization After Disaster. My name is Jehan, and I've created the site Wikiciv.org as a guide to rebuilding civilization in case of global catastrophe. Its editing is crowdsourced like Wikipedia, because a project this large is far too much for one person, or even a team. Technologies and raw materials are linked so that both upstream and downstream technologies are easily accessible. There are other projects with similar goals, but they are 1) not publicly accessible or 2) at the wrong scale. Books such as "The Knowledge" and "How to Invent Everything" are too cursory to be a practical guide for recreating critical technologies like steel, fertilizer and antibiotics, while the "Manual for Civilization" from the Long Now Foundation is 3,500 paper books in one corner of San Francisco. Wikiciv is fully open and available for database downloads, and distributed backups are encouraged to ensure resilience during a disaster. WikiCiv could be helpful even for regional supply-chain disruptions: during the Covid-19 pandemic, for example, there were critical oxygen shortages in India, and it turns out that a reasonable oxygen generator can be made from zeolite and an air compressor. Wikiciv aims to be a single, interconnected database of "from scratch" manufacturing instructions for situations like these. The eventual goal of Wikiciv is to be accepted as a Wikimedia Foundation project (like Wikipedia, Wikiquote, Wikivoyage, etc.); the better Wikiciv becomes, the more likely this is. Get in touch at admin@wikiciv.org | | | admin@wikiciv.org | 02/03/22 | | Astral Codex Ten |
| Long-Termism + Progress Studies Unconference. We intend to solve the problem of unproductive conferences and the challenges of the interdisciplinary nature of long-termism, existential risk and progress studies by applying participatory techniques (Open Space) in an unconference format. We need collaborators more than money, but the budget is c. $15K. What: an innovative conference format bringing cross-silo thinkers and doers together to think about the long term and progress. Typical conferences work badly: hierarchies and old networks impede new connections and growth in social and relationship capital, and the best conversations occur in the corridors. Why: the long term is vital for humanity. Ideas are multidisciplinary and emergent. There is debate as to how much progress we are making and what we can do, the challenge cuts across a wide range of domains, and governments and traditional institutions are struggling to rise to it. New ideas are needed. For those interested in these ideas, we believe participatory events could lead to fruitful new connections and low-probability but potentially very impactful outcomes and meetings. How: the Long-Termism UnConference will be a one- or two-day event bringing together thinkers from a wide range of domains and backgrounds to discuss long-term challenges and solutions in a self-selecting, participatory manner. More on me: thendobetter.com/links or @benyeohben. Podcast: Ben Yeoh Chats | | $15,000 | thendobetter.com/contact | 02/03/22 | | Astral Codex Ten |