
The Foundation Layer
A philanthropic strategy for the AGI transition
by TYLER JOHN
1. There is a further argument in favor of transformative economic growth from AI: growth begets growth. The more agents we have contributing to the economy, and the more productive activity taking place, the more opportunities we have to identify new ideas and technologies that could precipitate a rate change in economic growth. An economic rate change happened twice in the last 10,000 years, despite zero such rate changes in the 300,000 years the human species lived before then, thanks to the growing productivity of the economy. Given that roughly 8% of all the humans who have lived in the past 300,000 years are alive today, and that 99% of all historical GDP has been generated in just the last 60 years, you might say that we’re due for another such rate change this century.
2. Thanks to Holden Karnofsky for this framing.
Introduction
Every once in a while, when the frontier of some critical area of technology is advancing very rapidly, small groups of philanthropists step up and become the glue that holds our world together. This happened during the Cold War, when two of the century’s most famous intellectuals, Bertrand Russell and Albert Einstein, wrote a manifesto urging the peaceful resolution of international conflicts to avoid “universal death.” The manifesto inspired Rockefeller protégé Cyrus Eaton, a Cleveland-based investor, to start the Pugwash Conferences on Science and World Affairs in 1957. These international conferences were established to “diminish the part played by nuclear arms in international politics.” Attended by the world’s brightest physicists, they created an opportunity for East-West dialogue, injecting humanitarian considerations into a heavily securitized conversation. The Pugwash Conferences are credited with contributing to the success of the Partial Test Ban Treaty (1963) and the Nuclear Non-Proliferation Treaty (1968), the latter of which is arguably the most successful arms control treaty of all time. They were awarded the 1995 Nobel Peace Prize.
Since then, philanthropically funded projects have driven forward international agreements like the New START Treaty and the Iran nuclear agreement, set into motion the IAEA Low-Enriched Uranium Fuel Bank to provide nuclear power without an incentive to nuclearize, and directly procured enough fissile material to produce several nuclear bombs. The relative stability of nuclear weapons today is not due to a stroke of luck, but in large part to a concerted and wildly successful effort by ultra-high-net-worth individuals to safely navigate a dramatic new technology, one that could simultaneously level cities, reshape geopolitics, and produce as much clean energy as 1.5 million times its weight in coal.
There are many analogies between our current predicament and the Cold War dilemma around nuclear fission. The U.S., once again, sees itself as a competitor in a technological race with foreign adversaries, dashing ahead to build poorly understood dual-use technology that could reshape the entire world order. Whereas the scientists building the nuclear bomb feared that it could ignite the entire atmosphere, or plunge the world into an ice age, the progenitors of artificial intelligence fear that it could take power away from all of humanity or economically disenfranchise the vast majority of the human population. AI researchers leave their companies almost every week over safety concerns, just as physicist Joseph Rotblat left the Manhattan Project on grounds of conscience and later served as secretary-general of the Pugwash Conferences. Today’s accelerationists, meanwhile, echo the grandiosity of Berkeley physicist Ernest O. Lawrence, who spoke of his hope that atomic weapons would “effectively end war as a possibility in human affairs.”
These philanthropists took on an astounding challenge: to shape, in the middle of an international war for technological dominance, a proto-scientific discipline that few scientists understood. Nuclear weapons always were and remain in the domain of security, guarded by the world’s best-kept secrets and strongest national interests. And despite this, the philanthropists won. Humanity won, simultaneously averting the end of civilization and the specter of a global fascist regime.
Fast forward to the present day. We are far more confused about AI’s capabilities now than the scientists of the 1940s, in the midst of an international arms race, were about nuclear fission. Today we are building a black-box, general-purpose technology with no clear ceiling on capabilities and highly discontinuous progress. And whereas the atomic bomb took shape in the context of a single, centralized government project, frontier AI development is happening all over the world. There is a very real sense in which anything could happen. AI could be anything at all.
Nowhere is this confusion more evident than in the sometimes dogmatically held views of the various AI camps. Many claim that AI is all hype, smoke and mirrors, a bubble ready to burst. Many others claim that it is the next industrial revolution, imminently capable of adding trillions of skilled workers to the economy and achieving a century of scientific and economic progress over the course of a few years. Some see the current AI paradigm as an inherently dangerous and uncontrollable path that must be avoided at all costs. And still others see current AI progress as smoothly continuous with the economic and social trends of the 20th and 21st centuries, and want to manage this technology much as we have managed any other. The only thing everyone seems to agree on is that AI is moving fast, and that we need to maximize the benefits and minimize the risks.
Importantly, none of these hypotheses can simply be ruled out. Even the idea that AI will imminently lead to a 10-100x change in the rate of economic growth is distinctly reasonable. This has already happened in human history twice: first during the agricultural revolution, which created enough surplus resources to allow the human population to grow, raising the rate of growth roughly 10x; and second during the industrial revolution, when mechanization increased the value of human labor, raising the rate of per capita growth roughly 100x. The idea that mechanizing cognitive labor could have the same effect again seems entirely plausible.1 But if, like agriculture and the industrial revolution, AI has the transformative potential to lead to, for example, a 10-100x acceleration in per capita growth, then getting this technology right is surely the grand challenge of our century.
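To make concrete what a change in the growth rate means, here is a minimal worked illustration with hypothetical numbers (my own arithmetic, not figures from this report): a 10x jump in the growth rate shrinks the economy’s doubling time by roughly 10x.

$$
T_{\text{double}} = \frac{\ln 2}{\ln(1+g)}, \qquad
g = 2\%:\; T \approx \frac{0.693}{0.0198} \approx 35 \text{ years}, \qquad
g = 20\%:\; T \approx \frac{0.693}{0.182} \approx 3.8 \text{ years}.
$$

On these illustrative figures, an economy that previously took a generation to double could, after such a rate change, double several times within a single decade.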
Debates over AI safety sometimes result in a frustrated state of polarization, framed as an argument between people trying to stop AI progress and people trying to keep it moving forward. In reality we have an enormous range of choice points over how to develop AI, what kind of AI to develop, how quickly to develop it, whom to put in charge of developing it, to whom to distribute its benefits and harms, and so on. As Reid Hoffman stated eloquently in October:
“The AI doomer and bloomer outcomes are a set of probabilities, not certainties. Our actions constantly tilt the odds toward better or worse outcomes. All of us in the valley have a responsibility to steer the future through dialogue, and figuring out a way forward that balances safety, innovation, and humanism.”
For philanthropists, this is not just hypothetical. Already, even though it has been only three years since the release of ChatGPT, philanthropically funded efforts have:
started the field of AI safety and several subfields,
substantively shaped the most important legislation on AI,
designed the development and deployment policies of top AI companies,
trained some of the most influential figures in artificial intelligence and hundreds of researchers,
started the only dialogues between the U.S. and China on AI safety,
and much more.
The Opportunity Right Now
Despite these successes to date, the biggest impacts of philanthropy are far from behind us. On the contrary, 2026 is quite plausibly the best year in history for new entrants in the field of AI safety and security philanthropy. We are currently sitting at the intersection of five factors that make now an extremely opportune time to dramatically scale private resources for safe AI:
Today is the least crowded the field will ever be.
In the future, philanthropists, governments, and VCs will increasingly invest in AI safety and security as AI models demonstrate high-risk capabilities and as followers succeed the early adopters. If you enter the space now, you still have the opportunity to be a frontrunner and an early adopter who shapes the field that follows you. It’s like entering the climate field in the ’80s, or nuclear weapons policy in the ’50s.
There are numerous tractable and shovel-ready approaches for philanthropists to implement.
Even just three years ago, if you wanted to support AI safety and security, you had two options: you could theorize about future AI capabilities and risks, or you could raise awareness.2 Philanthropists who wanted to fund work in this space had to accept that their options for impact were speculative and indirect.
But today we have AI models that you can pick up and play with, which are reasonably close in capabilities to the kinds of systems that pose significant risks, and which may well share the exact architecture that transformative AI will have. Technical safety research, while not a fully mature science, is no longer pre-paradigmatic. There are many hopeful approaches, such as mechanistic interpretability (a form of “neurosurgery for AI”), that are developed enough for $1bn companies to have been founded to pursue them. AI capabilities have become advanced enough that they can increasingly be deployed fruitfully in the service of security and defense, with the White House now catalyzing work to apply AI directly to risks from synthetic biology. And the policy ecosystem has matured, leading to actual laws passed by broad coalitions of supporters and to many more robust avenues for better AI governance that are worth pursuing.
Despite the early state of the field, philanthropists who have engaged in this space to date have had profound opportunities for impact, and the opportunities are only getting better. (See Section V for more details.)
For now, AI science is still done out in the open.
Today, most advances in artificial intelligence are published on the internet, and there are open-source models, reasonably close to the cutting edge, that academics and nonprofits can study. We largely know what models AI companies are training and how they work, and we learn about new capabilities within a few months of their development. Safety testing is done in the open, with AI companies freely sharing negative results showing their models engaging in frightening forms of deception or providing uplift for terror attacks.
While many of us would like it if AI science were more open and transparent, and if many more scientists could participate, things could be a lot worse. As AI models become increasingly capable, competition becomes more intense, and the political conversation heats up, much more of this science may go underground. Situational Awareness sketches one scenario in which governments escalate security protocols and lock down all of AI development in a Manhattan Project-style effort. Research reports from groups like Apollo Research show how AI companies could increasingly have incentives to deploy their models in secret, both so that competitors cannot use them to advance their own capabilities and so that they can avoid scrutiny. As AI clusters get bigger, academics could be priced out of the high-end computer chips they need to do their research, more so with each passing year.
For now, philanthropists can amply participate in shaping the frontier. But that may not always be the case.
Party lines on AI governance are not yet written.
On issues as diverse as climate change, transportation infrastructure, and reproductive technology, every political party has decided what its approach is going to be, and getting vested interests to change course is an enormously intractable ordeal. By contrast, no one knows what the Democratic or Republican angle on AI is going to be, or even whether this will become a fully bipartisan issue of national security that everyone rallies behind. We don’t know whether governments will take an increasing interest in this issue or drop the ball entirely.
We don’t know when that policy window will close, but it will almost certainly be months or years from now, not decades.
Broad donor participation matters more than ever.
While a number of donors have gradually entered AI safety and security, including Eric Schmidt, Vitalik Buterin, Elon Musk, the Packard Foundation, Omidyar Network, the MacArthur Foundation, and TED Audacious, nonprofit work on AI safety is still heavily dependent on a very small number of donors with a very specific funding thesis and political baggage: Dustin Moskovitz of Good Ventures and Asana was the second-largest donor to President Biden’s 2020 campaign.
In practice, philanthropic donors are not just bags of cash. They are people with relationships, ideas, reputations, and access to unique opportunities and talent. To succeed, AI safety and security philanthropy will need to rely on Republican donors and Democratic donors, EU donors and Chinese donors, risk-taking donors and cautious donors, lab-friendly donors and lab-adversarial donors, empiricist donors and theorist donors, patient donors and impatient donors, private donors and public donors. To have the best chance of managing AI risks, a whole-of-society approach is needed.
Given the enormous stakes, the opportunities at hand, and the optimal window for intervention, there will simply never be a better time to begin work in AI safety and security.
This Report
This report is a resource for philanthropists who want to help build the foundation layer of AI safety and security. Each section is designed to stand on its own, so that you can read the report in part or in any order that you like.
Section II: The Exponential Trend to Superintelligence examines where AI capabilities are headed and how quickly. I survey the evidence for continued exponential progress, the arguments that progress might slow, and the possibility that AI research automation could make things go even faster. The upshot is an emerging median forecast of human-level artificial intelligence by roughly 2030, with a plausible range of 2027 to 2038: AI systems that can advance science and out-think humans, running at a hundred times human speed with millions of copies in parallel, arriving in perhaps just five years.
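As a rough back-of-the-envelope illustration of the scale such a scenario implies (my own arithmetic, not a figure from this report): if each copy were roughly equivalent to one human researcher, then

$$
\underbrace{10^{6}}_{\text{copies}} \times \underbrace{100}_{\text{speed-up}} = 10^{8} \ \text{human-researcher-years of cognitive work per calendar year.}
$$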
Section III: Civilization-Scale Threats takes this scenario seriously and examines what could go wrong. I cover three major risk categories: loss of control (AI systems that don't reliably do what we want), malicious use (AI enabling more destructive attacks, especially in biosecurity), and concentration of power (AI enabling authoritarian control or economic disenfranchisement). Any one of these outcomes could permanently alter—or end—life as we know it.
Section IV: The Philanthropic Solution presents a more optimistic picture. Philanthropists can solve these problems with the careful deployment of resources. I describe five pillars of intervention: alignment science, nonproliferation of dangerous capabilities, defensive technology, distributing power, and building talent and infrastructure. For each, I explain what success looks like and where philanthropic dollars can make the most difference.
Section V: The Case for Philanthropy argues empirically that AI safety philanthropy works. I document the track record: philanthropically funded organizations have founded entire research fields, shaped landmark legislation, designed industry safety standards, trained key personnel across labs and governments, and opened international dialogues. By being laser-focused on solutions in a way that capital and government cannot be, philanthropy has had as much impact as companies and governments while deploying a tiny fraction of the capital.
Section VI: Political Spending and Impact Investing briefly offers two other approaches to AI safety that complement charitable giving by leveraging markets and governments directly.
Section VII: Why the Problem Remains Neglected explains why, despite the stakes, AI safety philanthropy hasn't attracted sufficient capital to solve the problems, and why that is increasingly changing.
In the appendices, you will find resources to enable your giving, a guide to understanding AI consciousness and related funding, and possibly the world’s shortest and most accessible explainer of how artificial intelligence works.
Appendix A is of particular note. This appendix contains numerous resources for philanthropists interested in going deeper on AI philanthropy. Among them are:
a list of advisors and philanthropists seeking co-funding who can help you start your journey, including my own organization, the Effective Institutions Project;
an opportunity to express interest in upcoming Foundation Layer events;
my personal contact information; and
a brand new multi-donor philanthropic fund for donors who want to support the aims set out in this report.
My idealistic hope is that this report will help catalyze hundreds of millions of dollars in new giving in AI safety and security. To that end, it is chock full of shovel-ready opportunities for impact, further reading materials, and additional resources to help you hit the ground running. Many of these resources can be found in the appendix. But if you’d prefer a guided tour, or to have visibility on the very latest developments in the space, just get in touch!
The Foundation Layer is a project by Tyler John, with generous support from
Effective Institutions Project and Juniper Ventures.

