The Foundation Layer

A philanthropic strategy for the AGI transition

by TYLER JOHN

Why The Problem Remains Neglected — For Now

Over the last five years I’ve talked to a few dozen billionaire philanthropists and their foundation leads about AI safety. By far the most common question I’ve received is: why, if the risks from AI are so significant and the case for philanthropy so compelling, are there not more people working on this? 

This is a worthy question. Most very wealthy people have become wealthy in part because they understand market efficiency. In market economies, when a buyer is willing to pay a large amount of capital to have a problem they care about solved, the problem usually gets solved by default. If you can get in on the ground floor of a new problem before anybody else does, you’ll make a ton of money by beating everyone else to it. So capitalists race each other to be the first to solve a new problem and capture as much of the market as possible. There are market failures where this doesn’t work so well: commons problems, for example, or cases where there is no buyer because the beneficiary is the environment or a very poor country. But markets generally work very well because buyers are willing to pay to have their problems solved. 

Philanthropy, however, is not like an efficient market. There is no market feedback loop, and the culture of philanthropy is very different from the culture of business. Where business has a culture of risk-taking, ambition, focus, and problem-solving, philanthropy is too often treated as a passion project, where people give to causes that tug at their heartstrings or to which they are personally connected. Although some philanthropists, like Bill Gates and Dustin Moskovitz, treat giving as a capital-allocation exercise aimed at maximizing ROI, it is uncommon to see philanthropists carefully figure out what the most important problems in the world are and then execute on them as efficiently as possible. 

Because there is no market feedback loop, there is no incentive for philanthropy to be driven by maximizing any metric of societal value. At their best, daring philanthropists have had historic impacts, like precipitating the Green Revolution and essentially solving nuclear arms control. But more commonly, philanthropy follows adoption curves similar to those of fashion and technology: first movers are seen as weird and edgy, early adopters like to follow the latest trends, and then, as those trends succeed, others adopt them as well. All of us like backing winners. This is human nature: when there are no strong incentives otherwise (as there are with markets), almost all human behavior can be predicted just by looking at the local norm or trend and assuming people will follow it. 

There are also various psychological reasons, having to do with AI in particular, why AI safety has not been taken up more quickly. For example: 

  • Human brains are generally not good at extrapolating exponential trends, and in every sector, people anchor their planning on the latest AI capability rather than extrapolating ahead to what is coming in months or years. 

  • More generally, all markets and governments tend toward short-termism, discounting what will happen in a few years in order to focus on the current thing. And AI risk is, broadly speaking, a next-year-or-the-year-after thing, not a now thing. 

  • Like many online discourses, the discourse around AI has been polarized into doomers and boosters: people who hate AI and people who like it. This is wildly simplistic, but the result is that people who like and understand technology can become alienated from safety, which is sometimes cast as being anti-technology. 

  • AI is a fairly technical and fast-moving area that most people don’t understand, and the people who do understand it tend to be engineers rather than excellent public communicators. 

  • We generally plan around modal or median outcomes and don’t think much about hedging against tail risks. Since the modal or median outcome is that AI is great, people act as if AI will be great instead of acting as if there is a 25% chance that AI kills everyone on Earth. 


While these features help explain why AI safety is neglected relative to its importance, it is certainly not the case that serious, intelligent people are ignoring the problem. A large number are taking it seriously. 

Here are some of the people and groups who take seriously the concerns that I have articulated in this piece: 

  • AI “godfathers” Geoffrey Hinton and Yoshua Bengio, who pioneered deep learning; Hinton went on to win the 2024 Nobel Prize in Physics 

  • Joe Biden, whose executive order established the first protocols for AI security risk and whose national AI R&D strategic plan discusses “existential risk associated with the development of artificial general intelligence through self-modifying AI or other means” 

  • Barack Obama, who explicitly recommended the evaluations organization METR (formerly ARC Evals, a project of the Alignment Research Center), discussed in this piece 

  • Donald Trump, who remarked that “we don’t want AI to be the rabbit that got away” 

  • Pope Francis, who regarded autonomous AI weapons as an existential risk to humanity, and Pope Leo XIV, who chose his very name as a symbol of the moral role he hopes to play in the artificial intelligence revolution 

  • The United Nations Office of the Secretary-General, which in 2025 launched a research group on the governance of artificial general intelligence 

  • The OECD, which in 2025 released a report studying potential “AGI-level” risks 

  • The European Commission, whose president stated that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” 

  • The Elders, a group of senior global leaders founded by Nelson Mandela 

  • King Charles III, who has advocated addressing AI risks with urgency and unity, and who in 2023 handed Nvidia CEO Jensen Huang a private letter warning of AI risks 

  • Glenn Beck and Steve Bannon, who signed a statement with over 100,000 others calling for a ban on superintelligence 

  • The late Stephen Hawking, who in 2015 signed a letter warning of extinction risk from AI 

  • Elon Musk, who has spoken publicly of existential risk from AI almost every year since 2014 

  • The CEOs of all three leading labs — OpenAI, Anthropic, and Google DeepMind — who speak publicly about the issue and signed the statement on AI risk 

If you find the ideas in this report convincing, you don’t have to do your giving alone. There is now a community of like-minded donors, experts, and leaders who are trying to solve this problem. I would love to connect you with them.  

Please get in touch.

The Foundation Layer is a project by Tyler John, with generous support from
Effective Institutions Project and Juniper Ventures.