The Foundation Layer

A philanthropic strategy for the AGI transition

by TYLER JOHN

18. Credit is also due to another nonprofit early to the space, the Machine Intelligence Research Institute, which did much of the earliest thinking on superintelligence, recursive self-improvement, and the alignment problem.

19. This figure comes to over a trillion when including Microsoft and Meta.

The Case for Philanthropy

In 2004, Oxford University received the single largest donation in the university’s history, a sum of £60m from information technology engineer James Martin. Martin developed an approach to software development called “rapid application development”, which truncates the planning phase and instead builds software through iterative prototypes that developers can test, break, and refine into the best tools. His approach is now dominant in the development of user interfaces, and it earned him recognition as the #4 most influential person in computer science in Computerworld’s 1992 25th anniversary issue. 

But Martin’s biggest influence on computer science wasn’t his lifetime of work improving software engineering methodology. It was the 2005 founding of The James Martin 21st Century School, better known today as the Oxford Martin School. The school aimed to “formulate new concepts, policies, and technologies that will make the future a better place to be”, and its first major project was to establish and fund the Future of Humanity Institute, a multidisciplinary research group founded by Professor Nick Bostrom. FHI’s early staff included characters like Swedish futurist Anders Sandberg — a man who manages to simultaneously be at the center of transhumanism, cryonics, human enhancement, and the search for extraterrestrial intelligence — and Eric Drexler, the earliest pioneer of nanotechnology. Together they conspired to build a better future for humanity, assessing the social and technological factors that could positively and negatively shape our future. 

2014 saw the publication of Bostrom’s seminal work, Superintelligence. The book was the first widely read strategic analysis of superintelligence risks and the AI alignment problem. It was recommended by Bill Gates and by Elon Musk and Sam Altman, who would go on to co-found OpenAI. 

Bostrom’s Superintelligence and his subsequent work advising think tanks, AI labs, and governments can very credibly be said to have founded the fields of AI alignment and AI safety, fields that would later become a core part of the strategies of frontier companies OpenAI, Anthropic, and DeepMind.18 But this was only one of FHI’s numerous contributions to safer AI. 

Throughout its 19-year tenure, FHI developed the idea of AI governance and the foundational concepts that made the field possible. Its alumni include two of OpenAI’s governance leads, Google DeepMind’s governance and safety lead and its biosecurity lead, Anthropic’s policy lead, senior leadership at RAND Corporation, C-suite leadership at the AI Security Institute (the world’s most seminal government institution for secure AI), and grantmakers for major foundations that would allocate hundreds of millions of dollars to AI safety, defensive AI, and worst-case biosecurity. FHI also developed the idea of a “global catastrophic risk” and led to the publication of Toby Ord’s best-selling book The Precipice, which would be read in some of the world’s highest government offices. 

By being far ahead of the curve on artificial intelligence, the Oxford Martin School’s Future of Humanity Institute would not so much shape the fields of AI safety, security, and governance as bring them into existence. There is a credible case that the relatively good safety culture of AI labs — staffed with safety, alignment, and preparedness teams and with C-suite leadership who talk openly about these problems — and the relative strategic clarity that we have about AI safety (e.g. see Section IV) are a direct result of the Oxford Martin School. 

Bostrom and FHI have had their detractors, and some of the founder’s conduct would unfortunately come to mirror that of other early adopters and mad scientists, who manage to be tremendously seminal through their idiosyncratic, and at times reckless, behavior. But whatever its flaws, there can be zero doubt that through Bostrom and FHI, legendary software engineer James Martin had his last and greatest impact on the future of technology. 

From 2014 through 2024, about half a billion dollars went into AI safety philanthropy. Funders of this work have included James Martin, Dustin Moskovitz, Eric Schmidt, Vitalik Buterin, Elon Musk, Jaan Tallinn, Bill Gates, the David and Lucile Packard Foundation, the Omidyar Network, TED Audacious, Amlin, the European Research Council, the National Science Foundation, and the Leverhulme Trust. 

Half a billion dollars, however, is not a lot of money. For comparison, it is 1/1,000th of Project Stargate, the new computing infrastructure project announced by OpenAI, SoftBank, and Donald Trump in early 2025. Microsoft invests more than a hundred times this much into data centers for OpenAI every year. Seven Anthropic co-founders have individual net worths that are seven times this figure. 

Yet when compared to the contributions of the top eight AI companies, which, as of one year ago, collectively controlled $120 billion,19 and to the contributions of governments, with budgets in the trillions, the impact of philanthropy’s shoestring budget looks astounding. This is in some part because philanthropists have been so far ahead of the curve. And today we are still ahead of the curve on the issue of AGI, with many public commentators denying it entirely and markets failing to anticipate AI’s transformative impacts. Further, today there is vastly more strategic clarity about AI safety, security, and preparedness than at any other moment in history. We can have more impact now than the early philanthropists who defined the field. 

To date, philanthropically funded work has (1) started the field of AI safety and several subfields, including governance, control, corrigibility, interpretability, robustness, and evaluations; (2) substantively shaped the most important legislation on AI; (3) developed the most central AI development and deployment policies of AI companies; (4) trained many of the most central figures in the field and hundreds of researchers; (5) started the only dialogues between the U.S. and China on AI safety; (6) cultivated the top talent in AI security in Washington; and (7) driven the most widely read communications campaigns in AI safety. 

Below I detail some of the biggest wins from nonprofit organizations (FHI excluded) for safety and security over the past decade:

 

The Track Record of Philanthropy

Nonprofit organizations whose work has vastly improved our preparedness for AGI

Corporate Governance

METR

Berkeley AI research lab that researches, develops, and runs evaluations of frontier AI systems’ ability to complete complex tasks without human input 

  • Started the field of model evaluations


  • Developed the dominant corporate governance framework: responsible scaling policies

Eleos AI & NYU Mind, Ethics, and Policy

Berkeley research center focused on AI sentience and wellbeing; New York University center dedicated to the study of AI consciousness 

  • Started the field of AI welfare


  • Anthropic hired its first AI welfare lead on their recommendation


  • Eleos co-authored Anthropic's welfare evaluation of Claude 4

Talent and Infrastructure

UC Berkeley Center for Human-Compatible AI

The first academic center for the study of AI safety, led by Prof. Stuart Russell

  • Trained the two most senior alignment scientists at Google DeepMind, and the founders of the Center for AI Safety and FAR AI

Center for Security and Emerging Technology 

Georgetown University AI policy think tank 

  • Founded by Jason Matheny, who would go on to lead RAND Corporation.  


  • Pioneered the most influential bipartisan U.S. AI policy priority: export controls on cutting-edge chips. 

ML Alignment and Theory Scholars

12-week fellowship program that connects talented machine learning researchers with top mentors in AI alignment, governance, and security 

  • Alumni founded Apollo Research and hold senior roles at AISI, Anthropic, and DeepMind


  • 116 arXiv publications with over 5000 citations

Not-for-profit Think Tanks and Career Accelerators

Mercatus Center, Horizon Institute for Public Service, Foundation for American Innovation, Centre for the Governance of AI, and others

  • Supported the Biden and Trump White Houses with senior research talent, trained the governance leads for all three leading labs, and infused technical expertise into the legislative process 

Policymaking

FAR AI, Safe AI Forum

Berkeley, CA technical research lab focused on AI alignment and field-building in machine learning safety 

  • Started the leading Track II dialogues on U.S.-China cooperation on AI
     

  • Established the leading alignment workshops for professional machine learning researchers

OECD Foresight

Public-nonprofit partnership on advanced AI and synthetic biology forecasting 

  • Established the Expert Group on AI Futures, the only international body with a mandate to analyze extreme risks from AI
     

  • Created the scientific analysis for an international legal instrument on synthetic biology

RAND Corporation

The first modern think tank, focused on defense, security, healthcare, education, employment, and science and emerging technology 

  • Played a significant role in shaping key provisions of the Biden Executive Order on Advanced AI — the first formal government initiative to advance AI safety model testing — and other key policy initiatives

Encode, Secure AI Project, Center for AI Safety 

Three CA organizations — 501(c)(3)s with affiliated 501(c)(4)s — working on state-level AI policy.

  • Drove SB 1047 and later SB 53, requiring large frontier AI companies to publish their safety frameworks and model safety assessments, report to the CA government on major incidents, and protect whistleblowers

AI Technical Safety

Redwood Research

Berkeley, CA AI safety and security research organization 

  • Started the field of “AI control”, or research on how to control misaligned AI systems


  • Several major labs and government institutions have now started AI control programs

Apollo Research

London-based research organization focused on developing and using new tools to detect whether AI models are engaging in strategic deception 

  • Started the field of evaluations for AI deception


Center for AI Safety 

San Francisco-based technical research, field-building, and advocacy organization

  • Developed Humanity’s Last Exam, HarmBench, and numerous other state-of-the-art benchmarks for AI capabilities and safety. 


  • Developed robustness research leading to the founding of a company that partners with OpenAI, Anthropic, and Google DeepMind

Truthful AI 

Berkeley, CA technical research group


Machine Intelligence Research Institute

The earliest research organization working on AI alignment, based in Berkeley, CA. 

  • Theorized many early areas of AI alignment, especially the problem of “corrigibility,” or how to ensure that developers can change AI models’ goals and values even though this is directly in conflict with what the AI “wants”

Academic Labs

Many academic labs working on benchmarking and interpretability have pushed the frontier on AI safety 

  • Harvard's Martin Wattenberg and Fernanda Viegas made breakthroughs on the key problem of “superposition”, or how AI systems store information, driving such progress on interpretability that Anthropic co-founder Chris Olah claimed interpretability is no longer a fundamental problem

Public Understanding

Center for AI Safety

San Francisco-based technical research, field-building, and advocacy organization

  • Launched the most influential AI risk advocacy campaign, the “Statement on AI Risk,” signed by AI godfathers, AI company CEOs, and Bill Gates

Epoch AI

Research institute investigating the capabilities trajectory of AI. 

  • Developed the most widely cited technical analysis of AI capabilities trajectories, relied on by the likes of the UK’s Department for Science, Innovation and Technology and Our World in Data


In summary, many of the most important breakthroughs to date in AI safety, security, preparedness, and governance have come directly from philanthropically funded organizations. Every single one of these organizations is continuing to have an immense impact, in real time. The strongest case for philanthropy is that it has worked, and it is continuing to work, with impact on the scale of the best labs and governments for a tiny fraction of the cost. Philanthropy manages to succeed in an ecosystem of wealthy companies and powerful governments by intensely focusing on the most important problems. 

The Foundation Layer is a project by Tyler John, with generous support from Effective Institutions Project and Juniper Ventures.