The Foundation Layer

A philanthropic strategy for the AGI transition

by TYLER JOHN

"An extremely useful report for any philanthropist interested in funding AI safety and preparedness."

— Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics

Executive Summary

In 1957, a Cleveland investor named Cyrus Eaton funded a small conference of physicists in a Nova Scotia fishing village. The Pugwash Conferences, as they came to be known, created the only sustained dialogue between Soviet and American scientists during the Cold War. They are credited with laying the groundwork for the Partial Test Ban Treaty and the Nuclear Non-Proliferation Treaty, arguably the most successful arms control agreement in history. The conferences won the 1995 Nobel Peace Prize.

Philanthropists became the glue that held the world together during the nuclear era. Today, facing artificial general intelligence, we need them to do it again.


The Stakes

Artificial intelligence is advancing along an exponential trajectory. The most rigorous forecasting methods, based on benchmark performance, task-completion horizons, and compute scaling, converge on a median timeline for human-level AI of roughly 2028–2032, with a plausible range extending from 2027 to the late 2030s. The leaders of every major AI lab now forecast artificial general intelligence within the decade.

This timeline cannot simply be dismissed. AI capabilities have followed consistent exponential trends for over a decade. The nonprofit METR finds that the length of tasks AI can complete autonomously is doubling every seven months. At this rate, AI systems will be capable of completing month-long human tasks without supervision by 2029 or 2030.
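To make the arithmetic behind this extrapolation concrete, here is a minimal sketch of the timeline implied by METR's trend. The starting task horizon, the definition of a "month-long" task, and the start date are illustrative assumptions, not figures from this report; only the seven-month doubling time comes from METR.

```python
import math

# Illustrative assumptions (not figures from this report):
# - As of early 2025, AI agents can complete tasks that take a skilled
#   human roughly 1 hour.
# - A "month-long" task is ~167 hours (40 h/week for ~4.2 weeks).
# The 7-month doubling time is METR's reported trend.
current_horizon_hours = 1.0    # assumed starting task horizon
target_horizon_hours = 167.0   # approx. one working month
doubling_time_months = 7.0     # METR's measured doubling time

# Doublings needed to reach the target, converted to calendar time.
doublings = math.log2(target_horizon_hours / current_horizon_hours)
months = doublings * doubling_time_months

print(f"doublings needed: {doublings:.1f}")            # ~7.4
print(f"months on trend:  {months:.0f}")               # ~52 (about 4.3 years)
print(f"arrival year:     ~{2025 + months / 12:.0f}")  # ~2029
```

On these assumptions, the trend reaches month-long tasks around 2029; a later start date or a slower doubling time pushes the estimate toward 2030 or beyond.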

If these trends continue, we face three civilization-scale concerns:

Loss of control. As AI systems become more capable and autonomous, we risk handing off critical decision-making to systems that do not reliably share human values. Current alignment techniques remain insufficient: AI models regularly engage in sycophancy, deception, and strategic "scheming" in evaluations.

Malicious use. AI models are already providing "high risk" assistance for biological and chemical weapons development. As capabilities advance, AI lowers the knowledge barrier for everyone, and the bottleneck shifts from scientific knowledge to physical access to materials in a hard-to-control supply chain.

Concentration of power. AGI that substitutes for human labor could leave most people economically disenfranchised, eliminating the bargaining power that sustains democratic governance. Small groups with AI capabilities could seize unprecedented control.


The Opportunity

In light of these risks, 2026 represents an exceptional window for philanthropic intervention:

The field is uncrowded. AI safety philanthropy between 2014 and 2024 totaled roughly $500 million, about one-thousandth of the cost of a single AI infrastructure project. Billions more will enter, and early entrants will shape the field that follows.

Interventions are now tractable. Even just three years ago, grants were speculative. Today we have mature research agendas, functional evaluation frameworks, and proven policy pathways.

AI science remains relatively open. Most advances are still published. Safety testing is done transparently. This openness may not last as competition intensifies.

Policy windows are open. No political party has consolidated a position on AI governance. The opportunity to shape bipartisan frameworks exists now, but will close.


The Track Record

Philanthropy's track record in AI safety demonstrates extraordinary leverage. With a shoestring budget of roughly $500 million, philanthropically funded organizations have:

Founded entire fields: AI alignment, model evaluations, AI control, and AI security

Shaped legislation: The Biden Executive Order, California's SB 53, and the EU AI Act all bear the fingerprints of nonprofit research

Designed industry standards: "Responsible Scaling Policies," now used by every frontier lab, were invented by the nonprofit METR

Trained key personnel: The governance leads at OpenAI, Anthropic, and DeepMind; senior UK AI Security Institute leadership; top government advisors

Opened international dialogue: The only US-China track II dialogues on AI safety were started by nonprofit researchers

Philanthropy has already worked at scale, and the opportunities are only getting better.


The Strategy

A comprehensive philanthropic strategy for the AGI transition rests on five pillars:

1. Alignment Science

Technical research to ensure AI systems reliably follow human intentions. Priority areas include mechanistic interpretability (understanding how AI models "think") and AI control (containing misaligned systems through external safeguards). This summer, the UK AI Security Institute ran perhaps the most rigorous AI funding round in history, identifying $50 million worth of alignment research projects led by widely recognized PIs, but it had only $10 million to fund them. The gap remains at the time of publication.

2. Targeted Nonproliferation

Model evaluations systematically test AI systems before deployment to identify and address dangerous capabilities. Organizations like METR, Apollo Research, and AVERI have built the evaluation infrastructure now used by governments and labs worldwide. Security ensures that adversaries can't steal powerful models and remove safeguards. Compute governance leverages the concentrated AI chip supply chain (effectively one company, Nvidia, designs frontier chips, and one company, TSMC, manufactures them) to enable verification of international agreements on development and deployment.

3. Defensive Technology

Harden the world against malicious attacks. Here, the most neglected area is biodefense. Physical air-cleaning technologies such as far-UVC light and glycol vapors could end airborne pandemics much as chlorination ended waterborne disease. The science has been understood since the 1940s. The main bottleneck for industrial deployment is comprehensive safety testing. Blueprint Biosecurity estimates that this work could absorb tens of millions of dollars immediately and that testing would require seven flu seasons to complete. Every year of delay narrows our window.

4. Distributing Power

AI can enhance or undermine democratic governance. Philanthropists can fund: AI systems designed for collective deliberation (like Google DeepMind's "Habermas Machine"); auditing frameworks to ensure AI decision-making is transparent and free of secret loyalties and hidden backdoors; research on maintaining human checks at critical junctures; and the academic economics and political science research needed to understand governance after mass automation.

5. Talent and Infrastructure

AI companies currently hold near-monopolies on talent, information, and expertise. Breaking this requires: drawing researchers from scaling labs into safety; building state capacity in governments; infusing technical expertise into policy processes; and funding university computing clusters so researchers aren't priced out of frontier work.

Getting Started

Philanthropists can engage at multiple scales:

$50k–$250k: Support individual researchers, early-stage projects, or contribute to pooled funds. High-quality regranting vehicles exist for donors who prefer not to pick grantees directly.

$500k–$2m: Support 6–12 months of safety testing for biodefense, sponsor compute access for a dozen academics, or start a new organization in evaluations, control, or governance.

$5m–$20m: Anchor an organization, run a major research initiative end to end, or create a new piece of defensive technology.

$50m+: Shape an entire pillar of the strategy, becoming a defining funder of alignment science, biodefense, or the decentralization of power.

High-impact organizations seeking funding include:

Technical safety: METR, Apollo Research, the UK AI Security Institute's Alignment Project

Governance and policy: Horizon Institute for Public Service, Safe AI Forum, Secure AI Project, Forethought Research, Institute for Law and AI

Biodefense: Blueprint Biosecurity

Talent development: ML Alignment Theory Scholars, Centre for the Governance of AI, Foundation for American Innovation

Regranting: Effective Institutions Project, Longview Philanthropy, AI Safety Tactical Opportunities Fund, AI Security Institute grants program

For bespoke recommendations tailored to your profile or theory of impact, get in touch.


The Ask

The philanthropists who funded the Pugwash Conferences faced a daunting challenge: shaping a proto-scientific discipline in the middle of an international arms race involving the world's best-kept secrets and strongest national interests. They succeeded. Humanity won.

We face a comparable challenge today, with less time and greater uncertainty. But we also have more tools, more knowledge, and more evidence of what works. The window is open, the problems are tractable, and the stakes are high.

If you are already contributing to this work, or would like to begin, I would welcome the opportunity to connect you with leading researchers, fellow philanthropists, and specific opportunities matched to your interests and capacity.

Contact: tyler@foundation-layer.ai


The Foundation Layer is a project by Tyler John, with generous support from
Effective Institutions Project and Juniper Ventures.