The Foundation Layer

A philanthropic strategy for the AGI transition

by TYLER JOHN

How to Get Started

If you’re new to funding in AI safety and preparedness or just want a community to come alongside you, this appendix offers resources to help you get started.

You can connect with me directly at tyler@foundation-layer.ai. I’ll be hosting some Foundation Layer events throughout the first half of 2026, and if you’d like to attend please express interest here.

Alongside the launch of this report, I am also launching the Foundation Layer Fund for donors who want to give to the kinds of interventions discussed here. Find this fund, my philanthropic advisory, and other funds and advisors below.

Funds

The Foundation Layer Fund

Work with me, Tyler John, to allocate grants to the kinds of interventions discussed in this report. I’ve personally recommended over 100 grants totaling more than $70m to dozens of philanthropists and would love to support your giving as well.

The Foundation Layer Fund directs grants to five areas, described in this report:

  • Alignment science to ensure that AI reliably follows instructions

  • Technology to enable the nonproliferation of dangerous capabilities and to verify compliance with international treaties

  • Defensive technology to harden the world against AI-enabled pandemics

  • Avoiding the concentration of power through model audits, AI for social coordination, human oversight of advanced AI, and economic mechanism design

  • Talent and infrastructure toward the same

Find The Foundation Layer Fund at Every.org

If you are looking for support for major donations, please contact tyler@foundation-layer.ai.


AI Safety Tactical Opportunities Fund (AISTOF)

Managed by JueYan Zhang, AISTOF is a pooled multi-donor charitable fund focused on reducing catastrophic risks from advanced AI systems. The fund is structured for speed and agility, capturing emerging opportunities in the rapidly evolving AI safety landscape across governance, technical alignment, and evaluations. AISTOF has raised over $30 million and made more than 150 grants. Zhang brings a decade of experience as a hedge fund manager and serves on the boards of Rethink Priorities and other effective giving organizations, applying rigorous evaluation methodology to grantmaking decisions.

Website: https://manifund.org/JueYan  

Contact: https://www.linkedin.com/in/jueyan/ 


Longview Philanthropy Frontier AI Fund

The Frontier AI Fund (FAIF) is a private discretionary grantmaking fund managed by Longview Philanthropy. In its first nine months of operation between December 2024 and September 2025, FAIF raised $13 million and disbursed $11.1 million to 18 organisations working on research, engineering, field-building, communications, and advocacy to shape the development and regulation of frontier AI systems. FAIF can fund urgent opportunities, seed new organisations, and scale the highest-impact organisations in need of funding.

Website: https://www.longview.org/fund/frontier-ai-fund/ 

Contact: info@longview.org


Philanthropic Advisors

The following organizations offer philanthropic advising services to donors interested in AI safety and security, from developing giving strategies to identifying and evaluating grantees.

Effective Institutions Project (EIP)

Work with me on your AI grantmaking. We are a philanthropic advisor and strategy shop specializing in AI safety, AI and geopolitics, AI and power concentration, peace and security, and U.S. democracy. We work closely with two family offices, meeting with each monthly, alongside a much broader network of advisees who trust our advice. We've been involved in AI governance since before ChatGPT, and in 2023 organized the first major town-hall-style meetings with Schmidt's P150, convening more than 60 funders to deepen understanding of the AI landscape and featuring several of TIME's 100 most influential people in AI as speakers.

Website: effectiveinstitutionsproject.org

Contact: tyler@foundation-layer.ai


Longview Philanthropy

Work with my colleagues at Longview Philanthropy, where I was previously an executive and set up the AI program. Longview designs and executes bespoke giving strategies for major donors focused on global catastrophic risks, including AI safety and governance, biosecurity, and nuclear weapons policy. As of October 2025, Longview has directed over $85 million toward AI risk reduction. Their services are free of charge and include donor education, expert introductions, grant recommendations, due diligence, and progress reporting. Longview manages several funds including the private Frontier AI Fund (for donors giving over $100,000) and the public Emerging Challenges Fund. Their AI grantmaking team sources opportunities, evaluates cost-effectiveness, and maintains relationships with leading researchers and organizations.

Website: longview.org 

Contact: info@longview.org


Coefficient Giving

Coefficient Giving (formerly known as Open Philanthropy) has been the largest philanthropic funder of AI safety since 2015. Their grantmaking strategy focuses on improving societal visibility into AI capabilities and risks, developing technical and policy safeguards, and building talent and institutional capacity. Since 2017, the organization has committed over $300 million to AI safety work. Their partnerships team advises donors giving $250,000 or more per year, constructing bespoke portfolios of giving opportunities aligned with donor interests and preferences. They connect funders with pooled funds and provide support evaluating opportunities.

Website: coefficientgiving.org 

Contact: partnerwithus@coefficientgiving.org


Investment Advisors

While philanthropic giving may be the most targeted way to support AI safety work, impact investing offers an opportunity to deploy capital toward companies building the commercial infrastructure for safe AI development. The following venture funds specialize in AI safety, security, and assurance startups.


Juniper Ventures

Juniper Ventures invests in startups working to make AI safe, secure, and beneficial for humanity. Run by exited founders and backed by investors including Reid Hoffman, Eric Ries, and Geoff Ralston, the fund focuses on the full AI assurance stack: from hardware and compute security to model interpretability, guardrails, and governance. Portfolio companies include Goodfire (interpretability tooling), Gray Swan (AI security), and Haize Labs (trust and safety infrastructure). Juniper writes early-stage checks and provides hands-on support with fundraising, hiring, and go-to-market strategy.

Website: juniperventures.xyz 

Contact: team@juniperventures.xyz


Halcyon Ventures

Halcyon Ventures is a mission-driven venture capital firm dedicated to building a safe, secure, and resilient world in the era of transformative AI. Part of Halcyon Futures, a nonprofit incubator that has launched 15 organizations collectively backed by over $200 million, the fund invests early and provides support across hiring, fundraising, and governance. Halcyon's network includes over 1,000 researchers, founders, and policymakers, making it a hub for sharing talent and ideas in the AI safety ecosystem. The fund has demonstrated early success with notable markups on initial investments.

Website: halcyonfutures.org


Safe AI Fund (SAIF)

Founded by Geoff Ralston, former President of Y Combinator, the Safe Artificial Intelligence Fund (SAIF) focuses on early-stage startups developing tools to enhance AI safety, security, and responsible deployment. SAIF provides initial investments of $100,000 via SAFE agreements with a $10 million cap. Beyond capital, Ralston offers weekly office hours and connections to his extensive investor network. The fund backs companies working on AI interpretability, compliance, disinformation detection, and protective infrastructure, with the conviction that safety enables trust, which is a prerequisite for technological progress.

Website: saif.vc


Entrepreneur First

An accelerator with investors including Founders Fund, Greylock, John Collison (Stripe), Patrick Collison (Stripe), Reid Hoffman (LinkedIn), Demis Hassabis (DeepMind), and Mustafa Suleyman (Microsoft AI). Its 2024 def/acc program aimed to build technology to protect us from the biggest threats we face, from pandemics and cybercrime to powerful AI and nuclear war, and incubated some of the most important for-profit startups now building security infrastructure for advanced AI.

Website: https://www.joinef.com 

Contact: partnerships@joinef.com


Seldon Lab

Seldon Lab is an AGI security accelerator and research organization building a portfolio of technologies to define the emerging field of AGI security. Founded in early 2025, Seldon brings together founders while conducting cutting-edge research on existential security technologies. Their pilot batch companies have raised over $10 million and sold security services to xAI and Anthropic. Portfolio companies include Lucid Computing (hardware-rooted compute verification for AI chips), Andon Labs (AI agent safety evaluations), DeepResponse (autonomous cyber defense), and Workshop Labs (verifiably private AI pipelines). Seldon also hosts events with leaders from organizations like Redwood Research and Juniper Ventures, building a network around the thesis that AGI security will become a major industry requiring infrastructure investment at scale.

Website: seldonlab.com


Political Giving Advisors

Political giving complements philanthropic work by shaping the policy environment in which AI develops. For American donors, contributions to 501(c)(4) organizations and hard-dollar political donations offer a way to influence AI governance without the constraints of foundation giving. Due to campaign finance regulations, individual donations are often highly leveraged compared to corporate lobbying. The following organizations focus on AI policy advocacy.


AI Policy Network

The AI Policy Network is a 501(c)(4) social-welfare organization that builds bipartisan support for policies to help America prepare for AI systems of unprecedented scope and capability. AIPN operates at the intersection of government leaders, technology policy experts, and technical researchers with a singular focus: ensuring the United States is prepared for powerful AI while remaining both dominant and safe. It engages directly with members of Congress, congressional staff, federal officials, and media to promote proactive policymaking around transformative AI, advocating targeted policies that address the unique challenges of rapidly advancing systems, including strategic threats to American security and risks from uncontrollable systems with unknown consequences.

Website: https://theaipn.org/


Americans for Responsible Innovation (ARI)

Americans for Responsible Innovation is a bipartisan nonprofit organization dedicated to AI policy advocacy in the public interest. Founded in late 2023 by former Congressman Brad Carson and tech entrepreneur Eric Gastfriend, ARI advocates for thoughtful governance frameworks that protect the public while maintaining America's competitive edge in AI. The organization works across political ideologies and does not accept funding from industry, maintaining independence from tech company influence. ARI's priorities span consumer protection, national security, and responsible AI deployment, including measures like whistleblower protections, reporting requirements for frontier models, and export controls on advanced semiconductors. ARI operates both a 501(c)(4) advocacy arm and a 501(c)(3) research arm (Center for Responsible Innovation), allowing donors to engage through lobbying support or tax-deductible contributions to research.

Website: ari.us 


Center for AI Safety Action Fund (CAIS AF)

The Center for AI Safety Action Fund is a nonpartisan 501(c)(4) advocacy organization dedicated to advancing public policies that maintain U.S. leadership in AI while protecting against AI-related national security threats. Formed in 2023 as the advocacy arm of the Center for AI Safety, CAIS AF spent $270,000 on federal lobbying in 2024 and was a sponsor of California's SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act). Led by Varun Krovi, who brings 15+ years of policy experience including work on the NIST AI Risk Management Framework and the CHIPS and Science Act, the organization convenes lawmakers, business leaders, national security experts, and machine learning engineers to build bipartisan consensus on AI policy.

Website: action.safe.ai 

Contact: contact@safe.ai


Public First

Public First is a newly formed nonpartisan 501(c)(4) organization created by former Representatives Chris Stewart (R-UT) and Brad Carson (D-OK) to advocate for responsible AI policy and counter industry-backed efforts to eliminate AI safeguards. The organization is launching two affiliated super PACs, one Republican (Defending Our Values PAC) and one Democratic (Jobs and Democracy PAC), with plans to raise $50 million to support candidates committed to AI transparency and public interest safeguards in the 2026 midterms. This effort directly counters the $100+ million "Leading the Future" super PAC backed by tech industry figures seeking minimal AI oversight. Public First represents a new vehicle for donors who want to influence electoral outcomes on AI policy specifically.

Website: publicfirst.us


The Foundation Layer is a project by Tyler John, with generous support from
Effective Institutions Project and Juniper Ventures.