
The Foundation Layer
A philanthropic strategy for the AGI transition
by Tyler John
The Exponential Trend to Superintelligence
As early as 1965, futurists like Herbert Simon began predicting that AI would one day be able to do everything that humans can. This became, in the words of Andrew Ng, formerly Baidu’s Chief Scientist, “the AI dream” of all computer scientists: to build artificial general intelligence, or “AGI.” So strong was this dream that as far back as 1983, DARPA spent a billion dollars on the Strategic Computing Initiative, a ten-year project to build AGI.
But while philosophers and computer scientists have long predicted the development of artificial general intelligence, almost no one was prepared for AGI to emerge from deep learning, or to draw near so quickly.
Deep learning — the dominant approach in generative AI, in which systems learn for themselves by stacking many layers of numbers on top of one another and tweaking those numbers again and again until they start producing good results3 — was met with pervasive skepticism for decades. In 1994, Bell Labs’s John Denker scoffed that “neural networks are the second-best way of doing just about anything.” Noam Chomsky, the most widely cited thinker of the last two decades, lectured scathingly in 2011 that deep learning “interprets success as approximating unanalyzed data.” Or in plain English: deep learning models add nothing to the data they are trained on. In 2012, NYU psychologist Gary Marcus wrote that “deep learning takes us, at best, only a small step toward the creation of truly intelligent machines.” Criticism came from all corners: linguistics, psychology, robotics, statistics, causal inference, and rival ML camps.
The skeptics have consistently been proven wrong by the steady march of deep learning. Just one year after Chomsky’s scathing remarks, deep learning saw its first major breakthrough. AI godfather Geoffrey Hinton and his student, later OpenAI co-founder Ilya Sutskever, created “AlexNet.” Named after the less famous third contributor, Alex Krizhevsky, AlexNet was an image classifier capable of learning, all by itself, whether an image is, say, a dog or a cat. It classified images better than any system before it, winning the 2012 ImageNet competition by a wide margin, and was an early precursor to the AI that lets you search for images on Google.
This invention was enough to demonstrate the power of very large, many-layered neural networks. From here, the “deep learning revolution” took off. Google DeepMind and OpenAI began training larger and larger models. Adding layers and weights was computationally expensive, but it gave the networks more connections and more ways to transform inputs into outputs, letting them perform increasingly sophisticated statistical inference.
Since 2012, we’ve gone from rudimentary categorization algorithms to an explosion of new capabilities: talking computers capable of writing poetry and fiction, a breakthrough algorithm that solved the protein folding problem in biology and won its inventors a Nobel Prize, game-playing agents that beat the world’s best players in some of the most complex strategy games, “AI scientists” that can walk you through a chemistry synthesis step by step, video generators capable of flying you over Tokyo in the snow, music-making algorithms that write dynamic electropop, and coding agents that can write entire video games end to end.
Most of these advances, however, have taken place in the last three years, not the last thirteen. And unlike in other areas of science with enormous fields and government grantmaking programs, all of this has been achieved by only a few hundred researchers working in a few private companies.
Today many of the people at the frontier of AI technology forecast the coming of AI that is good enough to drive humanity to obsolescence in just a few years. Ex-OpenAI whistleblower Daniel Kokotajlo, who predicted the rise of chatbots and reasoning models before ChatGPT was released, predicted this year that we will see superintelligent AI in 2027. Dario Amodei, CEO of leading lab Anthropic, predicted that in 2027 we will have “a country of geniuses in a data center.” Every major AI lab predicts general artificial intelligence within the next decade, with Google execs debating whether AGI will arrive just before or just after 2030. And AI “godfather” Geoffrey Hinton, who has no financial stake in these predictions whatsoever, said in 2023 that we “can’t rule out” artificial general intelligence by 2028.
Skeptics persist, dismissing the rhetoric as a fundraising tactic or dredging up the same critiques that were refuted in 2012.
But progress in AI over the past decade and a half has consistently followed regular trendlines. Both the software behind AI and the computing chips it runs on have improved exponentially, year after year.

Fig. 1 Training compute (in FLOPs) used to train notable AI models, by year (log scale).
Source: Epoch.AI
In many domains, such as economic growth, population trends, and technology infrastructure, progress often traces a straight line on a log scale: steady exponential growth. A good starting point for forecasting, then, is to extrapolate the long-term trends. Extrapolating the trends of the past decade and a half puts human-level artificial intelligence at around 2030, when the best models would have 10-100x as much computing power as the human brain.
But other futures are also possible. From this point forward there are three broad trajectories AI progress could follow:
Baseline Trend: AI progress continues to follow exponential trends, with performance continuing to increase at the same rate.
Innovation Plateau: AI progress stagnates as model training gets more expensive.
Recursive Acceleration: AI progress goes faster on a superexponential trend, as AI systems increasingly write their own code, creating an army of AI software engineers and significantly increasing the productivity of AI engineering.
We don’t have enough information to accept or dismiss any of these trajectories. But if we treat them all as live possibilities, then we need to prepare for at least the possibility that artificial intelligence able to do everything humans can do arrives in years, not decades.
Three Perspectives on AI Progress:
The baseline trend
Consider Humanity’s Last Exam, a benchmark of thousands of questions written by subject-matter experts to sit at the frontier of academic knowledge. The average human score on the test is 0%. The answers to these questions are not on the internet or in AI systems’ training data. An AI system that could solve all of them would, at a first pass, be better than all academics combined at answering academic questions. And yet scores on the exam indicate that LLMs are steadily on track to solve all of these questions within a few years, if not sooner.

Fig. 3 AI progress on Humanity's Last Exam.
Source: CAIS Dashboard
Human intelligence involves not just a command of facts but also the ability to carry out tasks in the real world. The AI evaluations nonprofit METR studies this using a benchmark built from a large database of tasks that remote workers can do on a computer, from Googling a search result to fixing bugs in Python libraries, to find out how capable AI models are of solving open-ended tasks. Each task is scored by the number of hours it would take a human to complete.

Fig. 4 Length of coding tasks AI systems can do, measured in human hours required to perform that task.
Source: AI Digest
METR finds that the length of tasks AI systems can do autonomously has been doubling roughly every seven months since 2019, and about every four months in the most recent data. OpenAI’s GPT-5 can reliably complete tasks that would take humans about 3.3 hours, without requiring any supervision. Two months later, GPT-5.2 was released, capable of reliably completing tasks that would take humans 6.7 hours. If the recent trend continues, within four months OpenAI’s models will be able to do tasks that are roughly 13 hours long, within eight months tasks that are 27 hours long, and within a year tasks that are more than 50 hours long. So sometime in 2026 we’d have models that can do tasks that take humans a full day to complete, without the models needing so much as a manager check-in. By the end of 2027, the trend implies that the best models will be able to do three whole weeks of work between each check-in.
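To make the arithmetic explicit, here is a minimal sketch of this extrapolation in Python, using the 6.7-hour horizon and four-month doubling time quoted above. (METR’s published model is more sophisticated; this only shows the compounding.)

```python
# Minimal sketch of the doubling-trend extrapolation described above.
# The 6.7-hour horizon and 4-month doubling time come from the text.

def task_horizon_hours(months_from_now: float,
                       current_horizon: float = 6.7,
                       doubling_months: float = 4.0) -> float:
    """Projected human-hours of work the best model can do autonomously."""
    return current_horizon * 2 ** (months_from_now / doubling_months)

for months in (4, 8, 12, 24):
    print(f"{months:>2} months out: ~{task_horizon_hours(months):.0f} human-hours")
# ->  4 months: ~13 | 8 months: ~27 | 12 months: ~54 | 24 months: ~429
```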
METR’s result has a three-year track record of successful prediction, one of the most consistent results in AI development. An interactive version of their full model is available on their website.
Moving beyond benchmarks of AI model performance on specific tasks, we can consider trends in the overall computing power behind AI models. One of the main drivers of AI progress over the last decade has been throwing ever-larger piles of computer chips at AI systems so that they can become more powerful. Since the deep learning revolution, the amount of computing power used to train the best models has increased by about 4-5x per year, from better computer chips and from simply using more of them. That is, the raw hardware muscle behind frontier AI has grown several-fold every year.
But AI progress has also been driven by breakthroughs leading to better software. Researchers measure how much better software becomes each year with a metric called “effective compute.” This is easiest to explain with an example. When GPT-4 was first trained in 2023, it cost over $100 million to create. But because our methods have improved, training the same model with the same chips today would cost far less, on the order of $500,000.4 This amounts to a 200-fold improvement in software: using the same hardware, you can get the same performance for 200 times less cost.
Researchers at Epoch estimate that the software of AI systems improves by about 2.5x each year. Given that the amount of compute used to train the best models increases by about 4.2x each year, this compounds to roughly a 10x improvement in frontier models’ effective computing power every year. This is an exponential trend: if it continues, next year’s models will be 10 times as powerful as today’s, the year after that 100 times, and three years from now 1,000 times as powerful.

Fig. 5 Annual improvements in AI performance.
Source: Epoch AI
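The compounding behind that “10x per year” figure is simple enough to check directly; here is the arithmetic in Python, using the two Epoch growth rates quoted above:

```python
# Sketch: compounding hardware and software progress into effective compute.
compute_growth = 4.2    # yearly multiplier on training compute (Epoch)
software_growth = 2.5   # yearly multiplier on algorithmic efficiency (Epoch)

for years in (1, 2, 3):
    effective = (compute_growth * software_growth) ** years
    print(f"after {years} year(s): ~{effective:,.0f}x effective compute")
# -> ~11x, ~110x, ~1,158x (the text rounds the annual rate to 10x)
```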
Because the idea of growth in AI’s “power” is a little abstract, it helps to consider what this means in practice. We can do this by examining the change across generations, from GPT-2 to GPT-3.5 to GPT-5 today. Below is one of the hardest problems that models have solved, pulled directly from Humanity’s Last Exam.

Fig. 6 One of the hardest solved problems in physics.
Source: Humanity's Last Exam
Here is GPT-2's (2019) incoherent answer:

Fig. 7 GPT-2's response to the prompt in figure 6.
Source: Hugging Face
You can try out GPT-2 (the first public model in the series) for yourself at Hugging Face.
GPT-3.5 (the original “ChatGPT”) is perfectly coherent but wrong:

Fig. 8 GPT-3.5's response to the prompt in figure 6.
Source: OpenAI API
GPT-5 is the first model to get the correct answer (-8) in one shot:

Fig. 9 GPT-5's response to the prompt in figure 6.
Source: ChatGPT
In 2019, the models that got AI companies excited couldn’t write coherent sentences. But the six years that followed saw roughly 1,000 times better software and models trained with about 1,000 times as much compute, a millionfold improvement in the state of the art. Today, models built on the same underlying approach can write poetry, interpret humor, beat world champions at strategy games, paint the world with Ghibli-style art, and solve open problems in biology. If that is GPT-5, what is GPT-7?
These advances are explained by two laws and one trend. The laws are known as “Moore’s Law” and the “Scaling Law”; the trend is the steady march of algorithmic progress made by AI researchers and engineers.
Moore’s Law states that the number of transistors that fit on a microchip doubles roughly every two years. Indeed, this has held since 1970 and shows no signs of stopping: we’ve had exponentially more powerful computer chips every year for the past 55 years.

Fig. 10 The number of transistors per microprocessor.
Source: Our World in Data
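A quick sketch of what that doubling rate compounds to over the whole period:

```python
# Sketch: "doubling every two years" compounded over 55 years.
years = 55
doublings = years / 2
print(f"{doublings:.1f} doublings -> ~{2 ** doublings:,.0f}x the transistors")
# -> 27.5 doublings, roughly a 190-million-fold increase since 1970
```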
The Scaling Law says that AI model performance improves predictably, as a power law, as we scale up the size of the model and the size of its dataset. This means that models trained with more and better microchips, when used effectively, will be many times more powerful than models trained with fewer and worse ones. We won’t cap out when we get to a certain size of data center, and models will continue to get better and better as Moore’s Law lets us throw exponentially more computing power at them.
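One published form of these laws is the “Chinchilla” scaling law of Hoffmann et al. (2022). The sketch below uses its fitted constants purely as an illustration; the report does not commit to this exact formulation:

```python
# Chinchilla scaling law (Hoffmann et al. 2022): loss falls as a power law
# in parameters N and training tokens D. Constants are the paper's fits.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

print(chinchilla_loss(70e9, 1.4e12))   # ~1.94: a Chinchilla-scale model
print(chinchilla_loss(700e9, 14e12))   # ~1.81: a 10x scale-up still helps
```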
These laws have predicted AI progress effectively for a long time, but they’ve only been well understood for a few years. Without a proper understanding of the scaling laws, we end up looking like Oxford computer science professor Michael Wooldridge, whose 2021 textbook included a list of open problems in AI, ranked from easy, to solved after real effort, to “nowhere near solved.” Within just a few years, every problem on the list had been solved, except human-level general intelligence.

Fig. 11 Graphic from Oxford University computer scientist Michael Wooldridge's 2021 textbook, ranking problems from easy to nowhere near solved. After just a few years, the only item left unsolved was human-level general intelligence.
Ultimately, if the baseline trend is correct, AI will outperform scientists on knowledge work within months to years, will autonomously drive forward multi-week human tasks in a year or two, and will continue to become roughly ten times as powerful each year for some time.
Three Perspectives on AI Progress:
How we could see an innovation plateau
Expecting the Baseline Trend to continue only makes sense if there are not compelling reasons to expect the trend to speed up or slow down. However, there are arguments worth considering that cut in each direction.
The best argument that progress will slow relative to the Baseline Trend, producing an Innovation Plateau, is that training cutting-edge models gets more expensive each year, simply because the models being trained are so massive. AI companies are using every computer chip they can get their hands on to build bigger and better models, driving costs up by about 2.6x each year, in another, more pessimistic exponential trend. By 2030, it’s estimated that training the best AI models will cost on the order of $1 trillion.

Fig. 12 Estimated cost of the largest AI training run each year, 2022 to 2030.
Source: Situational Awareness
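Here is how that growth rate compounds. The ~$3 billion base cost for a 2024 frontier training run is an illustrative assumption, not a figure from the report:

```python
# Sketch: compounding the ~2.6x annual cost growth quoted in the text.
cost = 3e9  # assumed cost of a 2024 frontier training run (illustrative)
for year in range(2025, 2031):
    cost *= 2.6
    print(year, f"${cost / 1e9:,.0f}B")
# -> 2030 lands near $1 trillion at this growth rate
```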
AI company revenues are also growing, but not fast enough to keep up with the demand for computing power. AI companies could soon have revenues as large as Amazon, Apple, or Saudi Aramco, in the hundreds of billions of dollars, making it possible for them to afford a trillion-dollar data center, or even the ten-trillion-dollar data center projected for 2031. Alternatively, the build-out could be sponsored by enormous corporations or by major government investments in AI. This would represent a massive increase in spending on AI, but government spending in the trillions each year would be neither impossible nor unprecedented.

Fig. 13 OpenAI revenue growth.
Source: Epoch AI
If the compute build-out slows down, then progress on AI capabilities will also slow. By how much?
Recent METR research, in collaboration with an MIT economist, attempts to answer this question. In their model, “Forecasting AI Time Horizon Under Compute Slowdowns,” METR makes the simplifying assumption that all AI progress comes from using increasingly large amounts of computing power and none of it comes from better architectures and good engineering — that is, AI engineers have zero productivity.5 On this assumption, they calculate how much progress we see at different levels of computing power, based on the empirical relationship between capability growth and computing power from 2019 to 2025. Then, using OpenAI’s predictions about how much compute build-out will slow down after 2030, they calculate how much AI progress will slow down:

Fig. 14 The length of tasks AI systems will be able to do with 50% reliability, from 2020 to 2040, if compute build-out slows down, as implied by METR's forecast.
Source: METR
A significant slowdown in compute build-out starting in 2027 would delay the time that AI becomes capable of doing a week of human work autonomously by five years, from 2030 to 2035. It would delay the time that AI becomes capable of doing a month of human work autonomously by seven years, from 2031 to 2038. And it would delay the time that AI becomes capable of doing a year of human work autonomously by 10 years, from 2033 to 2043.
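A stylized toy model, not METR’s actual calculation, shows why slower compute growth stretches timelines this much: if a capability milestone requires some fixed multiple of today’s effective compute, the years needed scale inversely with the logarithm of the annual growth rate.

```python
import math

# Toy model: years to reach a milestone that needs `compute_multiple`
# times today's effective compute, at a given annual growth multiplier.
def years_to_milestone(compute_multiple: float, annual_growth: float) -> float:
    return math.log(compute_multiple) / math.log(annual_growth)

needed = 1e4  # assumption: the milestone needs 10,000x today's compute
print(f"{years_to_milestone(needed, 10):.0f} years at 10x/year growth")
print(f"{years_to_milestone(needed, 2):.0f} years if growth slows to 2x/year")
# -> ~4 years vs ~13 years: the same milestone arrives much later
```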
The lesson is that if AI companies do not find ways to increase the supply of cutting-edge computer chips beyond 2027 — and if AI progress is driven entirely by computing power rather than good software engineering — the rate of AI progress will start to slow. By itself, this would not amount to a plateau: we would still have AI agents capable of automating month-long human tasks in 2038. But slowdowns in AI progress could also lead to slowdowns in investment in AI research and development, producing a sharper decline. Expensive models that underperform their users’ expectations could lead to an “AI winter” by 2030, in which investment in AI slows, salaries and excitement stagnate, and talent development stalls. That would certainly mean slower progress at the AI frontier.
Three Perspectives on AI Progress:
How things could go even faster
There is a compelling argument, however, that instead of continuing or slowing down, AI trends could actually pick up. We are currently in a kind of “Cambrian explosion” of algorithmic progress, with multiple new techniques for improving AI systems that could keep advancing. For example, AI researchers have learned to make better models by training AI systems on the outputs of other AI models, a technique known as distillation. This is the approach that allowed Chinese firm DeepSeek to create the world’s most powerful open-source model. Researchers have also developed “test-time compute,” which involves optimizing models so that, instead of spitting out just one answer when prompted, they work through whole chains of reasoning much as a human might.
But by far the most compelling argument that AI progress could speed up dramatically comes from AI research automation. Every leading lab is currently working to automate software engineering so that its best AI models can write their own code. The mechanism is reinforcement learning: training AI models on coding datasets, asking them to solve coding problems, rewarding successful solutions, and punishing failures. Though we’re only a bit more than a year into this trend, AI models are getting very good at coding. Claude Opus 4.5 now achieves 80.9% on SWE-bench Verified, a benchmark that evaluates real-world software engineering by testing models on actual software issues submitted by users to GitHub. On Anthropic’s internal engineering assessment, the same model outperformed all human candidates when given equivalent time constraints.
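As a toy illustration of that reward loop (only the loop; no lab’s actual pipeline looks like this), here is a self-contained REINFORCE-style sketch in which a two-option policy learns to prefer the candidate patch that passes its tests:

```python
import math
import random

# Two candidate "patches"; only the first passes the test suite.
candidates = ["a + b", "a - b"]

def passes_tests(expr: str) -> bool:
    return eval(f"lambda a, b: {expr}")(2, 3) == 5  # grader: does addition work?

logits = [0.0, 0.0]  # the policy starts indifferent between candidates
lr = 0.5
for _ in range(100):
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]
    i = random.choices(range(2), weights=probs)[0]      # sample a solution
    reward = 1.0 if passes_tests(candidates[i]) else -1.0
    for j in range(2):  # REINFORCE update: reward * grad of log-probability
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * reward * grad

exps = [math.exp(l) for l in logits]
print([round(e / sum(exps), 3) for e in exps])
# -> probability mass concentrates on the candidate that passes the tests
```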
An internal survey of 132 Anthropic engineers and researchers in December 2025 found that employees now use Claude in roughly 60% of their work and report an average 50% productivity boost. This is a significant increase from the previous year, when employees used Claude in 28% of their work and reported a 20% productivity gain. These self-reported figures are corroborated by more objective metrics: Anthropic observed a 67% increase in merged pull requests per engineer per day after adopting Claude Code across their engineering organization.6
Just two months later, Anthropic released a model whose employee surveys reported a 100% average productivity boost. This same model, Claude Opus 4.6, completely solved multiple software engineering benchmarks, as well as all of Anthropic’s cyberoffense benchmarks.
If AI companies manage to create coding agents that are as effective as human software engineers, their AI engineering will instantly go much, much faster. This is because AI companies have enough resources to run millions of copies of these agents in parallel. (For scale: ChatGPT already serves about 800 million weekly users.) So as soon as you have one AI software engineer, you have a million AI software engineers. With frontier labs employing on the order of a thousand engineers, that is equivalent to multiplying their staff by a thousand, potentially leading to AI progress going about a thousand times as fast as well. And of course, the next model would be even better, which could lead to a dramatic AI take-off.
Of course, this is the most extreme such scenario. More moderate scenarios, like AI companies stalling in the creation of AI coders at the point where they have intern-level agents, would lead to a much less significant speed-up given that their work would be bottlenecked by the time of their human managers. We’ll need consistent, versatile agents to see significant speed-ups in the automation of AI research.
The most famous forecasters expecting AI agents to automate software engineering predict human-level artificial intelligence in 2028, with a significant chance of human-level AI in 2027. Concerningly, their forecasts are based on moderate, not aggressive, assumptions about AI progress. The authors of AI 2027 centrally ground their estimate in the METR benchmark described earlier in this section, the best long-run estimate we have for how good AI models are going to get at completing long coding tasks. It requires a bit of speculation from there: in particular, they think things will start to go faster than METR’s benchmark currently predicts, and they have to make some assumptions about how good superhuman coders will become.
But the underlying argument doesn't require any specific timeline to hold. Empirically, we can observe that (1) the length of tasks AI agents can perform doubles every seven months, (2) AI companies are attempting to leverage these capabilities to automate their AI engineering and make it go much faster, and (3) AI engineers at the best companies are using these models in their daily work and reporting significant productivity gains. Independent of the specifics, there are concrete, observable trends that drive towards an increasing rate of AI progress, above the already fast baseline trend.
We have no certainty that AI companies will create software engineers capable of automating AI research. But all major labs have made this their priority, AI models continue improving at coding benchmarks, and the underlying argument for acceleration is too grounded in observable trends to simply dismiss.
Three Perspectives on AI Progress:
How wild will things get?
I’ve explained three broad trajectories AI progress could follow:
Baseline Trend: AI progress continues to follow exponential trends, with performance continuing to increase at the same rate.
Innovation Plateau: AI progress stagnates as model training gets more expensive, and the market loses interest.
Recursive Acceleration: AI progress goes faster on a superexponential trend, as AI systems increasingly write their own code, substantially increasing the productivity of AI engineering.
These trajectories give us the shape of the possibility space for AI progress. But they don’t tell us all that much about how quickly we’ll achieve AGI — AI that can do everything a human brain can do. As previously mentioned, the most rigorous and well-known forecast of the Recursive Acceleration trajectory lands at AGI in 2028. What of the Baseline Trend? If things continue to go as they have been for years, how powerful will AI become, and how quickly?
Estimating timelines to AGI is notoriously tricky, and the existing forecasting methods are weak. The dominant approaches involve extrapolating trends. These include trends in:
Benchmarks in somewhat narrow capabilities — as with Humanity’s Last Exam, which aims to determine when AI systems will be superhuman at academic problem solving
Benchmarks in general capabilities — as with METR’s benchmark which aims to determine how quickly AI systems improve their ability to solve complex, open-ended problems
Performance on Turing-style tests — how close AI systems are to perfectly mimicking expert reasoning
Proportion of GDP from AI revenues
Expert opinion about time to AGI
The table below summarizes the implications of each forecasting method:

Humanity's Last Exam
Description: Assesses AI systems’ ability to solve the hardest academic problems with known answers, without tools or help.
Weaknesses: Measures only question-and-answer capabilities. Limited data.
Time to AGI: 2028 (when AI systems achieve superhuman academic performance unaided).

METR Benchmark
Description: Assesses AI systems’ ability to reliably complete tasks that take humans a given number of hours, especially coding tasks.
Weaknesses: The dataset excludes some of the hardest problems for agents. We can only extrapolate from 50% and 80% success rates, as data is unreliable at very high success rates.
Time to AGI: 2029 or 2030 (when AI systems can complete tasks that take humans one month of work, at 50% or 80% success rates respectively).

Distinguishability (Turing Test)
Description: A mathematical model assessing how distinguishable AI systems are from human experts.
Weaknesses: Very limited data on how fast AI is improving at emulating humans.
Time to AGI: 2029 (when AI is indistinguishable from human experts, even on difficult tasks).

Proportion of GDP
Description: AI company revenues as a proportion of global revenues.
Weaknesses: Extremely limited data (only a few years). Few reliable estimates outside internal company forecasts. There are paths to enormous revenue other than AGI.
Time to AGI: Extremely unclear. (OpenAI’s internal forecasts project hundreds of billions of dollars in revenue toward 2030; Anthropic projects $70bn by 2028. Naively extrapolating OpenAI’s annual revenue tripling reaches 1% of global GDP by 2029. But there is limited data for this perspective.)

Expert Opinion
Description: A 2023 survey of 2,778 researchers who published in top AI venues, asking when unaided machines will outperform humans at every possible task.
Weaknesses: Experts have significantly under- and over-estimated AI capabilities in years past. Outperforming humans at everything is a higher bar than AGI.
Time to AGI: 10% probability by 2027, 50% by 2047.
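To make the table’s naive revenue extrapolation concrete: assuming roughly $13 billion of OpenAI revenue in 2025 and roughly $110 trillion of gross world product (both illustrative figures, not from the report), annual tripling reaches about 1% of world output by 2029:

```python
# Sketch of the naive extrapolation in the table: revenue triples yearly.
revenue, gwp = 13e9, 110e12  # assumed 2025 revenue; assumed world product
for year in range(2026, 2030):
    revenue *= 3
    print(year, f"${revenue / 1e9:,.0f}B", f"({100 * revenue / gwp:.2f}% of GWP)")
# -> by 2029 revenue passes $1 trillion, about 1% of gross world product
```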
As I’ve said, all of these methods are weak. In particular, several of the methods rely on very limited data about trends, and one method relies on aggregating highly fallible expert opinion. What’s more, using the three benchmark methods to forecast time to AGI relies on two core assumptions.
First, these methods assume that the powerful advances we’ve seen in AI over the last thirteen years will continue. This assumption is more of a feature than a bug — how else could we reasonably assess the timeline to AGI other than extrapolating from what we observe? But it does mean that if there is some systematic reason to expect progress in AI to slow down across the board, then AGI will arrive later than the forecasts imply.
Second, they assume that the relatively narrow slice of AI capabilities being measured is representative of the full set of AGI capabilities. Humanity’s Last Exam is the narrowest benchmark, measuring only question-and-answer capabilities. You could in theory have perfect oracle AIs that answer all of our hardest science questions yet cannot do many other things a true AGI could do; acing an exam doesn’t guarantee that an AI can tell a good joke. The other benchmarks are much broader, focusing on whether AI can complete tasks that take humans certain amounts of time or speak exactly as humans do. But still, one of the lessons of AI progress to date is that AI capabilities are jagged, or “spiky”: AIs are superhuman at some things we can do, then fail absurdly at other things we find simple, even counting the number of “r”s in a single word.

Fig. 15 Current AI capabilities in varying domains.
Source: A Definition of AGI
“Jaggedness” of this kind is an important phenomenon that shows one kind of mistake we can make in naïvely extrapolating from certain benchmarks to broader AI capabilities. But we need to be very careful not to be fooled by jaggedness into underestimating advances in AI capabilities:
Just as AI systems have often failed in surprising ways, they have also generalized in surprising ways: for example, for years AI systems couldn’t add multi-digit numbers — until we trained models with 13 billion parameters, at which point they could suddenly sum large numbers accurately.
Jaggedness tends to be smoothed out with scale; every new frontier model has fewer problems with hallucinations, generating too many fingers, miscounting letters, buggy code, and so on.
AI companies are aggressively working to fix some of the biggest AI capability valleys that lead to weaker general capabilities.
Furthermore, the benchmarks we’ve discussed assess AI capabilities from multiple angles of attack: question and answer, multi-hour agentic tasks, coding capabilities, human mimicry, and so on. The more unique ways we have to measure AI capabilities that suggest a short time to AGI, the more confident we should be that jaggedness will not severely undermine progress towards AGI.
We are beginning to see the development of more sophisticated benchmarks that test how well AI systems perform on messy real-world tasks, from operating a vending machine to beating Pokémon games to fundraising and hosting real-world events. These benchmarks are currently too immature to tell us much about the time to AGI, but more work of this kind can help strengthen our understanding of current AI capabilities and our forecasts of what is to come.
Shortcomings aside, the forecasting methods we’ve assessed in this section are the best evidence we have about when human-level artificial intelligence will arrive, and they imply that we should expect it around 2030. They are not conclusive or definitive, and there are plenty of ways to quibble with the details. But the broad thrust is clear: AGI circa 2030 certainly cannot be ruled out, and it is the single most credible emerging picture of where we are headed.
We’ve surveyed arguments that AGI could arrive sooner (from recursive self-improvement, as discussed in AI 2027) or later (from increasingly expensive training runs utilizing ever-larger data centers, reaching a possible zenith at 2030’s “trillion-dollar cluster”). The resulting picture is a median timeline of AGI within a few years of 2030, and a plausible range of roughly 2027 to 2038.
But whether we see AGI in 2030 as a 50th-percentile outcome or a 10th-percentile outcome, we clearly cannot dismiss the possibility of superhuman AI systems within mere years. We must prepare for AI systems that can advance science and out-think humans, running at a hundred times human speed with millions of copies in parallel, in as little as five years. This would certainly mark a pivotal moment in our history.
Notes

3. The appendix contains a detailed explanation of deep learning and how it works. For the uninitiated I recommend reading it, but I have moved it to the end of the report for the sake of readability.

4. These details are anecdotal, and therefore not precise. They simply serve to illustrate the broader concept of effective compute, which is more carefully measured.

5. This is a very controversial assumption. However, as long as the compute-intensiveness of the training run is the dominant determinant of capabilities and AI engineers are a marginal contributor, the implications of the model remain unchanged.

6. An earlier METR study working with open-source developers found that experienced developers working on familiar codebases were actually 19% slower with AI assistance, though they believed they were faster. There have been sophisticated critiques of this study, for example noting that the participants did not have significant experience with AI tools. Nonetheless, the discrepancy between perceived and measured productivity when using AI tools warrants caution when interpreting self-reported efficiency gains.