Grading AI 2027’s 2025 Predictions
How has AI progress compared to AI 2027 thus far?
AI 2027 laid out a detailed scenario for how AI would progress from 2025 through 2027, including quantitative predictions and qualitative descriptions of the AI landscape.
Now that we’re in early 2026, we can grade how its 2025 predictions compare to reality! This is exciting to us because we put a lot of effort into filling AI 2027 with concrete, falsifiable predictions, and now we reap the benefit of that effort: an additional method of forecasting AI timelines, to complement the methods we already use.1
The primary question we’ll answer is: How fast is AI progress moving relative to the AI 2027 scenario?
In aggregate, progress on quantitative metrics is at roughly 65% of the pace that AI 2027 predicted. Most qualitative predictions are on pace.
Quantitative pace of progress
For quantitative predictions, we estimate a “pace of progress” multiplier, where 1x means progress is on pace with AI 2027’s predictions, 2x means progress is 2x faster, and 0.5x means progress is half as fast.
For the displayed aggregates, reality is progressing at 58-66% of the rate of AI 2027. Aggregating over individual predictions rather than prediction categories gives a higher result (mean 75%, median 84%), but we think it is a worse indicator; see footnote.2
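To make the "pace of progress" calculation concrete, here is a minimal sketch of one way such multipliers could be computed and aggregated. The metric names and numbers are illustrative placeholders rather than the actual grading (which is in the linked spreadsheet), and the choice of log-space vs. linear progress for multiplicatively growing metrics is our assumption, not necessarily the method used in the spreadsheet.

```python
# Hypothetical sketch of a "pace of progress" calculation. The metric names,
# numbers, and the log-space vs. linear choice are illustrative placeholders,
# not the actual grading from the linked spreadsheet.
import math
from statistics import mean, median

def pace_multiplier(start, predicted, actual, log_space=False):
    """Fraction of the predicted change that actually happened (1.0 = on pace)."""
    if log_space:  # for metrics that grow multiplicatively, e.g. revenue
        return (math.log(actual) - math.log(start)) / (math.log(predicted) - math.log(start))
    return (actual - start) / (predicted - start)

# (category, start, predicted, actual, log_space) -- placeholder values only
predictions = [
    ("benchmarks", 72.0, 85.0, 74.5, False),  # e.g. a benchmark score in %
    ("revenue",     4.0, 18.0, 20.0, True),   # e.g. annualized revenue in $B
    ("compute",     1.0,  3.0,  2.0, True),   # e.g. relative training compute
]

# Pool individual predictions by category, then aggregate the category means,
# so that no single category (e.g. compute) dominates the headline number.
by_category = {}
for category, start, predicted, actual, log_space in predictions:
    by_category.setdefault(category, []).append(
        pace_multiplier(start, predicted, actual, log_space))

category_means = [mean(values) for values in by_category.values()]
print("aggregate pace (mean of category means):", round(mean(category_means), 2))
print("aggregate pace (median of category means):", round(median(category_means), 2))
```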
In AI 2027, we depicted a takeoff from full coding automation to superintelligence over the course of 2027.
If progress continues at 65% of the rate we depicted, then this takeoff will instead happen from late-2027 to mid-2029. However, we expect slowdowns in training compute and human labor growth, leading to slower progress (before taking into account AI R&D automation).3 Adjusting for this consideration using the AI Futures Model pushes the takeoff slightly later, from mid-2028 to mid-2030.4
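As a rough illustration of this arithmetic (the exact dates above also depend on how the scenario clock is anchored and on the AI Futures Model's adjustments, so this sketch is deliberately approximate): at a constant 65% pace, every interval in the scenario stretches by a factor of 1/0.65 ≈ 1.54.

```python
# Back-of-the-envelope only: at a constant 65% pace, any interval in the
# scenario takes 1/0.65 ≈ 1.54x as long in reality. Applied to the scenario's
# roughly one-year takeoff during 2027, that gives a takeoff lasting ~1.5-2
# years, consistent with the late-2027-to-mid-2029 window above. The precise
# dates in the text come from the grading spreadsheet and the AI Futures Model.
PACE = 0.65
scenario_takeoff_years = 1.0  # the takeoff spans roughly the course of 2027
print(f"stretch factor: {1 / PACE:.2f}x")
print(f"adjusted takeoff length: ~{scenario_takeoff_years / PACE:.1f} years")
```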
Mid-2028 is earlier than Daniel’s current median prediction for full coding automation (2029), but the 2-year takeoff to superintelligence is slower than his median takeoff speed of ~1 year. My (Eli’s) median prediction for full coding automation is in the early 2030s, and my median takeoff speed is about 2 years. See here for our forecasts.
You can see all quantitative predictions and resolutions in this spreadsheet.
Takeaways include:
SWE-bench Verified progress was surprisingly slow. AI 2027 predicted 85% by mid-2025, from a starting point of 72%; the best actual score was 74.5% (Opus 4.1). This mirrors the AI 2025 forecasting survey, in which respondents predicted a score of 88% by the end of 2025, as opposed to the actual 81%.
Coding time horizons are on pace relative to a central AI-2027-speed trajectory from our timelines model, but slower than an erroneously graphed one. METR’s 80% coding time horizon is moving at 1.04x the pace of a central AI-2027-speed trajectory from our Apr 2025 model.5 However, we’re at 0.66x the pace of the trajectory originally displayed on the graph we shared, which contained an error (see both trajectories on the same graph here). If we had made predictions with our new model in Apr 2025, the relative pace of progress would fall between these 0.66x and 1.04x values.
Revenue grew even (slightly) faster than AI 2027 predicted, but valuation is behind pace. OpenAI’s annualized revenue hit ~$20B, slightly ahead of the $18B prediction. In the AI 2025 forecasting survey, forecasters underestimated revenues more dramatically; they underpredicted the sum of AGI companies’ revenues by ~2x. Meanwhile, OpenAI’s valuation was $500B as of Oct 2025, up from $300B when we published AI 2027. In AI 2027, $500B valuations were achieved in Jun 2025, so reality is well behind pace.
AI software R&D uplift is behind pace. This is primarily because we have revised our estimate of early-2025 uplift downward; as a result, our uplift estimates for the end of 2025 are similar to our original estimates for the start of AI 2027.
Compute growth is mostly on pace, with the possible exception of growth in the largest training run. We estimate that no leading AI company has conducted a substantially larger training run than GPT-4.5, which was released in Feb 2025. However, we have extremely wide uncertainty here. The opacity around training compute makes it hard to rule out a scale-up, though our best guess is that no single training run has exceeded GPT-4.5 in compute.
Qualitative predictions
Below, we comment on how AI 2027 has held up qualitatively. Text from AI 2027 is italicized. We skip sentences that we graded quantitatively.
Mid 2025
The world sees its first glimpse of AI agents.
Advertisements for computer-using agents emphasize the term “personal assistant”: you can prompt them with tasks like “order me a burrito on DoorDash” or “open my budget spreadsheet and sum this month’s expenses.” They will check in with you as needed: for example, to ask you to confirm purchases. Though more advanced than previous iterations like Operator, they struggle to get widespread usage.
This seems correct. ChatGPT agent, released Jul 2025, has an animation at the top of its release post in which it books a trip on Expedia. This is similar to the DoorDash example. Computer-using agents indeed struggled to get widespread usage.
Meanwhile, out of public focus, more specialized coding and research agents are beginning to transform their professions.
The AIs of 2024 could follow specific instructions: they could turn bullet points into emails, and simple requests into working code. In 2025, AIs function more like employees. Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days. Research agents spend half an hour scouring the Internet to answer your question.
This seems fairly accurate. From Sep 2, 2025: “Claude Code has quickly taken off—already generating over $500 million in run-rate revenue with usage growing more than 10x in just three months.”
Agents are being used with increasing autonomy, so we think the spirit of the prediction is correct, but we don’t think there was an especially large amount of usage via Slack or Teams.
The agents are impressive in theory (and in cherry-picked examples), but in practice unreliable. AI twitter is full of stories about tasks bungled in some particularly hilarious way. The better agents are also expensive; you get what you pay for, and the best performance costs hundreds of dollars a month. Still, many companies find ways to fit AI agents into their workflows.
This seems broadly accurate. It’s possible that coding agents were slightly more reliable than we expected.
Late 2025
(To avoid singling out any one existing company, we’re going to describe a fictional artificial general intelligence company, which we’ll call OpenBrain. We imagine the others to be 3–9 months behind OpenBrain.)
The race appears to be closer than we predicted, more like a 0-2 month lead between the top US AGI companies.
Although models are improving on a wide range of skills, one stands out: OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China (whose leading company we’ll call “DeepCent”) and their US competitors. The more of their research and development (R&D) cycle they can automate, the faster they can go. So when OpenBrain finishes training Agent-1, a new model under internal development, it’s good at many things but great at helping with AI research.
AIs are helping a bunch with coding, but not as much with other parts of AI research (that said, we didn’t predict that AIs would be great at all of AI research).
By this point “finishes training” is a bit of a misnomer; models are frequently updated to newer versions trained on additional data or partially re-trained to patch some weaknesses.
Indeed, it seems that GPT-4o, GPT-5, and GPT-5.1 are probably different continuations of the same base model.6 More generally, model releases have become more frequent.
The same training environments that teach Agent-1 to autonomously code and web-browse also make it a good hacker. Moreover, it could offer substantial help to terrorists designing bioweapons, thanks to its PhD-level knowledge of every field and ability to browse the web. OpenBrain reassures the government that the model has been “aligned” so that it will refuse to comply with malicious requests.
Hacking abilities seem very strong in terms of assisting humans, though it’s unclear how good AIs are on their own. Bioweapon capabilities seem on track: OpenAI has upgraded their bio capability level to High, and Anthropic upgraded theirs to ASL-3.
Modern AI systems are gigantic artificial neural networks. Early in training, an AI won’t have “goals” so much as “reflexes”: If it sees “Pleased to meet”, it outputs “ you”. By the time it has been trained to predict approximately one internet’s worth of text, it’ll have developed sophisticated internal circuitry that encodes vast amounts of knowledge and flexibly role-plays as arbitrary authors, since that’s what helps it predict text with superhuman accuracy.
After being trained to predict internet text, the model is trained to produce text in response to instructions. This bakes in a basic personality and “drives.” For example, an agent that understands a task clearly is more likely to complete it successfully; over the course of training the model “learns” a “drive” to get a clear understanding of its tasks. Other drives in this category might be effectiveness, knowledge, and self-presentation (i.e. the tendency to frame its results in the best possible light).
OpenBrain has a model specification (or “Spec”), a written document describing the goals, rules, principles, etc. that are supposed to guide the model’s behavior. Agent-1’s Spec combines a few vague goals (like “assist the user” and “don’t break the law”) with a long list of more specific dos and don’ts (“don’t say this particular word,” “here’s how to handle this particular situation”). Using techniques that utilize AIs to train other AIs, the model memorizes the Spec and learns to reason carefully about its maxims. By the end of this training, the AI will hopefully be helpful (obey instructions), harmless (refuse to help with scams, bomb-making, and other dangerous activities) and honest (resist the temptation to get better ratings from gullible humans by hallucinating citations or faking task completion).
This was already true at the time we published. It remains true now, but as predictions go, this was an easy one.
OpenBrain’s alignment team is careful enough to wonder whether these victories are deep or shallow. Does the fully-trained model have some kind of robust commitment to always being honest? Or will this fall apart in some future situation, e.g. because it’s learned honesty as an instrumental goal instead of a terminal goal? Or has it just learned to be honest about the sorts of things the evaluation process can check? Could it be lying to itself sometimes, as humans do? A conclusive answer to these questions would require mechanistic interpretability—essentially the ability to look at an AI’s internals and read its mind. Alas, interpretability techniques are not yet advanced enough for this.
Instead, researchers try to identify cases where the models seem to deviate from the Spec. Agent-1 is often sycophantic (i.e. it tells researchers what they want to hear instead of trying to tell them the truth). In a few rigged demos, it even lies in more serious ways, like hiding evidence that it failed on a task, in order to get better ratings. However, in real deployment settings, there are no longer any incidents so extreme as in 2023–2024 (e.g. Gemini telling a user to die and Bing Sydney being Bing Sydney.)
A potential counterexample: the MechaHitler incident was as extreme as the ones in 2023-2024. In a footnote, we specified that our prediction only covered incidents that a user didn’t deliberately prompt.7 It’s unclear to what extent MechaHitler should count, as it was a combination of user-prompted and autonomous behavior.
Looking ahead to 2026 and beyond
Over the course of 2025, our timelines got longer. We expect to continue updating our forecasts over the course of 2026.
We’ll be closely tracking the following metrics:
AI R&D uplift studies and surveys. In AI 2027, we depicted an AI software R&D uplift of 1.9x being reached by the end of 2026. METR has now run a randomized controlled trial measuring how early-2025 AI coding tools affect the productivity of open-source developers. The headline result was a slowdown: tasks took longer when AI tools were allowed. More recently, and in a different setting, Anthropic surveyed its technical staff and obtained a median estimate of a 2x coding uplift. This still implies a much lower than 2x uplift for AI software R&D as a whole, due to compute bottlenecks. We’ll be keeping an eye out for coding uplift studies and surveys, as well as any that cover AI R&D more broadly.
AGI company revenues and valuations. In AI 2027, we depicted the leading company reaching $55B in annualized revenue and a valuation of $2.5T by 2026, making it one of the most valuable companies in the world. We think these are decent indicators of the real-world value that AI is providing.
Coding time horizon. A central AI-2027-speed trajectory from the AI 2027 timelines model predicts an 80% coding time horizon of ~3 work weeks by the end of 2026. Time horizons also play a large role in our newer AI Futures Model; in that model, a handcrafted AI-2027-speed trajectory reaches time horizons of about a year by the end of 2026 (see the sketch after this list for the kind of extrapolation involved). We’ll continue tracking time horizons, though unfortunately they will become more difficult to measure as AIs get more capable.
Other benchmarks. See this survey for a sampling of benchmarks we consider among the most important. Unfortunately, besides coding time horizon, we didn’t register predictions for these benchmarks in AI 2027, because they didn’t exist yet when we wrote it. We’re hoping that higher difficulty benchmarks will be created in 2026.
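Below is a minimal sketch of the kind of exponential extrapolation behind the time-horizon milestones referenced in the coding time horizon item above. The starting horizon and doubling time are hypothetical placeholders, not parameters from the AI 2027 timelines model or the AI Futures Model.

```python
# Hypothetical sketch: if the 80% coding time horizon doubles every
# `doubling_months` months, it grows as horizon(t) = horizon_0 * 2**(t / doubling_months).
# The starting horizon (2 hours) and doubling time (4 months) are illustrative
# placeholders, not parameters from the AI 2027 timelines model or AI Futures Model.

def extrapolate_horizon(horizon0_hours: float, doubling_months: float, months_ahead: float) -> float:
    """Project a time horizon forward assuming a constant doubling time."""
    return horizon0_hours * 2 ** (months_ahead / doubling_months)

WORK_WEEK_HOURS = 40

for months in (6, 12, 18, 24):
    hours = extrapolate_horizon(2.0, 4.0, months)
    print(f"+{months} months: ~{hours:.0f} hours (~{hours / WORK_WEEK_HOURS:.1f} work weeks)")
```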
While we expect to learn a lot from these indicators, we’d guess that it will unfortunately be difficult to be highly confident by the end of 2026 that AI takeoff will or won’t begin in 2027.
To spell out the method: Step 1: Make a detailed, concrete trajectory of how you think the future will go. Step 2: Wait a while. Step 3: Check to see if things are roughly on track, or are veering off in a different direction entirely. If they are roughly on track, quantitatively estimate how fast progress is going in reality vs. your scenario. Step 4: Adjust your guess about how the future will go, to be correspondingly faster or slower.
The method of aggregating over individual values weighs the compute category heavily, since 7 of the 15 individual predictions are about compute. We prefer not to give so much weight to compute forecasts because we see compute as less central than other areas for tracking the pace of AI progress, so we instead aggregate the category means/medians. Most of our uncertainty regarding AI timelines comes from what capability level a given amount of compute gets you, and we can directly track indicators of capability levels.
Specifically, by slower progress we mean a lower effective compute growth rate. But a lower effective compute growth rate doesn’t necessarily translate into an intuitively slower pace of progress.
Specifically, we first set parameters such that the calendar-time-adjusted takeoff would happen at the right time in the case where there is no compute/labor growth slowdown, then we turn the slowdown back on to get the adjusted estimates. Links: without slowdown, with slowdown. Note that the AI Futures Model doesn’t take into account hardware R&D automation, which would shorten its takeoff predictions.
In particular, a central trajectory of the ones that predict Superhuman Coder in March 2027. This pace of progress calculation is after applying an adjustment for METR’s updated version of their suite (Time Horizon 1.1).
This is generally guessed by outsiders but not confirmed. See e.g. “OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024.”
The specific text of footnote 27 is: “To be clear, what made these incidents interesting is that they didn’t seem to be the result of the user prompting or otherwise encouraging the AIs to say those things. In 2025, it’ll still be possible to get AIs to say all sorts of things if you try.”


