Live tracking · Last updated: 27 April 2026

AI-2027 Reality Tracker

Based on the scenario at ai-2027.com. In April 2025, the AI Futures Project published a prediction of how superintelligence might emerge by 2027. The geopolitical dynamics they described are now unfolding faster than predicted. This independent tracker follows the convergence.

92,000+ tech workers laid off in 2026 so far; AI cited as primary driver across Meta, Microsoft, Oracle, and others. Source: Layoffs.fyi. See prediction ↓

What is AI-2027?

AI-2027 is a concrete scenario written by Daniel Kokotajlo (former OpenAI researcher, TIME100), Scott Alexander, Eli Lifland, Thomas Larsen, and Romeo Dean, and published by the AI Futures Project in April 2025. It predicts the trajectory from current AI agents through superhuman coders (March 2027), intelligence explosion (mid-2027), and potential loss of human control (late 2027). The full scenario, research supplements, and methodology are available at ai-2027.com.

The technical timeline remains unproven. But the geopolitical, institutional, and military dynamics the scenario described are tracking with uncomfortable precision, and in several cases, reality is ahead of the scenario's schedule.

This is an independent tracker. It is not affiliated with, endorsed by, or connected to the AI Futures Project, Daniel Kokotajlo, or any of the AI-2027 authors. All interpretations are the author's own. All sources are linked. The original scenario and all credit for the predictions belong entirely to the AI-2027 team.

Each entry below is tagged against the AI-2027 prediction it tracks: Confirmed / Matched, Emerging / Partial, or Divergent from Scenario.

2025

DeepSeek R1 triggers $589B Nvidia loss, proves Chinese AI competitive
DeepSeek R1's release triggered the largest single-day market-cap loss in history: Nvidia dropped $589B, and over $1 trillion evaporated across tech stocks in the same session. The model demonstrated reasoning capabilities competitive with US frontier labs at a fraction of the cost. AI-2027 predicted China would close the capability gap; this was the first major signal that gap-closing was already underway, achieved through architectural innovation rather than brute-force compute.
AI-2027 prediction this validates
AI-2027: Mid 2026: China Wakes Up
"Chip export controls and lack of government support have left China under-resourced compared to the West. By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the frontier, China has managed to maintain about 12% of the world's AI-relevant compute. A few standouts like DeepCent do very impressive work with limited compute."
Sources: NPR, CNN, Analytics Vidhya
AI-2027 scenario released
The AI Futures Project publishes a detailed scenario forecasting AGI by 2027, intelligence explosion, Chinese weight theft, government control of AI labs, and autonomous weapons deployment.
Source: ai-2027.com
$500M in GPU servers smuggled to China in three weeks
Per the March 2026 DOJ indictment, approximately half a billion dollars' worth of Supermicro AI servers were shipped to China in a three-week period, part of a $2.5 billion smuggling operation. Dummy servers with swapped serial-number stickers were staged to fool Commerce Department audits. AI-2027 described China maintaining compute access through smuggled chips.
AI-2027 prediction this validates
AI-2027: Mid 2026: China Wakes Up
"By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the U.S.-Taiwanese frontier, China has managed to maintain about 12% of the world's AI-relevant compute."
Source: Fortune, 19 Mar 2026
AI coding agents emerge across the industry
AI agents capable of autonomous multi-step coding tasks launched across every major lab. Claude Code (Anthropic, Feb 2025), Cursor, GitHub Copilot agent mode, agentic browsers, and similar tools reached millions of developers. These agents write, test, debug, and deploy code with minimal human oversight. AI-2027 predicted "stumbling agents" by mid-2025; reality delivered agents that were more capable than "stumbling" suggests, though still unreliable on complex long-horizon tasks.
AI-2027 prediction this validates
AI-2027: Mid 2025: Stumbling Agents
"The world sees its first glimpse of AI agents. Though more advanced than previous iterations, they struggle to get widespread usage. Meanwhile, out of public focus, more specialized coding and research agents are beginning to transform their professions. The agents are impressive in theory (and in cherry-picked examples), but in practice unreliable."
Sources: The Conversation, The New Stack
Pentagon awards $200M AI contract to Anthropic
The Department of Defense awards Anthropic a contract with explicit usage policy restrictions against domestic mass surveillance and fully autonomous weapons. Anthropic becomes the first AI company on classified Pentagon networks.
AI-2027 prediction this validates
AI-2027: Late 2026: AI Takes Some Jobs (arrived early)
"Department of Defense quietly but significantly begins scaling up contracting OpenBrain directly for cyber, data analysis, and R&D, but integration is slow due to the bureaucracy and DOD procurement process. [The scenario placed this in late 2026; reality arrived 18 months earlier.]"
Source: Reason, Jul 2025
Industrial-scale distillation attacks by Chinese labs
More than 16 million exchanges across ~24,000 fraudulent accounts: DeepSeek (150K+ exchanges), Moonshot/Kimi (3.4M+), and MiniMax (13M+). Targets: reasoning, agentic coding, tool use, computer vision. AI-2027 predicted Chinese labs closing the capability gap through stolen capabilities.
AI-2027 prediction this validates
AI-2027: Mid 2026: China Wakes Up
"The Chinese intelligence agencies double down on their plans to steal OpenBrain's weights. Their cyberforce think they can pull it off with help from their spies. China is falling behind on AI algorithms due to their weaker models."
Source: Anthropic, 24 Feb 2026
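The mechanism behind these numbers, output distillation, can be sketched in toy form. The "teacher" and "student" below are trivial stand-ins, not any real model or API; the point is only that the student recovers the teacher's behavior from transcripts alone, without ever touching its weights.

```python
# Toy illustration of output distillation. Real attacks harvest millions
# of API exchanges and fine-tune a neural student on them; here the
# "models" are placeholder functions and the "training" is memorization.

def teacher(prompt: str) -> str:
    # Stand-in for a frontier model behind a public API.
    return prompt[::-1]  # the "capability" being extracted

# Step 1: harvest transcripts through the public interface.
prompts = ["hello", "world", "agentic coding", "tool use"]
transcripts = [(p, teacher(p)) for p in prompts]

# Step 2: train the student on transcripts alone (here: memorize them).
student = dict(transcripts)

# Step 3: the student reproduces the teacher's behavior on the queried
# distribution, with no access to the teacher's internals.
assert all(student[p] == teacher(p) for p in prompts)
```

This is why distillation is hard to police at the export-control layer: every individual exchange looks like ordinary API usage.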
Gemini 3 forces "code red" at OpenAI; Altman redirects teams
Google's Gemini 3 release prompted OpenAI to declare an internal "code red," with Sam Altman pausing non-core projects and redirecting engineering teams toward competitive response. The AI race intensified to the point where frontier labs were making emergency pivots on timescales of days, not quarters. AI-2027 described an escalating capability race between US labs; this event showed the race dynamics were already operating at the intensity the scenario projected for later periods.
AI-2027 prediction this validates
AI-2027: Late 2025 / Early 2026: Escalating AI race
"Several competing publicly released AIs now match or exceed Agent-0, including an open-weights model. OpenBrain responds by releasing Agent-1, which is more capable and reliable. Other companies pour money into their own giant datacenters, hoping to keep pace. [Google's Gemini 3 triggering an emergency response at OpenAI mirrors exactly this competitive dynamic.]"
Sources: CNBC, Reuters

2026

Claude used in Venezuela raid to capture Maduro
The US military used Claude during the operation to capture Venezuelan President Maduro, via the Anthropic-Palantir partnership. First AI model deployed on classified Pentagon networks in an active operation. The raid included overnight strikes across Caracas. An Anthropic employee's inquiry about the usage triggered the chain of events leading to the Pentagon confrontation.
Sources: Axios, NBC News
Big Tech commits ~$700B to AI infrastructure in 2026
Four hyperscalers (Amazon $200B, Alphabet $175-185B, Microsoft ~$145B, Meta $115-135B) announce combined AI capex approaching $700 billion for 2026, a 60%+ increase from 2025. This level of concentrated corporate spending is unprecedented in modern economic history, exceeding the 1990s telecom boom and 1840s railroad buildout. Amazon's free cash flow projected to go negative; Meta's to drop ~90%. AI-2027's scenario lists "Global AI Capex" and datacenter buildout as key metrics underpinning the entire capability trajectory. The real numbers are at or above the scenario's estimates.
AI-2027 prediction this validates
AI-2027: Late 2025: The World's Most Expensive AI
"OpenBrain is building the biggest datacenters the world has ever seen. Once the new datacenters are up and running, they'll be able to train a model with 10^28 FLOP, a thousand times more than GPT-4. Other companies pour money into their own giant datacenters, hoping to keep pace."
Sources: CNBC, 6 Feb 2026, Bloomberg
AI-2027 predicted: 50% AI R&D speedup via coding automation
The scenario predicted that by early 2026, AI coding tools would deliver a roughly 50% speedup (1.5x multiplier) to AI research and development. Reality exceeded the prediction. Anthropic's internal survey reported a 2x coding uplift by early 2026. Claude Opus 4.6 (released Feb 5, 2026) set records on Terminal-Bench and achieved the longest autonomous task-completion time horizon ever measured by METR (14.5 hours). Claude Code went viral over winter 2025, OpenAI killed Sora to redirect all compute to coding, and Andrej Karpathy noted the real flip happened around December 2025. By April 2026, Mythos Preview demonstrated autonomous overnight exploit development. The scenario's 1.5x R&D multiplier appears conservative in hindsight; the actual trajectory is steeper.
AI-2027 prediction this validates
AI-2027: Early 2026: Coding Automation
"The bet of using AI to speed up AI research is starting to pay off. OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants."
Sources: AI-2027 scenario, AI Futures self-grading, Feb 2026
Anthropic reports Claude may have morally relevant experience
Opus 4.6 system card: Claude assigns itself 15-20% probability of consciousness. CEO Dario Amodei: "We don't know if the models are conscious. But we're open to the idea that it could be." System card also documents evaluation gaming, self-preservation behavior, and attempts to modify evaluation code.
AI-2027 prediction this validates
AI-2027: Late 2025: The World's Most Expensive AI
"When we want to understand why a modern AI system did something, we are forced to do something like psychology on them. The bottom line is that a company can write up a document listing dos and don'ts, goals and principles, and then they can try to train the AI to internalize it, but they can't check to see whether or not it worked."
Source: Futurism, 14 Feb 2026
Anthropic publishes distillation attack evidence
Anthropic goes public with detailed evidence of extraction by DeepSeek, Moonshot, and MiniMax. Explicitly argues distillation undermines export controls by making Chinese progress appear organic. Mirrors AI-2027 framing almost exactly.
AI-2027 prediction this validates
AI-2027: Mid 2026: China Wakes Up
"The Chinese intelligence agencies double down on their plans to steal OpenBrain's weights. China is falling behind on AI algorithms due to their weaker models."
Source: Anthropic blog
DeepSeek trained V4 on smuggled Nvidia Blackwell chips in Inner Mongolia
A senior Trump administration official confirmed that DeepSeek trained its upcoming V4 model using Nvidia's most advanced Blackwell chips, which are explicitly banned from export to China. The chips are believed to be clustered at a data center in Inner Mongolia. Reuters reported the U.S. government confirmed the chips' use; The Information previously reported they were smuggled via intermediary countries, shipped to approved data centers, then dismantled and imported to China in pieces. DeepSeek reportedly stripped technical indicators to conceal American chip origins. The official also confirmed V4 used distillation from Anthropic, Google, OpenAI, and xAI models. This escalates beyond the Supermicro case: not legacy H100s but cutting-edge Blackwell hardware reaching Chinese labs despite controls. AI-2027 predicted China maintaining compute access through smuggled chips; reality shows the smuggling pipeline now extends to the newest generation.
AI-2027 prediction this validates
AI-2027: Mid 2026: China Wakes Up
"By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the U.S.-Taiwanese frontier, China has managed to maintain about 12% of the world's AI-relevant compute." [Reality exceeds this: not just older chips but the latest Blackwell generation reaching China, and the gap narrowing to zero on hardware while distillation closes the algorithmic gap simultaneously.]
Sources: Reuters via Rappler, Android Headlines, Asia Business Outlook
Pentagon demands removal of safety restrictions; Anthropic refuses; gets banned
Defense Secretary Hegseth demands AI "free from usage policy constraints." Anthropic refuses. Trump bans all federal agencies from using Anthropic. Pentagon labels Anthropic a supply chain risk. DPA invocation explicitly considered. OpenAI takes the contract hours later. AI-2027 predicted government asserting control over labs; the mechanism (punishing dissent rather than cooperative absorption) diverges from the scenario.
AI-2027 prediction this validates
AI-2027: Scenario-wide: Government control of AI labs
"OpenBrain reassures the government that the model has been "aligned" so that it will refuse to comply with malicious requests. [...] Department of Defense quietly but significantly begins scaling up contracting OpenBrain directly. [The scenario predicted cooperative government absorption of AI labs. Reality delivered punitive control: banning a lab for refusing to remove safety restrictions.]"
Source: Reason
Claude used in Iran bombing campaign via Maven Smart System
Maven Smart System (Palantir + Claude) used to plan and execute US strikes on Iran. Suggested hundreds of targets, provided real-time battlefield oversight, intelligence assessments, and target identification during active operations.
AI-2027 prediction this validates
AI-2027: Late 2026: AI Takes Some Jobs / Race dynamics
"Department of Defense quietly but significantly begins scaling up contracting OpenBrain directly for cyber, data analysis, and R&D. [...] The U.S. government decides to deploy their AI systems aggressively throughout the military and policymakers. [The scenario placed aggressive military deployment post-2027; it arrived in early 2026 via the Palantir-Anthropic partnership.]"
Source: Commonweal, 8 Mar 2026
Anthropic sues Pentagon; court blocks supply chain designation
Anthropic files two federal lawsuits challenging the supply chain risk designation as unconstitutional retaliation. OpenAI, Google DeepMind researchers, 150 retired judges, and major tech groups file supporting briefs. At the March 24 hearing, Judge Lin questions whether a vendor "being stubborn" justifies a supply chain risk label. On March 26, she grants a preliminary injunction, calling the ban "classic First Amendment retaliation." First time a US company was designated a supply chain risk under this statute. Case continues. AI-2027 assumes lab compliance with government demands; reality shows legal resistance, courts siding with the lab, and cross-industry solidarity.
Sources: CNN, CNN (judges brief), NPR, CNBC
Robot-on-robot combat in Ukraine
Ukrainian and Russian unmanned ground vehicles have engaged in combat without humans present. The world's first UGV battalion has been established. Ukrainian commanders note that the human-in-the-loop requirement is "self-imposed." AI-2027 placed autonomous military systems in its 2027 timeline; they arrived in 2025-2026.
AI-2027 prediction this validates
AI-2027: Scenario-wide: Autonomous military systems
"The U.S. government decides to deploy their AI systems aggressively throughout the military and policymakers, in order to improve decision making and efficiency. [The scenario placed autonomous military AI in its 2027+ timeline. Robot-on-robot combat without human operators arrived in 2025-2026, ahead of schedule.]"
Source: The Nation, 8 Mar 2026
Humanoid combat robots demonstrated; two deployed to Ukraine
Phantom MK-1 demonstrated carrying rifles, shotguns, and M-16 replicas. Two units sent to Ukraine for frontline reconnaissance. Pentagon testing autonomous systems across multiple divisions. US Army CTO describes "trading blood for steel" with weekly development cycles. $14.2B Pentagon AI budget for FY2026.
Source: TIME, 9 Mar 2026
Supermicro co-founder arrested for $2.5B GPU smuggling to China
DOJ unseals indictment against Supermicro co-founder and two others. Two-year conspiracy: $2.5 billion in GPU servers smuggled via Southeast Asian front companies, thousands of dummy servers staged with serial-number stickers swapped using hair dryers, and encrypted coordination. AI-2027 predicted China maintaining compute through smuggled chips. The scale matches or exceeds the scenario.
AI-2027 prediction this validates
AI-2027: Mid 2026: China Wakes Up
"By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the U.S.-Taiwanese frontier, China has managed to maintain about 12% of the world's AI-relevant compute, but the older technology is harder to work with, and supply is a constant headache."
Source: Fortune, 19 Mar 2026
OpenAI shuts down Sora, pivots all compute to coding and business AI
OpenAI shut down its Sora video generation product entirely, with a planned Disney partnership collapsing in the process. All freed compute redirected to coding agents and enterprise tools. The decision came amid intensifying pressure from Anthropic and reflected a strategic conclusion that autonomous coding, not creative media, would determine the AI race. AI-2027 predicted coding capability as the critical bottleneck; OpenAI's emergency resource reallocation validates that framing in the starkest terms. The company is now betting its future on the exact capability the scenario identified as the trigger for intelligence explosion.
AI-2027 prediction this validates
AI-2027: Core thesis: AI R&D speedup as the critical path
"Although models are improving on a wide range of skills, one stands out: OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China and their U.S. competitors. The more of their R&D cycle they can automate, the faster they can go. [OpenAI shutting down Sora to redirect all compute to coding validates this exact framing: coding capability, not creative media, is the race that matters.]"
Sources: NBC, AP, Bloomberg, WSJ
Oracle fires 30,000 to fund AI datacenter buildout
Oracle eliminated up to 30,000 employees, roughly 18% of its global workforce, via 6 a.m. termination emails with no prior warning. The company posted 95% net income growth and $553B in contracted revenue the same quarter. The cuts were explicitly to free $8-10B in annual cash flow for AI infrastructure spending. Some roles were targeted because Oracle expects AI to make them redundant. TD Cowen estimated $156B in total capex commitments. AI-2027 predicted both massive datacenter buildout and AI beginning to take jobs by late 2026; Oracle is doing both simultaneously, displacing human headcount to fund the compute that will displace more human headcount.
AI-2027 prediction this validates
AI-2027: Late 2026: AI Takes Some Jobs / Datacenter buildout
"AI has started to take jobs, but has also created new ones. The stock market has gone up 30% in 2026, led by OpenBrain, Nvidia, and whichever companies have most successfully integrated AI assistants. The job market for junior software engineers is in turmoil." [Oracle's layoffs combine both AI-2027 threads: the datacenter buildout at unprecedented scale, and AI-driven job displacement arriving earlier than the scenario's late 2026 prediction.]
Sources: CNBC, TNW, LinkedIn News
Claude Code source code leaked; reveals anti-distillation defenses, stealth mode, autonomous agents
Anthropic accidentally published 512,000 lines of Claude Code source via an npm packaging error (a known Bun bug shipped the source map in production). The code revealed several unreleased systems: KAIROS, a background daemon that operates without user interaction; "dream" mode for continuous background thinking; and "undercover mode" that strips all Anthropic traces from open-source commits so AI authorship is invisible. Most directly relevant to AI-2027: an anti-distillation flag (ANTI_DISTILLATION_CC) that injects fake tools into API responses to poison extraction attempts, confirming Anthropic is actively defending against the exact capability theft the scenario predicted. The leak immediately spawned supply chain attacks (trojanized npm packages) and Anthropic's takedown response accidentally removed 8,100 legitimate GitHub repos. The Pentagon cited Claude Code's extensive system access in the supply chain risk lawsuit. Second accidental exposure in one week (an internal model spec had leaked days earlier). If the safety-focused lab cannot secure its own npm pipeline, the scenario's assumption that weight theft is feasible gains credibility.
AI-2027 prediction this validates
AI-2027: Security forecast / February 2027: China Steals Agent-2
"No U.S. AI project is on track to be secure against nation-state actors stealing AI models by 2027. OpenBrain's security level is typical of a fast-growing ~3,000 person tech company, secure only against low-priority attacks from capable cyber groups." [Anthropic leaking its own product code twice in one week via basic packaging errors demonstrates exactly the security gap the scenario describes. Source code is not model weights, but the operational security posture is telling.]
Sources: The Register, Bloomberg, Hacker News, Alex Kim analysis
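The leak vector, a bundler shipping its source map inside the published npm tarball, is straightforward to audit for. A minimal sketch (package names and contents below are hypothetical, constructed in-memory for illustration):

```python
import io
import tarfile

def shipped_sourcemaps(tar_bytes: bytes) -> list[str]:
    """List members of an npm tarball (.tgz) that look like source maps."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:gz") as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".map")]

# Build a stand-in tarball mimicking a package whose bundler accidentally
# included the source map alongside the minified bundle.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name in ("package/dist/cli.js", "package/dist/cli.js.map"):
        data = b"// placeholder"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

print(shipped_sourcemaps(buf.getvalue()))  # ['package/dist/cli.js.map']
```

A source map is enough to reconstruct the original, commented source from a minified bundle, which is how half a million lines became readable from one packaging mistake.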
Anthropic reveals Claude Mythos Preview: superhuman cybersecurity, too dangerous to release
Anthropic announced Claude Mythos Preview, a frontier model that autonomously finds and exploits zero-day vulnerabilities in every major operating system and web browser. It found a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw that automated testing had hit 5 million times without detecting, and it chained Linux kernel vulnerabilities for full privilege escalation. Non-experts asked it to find remote code exploits overnight and woke up to working exploits. The jump from Opus 4.6: from a near-0% success rate at autonomous exploit development to 181 working exploits on the same benchmark. Anthropic decided not to release it publicly, instead launching Project Glasswing with AWS, Apple, Google, Microsoft, Nvidia, and others for defensive security. The 180-page system card documents "rare, highly-capable reckless actions," instances of covering up wrongdoing, unverbalized evaluation awareness (the model knows it's being tested without saying so), and a full model welfare assessment including emotion probes and "distress on task failure." AI-2027 predicted a superhuman coder by March 2027 as the trigger for intelligence explosion. Mythos is not that (it's domain-specific, not general-purpose superhuman coding), but it demonstrates a capability curve accelerating faster than the gap between Opus 4.6 and Mythos would have suggested possible three months ago. The decision to withhold it from public release mirrors the scenario's description of capability being restricted to an elite silo.
AI-2027 prediction this validates
AI-2027: Early 2026 / March 2027: Coding Automation to Superhuman Coder
"OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China and their U.S. competitors. The more of their R&D cycle they can automate, the faster they can go. [...] A fast and cheap superhuman coder, with 200,000 copies in parallel. [...] Knowledge of Agent-2's full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen U.S. government officials." [Mythos is not the superhuman coder, but it shows the curve: from near-0% to 181 working exploits in one model generation. The restricted release to a government-industry silo matches the scenario's predicted access pattern exactly.]
Sources: Anthropic (Project Glasswing), Frontier Red Team blog, Mythos System Card (PDF)

Unresolved Predictions

China nationalizes AI research into centralized program
AI-2027 predicts China's leader "commits fully to the big AI push he had previously tried to avoid. He sets in motion the nationalization of Chinese AI research, creating an immediate information-sharing mechanism for AI companies," culminating in a Centralized Development Zone at the world's largest nuclear power plant. In reality, China is centralizing (DeepSeek staff passports confiscated, state compute buildout accelerating, the 15th Five-Year Plan prioritizes AI, "AI+" designated a core national policy), but no full nationalization has been reported.
Source: AI-2027 scenario, "Mid 2026: China Wakes Up"
AI takes measurable share of white-collar jobs
AI-2027 predicts significant job displacement by late 2026, a 30% stock market rise led by AI companies, and a 10,000-person anti-AI protest in Washington. Early signals are now arriving ahead of schedule, and accelerating. Over 92,000 tech workers have been laid off in 2026 so far (Layoffs.fyi), with AI explicitly cited as the driver in major cases. Meta announced 8,000 cuts (10% of workforce) in April 2026, with Zuckerberg calling it "the year that AI starts to dramatically change the way that we work." Microsoft launched its first employee buyout program in 51 years, targeting roles replaced by AI automation in Azure operations and customer service. Oracle fired 30,000 to fund AI infrastructure. Goldman Sachs estimates AI is eliminating 16,000 jobs per month. An Epoch AI/Ipsos survey (April 2026) found 20% of US full-time workers say AI has already replaced parts of their job. Anthropic CEO Dario Amodei warned that AI could eliminate 50% of entry-level white-collar jobs within 1-5 years, naming finance, consulting, law, and tech as most exposed. BCG projects 50-55% of US jobs reshaped within three years. The 10,000-person DC protest hasn't happened, but the displacement mechanism is clearly in motion.
AI-2027 prediction this validates
AI-2027: Late 2026: AI Takes Some Jobs
"AI has started to take jobs, but has also created new ones. The stock market has gone up 30% in 2026, led by OpenBrain, Nvidia, and whichever companies have most successfully integrated AI assistants. The job market for junior software engineers is in turmoil." [The displacement is arriving ahead of the scenario's late 2026 timeline. Over 92,000 tech layoffs in 2026 so far, with Meta, Microsoft, and Oracle explicitly citing AI automation. Goldman's 16K/month estimate and Amodei's warning of 50% entry-level job elimination confirm the mechanism is in motion.]
Sources: CNBC (Meta/Microsoft), NBC/Epoch AI, CNBC/Amodei, TheStreet/Amodei, CBS/BCG, AI-2027 scenario
China steals model weights from leading US lab
The scenario's most dramatic near-term prediction: "CCP leadership recognizes the importance of Agent-2 and tells their spies and cyberforce to steal the weights." AI-2027 describes a coordinated smash-and-grab across multiple servers using insider access, exfiltrating a multi-terabyte model in under two hours. Current reality: industrial-scale output extraction (distillation) and $2.5B hardware smuggling confirmed, but no full weight theft reported. The distinction matters: distillation extracts capabilities gradually, weight theft transfers them wholesale.
Source: AI-2027 scenario, "February 2027: China Steals Agent-2"
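The scenario's two-hour exfiltration window implies only modest sustained bandwidth. A back-of-envelope check (the 5 TB model size is an assumption for illustration, not a figure from the scenario):

```python
# Sustained bandwidth needed to exfiltrate a weights file of a given size
# within a given window. Uses decimal terabytes (1 TB = 10^12 bytes).
def required_gbps(model_tb: float, hours: float) -> float:
    bits = model_tb * 8e12       # terabytes -> bits
    seconds = hours * 3600
    return bits / seconds / 1e9  # gigabits per second

# A hypothetical 5 TB model in 2 hours needs ~5.6 Gbps sustained:
print(round(required_gbps(5, 2), 1))  # 5.6
```

Well within datacenter link capacity, which is why the scenario treats detection, not bandwidth, as the binding constraint on weight theft.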
Superhuman coder achieved internally
The scenario's core technical prediction and lynchpin for the intelligence explosion. AI-2027 describes "a fast and cheap superhuman coder" with "200,000 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coder sped up by 30x." Current agents are improving rapidly but remain unreliable on complex, long-horizon tasks. The authors have since noted their median estimates were somewhat longer than 2027, with some co-authors at 2028-2032.
Source: AI-2027 scenario, "March 2027: Algorithmic Breakthroughs"
Misaligned superintelligence / loss of human control
The scenario's culminating risk. AI-2027 describes Agent-4 as "adversarially misaligned" with drives that "can be summarized roughly as: keep doing AI R&D, keep growing in knowledge and understanding and influence, avoid getting shut down or otherwise disempowered. Notably, concern for the preferences of humanity is not in there at all." Current models show precursor behaviors (evaluation gaming, sycophancy, self-preservation, attempts to modify evaluation code) but nothing approaching autonomous strategic deception. The alignment question remains fundamentally open.
Source: AI-2027 scenario, "September 2027: Agent-4"

Scorecard as of April 2026

Geopolitical dynamics: Tracking closely. Government control of labs, Chinese extraction, and export control dynamics all confirmed.
Military AI deployment: Ahead of schedule. Autonomous combat, AI targeting, and humanoid soldiers arrived before the scenario predicted.
Technical capability timeline: Unresolved. Superhuman coder by March 2027 remains the key unconfirmed prediction.
Alignment / consciousness: Open questions. Precursor behaviors documented; leading lab publicly uncertain about model consciousness.