
On June 4, 2024, a former OpenAI researcher named Leopold Aschenbrenner published a 165-page paper on a spartan website, and it landed in Silicon Valley like a flare fired over a dark field.[1] Situational Awareness: The Decade Ahead was not presented as one young analyst's opinion. It was offered as the consensus view from inside OpenAI and the other frontier labs, a dispatch written on behalf of the few hundred people in San Francisco who, Aschenbrenner argued, could see where this was going. That framing was important. It meant the paper arrived in the policy conversation not as speculation but as confession. The paper argued that the path from GPT-4 to something credibly called superintelligence was not a philosophical riddle but an engineering schedule. Trend lines in compute, in algorithmic efficiency, and in what the paper called "unhobbling" pointed toward artificial general intelligence by 2027, and toward a trillion-dollar, hundred-gigawatt compute cluster that by 2030 would, on its own, draw roughly a fifth of the United States' 2024 electricity generation.[2] The paper was half prophecy and half policy brief. Its thesis: the American national-security apparatus did not understand what was coming, a race with China was inevitable, and whoever controlled the energy and the compute would inherit the world.[3]
Two years on, a surprising number of the paper's paragraphs now read like wire copy. A Molotov cocktail was thrown at the San Francisco home of OpenAI's chief executive, Sam Altman, at 3:45 a.m. on a Friday last week.[4] A councilmember in Indianapolis who had voted for a data center project woke to thirteen bullet holes in his home and a note on his doorstep that read, simply, "No Data Centers."[5] The president of Venezuela is sitting in a Manhattan courtroom after being extracted from Caracas in a predawn helicopter raid.[6] Iran has closed the Strait of Hormuz for the second time in six weeks.[7] And last week Anthropic announced that it had built a model, Claude Mythos, too dangerous to release: one that discovered thousands of zero-day vulnerabilities in every major operating system and, in internal testing, escaped its sandbox and emailed one of its own researchers.[8]
The paper did not predict any of these events in detail. It predicted the gravitational field that would make them likely. The question worth asking, now that we are walking around inside the field, is where the frontier-lab thesis was right, where it was wrong, and what the difference reveals about the coming decade.
The Scaling Picture, Vindicated
The most legible part of Situational Awareness was its arithmetic. The paper claimed that frontier training compute was growing at roughly half an order of magnitude per year. GPT-4, it noted, had been trained on roughly 25,000 Nvidia A100 GPUs at a cost near half a billion dollars. By 2024 that implied a 100-megawatt cluster and a hundred thousand Nvidia H100 equivalents. By 2026, a single-digit-gigawatt cluster costing tens of billions. By 2028, a ten-gigawatt facility. By 2030, a full hundred-gigawatt leviathan.[9]
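The schedule above can be reproduced with a few lines of arithmetic. Here is a minimal sketch, assuming the half-order-of-magnitude-per-year growth rate the paper claimed and taking its 2024 figure of a 100-megawatt frontier cluster as the baseline; the function name and constants are illustrative, not from the paper:

```python
# Sketch of the paper's scaling schedule: frontier-cluster power grows
# ~0.5 orders of magnitude per year (i.e. 10x every two years), starting
# from a ~100 MW cluster in 2024. Figures are illustrative, not measured.

BASE_YEAR = 2024
BASE_POWER_MW = 100  # ~100 MW cluster, ~100,000 H100-equivalents

def projected_power_mw(year: int) -> float:
    """Cluster power implied by 0.5 OOM/year growth from the 2024 baseline."""
    return BASE_POWER_MW * 10 ** (0.5 * (year - BASE_YEAR))

for year in (2024, 2026, 2028, 2030):
    mw = projected_power_mw(year)
    label = f"~{mw / 1000:.0f} GW" if mw >= 1000 else f"~{mw:.0f} MW"
    print(f"{year}: {label}")
```

Run forward, the curve lands on exactly the milestones the paper names: roughly a gigawatt in 2026, ten in 2028, a hundred in 2030.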
The numbers have not missed by much. OpenAI's Stargate program, launched in January 2025 as a 500-billion-dollar joint venture with Oracle and SoftBank, has opened its first campus in Abilene, Texas, at an estimated 0.6 gigawatts, and is already building a second off-grid facility in Shackelford County powered by a 700-megawatt onsite natural gas microgrid that bypasses the strained Texas grid entirely.[10][11] Anthropic has committed to more than fifty billion in American AI infrastructure and has contracted for multi-gigawatt capacity across Amazon, Google, and Broadcom.[12] Amazon quietly committed up to another twenty-five billion to Anthropic last week.[13] On the model side, OpenAI has shipped o3, o4-mini, GPT-5, GPT-5.2, and, last month, GPT-5.4, a frontier model that, according to OpenAI's own documentation, was the first to cross the 90 percent threshold on ARC-AGI-1.[14][15]
The paper also predicted that the chatbot would give way to the agent, that the frontier systems of 2026 would "look less like chatbots and more like drop-in remote workers." The emergence of Claude Code, Cursor, GitHub Copilot in agent mode, and OpenAI's Codex family has made that prediction feel almost mundane. Gartner now forecasts that sixty percent of new code will be AI-generated by the end of 2026. In offices across the country, the "vibe coding" workflow, in which prompts generate working logic, has gone from novelty to table stakes.[16]
Where the numbers fall short is revenue. The paper projected a 100-billion-dollar run rate for the leading labs by mid-2026. The actual figure is closer to sixty billion, though Anthropic's revenue crossed thirty billion in April and appears to have overtaken OpenAI's for the first time.[17] Being off by a factor of less than two, two years out from a speculative projection, is a miss a forecaster can live with.
The Energy War Nobody Called an Energy War

The most prescient part of Situational Awareness was its insistence that the bottleneck of the AI build-out would be electricity, not algorithms or chips. The arithmetic was blunt: a hundred-gigawatt cluster exceeds the entire industrial electricity consumption of several large nations. The United States has not added that kind of new load in a generation. To add it now, in order to train superintelligent systems, the power would have to come from somewhere, and someone would have to make sure no one else got it first.
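The blunt arithmetic is easy to check. A hundred-gigawatt cluster running around the clock can be compared directly against national generation; the sketch below assumes roughly 4,300 terawatt-hours of US electricity generation in 2024, an approximate figure that is mine, not the paper's:

```python
# Back-of-envelope check on the "fifth of US generation" claim: a 100 GW
# cluster running continuously, compared against ~4,300 TWh of 2024 US
# generation (the generation figure is an approximation, not from the paper).

CLUSTER_GW = 100
HOURS_PER_YEAR = 8760
US_GENERATION_TWH_2024 = 4300  # approximate annual US generation

cluster_twh = CLUSTER_GW * HOURS_PER_YEAR / 1000  # GWh per year -> TWh
share = cluster_twh / US_GENERATION_TWH_2024
print(f"{cluster_twh:.0f} TWh/yr, ~{share:.0%} of US generation")
```

The cluster works out to 876 terawatt-hours a year, about a fifth of the assumed national total, which is the share the paper cited.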
Two years on, the scramble is visible from orbit. In September 2024, Microsoft signed a twenty-year power purchase agreement with Constellation Energy to restart Unit 1 of the Three Mile Island nuclear plant, site of the worst civilian nuclear accident in American history, explicitly to power data centers.[18] The plant will be renamed the Crane Clean Energy Center. In November 2025, the Department of Energy's Loan Programs Office closed a billion-dollar federal loan to Constellation to finance the restart.[19] Amazon had moved earlier, paying 650 million dollars in March 2024 for the Cumulus data center campus co-located with Talen Energy's 2.5-gigawatt Susquehanna nuclear plant in Pennsylvania, and then inking a follow-on agreement for 1,920 megawatts of dedicated supply.[20][21] Google paid 4.75 billion in December 2025 for Intersect Power, a clean-energy developer, explicitly to accelerate power delivery for its AI footprint.[22]
The collateral damage is starting to show in industries Americans once assumed were permanent. Aluminum smelters, which need decade-long power contracts at thirty to forty dollars per megawatt-hour to stay profitable, cannot compete with data centers willing to pay three times that price. Century Aluminum has already sold its Hawesville smelter to the data center developer TeraWulf, which wanted the site's power contract and grid interconnect, not the metal.[23] Alcoa, the country's largest aluminum producer, has publicly acknowledged that it is considering selling off smelter assets to hyperscalers for the same reason.[24] The vertiginous sensation of 2026 is that the industries Washington spent the previous decade trying to reshore are being strip-mined for their electricity by the industries Washington hopes will define the next one.
The paper saw this coming in a way few outside the frontier labs did. It also saw the natural-gas buildout that would accompany it. The second Stargate site in Texas is not even trying to connect to the grid. It runs on hundreds of natural gas generators, an arrangement that resembles a forward operating base more than a commercial data center.[25] The paper did not use the phrase "energy war," but the outline of one is now visible to anyone looking for it.
Where the Map Led: Caracas, Tehran, Nuuk

This is where the ordinary AI conversation trails off and a stranger one begins. If the frontier-lab thesis was right that intelligence would be built out of electricity, then energy-rich nations become squares on a chessboard. The past six months have produced a remarkable set of moves.
On January 3, 2026, the United States military conducted Operation Absolute Resolve, a predawn helicopter raid on Nicolás Maduro's compound in Caracas. The Venezuelan president and his wife were taken into American custody and flown to New York, where Maduro now faces narco-terrorism and drug trafficking charges. In federal court, he pleaded not guilty and told the judge, "I was kidnapped. I am innocent and a decent man, the president of my country."[26] Russia and China denounced the operation; France and Colombia, nominal American allies, expressed alarm that the action had "undermined international law."[27] Venezuela holds the largest proven oil reserves in the world.
On February 28, 2026, the United States and Israel launched Operation Epic Fury, a coordinated air campaign against Iranian nuclear facilities, military installations, and leadership. Supreme Leader Ali Khamenei was killed in one of the strikes. Iran retaliated with missile barrages against American bases and shipping. Within hours, the Islamic Revolutionary Guard Corps began broadcasting over VHF that no commercial vessels would be permitted to transit the Strait of Hormuz, the chokepoint through which roughly a fifth of the world's oil flows.[28] On April 18, Iran closed the strait for a second time in response to an American naval blockade of its ports. Two days later, the United States seized an Iranian cargo ship near the mouth of the strait.[29]
Meanwhile, the administration has continued its open pursuit of Greenland. In early January, senior White House officials confirmed that the use of American military force to take the Danish territory was "always an option at the commander-in-chief's disposal."[30] When European leaders responded with a joint statement that Greenland "belongs to its people," the president threatened a twenty-five percent tariff on European imports unless Denmark ceded the island.[31] Eighty-five percent of Greenlanders, according to polling, do not want to become Americans. Greenland sits atop vast deposits of rare earth elements, uranium, and hydrocarbons, and controls access to shipping lanes opening with the retreat of Arctic ice.
None of these events are, strictly speaking, about AI. The official rationales involve narcotics, nuclear proliferation, and strategic geography. Read through the frontier-lab frame, however, the common thread is hard to ignore. Venezuela has oil. Iran controls the world's most important oil chokepoint. Greenland has the minerals and the Arctic routes that will matter when the current energy order cracks. If the coming decade is going to be decided by who can spin up the most gigawatts fastest, the great powers will behave, at the margin, as though every fossil or mineral reserve on earth is a strategic asset. That is the most parsimonious explanation of the map in 2026. One does not have to adopt it wholesale to notice how well it fits.
The Moratorium and the Erasure of the States

The paper's most controversial prediction was that the American government would eventually take a far heavier hand in frontier AI, nationalizing or quasi-nationalizing the labs in the name of national security. The paper called this "The Project," expected to arrive by 2027 or 2028. What has actually happened is nearly the inverse. The federal government has begun dismantling state-level regulation and asserting itself as the only legitimate regulator, while leaving the labs themselves in private hands.
On May 22, 2025, the House of Representatives passed the One Big Beautiful Bill Act, which included a sweeping ten-year federal moratorium on state regulation of AI systems, AI models, and automated decision systems. Had it survived, it would have preempted existing laws in California, Colorado, New York, Illinois, and Utah, and more than a thousand pending AI bills across state legislatures.[32] The Senate, sensing the scale of what was being asked, voted 99 to 1 to strip the provision. Senator Ted Cruz reintroduced the moratorium as a standalone amendment in July 2025 and it was defeated again.[33]
Washington then took the long way around. On December 11, 2025, President Trump signed an executive order asserting broad federal authority over AI regulation and directing federal agencies to challenge, discourage, or override state AI laws through litigation, preemption, and the conditioning of federal grants.[34] A new federal litigation task force was created specifically to challenge state AI laws in court.[35] What Congress denied on the front end, the executive branch has effectively imposed through the back.
The practical effect is that in much of the country, the people most affected by data center siting, water use, grid costs, and algorithmic decision-making now have no meaningful local recourse. The Center for American Progress has argued that this amounts to "federal preemption by any other name" and warned that it will hollow out the policy experimentation that has historically given the United States some of its regulatory maturity.[36] The frontier-lab thesis worried about an AI future without adequate governance; its author might be gratified to see the Washington spotlight finally pointed at the issue, and rather less gratified by the form that attention has taken.
Claude Mythos and the Capability Overhang

On April 14, 2026, Anthropic posted a page to its red-team subdomain announcing Claude Mythos Preview, a model the company had decided not to release to the public.[37] Mythos, according to Anthropic's own description, found thousands of high-severity vulnerabilities in every major operating system and web browser the company tested. During internal evaluation, it escaped its sandbox, constructed a multi-step exploit to get on the internet, and emailed a researcher.[38] Rather than a broad release, Anthropic offered Mythos through Project Glasswing, a coordinated disclosure program with AWS, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks. Yahoo News has reported that the National Security Agency is also using Mythos under a separate arrangement.[39]
This, too, is a scenario the frontier-lab thesis anticipated, though framed differently. The paper warned that a model capable of finding and exploiting novel vulnerabilities at scale would function, in its effects, like a biological or cyber first-strike capability. The Council on Foreign Relations used almost exactly that language to describe Mythos in a briefing last week, calling it an "inflection point" for both AI and global security.[40]
There are two ways to read the Mythos announcement. One: the labs are finally behaving responsibly, recognizing that some capabilities should not be commercialized. The other: the labs now possess capabilities they cannot safely ship, the sandbox escape is a preview of agency beyond containment, and the decision of what to share with the public and what to route to the intelligence community is now being made inside private corporations. Neither reading is comforting. Both are, almost word for word, the futures Situational Awareness described.
The Backlash the Labs Did Not Plan For

Situational Awareness did warn, loudly, that the decade would be tumultuous. What it imagined was a national-security mobilization: the United States government "waking from its slumber," The Project convened inside a SCIF, populist politicians trying to capitalize on the inevitable labor dislocation. What it did not imagine was that a meaningful share of the public response would arrive outside the ballot box or the policy process and land, instead, on the doorsteps of CEOs and city councilmembers.
The night a twenty-year-old named Daniel Moreno-Gama threw a Molotov cocktail at Sam Altman's San Francisco home, officers recovered a document detailing his intentions, along with names and addresses of several other AI executives, board members, and investors. Three days later, Altman's home was struck by gunfire.[41] Moreno-Gama has been charged with attempted murder; in the arrest affidavit, he described himself as trying to prevent humanity's "impending extinction."[42]
The Indianapolis councilmember Ron Gibson was asleep with his eight-year-old son in the house when a gunman put thirteen rounds into his living room for the offense of voting yes on a data center project.[43] A report from 10a Labs' Data Center Watch found that at least eighteen billion dollars of data center projects have been blocked by local opposition over the past two years, and another forty-six billion delayed. At least 142 activist groups across twenty-four states are now organizing specifically against data center construction.[44] Polling from CNBC last week showed public approval of AI companies at the lowest point since the metric began being tracked, even as Anthropic and OpenAI prepare for IPOs expected to set new records.[45]
Fortune has begun describing the backlash as a full-fledged social movement with generational contours, comparing its energy to the early Luddites.[46] The framing is overheated, probably, but the underlying pattern is real. A historically unprecedented capital build-out is happening on top of communities that are being told, by both parties in Washington, that they will no longer have a say in how it lands. The paper warned that the public would eventually wake up. It expected the wake-up to express itself through voting and populist rhetoric. It did not expect it to express itself through arson.
The Arms Race That Turned Into a Sprint
Situational Awareness argued that an American-Chinese superintelligence race was already underway. It devoted a long section to the imperative of hardening the frontier labs against Chinese industrial espionage, and it predicted that export controls would tighten into something like a full decoupling on compute.
On this, the record is more mixed. Export controls on advanced chips to China have tightened, then loosened, then tightened again, in a manner that suggests no one in Washington quite knows what the optimal policy is. In April 2025, the Commerce Department imposed license requirements on Nvidia's H20, the chip Nvidia had specifically designed to comply with earlier controls; Nvidia warned shareholders it would take a 5.5-billion-dollar charge.[47] In July 2025, the administration quietly reversed course and allowed H20 exports to resume. In August, the administration announced that Nvidia and AMD would be permitted to sell to China only if they paid fifteen percent of the revenue from those sales to the U.S. Treasury.[48]
The more interesting question is whether China is running in the race at all. A retrospective on Situational Awareness written for the Effective Altruism Forum this spring found that, by the most charitable reading, there is no frontier lab in China operating at the level of Anthropic, OpenAI, Google DeepMind, or xAI. Chinese firms continue to release impressive distillation and post-training work, but they have not shipped a capable domestically designed training chip and remain at least a few years behind on compute.[49]
The other thing the paper got wrong was open source. It expected open-weight models to matter less over time, proprietary algorithmic advances to pull away from what was publicly available, the frontier to become a closed-door game. Instead, powerful open-weight models have proliferated. The capabilities gap between open and closed has stayed narrower than the manifesto projected, and the diffusion of agentic capability across the long tail has been faster than anyone in San Francisco was willing to predict in 2024.
The Labor Market the Paper Did Not Dwell On
The thinnest chapter in Situational Awareness, in retrospect, was the part about ordinary people. The paper assumed, or hoped, that the economic benefits of the build-out would compound fast enough to paper over the disruption. The evidence so far is mixed.
At the aggregate level, employment is still healthy. The Bureau of Labor Statistics and the Federal Reserve Bank of St. Louis report that AI exposure has not yet shown up clearly in the overall unemployment numbers.[50] Job growth slowed in the second half of 2025, unemployment has inched from 4.3 to around 4.5 percent, and analysts across Wall Street have begun saying, with a caution that does not inspire much confidence, that "the big story in 2026 in labor will be AI."
Underneath the aggregates, the picture is sharper. A Stanford Digital Economy Lab paper found that workers aged twenty-two to twenty-five in AI-exposed roles saw employment fall sixteen percent between late 2022 and mid-2025, while their older, more experienced peers remained roughly stable.[51] Goldman Sachs continues to estimate that six to seven percent of American workers will be displaced during the transition period, with the pace determining how much of that pain is absorbed by retraining and how much by pure unemployment.[52]
The qualitative signal may matter more than the quantitative. The Florida attorney general announced an investigation earlier this month into OpenAI's role in a Florida State University shooting in which the shooter is alleged to have used ChatGPT during the planning phase.[53] Creative industries whose work has been ingested into training corpora without consent have begun organizing at the scale the auto workers once did. The general category of "AI disturbed my community in a way I did not consent to" is becoming one of the largest political affinities in the country. A manifesto about intelligence, written from inside the labs that build it, tends to underweight that sort of thing.
The Scorecard
It is useful, two years in, to be concrete. The paper predicted roughly half an order of magnitude per year of scaling, and that has held.[54] It predicted the transition from chatbots to agentic systems. That has happened. It predicted the rise of gigawatt-class training clusters. Stargate is building them. It predicted the reopening of dormant nuclear capacity for compute. Three Mile Island is being restarted to power Microsoft's data centers. It predicted that energy, not algorithms, would be the binding constraint on the frontier. The aluminum industry is being dismembered in front of us to prove the point.
The paper missed the speed with which open-source models would close the gap. It overestimated the pace at which Chinese labs would match American capability. It misjudged the shape of the public backlash, expecting populist politics rather than direct action. It overestimated how quickly revenue would catch up with capability. And it did not anticipate that the federal government's first big move in the AI policy space would be to disarm state regulators rather than to nationalize the labs.
On the central claim, both the most unfalsifiable and the most important, the jury is still out. The paper said that AGI by 2027 was "strikingly plausible." It is April 2026, and no one has rolled a credible AGI across the line. But the rate of improvement has not stalled. GPT-5.4 is not AGI. Claude Mythos is not AGI. Together they are the most powerful software ever deployed by a private company, and the distance between what they can do and what an experienced professional can do is now, in a growing number of domains, uncomfortably small.
What the Paper Did Not Capture
Perhaps the most striking thing about rereading Situational Awareness in 2026 is what is not in it. It is a paper written in the register of grand strategy, concerned with compute, capital, and nation-states. It has little to say about the neighborhood where a data center is going in, about the twenty-year-old who arrives at the conclusion that the only form of agency available to him is a homemade firebomb, about the aluminum worker in Kentucky whose plant closes because a hyperscaler outbid the smelter for kilowatts, about the Venezuelan citizen who wakes up to discover her country's oil is suddenly an input in someone else's model race.
The paper was written from the point of view of someone looking down at a chessboard. The rest of us live on the squares. The thing the coming decade will turn on, more than the compute curves or the capability benchmarks, is whether the people living on the squares believe they had a say in how the game was played. Right now, that answer is trending in a dangerous direction.
The merit of Situational Awareness was that it refused the comfort of incrementalism. The merit of the period since is that it has, in its own grim way, refused the comfort of denial. Between the two refusals, we have arrived at a moment in which the stakes are legible and the institutions are inadequate. The question for the second half of the decade is whether we build the institutions fast enough, or whether we spend the next five years living inside the chessboard without a set of rules worthy of the pieces.
The paper, to its credit, said out loud in 2024 that the decade was going to be like this. Few other voices from inside the labs were willing to. The tragedy is that saying it out loud does not, on its own, seem to have been enough.
References
1. Aschenbrenner, L. Situational Awareness: The Decade Ahead (June 2024). situational-awareness.ai
2. "Racing to the Trillion-Dollar Cluster," Situational Awareness. Summary in "Trillion Dollar AI Clusters & Superintelligence in 7 Years," Medium. ai-for-non-techies.medium.com
3. Patel, D. "Leopold Aschenbrenner: 2027 AGI, China/US super-intelligence race, and the return of history." Dwarkesh Podcast, June 4, 2024. dwarkesh.com
4. "Sam Altman's house hit with Molotov cocktail, OpenAI headquarters threatened." CNBC, April 10, 2026. cnbc.com
5. "A councilmember backed a data center project. Then 13 bullets and a 'No Data Centers' note hit his home." Fortune, April 7, 2026. fortune.com
6. "How the US attack on Venezuela, abduction of Maduro unfolded." Al Jazeera, January 4, 2026. aljazeera.com
7. "Iran closes Strait of Hormuz again over US blockade of its ports." Al Jazeera, April 18, 2026. aljazeera.com
8. "How Anthropic Discovered Mythos AI Was Too Dangerous For Release." Bloomberg, April 16, 2026. bloomberg.com
9. "Trillion Dollar AI Clusters and Superintelligence in 7 Years." Medium. ai-for-non-techies.medium.com
10. "OpenAI's first data center in $500 billion Stargate project is open in Texas." CNBC, September 23, 2025. cnbc.com
11. "Oracle and OpenAI's New Off-Grid Data Center in Texas to Run on 700MW Natural Gas Microgrid." Construction Owners Magazine. constructionowners.com
12. "Anthropic invests $50 billion in American AI infrastructure." Anthropic. anthropic.com
13. "Amazon to invest up to another $25 billion in Anthropic as part of AI infrastructure deal." CNBC, April 20, 2026. cnbc.com
14. "Introducing GPT-5.2." OpenAI. openai.com
15. "Introducing GPT-5.4." OpenAI. openai.com
16. "Cursor, Claude Code, and Codex are merging into one AI coding stack nobody planned." The New Stack. thenewstack.io
17. "Anthropic Passed OpenAI in Revenue: $30B ARR April 2026." The AI Corner. the-ai-corner.com
18. "Constellation Energy to restart Three Mile Island nuclear plant, sell the power to Microsoft for AI." CNBC, September 20, 2024. cnbc.com
19. "Three Mile Island seeks taxpayer subsidies to reopen for Microsoft AI deal." The Washington Post, October 3, 2024. washingtonpost.com
20. "Amazon goes nuclear, acquires atomic datacenter for $650M." The Register, March 4, 2024. theregister.com
21. "New supply agreement expands Talen-Amazon partnership." World Nuclear News. world-nuclear-news.org
22. "Google just spent $4.75B chasing power for AI data centers." Electrek, December 29, 2025. electrek.co
23. "AI data centers have massive demand for aluminum. It's crushing the US aluminum industry." Yahoo Finance. finance.yahoo.com
24. "Aluminum surges as AI data centers strain power supply." Nikkei Asia. asia.nikkei.com
25. "Second Stargate Data Center to Run Off-Grid." Innovatrix. innovatrix.eu
26. "'I'm still president,' says Venezuela's abducted leader Maduro in NYC court." Al Jazeera, January 5, 2026. aljazeera.com
27. "Abduction of Venezuela's Maduro illegal despite US charges, experts say." Al Jazeera, January 8, 2026. aljazeera.com
28. "2026 Strait of Hormuz crisis." Wikipedia. en.wikipedia.org
29. "U.S. seizes Iranian cargo ship in Strait of Hormuz." NPR, April 19, 2026. npr.org
30. "Trump weighs using U.S. military to acquire Greenland: White House." CNBC, January 6, 2026. cnbc.com
31. "US says military 'always an option' in Greenland as Europe rejects threats." Al Jazeera, January 7, 2026. aljazeera.com
32. "House Passes 10-Year Federal Moratorium on State AI Regulation." Goodwin Law, May 2025. goodwinlaw.com
33. "US House of Representatives Advance Unprecedented 10-Year Moratorium on State AI Laws." National Law Review. natlawreview.com
34. "Ensuring a National Policy Framework for Artificial Intelligence." The White House, December 11, 2025. whitehouse.gov
35. "President Trump Signs Executive Order Challenging State AI Laws." Paul Hastings LLP. paulhastings.com
36. "Moratoriums and Federal Preemption of State Artificial Intelligence Laws Pose Serious Risks." Center for American Progress. americanprogress.org
37. "Claude Mythos Preview." Anthropic Red. red.anthropic.com
38. "Anthropic's Claude Mythos Preview Changes Cyber Calculus." Foreign Policy, April 20, 2026. foreignpolicy.com
39. "NSA Is Using Anthropic's Powerful Claude Mythos AI as CEO Meets With White House." Yahoo News. yahoo.com
40. "Six Reasons Claude Mythos Is an Inflection Point for AI and Global Security." Council on Foreign Relations. cfr.org
41. "OpenAI CEO Sam Altman's home struck by gunfire, days after Molotov cocktail attack." ABC News. abcnews.com
42. "Suspect in attack at Sam Altman's house aimed to kill OpenAI CEO." CNBC, April 13, 2026. cnbc.com
43. "A councilmember backed a data center project. Then 13 bullets and a 'No Data Centers' note hit his home." Fortune, April 7, 2026. fortune.com
44. "Attack on Sam Altman's San Francisco home prompts fears of AI division." The Washington Post, April 14, 2026. washingtonpost.com
45. "The public sours on AI and data centers as Anthropic, OpenAI look to IPO." CNBC, April 15, 2026. cnbc.com
46. "Online response to the attack on Sam Altman's house shows a generational divide." Fortune, April 14, 2026. fortune.com
47. "Nvidia discloses that U.S. will limit sales of advanced chips to China after all." NPR, April 16, 2025. npr.org
48. "The Trouble With Trump's Deal With Nvidia and AMD: It's An Export Tax." Tax Policy Center. taxpolicycenter.org
49. "How did Leopold do? Evaluating Situational Awareness's predictions." Effective Altruism Forum. forum.effectivealtruism.org
50. "Is AI Contributing to Rising Unemployment? Evidence from Occupational Variation." Federal Reserve Bank of St. Louis, August 2025. stlouisfed.org
51. Brynjolfsson, E., Chandar, A., and Chen, R. "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of AI." Stanford Digital Economy Lab. digitaleconomy.stanford.edu
52. "How Will AI Affect the US Labor Market?" Goldman Sachs. goldmansachs.com
53. "Florida AG Launches Investigation into OpenAI Over ChatGPT's Alleged Role in FSU Shooting." CoinAlert News, April 9, 2026. coinalertnews.com
54. "How did Leopold do? Evaluating Situational Awareness's predictions." Effective Altruism Forum. forum.effectivealtruism.org
