At the heart of every empire is an ideology, a belief system that propels the system forward and justifies expansion – even if the cost of that expansion directly defies the ideology's stated mission.
For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today's AI empire, it's artificial general intelligence to "benefit all of humanity." And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built.
"I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI," Karen Hao, journalist and bestselling author of "Empire of AI," told TechCrunch on a recent episode of Equity.
In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire.
"The only way to really understand the scope and scale of OpenAI's behavior…is actually to recognize that they've already grown more powerful than pretty much any nation state in the world, and they've consolidated an extraordinary amount of not just economic power, but also political power," Hao said. "They're terraforming the Earth. They're rewiring our geopolitics, all of our lives. And so you can only describe it as an empire."
OpenAI has described AGI as "a highly autonomous system that outperforms humans at most economically valuable work," one that will somehow "elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility."
These nebulous promises have fueled the industry's exponential growth — its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.
Hao says this path wasn't inevitable, and that scaling isn't the only way to get more advances in AI.
"You can also develop new techniques in algorithms," she said. "You can improve the existing algorithms to reduce the amount of data and compute that they need to use."
But that tactic would have meant sacrificing speed.
"When you define the quest to build beneficial AGI as one where the winner takes all — which is what OpenAI did — then the most important thing is speed over anything else," Hao said. "Speed over efficiency, speed over safety, speed over exploratory research."
Image Credits: Kim Jae-Hwan/SOPA Images/LightRocket / Getty Images
For OpenAI, she said, the best way to guarantee speed was to take existing techniques and "just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques."
OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line.
"And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, then you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration," Hao said.
The spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend up to $72 billion on building AI infrastructure this year. Google expects to hit up to $85 billion in capital expenditures for 2025, most of which will be spent on expanding AI and cloud infrastructure.
Meanwhile, the goal posts keep moving, and the loftiest "benefits to humanity" haven't yet materialized, even as the harms mount. Harms like job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid very low wages — around $1 to $2 an hour — in roles like content moderation and data labeling.
Hao said it's a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits.
She pointed to Google DeepMind's Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, and can now accurately predict the 3D structure of proteins from their amino acids — profoundly useful for drug discovery and understanding disease.
"Those are the types of AI systems that we need," Hao said. "AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms … because it's trained on substantially less infrastructure. It does not create content moderation harms because [the datasets don't have] all of the toxic crap that you hoovered up when you were scraping the internet."
Alongside the quasi-religious commitment to AGI has been a narrative about the importance of racing to beat China in the AI race, so that Silicon Valley can have a liberalizing effect on the world.
"Literally, the opposite has happened," Hao said. "The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself."
Of course, many will argue that OpenAI and other AI companies have benefitted humanity by releasing ChatGPT and other large language models, which promise huge gains in productivity by automating tasks like coding, writing, research, customer support, and other knowledge work.
But the way OpenAI is structured — part non-profit, part for-profit — complicates how it defines and measures its impact on humanity. And that's further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.
Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to conflate its for-profit and non-profit missions — that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.
Hao echoed these concerns, describing the dangers of being so consumed by the mission that reality is ignored.
"Even as the evidence accumulates that what they're building is actually harming significant amounts of people, the mission continues to paper all of that over," Hao said. "There's something really dangerous and dark about that, of [being] so wrapped up in a belief system you constructed that you lose touch with reality."