Karen Hao’s recent book on OpenAI argues that the company’s ambitions are not merely about scale or the market but about the creation of a global force that rivals the colonial powers of old.
OpenAI CEO Sam Altman speaks during a talk session with SoftBank Group CEO Masayoshi Son at an event titled “Transforming Business through AI” in Tokyo, Japan, 2025. (Tomohiro Ohsumi / Getty Images)
In the opening pages of Empire of AI, Karen Hao writes that “this is not a corporate book.” For readers hungry for an insider’s view of OpenAI, though, there is plenty to chew on. Hao pulls together a meticulous history of the company and its most headline-making dramas. That includes the splintering off of key executives and researchers to found the rival AI company Anthropic; the internal scramble to scale up ChatGPT as it became the fastest-growing consumer application in Silicon Valley history; and the brief but dramatic ousting of CEO Sam Altman by the OpenAI board in 2023.
Hao’s reporting makes for a forensic and comprehensive look at the company: She interviews more than 90 current and former OpenAI executives and employees, and these conversations are bolstered by company memos, Slack messages, and interviews with dozens of competitors and critics across the AI industry. Hao captures corporate tribalism, the ills of founders’ syndrome, and start-up-culture self-parody in vivid detail. Not once but twice, she recounts corporate retreats where former OpenAI chief scientist Ilya Sutskever burned a wooden effigy of a deceitful AI superintelligence.
Hao is right, in one sense, to say that Empire of AI is not a “corporate book,” given how little direct access to the company she received. She writes that OpenAI’s communications team rescinded an invitation to interview employees at its San Francisco headquarters. This was not the first time that Hao had been rebuffed by the company. After she wrote the first major profile of OpenAI in 2019 for MIT Technology Review, the company refused to speak with her for three years. At the time, OpenAI was still presenting itself as an idealistic and transparent research nonprofit, even though Hao’s blistering profile anticipated its creep toward commercialization and anticompetitive practices.
With new details on the company’s inner workings and a character study of Altman in particular, Empire of AI checks all the boxes of a conventional Silicon Valley page-turner. Beneath the corporate history, though, is a more ambitious project, one seeded in Hao’s earlier reporting on the company. In her eyes, OpenAI’s ambitions are imperial and its ways of doing business have reenacted the structure of European empires. Hao argues that OpenAI uses altruistic and utopian rhetoric—positing a societal abundance that will come about through increasing automation and superintelligence—to justify its rapid growth. And she finds that in achieving that growth, OpenAI has exploited environmental resources and human labor, taking from the global majority to consolidate the wealth of a small number of companies and individuals in the United States. “OpenAI is now leading our acceleration towards this modern-day colonial world order,” she writes.
Hao states this assertion boldly and argues it compellingly, although it’s often relegated to the background as she returns to episodes of corporate intrigue at OpenAI’s offices in San Francisco. At their strongest, Hao’s claims are rooted in both place and people, backed by her reporting on AI developments in half a dozen countries. In Colombia, Hao profiles a refugee fighting for pennies on gig-work platforms while labeling the data used to train generative AI models. In Chile and Uruguay, she introduces us to activists fighting off proposed data centers, which threaten to usurp land and siphon potable water from communities facing drought. In Kenya, she meets an outsourced worker who was paid just a couple of dollars an hour to filter through the most violent and toxic content produced by ChatGPT.
Hao’s reporting strips away the veneer of total automation that has been lathered onto generative AI products. She lays bare the precarious human labor and vital natural resources that power these products, and shows how the rise of the AI industry in Silicon Valley has impacted communities throughout its global supply chain. She also goes one step further, proposing a diagnosis for what ails OpenAI and the ethos that has infected the industry at large. And she names it outright: the pursuit of scale.
Hao paints scaling as a core tenet of the company’s business model and Altman’s leadership agenda. OpenAI has hinged its success on exponential, unencumbered growth. This hunger for larger models—and, in turn, more data, more water, and more land to develop them—echoes the expansionist ideologies that underlay European imperial projects. And like those projects, OpenAI’s expansion has come at a human cost.
OpenAI is hardly the first Silicon Valley company to resemble a colonial power. And while Hao spends little time drawing parallels to this growth in recent history, Silicon Valley’s denizens have long prayed at the temple of scale. With Google and Meta setting the pace, consumer technology companies over the past two decades have chased endless user acquisition and “emerging market” penetration. Whether Netflix, Amazon, or Twitter, these companies have traversed every corner of the globe to acquire new customers. This ability to scale technology platforms globally has in turn resulted in multibillion-dollar valuations, each contingent on the company’s continued expansion. Contraction of even the smallest order spells investor crisis, as Facebook saw in 2022, when it lost users for the first time since the company’s founding nearly 18 years earlier. The following quarter, Facebook’s perpetual-growth machine continued to chug along, adding 200 million new users. In Silicon Valley, expansion is doctrine.
The unchecked pursuit of scale translates new users into dollar amounts, which invariably leads to inequities. Users in regions considered the least valuable to these companies’ bottom lines were onboarded en masse, their personal data extracted and monetized, but they were left without the platform safety measures and investment necessary to protect them. The ensuing harms include, but are not limited to, Facebook’s complicity in genocide in Myanmar and Ethiopia, the use of social media to mark labor activists and political dissidents for assassination in the Philippines, and the exploitation of migrant workers in Amazon’s Saudi Arabian warehouses.
OpenAI now sits firmly among this cadre of Silicon Valley tech giants, recently earning a valuation of over $300 billion. While its expansion strategy at times mimics that of Meta or Google, the mechanics of its scaling, and the harms it has left in its wake, are distinct.
In 2015, OpenAI was founded as a research nonprofit. Much like its predecessors that proclaimed they were “connecting the world,” OpenAI had its own lofty mission: to develop AI that “benefits all of humanity.” In 2019, Altman, one of several cofounders of the company, stepped into the role of CEO. His previous work had been as president of Y Combinator, the most influential start-up accelerator in Silicon Valley. In that capacity, Altman helped start-ups get off the ground with seed funding and mentorship. Far from being a machine-learning expert, Altman was skilled in go-to-market strategies and glad-handing at pitch meetings. He counted some of Silicon Valley’s most influential billionaires as mentors, including Peter Thiel and OpenAI cofounder Elon Musk.
Under Altman’s leadership, OpenAI came to equate a naked thirst for market dominance with benevolence. The company framed itself as the only organization that could be trusted to usher in unparalleled advancements in machine learning and the looming emergence of AI superintelligence. In the process, OpenAI stopped publicizing many of its model-building practices and turned its back on the peer-review process. This was not about preserving trade secrets, according to Altman and others in OpenAI’s leadership, but about ensuring the “safety” of all humanity.
Years before the release of ChatGPT, Altman and other executives had resolved to improve OpenAI’s internal deep-learning techniques exponentially. Hao’s reporting shows that the company achieved this goal by pioneering the concept of “scaling laws.” Scaling laws hold that there is a predictable relationship between an AI model’s performance and three variables: the amount of training data put into a model; the amount of computational resources used to train a model; and the model’s size, or how many parameters it has. Increasing these variables will predictably improve the model’s performance. In other words, OpenAI’s theory of development hinged on building ever larger, more data-hungry, and more resource-intensive models.
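For the technically curious, these relationships were formalized in a 2020 OpenAI paper, “Scaling Laws for Neural Language Models,” as empirical power laws. A rough sketch of their canonical form:

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}$$

Here $L$ is the model’s loss (lower is better), $N$ is its parameter count, $D$ is the size of its training dataset, and $C$ is its compute budget; the remaining constants are fit to experimental data, with the paper reporting exponents of roughly 0.076, 0.095, and 0.050, respectively. Because those exponents are so small, each meaningful gain in performance demands an order-of-magnitude increase in data, compute, and model size, which is exactly the appetite Hao documents.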
Scaling laws underpinned OpenAI’s early decision to, in effect, scrape all of the Internet—or as much of it as was at its disposal—and deal with the inevitable copyright litigation later. Rather than improve the quality of its training data, or of the neural networks used to process that data, OpenAI chose to accumulate gigantic datasets. A common refrain in the OpenAI offices, Hao reports, was that “the secret of how our stuff works can be written on a single grain of rice.” That secret is the word scale. OpenAI was so instrumental in popularizing scaling laws across the industry that, at times in the book, Hao simply refers to them as “OpenAI’s Laws.”
The splashy arrival of ChatGPT in November 2022 proved that OpenAI’s scaling laws were working, and it turned Silicon Valley on its head. In December of that year, Google management issued an internal “code red” alert, signaling that ChatGPT threatened its core search business. Meta made its second pivot in as many years, turning away from the uncanny virtual playgrounds of the “metaverse” and toward AI product development. That year, OpenAI completed its turn from nonprofit research to commercial product development. However, Empire of AI shows that, far from ushering in a new Silicon Valley order, OpenAI’s disruption did little to challenge scaling as a fundamental principle of Big Tech. In the generative AI boom, the gospel of scaling has simply been refurbished, and OpenAI has become its latest evangelist.
Last fall, in a blog post titled “The Intelligence Age,” Sam Altman reflected on his company’s strategy in the simplest of terms. “In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it,” he wrote. Hao calls attention to the many silences in Altman’s and others’ recounting of this strategy, most importantly by giving voice to the “ghost workers” who made scaling laws feasible in the first place. As the CEO of Appen, one of the vendors that has connected OpenAI with contractors in the Global South, told Hao, “The challenge with generative AI is, the inputs are the entire corpus of humanity. So you need to control the outputs.”
The work of controlling OpenAI’s outputs often falls on data workers, frequently hired remotely in Venezuela, India, the Philippines, and Kenya by way of gig-work platforms. If there was any doubt about what service these middlemen provide, the leading vendor in the industry is named Scale AI. Hao spotlights several contractors throughout the book, detailing the dire circumstances and precarity that led them to data work and how their dependence on each platform’s piecemeal assignments and payouts was exploited.
In one chapter, Hao recounts meeting Mophat Okinyi in Nairobi, Kenya. Okinyi had been contracted by Sama, previously a leading vendor for Facebook’s content moderation. In 2021, Sama workers were tasked with annotating text produced by large language models (LLMs) in an effort to train OpenAI’s moderation filters. Rather than moderating real user posts, as the workers on the Facebook projects had done, Okinyi reviewed AI-generated text, including the most extreme outputs OpenAI researchers could muster.
“The details grew excruciatingly vivid: parents raping their children, kids having sex with animals,” Hao writes, explaining that Okinyi had been placed on the “sexual content” team and tasked with reviewing 15,000 pieces of content each month. His work was done before ChatGPT even became available to the public, as part of a preemptive effort to filter out the most hateful elements found in its underlying models. OpenAI had scraped the worst that the Internet has to offer and fed it into its LLMs; Okinyi was paid to clean up the toxic waste it was now spewing out.
It was invaluable work, but Okinyi was paid less than $13 a day for it. The trauma he suffered reading AI-generated fantasies of sexual violence for several hours each day sent Okinyi spiraling into a depression and contributed to his wife’s decision to leave him. “Companies pad their bottom line, while the most economically vulnerable lose out and more highly educated people become ventriloquists for chatbots,” Hao writes.
Parallel to scaling the training data, OpenAI also doubled down on its efforts to scale computational power. This required more specialized chips and larger data centers, warehouses filled wall-to-wall with droning computers. For years, OpenAI’s computational infrastructure has been provided through its partnership with Microsoft. At times, Altman would call Microsoft CEO Satya Nadella daily to beg: “I need more, I need more, I need more.”
Hao tracks this push to expand computational power to arid regions of Chile and Uruguay, where AI companies, including Microsoft, have stealthily laid plans to build massive data centers to meet the skyrocketing demand. These new centers are no longer the size of football fields but of university campuses. Beyond concerns about carbon emissions, AI data centers produce extreme heat that requires tremendous amounts of water to dissipate. Despite this, many data centers for both Microsoft and Google have been proposed in areas facing extreme drought, including Uruguay.
“They are extractivist projects that come to the Global South to use cheap water, tax-free land, and very poorly paid jobs,” Daniel Pena, a scholar in Montevideo, told Hao. Pena sued the Uruguayan environmental ministry to force it to reveal the water-usage terms for a proposed Google data center near the city, a suit he won in 2023. That year, thousands of protesters took to the streets to oppose the government’s appeasement of these water-intensive industries. Hao recounts seeing protest graffiti scrawled on Montevideo’s city walls during her reporting trip there: “This is not drought. It’s pillage.”
Microsoft, Google, Amazon, and Meta spend more money building data centers than almost all other companies in the world combined. And there is little sign of the construction slowing. In January, OpenAI announced that it will lead a project to build $500 billion worth of AI infrastructure alongside partners at SoftBank and Oracle, with support from Microsoft. The project, which they christened Stargate, has also been backed by the Trump administration, despite reported struggles to meet its initial building goals. Hao’s term for this type of rapacious expansion: “the mega-hyperscale.”
In 2019, Sutskever, then OpenAI’s chief scientist, offered a prediction: “I think it’s pretty likely the entire surface of the earth will be covered with solar panels and data centers.” He, conveniently, failed to mention where the people would go.
Empire of AI is a tonic for both uncritical AI hype and doomerist prophecies that AI supercomputers will soon end humanity. (If you’re not familiar with the latter, the book is also a helpful primer on effective altruism.) Hao is far from the first to make assertions about the harms of scaling AI. Her reporting builds on a body of scholarship, including writings on “data colonialism,” a concept coined back in 2019. She also pulls on threads stitched into groundbreaking papers on AI ethics, including “On the Dangers of Stochastic Parrots,” whose controversies feature prominently in her book’s early chapters.
Hao grounds this scholarship by taking it out of the cloisters of scholarly journals and AI research conferences and touching down instead in the lives of workers, community organizers, and activists. Her reporting gives a sense of urgency to an argument that could easily feel overly conceptual. The AI industry isn’t simply re-creating familiar colonial power structures through its scaling philosophies; it is also taking an active toll on communities across the globe. In drawing a straight line between OpenAI’s “scaling laws” and these harms, Hao makes clear that if expansionist ideologies continue to underlie the AI industry’s model development, this exploitation of vulnerable workers and our environment will only worsen.
Empire of AI plainly shows that “AI safety” should not be spoken about in terms of hypotheticals. The most pressing dangers we face because of this technology do not lie in the realm of some theoretical coming singularity. The harms of AI have already arrived, and we have an obligation to oppose them. In the book’s closing pages, Hao argues that we can still change course: that the industry can focus on building smaller, “task-specific” AI models; that data workers can be paid fair wages; and that we can reap the benefits of AI without building data centers that consume our planet. If we continue on our current course, however, the inertia of hyperscaling will only increase, and the expansion of OpenAI and its imitators will become that much harder to slow. At stake are the conservation of our planet, the preservation of cultural knowledge, and the dignity of workers the world over.
Andrew Deck is a staff reporter at Nieman Lab. He covers the rise of AI and its impact on journalism and the media industry.