A.I. Has a Messaging Problem of Its Own Making
In the early-morning hours last Friday, a Molotov-cocktail-style projectile hit a gate outside the San Francisco mansion of Sam Altman, a co-founder and the C.E.O. of OpenAI. Soon after, the suspected assailant, a twenty-year-old Texas man named Daniel Alejandro Moreno-Gama, was detained at OpenAI headquarters after allegedly threatening to burn the office down and kill anyone inside. According to a federal affidavit, Moreno-Gama had compiled a list of the names and addresses of other A.I. executives. Online, he left a trail of anti-A.I. writings. In a January post on Substack, he wrote that “the Intelligence race is likely to lead to human extinction.” Last year, in an anti-A.I. activist chat on Discord—where he reportedly goes by the name “Butlerian Jihadist,” a reference to a fictional war against intelligent machines in “Dune”—he posted, “We are close to midnight it’s time to actually act.”
Moreno-Gama is apparently not the only one harboring such beliefs. Early Sunday morning, Altman’s home was attacked again, with a volley of bullets fired from the street; a twenty-five-year-old and a twenty-three-year-old were later arrested for negligent discharge of a firearm. And earlier this month a person fired a gun at the front door of Ron Gibson, an Indianapolis city councilman who had recently voted to approve rezoning that would allow the construction of a local data center to power A.I.; the perpetrator left a note that read “NO DATA CENTERS.” (As of this writing, none of those arrested has entered a plea.) These are all inexcusable and counterproductive acts of violence. They are also signs that the A.I. industry is inspiring extreme levels of hostility and mistrust.
On Friday evening, Altman published a post on his personal blog acknowledging the incident; it included a photograph of his husband and child, an appeal to a shared sense of humanity. He alluded to a recent “incendiary article,” presumably The New Yorker’s investigation by my colleagues Andrew Marantz and Ronan Farrow, which exposed Altman’s pattern of deceptive leadership at OpenAI. “We should de-escalate the rhetoric and tactics,” Altman wrote. What he failed to acknowledge is that much of the heightened, sometimes glibly apocalyptic rhetoric about the powers of A.I. has come from within the industry itself and, indeed, straight from his own mouth. (To quote just one indelible line, from 2015: “I think A.I. will probably most likely lead to the end of the world, but in the meantime there’ll be great companies created with serious machine learning.”) Even in his recent blog post, Altman wrote that “the fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.” Who, exactly, does he think is to blame for stoking hysteria? If you tell people often enough that your product is going to upend their way of life, take their jobs, and very possibly pose an existential threat to humanity, they just might start to believe you. A recent Gallup survey of Gen Z found that forty-two per cent of respondents felt “anxiety” about A.I. and thirty-one per cent felt “anger.”
The messaging of A.I. companies has always relied on a self-serving paradox: the technology under development is so potentially dangerous that the public’s only choice is to put blind faith in the handful of opaque businesses rapidly developing it. (Or, as the Onion recently put it, “Sam Altman: ‘If I Don’t End the World, Someone Far More Dangerous Will.’ ”) It’s become increasingly clear that the corporate machinations of A.I. founders influence how our economy grows, how we fight wars, and how political messaging spreads, and that the founders expect to oversee A.I.’s societal transformations with only self-determined levels of transparency. The economics writer Noah Smith recently wondered whether A.I. executives might become “de facto emperors of the world.” This month, OpenAI released an industrial-policy plan that proclaims its intention to “keep people first” in the age of A.I. The document calls for sweeping systemic changes, including a public wealth fund invested in the success of A.I.; a pivot toward the “care and connection economy,” to bolster jobs, such as elder care, that are less likely to be rendered obsolete by A.I.; and social benefits that are not tied to employers (presumably because employment itself will be a less sure bet once bots become truly “agentic”). The paper’s tone is patronizing at best, professing concern that the “economic gains” from A.I. could “concentrate within a small number of firms like OpenAI,” as if that isn’t exactly what is already happening by design.
There is a persistent delusion of grandeur among those leading the A.I. charge. In his blog post, Altman wrote, without apparent irony, that the prospect of controlling artificial general intelligence was like the “ring of power” from “The Lord of the Rings”: it “makes people do crazy things.” OpenAI’s main rival, Anthropic, has marketed itself as the industry’s safety-minded good guys. Its co-founder and C.E.O., Dario Amodei, originally left OpenAI owing to safety concerns, and he recently broke with the United States Department of Defense over, among other issues, the military’s use of A.I. to operate fully autonomous weapons. But Anthropic, like OpenAI, is on the verge of an astronomical I.P.O., and it can be hard to disentangle the company’s marketing hype from its genuine safety concerns. Last week, Anthropic announced that its new model, Mythos, is too powerful to be released to the public, and it unveiled Project Glasswing, an effort to give certain companies and organizations, including Amazon, Cisco, JPMorgan Chase, and the U.S. government, early access to Mythos as a “head start” in preparing for the cybersecurity threats that the model poses. Early tests now being made public seem to justify Anthropic’s alarm: the AI Security Institute, a British government organization, found that Mythos could autonomously “execute multi-stage attacks on vulnerable networks” that would “take human professionals days of work.” The only way to fight the threats of A.I., of course, is with more A.I. Michael Cembalest, the chair of the Market and Investment Strategy group at JPMorgan, wrote, in a blog post about Project Glasswing, that Anthropic at times “feels like an arsonist selling fire extinguishers.”
Trusting Anthropic to hold back dangerous models seems dubious, given that Amodei expressed similar worries about Anthropic’s chatbot, Claude, only to release it after OpenAI put out ChatGPT. A.I. development has become a runaway corporate arms race in which caution is secondary to competition, with the two American giants battling not just each other but also rapidly improving Chinese and open-source models. The A.I. industry is so far subject to none of the regulations that govern other dangerous technologies, such as firearms, pharmaceuticals, and environmental chemicals. The technology has evolved more quickly than the policy that governs it, in part because OpenAI has professed a desire for regulation even as the company quietly works to quash it, including by supporting a proposed Illinois law that would shield A.I. companies from liability. (In a similar way, social media has gone largely unregulated for two decades, owing to government negligence and industry lobbying.) Does anyone still believe that billionaire tech executives can be trusted as unelected stewards of the social good? The past decade should have disabused us of that notion many times over. We’ve watched Jeff Bezos acquire the Washington Post only to politicize and then gut it; Elon Musk destroy whatever claim Twitter had to being a neutral space of public discourse; and Mark Zuckerberg knowingly promote platforms that harm young users’ mental health. OpenAI itself morphed from a pure nonprofit into one of the most valuable for-profit companies in the world, and yet here is Altman, still offering us advice on good governance as if he worked at an independent think tank.
Perhaps in response to the growing unease, A.I. companies have lately undertaken various other efforts to appear more high-minded. Following Anthropic’s lead, Google DeepMind recently hired an in-house philosopher, and Anthropic convened a meeting of Christian leaders to discuss its chatbot’s moral orientation. A more effective strategy might be for A.I. executives to stop appointing themselves the sole arbiters of safety, to stop asking for blind faith, and to start fostering a system of external accountability, with input and involvement from the public. For tech companies to propose ways of reshaping the government is a soft form of techno-fascism that alienates citizens; if A.I. requires a new social contract or a new political hierarchy, its shape should not be up to the corporations to determine. There is another troubling paradox behind A.I. founders’ messaging: if the technology is as formidable as they claim, then they could be leading us toward existential disaster; if it proves less transformative, and thus less valuable, than the hype suggests, then they are merely setting us up for global economic disaster. For those of us who aren’t self-appointed heroes of the artificial-intelligence movement, neither scenario is particularly appealing. ♦