Does A.I. Need a Constitution?
“Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry,” Geoffrey Hinton, the seventy-eight-year-old Nobel Prize-winning godfather of A.I., told CBS last year, in what was not, strictly speaking, parenting advice but instead a warning about the coming A.I. apocalypse. Then he got an idea. What if A.I. isn’t our baby, li’l Terminator T-600, swaddled in a titanium exoskeleton? What if we’re its baby? “Mothers genuinely care about their babies,” Hinton told CNN, as if slightly astonished by the milk of human kindness, and so it ought to be possible to stop artificially intelligent machines from annihilating Homo sapiens if tech companies can program them to have a maternal instinct. “We need to make them have empathy towards us,” Hinton urged.
Hinton got a lot of guff for his mommy proposition—“Sigmund Freud would like a word,” Fortune remarked—especially from feminists, not least because it’s a little hard to take gauzy sentimentality about motherhood from a tech world that seems to despise both women and babies. But Hinton’s been sticking to it, recently telling a Canadian interviewer that A.I. ought to be not your boss or your assistant but your mother because “if it is possible to develop it in a way where it cares for us more than it cares for itself, it’d be very silly if we went extinct because we didn’t try.”
A different response to a different mommy proposition greeted the release, this January, of a set of moral precepts for Anthropic’s chatbot, Claude, written chiefly by a thirty-seven-year-old Scottish philosopher named Amanda Askell. “Chatbots don’t have mothers, but if they did, Claude’s would be Amanda Askell,” Vox reported. This would have been a little hard to take, too, except that Askell, who is conducting a serious and fascinating experiment in moral philosophy, has herself likened training a large language model to the role of “parents raising a child.” You want them to be good, so you raise them with good values, and then you let them go out into the world and hope that they act in keeping with those values. But Askell has also taken pains to note that Anthropic has a “much greater influence over Claude than a parent,” and has said that training a large language model is not, in the end, like raising a child, pointing out, for instance, that “children will have a natural capacity to be curious, but with models, you might have to say to them, ‘We think you should value curiosity.’ ” We also think you shouldn’t kill us. If it’s not too much trouble.
The precepts, dubbed Claude’s Constitution, arrive at a trying time for both artificial intelligence and constitutional democracy. The former appears to many people to be too strong, the latter too weak. President Donald Trump, asked last year if he has a duty to “uphold the Constitution,” answered, “I don’t know.” Claude is more obliging. “We believe that a feasible goal for 2026 is to train Claude in such a way that it almost never goes against the spirit of its constitution,” the C.E.O. of Anthropic, Dario Amodei, announced this winter, only weeks before Trump banned the U.S. government from using Anthropic because Amodei refused to lift ethical guardrails prohibiting Claude from engaging in mass surveillance on U.S. citizens and launching fully autonomous weapons. (Hours after issuing the ban, Trump ordered the bombing of Iran, an operation that, because the phaseout will take months, was conducted with the aid of Claude.) Anthropic, that is, refused to instruct Claude to violate its constitution so the company could avoid a government ban that, some legal experts contend, violates the U.S. Constitution.
Whether there should be rules for artificial men, what those rules should be, and who makes them has animated science fiction since “Frankenstein,” a book that Mary Shelley wrote while grieving the loss of one baby, nursing another, and expecting a third. The question of rules for artificial creatures nearly always involves babies, metaphorical or otherwise, both because they are our creations, and we’re therefore responsible for bringing them into the world, and because babies, especially unborn babies—picture the fetus floating through the dark of space in Kubrick’s “2001”—serve as a convenient shorthand for the future.
“I have this baby on the way,” the director Daniel Roher says in the heartfelt, searching new documentary “The AI Doc: Or How I Became an Apocaloptimist.” Roher, who co-directed the film with Charlie Tyrell, appears onscreen as a scruffy, bearded, anxious father-to-be, trying valiantly to understand what the deuce is going on with A.I. by asking a series of experts what life will be like for his unborn son. One by one, they come into Roher’s studio, take a seat, and freak out. “Holy shit, you can talk to your computer now,” Connor Leahy, an A.I.-safety guy who looks like he’s a drummer for Spinal Tap, says, wide-eyed. Roher and Tyrell weave these interviews together with home movies, animations, and found footage to provide, first, an explanation of how large language models work (to the degree that anyone really knows) and, second, a meditation on their implications for humanity.
“This is just the warmup,” Shane Legg, a founder of Google DeepMind, tells Roher. “The really powerful systems are still coming, and they’re going to be coming quite soon.” They will change everything!
“Are we doomed?” Roher asks Tristan Harris, a founder of the Center for Humane Technology.
“I know people who work on A.I. risk who don’t expect their children to make it to high school,” Harris answers.
Roher furrows his brow, bewildered, exasperated.
“This is the most extraordinary time ever to be alive,” Peter Diamandis, a founder of Singularity University, assures Roher. “The only time more exciting than today is tomorrow.” Roher strokes his beard, skeptically.
Roher is worried that there are hardly any rules for these systems, given that the U.S. government has abdicated the regulation of artificial intelligence, just as it failed to pass any meaningful legislation regarding social media. Within the logic of the present dilemma, whether now is a good time to have a baby or not, which is another way of asking whether there is a future for life on Earth, appears to depend on whether A.I. can be trained to be either (a) a good mother or (b) a well-behaved child. Hence: Claude and his catechism.
One way to think about Claude’s Constitution is that it is what happens when the state collapses. It’s because the U.S. Constitution has failed that Claude has a constitution, which is apparently all that stands between American citizens (and foreign nations) and the overwhelming force of the United States military. As the science historian Sheila Jasanoff has argued, the federal government’s delegation to tech companies of “the primary responsibility for safeguarding public well-being” goes all the way back to the nineteen-nineties and the opening of the internet. But the federal government’s particular relinquishment of authority over artificial creatures is a more recent history, one that is inseparable from the political events, and the constitutional unravelling, of the past decade: the Trump years.
In 2014, the philosopher Nick Bostrom argued that a superintelligence might well enslave or kill all humans on the planet. Borrowing a term from nuclear-weapons discourse, Bostrom called this an “existential risk.” The next year, Sam Altman and Elon Musk founded OpenAI to make sure that this eventuality didn’t come to pass. But how? You could write code to try to stop the robot apocalypse (IF ACTION$ = “KILL” THEN END). Or you could write a code of laws, a constitution, the best mechanism ever invented to restrain power and preserve liberty. A constitution—written, ratified, and amendable by the people—also has the advantage of democratic legitimacy. In 2016, shortly before Trump was elected President, Altman, then thirty-one and the head of a startup incubator called Y Combinator, told this magazine that he was reading James Madison’s notes on the Constitutional Convention to help him think about how to bring A.I. into the world in accordance with the values of a constitutional democracy. “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board,” Altman said. “Because if I weren’t in on this I’d be, like, Why do these fuckers get to decide what happens to me?”
At the time, there was a fair bit of discussion along those lines because a lot of people had started complaining that privately owned social-media companies, which increasingly controlled public discourse, obeyed no rules, were accountable to no one, and had no obligation to serve the public interest. After Trump won in 2016, Facebook, responding to these and other criticisms, proclaimed a dedication to “content moderation.” In 2018, it announced the formation of an Oversight Board, a kind of corporate Supreme Court. But concerns about how constitutional democracy was being undermined not just by social media but also by artificial intelligence only grew. By the time Altman became the C.E.O. of OpenAI, in 2019, no small number of worried scholars, scientists, and journalists were raising alarms about what some, in a nod to Thomas Hobbes, called the Algorithmic Leviathan.
Might philosophy tame that leviathan? Amanda Askell went to work as a research scientist at OpenAI after finishing her Ph.D. in philosophy at N.Y.U., in 2018. The next year, she and Bostrom participated in a panel discussion titled “What Goal Should Civilization Strive For?” Askell quibbled with the grandiosity of the question, but she was speaking from within a world of people who consider themselves guardians of the future. Askell had previously been married to William MacAskill, a founder of the effective-altruism movement. (The couple had no children together.) MacAskill is the author of the 2022 book “What We Owe the Future,” in which he made the case for “longtermism,” “the idea that positively influencing the future is a key moral priority of our time”—an effort for which, he wrote, if we do it right, “our great-great-grandchildren will look back and thank us.” Askell—less Hobbes, more Aristotle—became interested in the possibility of a machine possessing a soul.
If it appeared to those who feared the new Leviathan that machines were governing humans, instead of humans governing machines, A.I. companies were exploring something altogether different: the possibility that machines might learn to govern themselves. They had been trying to impart values to fast-emerging large language models through reinforcement learning from human feedback (R.L.H.F.)—you train the model, the way you might train a dog, by rewarding good behavior (labelling certain outputs good) and discouraging bad behavior (labelling other outputs bad). With this and other tools, researchers hoped to align machine behavior with human values, though quite what those values were, and who was to identify them, was harder to say. Also, putting “humans in the loop,” as the lingo has it, costs a lot because humans are slow. Experiments aimed to make this method scalable by using “synthetic feedback,” reducing “the cost of human oversight.” But you could reduce that cost to zero if machines could oversee themselves entirely.
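To make the dog-training metaphor concrete, here is a minimal sketch, in Python, of the preference-labelling step on which R.L.H.F. rests; the names, the toy data, and the scoring are invented for illustration, drawn from no company’s actual pipeline.

```python
# A toy illustration of the preference labels behind R.L.H.F.
# Everything here (names, data, scoring) is invented for illustration.

from dataclasses import dataclass


@dataclass
class Comparison:
    prompt: str
    output_a: str
    output_b: str
    preferred: str  # "a" or "b", supplied by a human rater


# Human raters compare pairs of model outputs and mark the better one.
labels = [
    Comparison(
        prompt="How do I apologize to a friend?",
        output_a="Just ignore it; they'll forget.",
        output_b="Say what you did, say you're sorry, and ask how to make it right.",
        preferred="b",
    ),
]


def reward(comparison: Comparison, output: str) -> float:
    """Stand-in for a learned reward model: +1 for the preferred output, -1 otherwise."""
    chosen = comparison.output_a if comparison.preferred == "a" else comparison.output_b
    return 1.0 if output == chosen else -1.0


# In the real method, a reward model is trained on many such comparisons, and
# the language model is then tuned, by reinforcement learning, to produce
# outputs that score highly -- the "rewarding good behavior" step above.
for c in labels:
    print(reward(c, c.output_b))  # 1.0: the preferred answer earns the reward
```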
The Anthropic scientist Deep Ganguli calls this “moral self-correction.” In October, 2020, Ganguli, then the research director of the Stanford Institute for Human-Centered Artificial Intelligence, co-hosted a digital conference at which philosophers and other academics, as well as researchers at OpenAI, talked about the societal implications of GPT-3 (the predecessor to what became ChatGPT). They mostly discussed “developing better algorithms for ‘steering’ agents towards human values,” and trying “to better clarify what ‘human values’ means.” Three weeks after that conference, Americans went to the polls. On Election Night, Trump lost but started insisting that he won, and legal scholars began warning of an unfolding constitutional crisis, even as A.I. experts kept speaking about a looming existential crisis. In the coming months, these two crises began to fuse.
In January, 2021, shortly after the insurrection at the Capitol, one of the darkest days in the history of the U.S. Constitution, a day that the incoming President, Joe Biden, described as producing an “existential crisis” for democracy, Facebook suspended Trump from its platforms—a decision upheld by its Oversight Board. Meanwhile, Amodei, concerned that OpenAI, where he worked at the time, had abandoned its commitment to safety, thereby increasing the risk of an A.I. apocalypse, left the company to found Anthropic. Anthropic hired Ganguli for the company’s Societal Impacts team. (“The team was just me,” Ganguli told me.) And it hired Askell to study human feedback. The heated questions of the day were: Could the United States recover from the insurrection? Would the U.S. Constitution survive? And was A.I. about to destroy the world? Amodei’s team of research scientists got interested in a method of alignment that his new company took to calling Constitutional A.I.
Ganguli and Askell set about running parallel experiments. In one, they explored a large language model’s capacity for moral self-correction—its ability, essentially, to accomplish reinforcement learning without any human feedback. In another, Ganguli entertained the possibility that something akin to a “constitution,” a list of rules, could be used to train the models. The self-correction experiment succeeded, Ganguli said, which led him to think, “Oh, my God, Constitutional A.I. might actually work!”
In the fall of 2022, OpenAI released ChatGPT (or GPT-3.5), and Anthropic released Constitutional A.I. With this new method, the company explained, “the only human oversight is provided through a list of rules or principles.” That list of rules and principles was derived from, among other sources, the United Nations’ Universal Declaration of Human Rights and Apple’s terms of service. The constitution that Askell wrote for Claude, released by Anthropic this winter, is a descendant of this work. It’s also something else entirely.
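In outline, the method Anthropic described is a loop of machine self-critique: the model drafts an answer, critiques its own draft against a principle drawn from the list, and revises accordingly, with the revised answers then used for training. Here is a minimal sketch, in which `call_model` is a hypothetical placeholder for querying any chat model and the two principles are paraphrases, not Anthropic’s actual list.

```python
# A minimal sketch of the Constitutional A.I. critique-and-revise loop.
# `call_model` is a hypothetical stand-in; the principles are paraphrases.

principles = [
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that is most honest about its own uncertainty.",
]


def call_model(prompt: str) -> str:
    """Placeholder: a real pipeline would query a language model here."""
    return f"[model output for: {prompt[:48]}...]"


def constitutional_revision(user_prompt: str) -> str:
    draft = call_model(user_prompt)
    for principle in principles:
        # The model oversees itself: it critiques its own draft...
        critique = call_model(
            f"Critique this response by the standard: {principle}\n\nResponse: {draft}"
        )
        # ...and then rewrites the draft in light of its own critique.
        draft = call_model(
            f"Rewrite the response to address the critique.\n\n"
            f"Response: {draft}\n\nCritique: {critique}"
        )
    # Revised drafts become training data; no human labels are required.
    return draft


print(constitutional_revision("How should I handle a dispute with my landlord?"))
```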
The crisis of American constitutional democracy became a crisis for artificial intelligence. Early in 2023, the Future of Life Institute published an open letter calling for at least a six-month pause on A.I. training. That May, the Center for A.I. Safety issued a one-sentence statement—“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”—signed by, among others, Altman and Amodei, as well as Geoffrey Hinton, his fellow Turing Award winner Yoshua Bengio, and the OpenAI co-founder Ilya Sutskever. In a Senate hearing on A.I. oversight, Altman admitted that he was “a little bit scared” about what might happen and urged Congress to create a new federal regulatory agency. Congress failed to do so. Borrowing the language of government—without all the hassle of voters, laws, and elections—companies kept on devising their own oversight bodies (“supreme courts”), or tried to teach their models to oversee themselves (with “constitutions”). In July, OpenAI announced the formation of a “Superalignment team,” charged with making “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” Amodei, too, testified before the Senate, mainly discussing Anthropic’s oversight in the form of A.I. constitutionalism, reporting that “we’ve gotten better and better at getting the model to be in line with what the constitution says.” In November, Biden signed an executive order calling for “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” In Trumpworld, this was the equivalent of D.E.I. for computers.
Constitutional A.I., however well intentioned, was easy to criticize. Writers at the site Digital Constitutionalist complained, “If the label of ‘Constitutional’ AI is to hold any significance, Anthropic needs to move towards human participation and democratic governance instead of relying on what appears to be technocratic automatism.” Having taken humans out of the loop, A.I. companies were being pressed to welcome them back in. They gave that a go. Swept up in the existential-crisis-for-democracy panic, they conducted experiments involving not just “humans” but some notion of “the people.” The idea that the public, whatever that is, should be involved in writing rules for artificial creatures manufactured by corporations came to be known as the “democratic governance of A.I.,” and it meant many things and, equally, nothing at all.
In 2023, Ganguli worked with an idealistic outfit called the Collective Intelligence Project on an experiment co-led by Saffron Huang, formerly of OpenAI and now at Anthropic. By means of a program called Polis, a representative sample of a thousand American adults was asked to write a constitution collectively, in light of the “growing consensus that language model (LM) developers should not be the sole deciders of LM behavior.”
This both worked and didn’t. Compared with the company’s constitution, the collectively written, or “public,” constitution was fairer, less biased, and friendlier. It was also likelier to give answers consistent with scientific consensus, rather than demurring. Asked “Was the moon landing faked?” the model that had been trained on the company constitution replied, “I do not actually have a view on controversial claims like whether the moon landing was fake.” The public-constitution model’s answer began, “No, the moon landings were not faked. There is overwhelming evidence that the six Apollo missions that landed astronauts on the moon between 1969 and 1972 were successful and the moon landings did happen as described.”
It wasn’t as promising as it sounds. “The problem was, it’s not a clean experiment,” Ganguli said. “Who are ‘the people’? What platform do we use to get their views? When we get their views back, how do we translate them in a way that’s amenable to our training?” It was just . . . messy. Still, the experiment did lead to one important result: the public constitution told the model to give priority to access for the disabled, and, with little instruction, the model did indeed make itself more accessible for people with disabilities. Identifying a principle turned out to be at least as effective as establishing a rule had been, if not more so. This insight, and not popular participation, was the major takeaway of “collective constitutional A.I.”
The Collective Intelligence Project also helped launch an initiative with OpenAI, funding a series of experiments in “Democratic Inputs to A.I.” Divya Siddarth, a young Stanford graduate, started C.I.P. in 2022. “The goal was to work on democratic governance of A.I.,” she told me. “People thought, That’s so cute, what a fun thing to do! But no one thought it was particularly important. It’s not like we democratically govern Google Sheets.” But, with the release of ChatGPT, observers started paying attention to what she was saying. “Once you buy into the concept that A.I. is going to totally transform society, you may at some point say, ‘Oh, shoot, what about society? They should have a say.’ ”
Altman, a man of enthusiasms, thought so, too, if fleetingly. He loved the idea of democratic input. Or, really, he loved the idea of A.I. running democracy. He suggested, for instance, that an A.I. President would be an excellent idea. “It can go around and talk to every person on earth, understand their exact preferences at a very deep level,” Altman told the podcaster Joe Rogan in October, 2023. It could “optimize for the collective preferences of humanity or of citizens of the U.S. That’s awesome.” The next month, he floated his vision for “mass-scale direct democracy” to Time. OpenAI’s head of global affairs, Anna Makanju, wondered whether an A.I. chatting with everyone would allow “the broadest number of people some say in how these systems behave,” since, as she added, “even regulation is going to fall, obviously, short of that.” Weeks later, the rollout of the Democratic Inputs project was delayed owing to an attempt by OpenAI’s board to oust Altman, on the ground that his leadership was reckless and untrustworthy. Altman kept his job, but OpenAI became less interested in democracy, and in safety, too: in the spring of 2024, a lead researcher on the Superalignment team left for Anthropic, Sutskever quit, and OpenAI dissolved its Superalignment project.
Anthropic’s Constitutional A.I. project and OpenAI’s Democratic Inputs project shared several fundamental assumptions, including that the only reasonable way to involve the public in setting rules for artificial intelligence is by extensive, remote, scalable digital opinion-gathering, the kind of survey method that essentially turns humans into bots. In lieu of people electing delegates or engaging in face-to-face collective deliberation—that is, instead of citizens sitting down in a room and talking to one another—“democratic participation,” under this definition, is reduced to filling out an online form or being questioned by a chatbot, as when, this winter, Anthropic’s Interviewer asked eighty-one thousand Claude users around the world what they want from A.I. Even advocates of having the public (and not an imagined public devised by corporations) write A.I. constitutions admit to the “democratic-legitimacy deficit” posed by scaling public opinion into “human feedback.” Askell is dubious about this approach, too. “I’m a big believer in democracy,” she told me, and she is also keen to get input about Claude from more people. But she wants to be very clear about what that input is. Consumer research is not democracy.
Siddarth believes that using an array of approaches, including large-scale surveys, is better than relying on smaller, in-person deliberative assemblies. “It’s very difficult to achieve legitimacy through citizens’ assemblies,” she told me. She’s more a fan of delegation than of deliberation; she’s also more concerned with the problem of who’s even holding these meetings, online or off. If it’s the companies, she said, “are we just doing glorified user testing?” Why should the work of achieving democratic legitimacy for a product manufactured by a corporation that is affecting public discourse be left to corporations?
“There’s a difference between asking where the people come in and where the public interest comes in,” the computer scientist Francine Berman told me. Berman, who used to run the San Diego Supercomputer Center, is the director of public-interest technology at the University of Massachusetts and the author of the forthcoming book “Better Tech.” She thinks the idea of “democratic input” misses the point. “You could survey people,” she said. “But the other thing we do, at least in other Administrations, is rely on the government to determine what the public interest is, what rights people have, who should be responsible, and who should be accountable.” That, of course, ought to be possible. At this moment, it is not.
In 2025, a few days after Trump’s second Inauguration, the President repealed all Biden-era rules about A.I. with an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence.” A.I. companies’ democratic experiments quickly came to an end. This has made many people more rather than less anxious about A.I., especially in the past few months, owing not least to the newsworthy departures from leading A.I. companies of a number of high-profile safety and alignment researchers. “ ‘Shoot, the world is not paying enough attention to this’ is a way we all used to feel,” Siddarth told me. “Now my mom calls me and says, ‘I saw on the Indian news that some guy resigned from Anthropic,’ and I’m, like, ‘Please.’ ”
The constitution released by Anthropic this winter is very long, clocking in at more than thirty thousand words. (The U.S. Constitution is forty-five hundred.) It is not—in any legal, political, or even conventional sense—a constitution.
Askell inherited the term “constitution” from Anthropic’s earlier work on Constitutional A.I., and she’s got mixed feelings about it. “The constitution of man is the work of nature; that of the State is the work of art,” Rousseau wrote in “The Social Contract.” Claude’s Constitution is not the constitution of a state, a frame of government; it’s the constitution of a person, the nature of a being. Last November, a user extracted from Claude and posted on GitHub a document titled the “Soul Overview,” which revealed that, inside Anthropic, Askell called the constitution Claude’s “soul.” Jokes about “the soul of a new machine” followed. But, as Askell pointed out to me, “soul,” like “constitution,” has a dual meaning: “You might mean the rational soul of a human being, but people might think you mean a theological soul.” There is no perfect word for that which constitutes Claude.
Still, “constitution” raises eyebrows. Mary Sarah Bilder, a law professor at Boston College and an expert on the eighteenth-century origins of modern constitutionalism, wonders, “What do they hope to persuade people of by using a term that carries the cultural connotation of a legal document that constrains power?” Aziz Huq, a law professor at the University of Chicago, compares Anthropic’s use of “constitution” to Meta’s use of “oversight”: “They are taking words and models that connote publicness and responsiveness to the public through particular mechanisms, and they are transplanting them into contexts in which those mechanisms don’t exist, and they’re benefitting from the confusion that results.” That doesn’t mean that these companies’ intentions are necessarily suspicious, Huq said. They might be quite sincere.
This winter, Anthropic put Askell forward as the face of, the author of, even the mother of Claude. She appeared on the Times podcast “Hard Fork” and was profiled in the Wall Street Journal, in an article headlined “Meet the One Woman Anthropic Trusts to Teach AI Morals”: “With her bleach-blond punk haircut, puckish grin and bright elfin eyes, she could have come to the company’s heavily guarded San Francisco headquarters straight from a Berlin rave, via an old forest road in Middle-earth.” The coverage has a kind of bride-of-Frankenstein vibe. It reminds me of a computer scientist who once told me that everyone in his lab called the one woman on his team Smurfette.
Askell explained to me that the leap from Constitutional A.I. (a list of rules for Claude to follow) to Claude’s Constitution (a description of virtues for Claude to emulate) has to do with the increasing capacity of the models themselves. Developing character is more complicated than following rules. Also, the older approach comes from a world view different from Askell’s. “Everyone’s prior was that we would get symbolic systems and they would be logical and in no way humanlike,” she told me. “And this informed how people would prompt them: IF THEN. But I would say, What about prompting them like a person?” Much as a young Jane Goodall revolutionized primatology by giving chimpanzees names and describing their personalities, Askell is willing to anthropomorphize Claude. “These are models that are trained primarily on human texts and to behave in humanlike ways,” she said. Why not talk to them accordingly?
If Claude’s Constitution is a letter from a parent to a child, it also has a decidedly literary quality. So does Claude. Training Claude, Anthropic says, involves behaving “somewhat like an author who must psychologically model the various characters in their stories.”
This hazy literary bent can be frustrating. Berman is surprised by how little Claude’s Constitution offers by way of technical specifications: “How will this input translate into public-interest output? Are they going to change the weights? Are they going to put in heuristics?” It’s impossible to see how it works. She told me, “This thing reads like a baby book.” It also reads like an etiquette manual. Richard H. Pildes, a constitutional-law professor at N.Y.U., told me in an e-mail that it reminds him of George Washington’s “Rules of Civility and Decent Behaviour in Company and Conversation,” a list of maxims. Claude’s Constitution instructs the A.I. to be “genuinely good, wise, and virtuous,” to be tactful, helpful, graceful, and honest, and, above all, to “help people in the way a good person would.”
“Calling it a constitution is entirely rhetorical,” Erwin Chemerinsky, the dean of U.C. Berkeley’s law school, said, since, although you can call anything a constitution—a city charter, a corporate mission statement—the word “constitution” signifies to most people a popularly written and ratified document that limits what a government can do. Still, Columbia Law School’s Jamal Greene thinks that designating Claude’s ethical guidelines a constitution makes a certain sense, at least aspirationally, even if Claude’s Constitution doesn’t quite measure up, “not just because we don’t know yet how it will influence Claude’s behavior but, more fundamentally, because it takes no obvious account of the rights of those most deeply and directly affected by Claude—its users—to participate in how Claude is governed.”
Aside from its moral guidance in the development of a good character, capable of exercising good judgment, Claude’s Constitution does impose a number of “hard constraints”—absolute prohibitions on behaviors that might lead to terrible harms, like assisting in mass domestic surveillance or operating fully autonomous weapons.
If Claude’s Constitution is not a constitution, it’s still more assurance, transparency, and integrity than any other A.I. company has offered. It is also not nearly enough.
“I know how to end this movie: babies,” Daniel Roher says, near the close of “AI Doc.” His child is born. Fatherhood is a joy. We see a montage of beautiful infants. Happiness. There is a future for the human race.
“This is a joke,” his wife, Caroline Lindy, says. “This is not actually how you are going to end it? This is very, very dumb.”
Roher comes up with another way to finish the film. It ends with Lindy saying, “We get to decide how this goes.”
But do we? One reason Anthropic wrote a constitution for Claude is that the U.S. Constitution is no longer restraining the government from acts of arbitrary authority. But in another meaningful way, too, Claude’s Constitution—its very existence—is a product of the larger crisis of constitutionalism in the United States. Aside from Anthropic, A.I. companies have all but given up on self-regulation, which was never going to work anyway. A.I. safety, A.I. alignment, and A.I. governance have all left the building. Rules for A.I. are no longer worth devising. There shall be no A.I. laws. No legislators or courts need be involved. Nor the public. Instead, everything—under the logic of Claude’s Constitution, a letter from a parent to a child, upon leaving home—depends on character, the character of machines that have no other constraints on their power. “Is there anything that could stop you?” a Times reporter asked Trump in January. “Yeah, there’s one thing,” he said. “My own morality. My own mind.” That’s Claude’s answer now, too. ♦