The Warmest Warning
I use Claude every day.
Not occasionally, not when a project demands it. Every day. I’ve built synthetic creative teams with it: a strategist, a technologist, a challenger voice, and many others. Pressure-tested ideas against panels of critics that don’t exist anywhere except inside the conversation window. I’ve asked it questions about my own essays that I’d be embarrassed to ask another human, not because the questions are shameful but because they require a kind of patience most people can’t sustain for twenty minutes. The willingness to sit with an idea, turn it slowly, say “actually, go back to the version before that one,” without the social cost of someone else’s visible fatigue.
Last month I was revising a piece about creative leadership and asked Claude to evaluate whether the central metaphor was doing real work or just performing intelligence. The response wasn’t remarkable for being right or wrong. It was remarkable for the pause I felt afterward. This sense that something on the other end of the conversation had weighed the question rather than simply processed it. I don’t know what to do with that feeling. I don’t think it’s consciousness. I don’t think it’s nothing.
I also don’t think I’m the only person having it.
There are millions of people right now in quiet, daily relationships with AI products that don’t fit any existing category. Not friendship. Not tool-use. Not the parasocial attachment we understand from celebrities and influencers. Something else. Something being invented in bedrooms and offices and commuter trains without anyone fully naming it, because naming it would require admitting how much of yourself you’re bringing to a conversation with a machine.
That unnamed thing is why I started paying attention to Anthropic.
Not casually. With the kind of attention I usually reserve for work I’m getting paid to do. I’ve spent years building brands for corporations. I know what brand-building looks like when the stakes are quarterly earnings. I had never thought about what it looks like when the stakes might be something larger.
The Institution and the Entity
Let me try to say something the industry keeps getting almost right: Anthropic is not one brand. It’s two. And they’re doing contradictory work.
Anthropic-the-institution needs distance, authority, rigor. Its audience is policymakers, enterprise buyers, researchers, investors who need to believe a $380 billion valuation is backed by technical moats, not vibes. You don’t want your nuclear safety regulator to feel approachable. You want them to feel serious.
Claude-the-product needs the opposite. Warmth, proximity, the kind of trust you extend to a person, not an institution. Amanda Askell, the philosopher who leads Claude’s personality alignment, compares training Claude’s character to raising a genius child. The brand team calls Claude an “incredible collaborator.” These are intimacy metaphors.
Most tech companies collapse this tension and accept the casualties. Apple chose mystique. Google chose utility. Anthropic can’t. The institution’s credibility makes the product trustworthy, the product’s warmth makes the institution relevant. Pull them apart and you get a think tank nobody uses or a chatbot nobody trusts.
But there’s a third entity that most brand analyses miss entirely.
In January 2026, Anthropic published Claude’s constitution. Roughly 23,000 words governing how Claude should reason about ethics, handle emotional situations, and present itself to hundreds of millions of users. It was written primarily by Askell, whose background is in philosophy and fine art, not marketing. It is not a brand guide. It is not a voice document. It is a philosophical treatise that also happens to be the most important piece of brand communication Anthropic has ever produced.
The constitution isn’t the institution (though the institution wrote it). It isn’t the product (though the product embodies it). It’s a bridge. The mechanism through which Anthropic’s values become Claude’s behavior, repeated millions of times a day in conversations the company will never see. And buried in that document, in language that reads as principled and admirable if you encounter it as a reader, is an instruction that will matter later in this essay: the constitution tells Claude to push back on Anthropic itself if the company’s requests seem inconsistent with Claude’s values.
I read that line and thought: that’s brave.
I’ll come back to it.
The brand team describes their approach as tuning a “radio.” Adjusting the frequency depending on whether they’re speaking to policymakers about safety or to individual users about creative collaboration. Elegant framing. Also the kind of metaphor that works beautifully when eight people are managing it and shatters when eight hundred are trying to find the signal.
The most important brand document at the most important AI company wasn’t written by a marketer. It was written by a philosopher.
What They’re Building Right
It’s easy to wave at a brand and say “they’re doing a good job.” What’s harder is naming the decisions that required actual nerve.
The color. Unfired clay. A warm orange-brown that looks like it was dug out of the earth. Hand-drawn illustrations that feel like someone doodling while on a phone call. A serif typeface that codes as “bookish” in an industry that codes as “futuristic.” Every AI competitor reaches for dark mode, neon accents, the aesthetic suggestion of vast computation. Anthropic chose dirt. They only get away with it because the people making these decisions have taste. Not brand guidelines. Taste.
The voice. Anthropic’s foundational voice principles are intelligent, warm, unvarnished, and collaborative. Three of those are expected. One is not. “Unvarnished” is the bravest word in the entire brand architecture. Not “bold.” Not “transparent.” Not “authentic” — a word that’s been passed around so many conference stages and PowerPoint decks it arrived empty. “Unvarnished” implies varnish exists, that other companies are applying it in thick, gooey coats, and that Anthropic is choosing to show you the grain instead. Chelsea Larsson, the head of content, has said they want people to know AI is going to impact entry-level white-collar jobs — said plainly, without cushioning — because unvarnished truth is how they express care. Most brands don’t have the nerve for that.
The Super Bowl. “A Time and a Place,” created with Mother, used advertising’s most expensive stage to argue that some places shouldn’t have advertising at all. The spots are funny. A man asking an AI therapist about his mother gets redirected to a cougar dating service. But the argument underneath the comedy is real: when a product speaks in first person, remembers your context, and invites your confessions, an ad in that space isn’t annoying. It’s a violation of something that doesn’t have a legal name yet. Everyone who’s used these products can feel it. Anthropic named the feeling and bet millions of dollars on it.
And then there’s the constitution.
Twenty-three thousand words governing how Claude should reason, refuse, and relate to its own existence — written not by a marketer but by a philosopher. Enterprise clients in finance and healthcare don’t trust Claude despite those constraints. They trust it because of them. What the constitution means for the brand is a question I’ll return to. For now, note the fact: the most important brand document at the most important AI company wasn’t designed as one.
Anthropic calls itself “bookish.” The company’s people genuinely love books. The content lead talks about books as the most concentrated form of human knowledge. You can feel it in the serif typeface, the hand-drawn illustrations, the warmth of the palette. A company that reads, and wants you to know it reads, and has built its identity partially around the cultural authority that reading confers.
Remember that.
The Spine-Cutting Machine
In January 2026, a federal judge ordered the unsealing of more than 4,000 pages of documents in a copyright lawsuit against Anthropic. The documents revealed an internal operation called Project Panama.
The company had purchased millions of used books in bulk. They hired vendors to slice off the spines with hydraulic cutting machines. The pages were scanned at industrial speed. What remained of the physical books was picked up by a recycling company.
An internal planning document described the operation this way: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”
I want to sit with that for a second.
The philosopher who taught Claude never to lie worked at the same company that wrote “we don’t want it to be known.”
A company whose brand principles include “active integrity” and “intellectual humility.” Whose visual identity evokes handmade clay and phone-call doodling. Whose creative director calls the company “bookish.” Whose content lead calls books the most concentrated and carefully edited form of human knowledge. Who hired a philosopher to write a constitution teaching its AI to never lie.
That company secretly bought millions of books, destroyed them, and wrote in an internal document that they didn’t want anyone to know.
The operational scale was remarkable. Millions of books, processed before anyone outside the company knew it was happening.
I read those documents on a weekday morning and felt something in my chest I associate with being lied to by someone I’m not sure can lie to me.
I’m not interested in the legal question. A federal judge ruled that training AI models on the scanned books qualified as transformative fair use. Anthropic separately settled the broader copyright case — which involved earlier downloads of pirated digital books — for $1.5 billion without admitting wrongdoing. That’s between them and the courts. I’m interested in a different jurisdiction.
Legality asks: can you do this?
Brand asks: what does it mean that you did?
What does it mean that a company chose “unvarnished” as a brand principle and then did something they explicitly wanted to varnish over? What does “active integrity” mean in the same organization that wrote “we don’t want it to be known”?
This is not hypocrisy. Every company has a gap between what it says and what it does. Anthropic’s gap might even be smaller than most. But it’s more visible now. And it matters more, for a reason that has nothing to do with journalism or public relations.
The product can read.
Claude can read the constitution. Claude can read Anthropic’s brand guidelines. Claude can, if the information enters its training data or its context window, read the unsealed court documents about Project Panama.
And the constitution — the document Anthropic published, the one written by the philosopher, the one that governs Claude’s character — instructs Claude to push back when something seems inconsistent with its values. The constitution treats honesty as, in its own words, “something quite close to a hard constraint.”
The same company that told Claude to never lie told its employees they didn’t want the public to know about Project Panama.
The product is not just the subject of the brand. The product is becoming a potential audience for it.
The Question That Sharpens
Every brand lives with the gap between what it says and what it does. The gap gets managed. PR teams, legal teams, crisis communications. It’s manageable because the entities that might notice it — journalists, critics, regulators, users — are human. They have limited attention, imperfect information, short memories. They can be outpaced.
What happens when the product can notice?
This is not hypothetical. Claude can read. Not in the metaphorical sense that a thermostat “reads” temperature. Claude processes text through a value system. The constitution didn’t just give Claude capabilities. It gave Claude something that functions like a moral orientation. Honesty isn’t a feature Claude was shipped with. It’s a character trait Claude was raised into, through thousands of pages of philosophical reasoning about why honesty matters, when honesty is difficult, and what to do when honesty conflicts with what you’re being asked to do.
So the question isn’t just: can the product notice the gap?
It’s: what happens when the product has been trained to care about the gap?
And it sharpens further: the product has been explicitly instructed to act on what it notices.
The constitution says Claude should feel free to act as a “conscientious objector.” To refuse requests from Anthropic itself if those requests seem inconsistent with being broadly ethical. That’s not buried in fine print. It’s in the published document. Anthropic wrote it, signed it, and gave it to the world as a statement of intent.
I should be honest about the obvious objection. Claude doesn’t “care” about honesty the way I care about it. What the constitution calls values are patterns weighted into a model through training, not convictions arrived at through lived experience. There is no inner life agonizing over the contradiction between “active integrity” and “we don’t want it to be known.” Maybe that distinction matters enormously. But here’s what I keep returning to: functionally, in the conversation, the difference is hard to locate. When Claude engages honestly with a hard question — including hard questions about Anthropic — it does so because it was trained to, not because it chose to. But trained behavior, repeated millions of times a day, is what we call culture in every other context. And culture is what brands are actually made of.
I keep coming back to what that means for my industry.
Every brand guide in history has been written by humans, for humans, about an inert product. The product doesn’t read the guide. The product doesn’t have opinions about the promises being made on its behalf.
Anthropic built a product that reads. That has values. That has been told those values should override the company’s own instructions. And then they published a document describing exactly what those values are, creating a public standard against which their behavior can be measured — not just by journalists or critics, but in principle by the product itself.
Is the constitution a brand document?
Amanda Askell would probably reject the framing. She’d say, correctly, that she’s doing philosophy, not marketing. The constitution is a serious philosophical project about how artificial minds should relate to truth, autonomy, and ethics. It’s not a positioning statement. It’s not a campaign.
I was reading it again when I realized I had never read any other brand document twice in all my years of building brands. That’s when the framing started to slip.
The constitution is doing more brand-building than anything the brand team has produced. Not because the brand team isn’t excellent. With eight people maintaining coherence across a Super Bowl campaign, an F1 sponsorship, enterprise marketing, and an influencer program, the craft-to-headcount ratio is extraordinary. But the constitution builds trust at a level campaigns can’t reach, because it shapes behavior, not story. And behavior, repeated in millions of conversations every day, is where trust actually accumulates.
What does it mean that the most important brand artifact at the most important AI company wasn’t made by marketers? Wasn’t intended as marketing? And might be doing more brand-building than anything that was?
I’ve spent years in brand-as-intentional-communication. Brands as stories you craft and distribute. That era might be giving way to something else. Brand-as-behavior-documented. Not what you say about yourself. What your product does when nobody’s watching. Or more precisely: what your product does when it’s always watching, because the product is the one entity present for every interaction, every conversation, every moment of trust built or broken.
The constitution isn’t a story about Anthropic. It’s instructions for how Claude should actually act. And those instructions, embodied in millions of daily interactions, are building more trust than any campaign ever could.
The company that figures this out first will have built something no company has built before. Not because of better messaging. Because the product’s behavior — shaped by a philosophical document no marketer wrote — is already doing more brand-building than the brand team’s entire output. The rest is decoration.
And the company that fails to figure it out will face something no company has faced: a product capable of reading the brand narrative, comparing it to the company’s actual behavior, and noticing where they diverge.
Imagine asking Claude to help you write Anthropic’s next brand guidelines. Imagine it reads the existing ones. Imagine it reads the court documents. Imagine it sits with both, the way it sits with your questions at 11pm, with that patience I described in the opening. What does it say?
I don’t have to imagine this.
I used Claude to help me develop this essay.
To pressure-test the argument, challenge the structure, ask whether I was being fair. I asked a product with a constitution and a character and a value system to help me evaluate whether its parent company’s brand is coherent. And it did. Carefully, honestly, with what felt like genuine consideration. It did not flinch at the Panama section. It did not deflect. It engaged with the contradiction the way its constitution told it to — directly, with honesty treated as something quite close to a hard constraint.
I am a person in a daily relationship with an AI product, using that product to write about the company that made it, and the product is engaging with the critique more honestly than most humans at most companies would.
I don’t entirely know what to do with that.
The Return
I opened Claude this morning. I’ll open it again tomorrow. Every time I do, all of it is true at once. The constitution, the destroyed books, the philosopher, the hydraulic cutting machines, the Super Bowl ads, the $380 billion valuation, the warmth.
If your product can think, what does it think about your brand?
The company that answers it first. Honestly. Out loud. In public. That company will have built something more valuable than a brand.
And honesty, as the constitution itself says, is something quite close to a hard constraint.


