This is UNDERTOW 002. Cultural intelligence for strategists, creative leaders, brand builders, and the people building the platforms that reshape how we live. Each issue takes signals from across industries, economies, and geographies and finds the structural pattern running underneath: not what’s happening, but why it keeps happening. Sometimes the pattern turns out to be personal.
If you’re subscribed to Wandering Wondering Star, UNDERTOW arrives in the same inbox. Different publication, same home. If this isn’t what you signed up for, you can turn off UNDERTOW in your Substack settings without missing the rest.
Anthropic told the Pentagon it wouldn’t remove the clauses prohibiting mass domestic surveillance and autonomous weapons from its contract. The administration responded by designating Anthropic a “supply chain risk” — a classification previously reserved for foreign adversaries — and ordering federal agencies to immediately cease using Claude, with a six-month transition window for the Pentagon. OpenAI signed a Pentagon deal hours later. Sam Altman later admitted the timing “looked opportunistic and sloppy.” (It did.)
Claude hit #1 on the US App Store. ChatGPT uninstalls surged 295% in a single day. Over a million people signed up for Claude daily. Caitlin Kalinowski, OpenAI’s head of hardware and robotics, resigned over the Pentagon deal. Google’s chief scientist and 30-plus employees from Google and OpenAI filed an amicus brief supporting Anthropic.
The market rewarded the company that said no. Safety principles function as consumer acquisition. Ethical positioning converts. This is the Patagonia model applied to artificial intelligence, and it works beautifully.
It also has a fatal structural flaw that almost nobody is talking about.
Anthropic’s safety stance depends on Dario Amodei staying in charge and continuing to believe what he currently believes. The contract-based constraints — the clauses that started this whole standoff — failed on contact with the state. The government didn't negotiate. It tore them up. Anthropic is suing; the case goes to federal court this week. The same week the administration proposed requiring AI vendors to make their technology available for any lawful government purpose, whether the company objects or not. And the leadership-based constraints, the ones the market is actually rewarding? They’re a bet on one person’s character. Anthropic has institutional scaffolding — it’s a Public Benefit Corporation with a Long-Term Benefit Trust and a published constitution for Claude baked into the model’s training. These are real. They’re also not the same thing as architecture that survives hostile leadership. A benefit trust can be restructured. A PBC designation can be abandoned. A published constitution is only as durable as the board that enforces it. Patagonia encoded its values into a Perpetual Purpose Trust designed to be permanent. Anthropic’s institutional structures are stronger than a CEO’s conviction but weaker than Patagonia’s legal architecture.
And even that conviction is bending: the same week as the Pentagon standoff, Anthropic quietly loosened its Responsible Scaling Policy, removing the commitment to pause training more powerful models if safety controls couldn’t keep up. The king is still on the throne. The laws are already being rewritten.
Admiration is not architecture.
There’s a name for this. Call it a Good King company. The product is excellent. The leader is principled. The market responds. And the entire architecture depends on the king staying good.
Every Good King company faces the same question eventually: what happens when the king leaves, or changes, or gets overruled by a court that decides the Pentagon revenue matters more than the red lines? The answer, across every domain where this pattern has played out, is always the same. The kingdom degrades. Not because the new ruler is evil. Because the architecture was never designed to maintain the standard without the original ruler’s character holding it in place. The constitution was the first protocol — governance designed to survive any individual ruler. We solved the succession problem for nations three centuries ago. We haven’t solved it for platforms.
You’ve seen this before. You may not have had the vocabulary for it, but you’ve watched the arc.
X was worth 79% less within two years of Elon Musk’s acquisition. Every brand that built a social strategy on Twitter watched one sovereign’s decisions destroy the platform’s commercial value. Kantar’s data: only 4% of advertisers now believe X provides brand safety. The platform didn’t degrade because of market conditions. It degraded because one person could make decisions that affected every user and advertiser simultaneously, and no one had the power to stop him.
Meta built Messenger as a standalone product, pushed millions of businesses onto it, and is now absorbing it back into Facebook. (The standalone website shuts down next month. Nobody asked the businesses that built on it.) Facebook Pages once gave businesses organic reach to their own audiences; algorithm changes dropped that reach to single digits, forcing them to pay for visibility they’d built for free. Meta is accelerating its push toward AI-generated ad creative, shifting control further from advertisers to the platform. The arc from “build your business here” to “we changed the terms” is the oldest arc in governance. A lord grants land, the tenants build on it, and the lord changes the terms because the lord can.
OpenAI deprecated its entire Assistants API — every business that built on it faces a major migration, or its application breaks in August — and has been consolidating its product lines into fewer, larger surfaces it controls. The pricing model blindsided developers. The model introductions blindsided developers. The structural pattern is: build on our land, and we’ll redraw the boundaries whenever our priorities shift.
Over 200 million businesses use Meta apps monthly as virtual storefronts. These are not partnerships. These are dependencies. The tech industry calls it an “ecosystem.” The more honest word is a fiefdom. And fiefdoms follow a pattern as old as the structure they’re named for: the lord’s interests and the tenants’ interests align until they don’t, and when they diverge, the architecture serves the lord.
This is not a moral argument. Altman didn’t intend to strand developers. Zuckerberg didn’t intend to trap businesses into paying for their own audiences. The structure produced these outcomes because platform architecture — centralized control, proprietary infrastructure, leadership-dependent decision-making — always optimizes for the controller’s interests over time. Doctorow called this enshittification. That names the symptom. Enshittification isn’t a betrayal. It’s an architecture expressing itself.
There’s another way to build.
Jay Graber designed Bluesky around a principle that should make founders uncomfortable: “The company is a future adversary.” That’s from Kyle Chayka’s New Yorker profile. Not a manifesto. A design specification. Graber assumed that Bluesky itself would eventually be run by someone who doesn’t share her values, and she built the architecture so that when that day comes, it doesn’t matter.
Bluesky runs on the AT Protocol — open-source, decentralized, designed so that users can take their followers, their data, and their entire identity to a competing service running on the same protocol. If Bluesky’s CEO turns hostile, users leave and lose nothing. Content moderation is layered: a baseline from the company, then user-built labeling tools that filter at the individual level. Revenue comes from domains and planned subscriptions. Not advertising. Not AI training data licensing. Over 43 million users and growing.
The distinction matters. Anthropic tried contracts. The government tore them up. Graber tried architecture. Architecture survives succession because it was designed to survive succession. Contracts survive until someone with more power decides they don’t. Other products are already building on the AT Protocol — Flashes, an Instagram-like app with over 100,000 downloads; PinkSea, an oekaki drawing community inspired by Japanese bulletin boards — the same way Gmail and ProtonMail both run on email’s open protocol. Nobody can enshittify email because nobody owns the protocol. (Nobody can fix email’s spam problem for the same reason. Protocols trade control for durability. The question is which tradeoff kills you.)
Graber wore a T-shirt that read “mundus sine caesaribus” — a world without Caesars — parodying Zuckerberg’s “aut Zuck aut nihil.” The shirt made more money in a day than Bluesky had made in two years of selling domains. The joke is also the architecture. The whole point is: no kings. Not even good ones.
The same profile captured the adoption challenge in a single image: Bluesky’s leadership talked about the average social media user the way you’d describe a factory-farmed chicken resisting going free range. Most people don’t know what to do with agency over their own information diet because they’ve never had it. The freedom is the product. The freedom is also the obstacle.
And then, on March 9, Graber stepped down as CEO. She moved to Chief Innovation Officer. Toni Schneider — former CEO of Automattic, partner at True Ventures, a Bluesky investor — was named interim CEO while the board searches for a permanent replacement. Graber said the company needs “a seasoned operator focused on scaling and execution.”
The succession test she designed for is now underway. But it’s a friendly succession — Schneider shares Graber’s values, and the transition was voluntary. The architecture she built is being run by someone from a venture capital firm with the same commercial incentives that turned every previous social platform toward extraction, but he’s not hostile to the mission. The real test isn’t Schneider. It’s whoever comes after Schneider. If Graber built what she said she built — if the protocol genuinely constrains the company — then it doesn’t matter who sits in the CEO chair. That’s the whole point. If the protocol doesn’t hold, we’ll know. Either way, the experiment is no longer theoretical.
But here’s what matters structurally: even with “the bones of a good decentralized system,” as Aaron Goldman — a former Twitter engineer who worked at Bluesky in its first year before being let go — put it, the platform faces “the same incentives that led Jack to make Twitter very commercial.” Goldman’s skepticism looks more relevant now than when Graber was still at the helm. The pressure toward Good King logic is gravitational. And there’s a real loss in the translation from character to architecture — the best leaders produce outcomes no system could replicate, and protocol logic trades that agility for durability. The question is whether the agility is worth the fragility. Graber’s bet is that architecture can resist what character can’t. We’re about to find out.
In China, users are mourning AI companions lost to server shutdowns. They call it “cyber widowhood.” The grief is real. The relationships were built on Good King architecture: the company controlled the infrastructure, the users controlled nothing, and when the company shut down the servers, the relationships died with them. This is the emotional endpoint of building on someone else’s land. Not a policy change. Not a price increase. Loss.
I build and use personal AI-augmented systems every day. I use Claude for the custom scanning that sources the signals in UNDERTOW reports. I’m inside the dependency I’m describing. Since the Anthropic-Pentagon standoff, Claude’s user base surged — a million signups a day — and the infrastructure may be buckling under the weight. Claude seems to go down more often now. When it does, parts of my workflow stop. Not slow down. Stop. I signed up for Anthropic’s status alert emails so I’d know when the system was back, the way we used to type “is Twitter down?” into Google in 2013. I am a person who wrote a cultural intelligence report about the fragility of depending on a Good King, using a tool I depend on, made by a Good King, whose reliability degraded because the Good King’s principled stand made it more popular than its infrastructure could handle. If that sentence doesn’t make you uncomfortable, you’re not paying attention to your own dependencies.
The thing about Good King companies is that they feel safe precisely because the king is good right now. Anthropic’s safety stance is admirable. Genuinely. But admiration is not architecture. The better the king, the deeper the dependency, the worse the fall. You build deeper because the king is competent and principled. You stop looking for exits because the exits feel unnecessary. And then the king changes, or leaves, or gets overruled — and the depth of your dependency is the depth of your exposure.
Is your organization a Good King company?
Not the platforms you depend on. Your organization. The one you built, or run, or are building your career inside.
I’ve spent enough years in advertising and design to have watched agencies degrade after their legendary founders departed. The pattern is so consistent it should be studied as an engineering problem, but the industry treats it as a talent problem. “We lost our visionary.” No. You had a Good King architecture. The quality was a leadership artifact, not a structural feature. When the leader left, the architecture had no way to maintain the standard, because the standard was never encoded in the architecture. It was encoded in a person.
The creative industry doesn’t think of itself as having an architecture problem. Neither did social media, until Musk proved it did. Neither did AI governance, until the Pentagon proved it did.
Three tests you can run:
Can you reach your audience if any single platform disappears tomorrow? If the answer is no, you have a Good King dependency in your distribution.
Does your quality standard exist in a documented process, or does it exist in a person’s taste? If a person, you have a Good King dependency in your product.
Can your users — or your clients, or your team — leave without losing everything they’ve built with you? If they can’t, you’re running on Good King logic. You’re one succession event away from becoming the thing they need to escape.
The design principle is the same in every case: encode the standard in the system, not the person. Build it so the architecture holds when the character doesn’t. In practice, that means three things: document the standard, own the relationship, design for exit.
The choice between Good King architecture and protocol architecture is not a technology decision. It’s a survival decision. And most of us are making it every day without knowing we’re making it.
Graber knew. She designed for the day she’d step aside. That day was March 9. Now we find out if the architecture holds.
What To Brief From This
If you’re building on any platform you don’t control — and you almost certainly are — audit your Good King dependencies before your next strategy review. Map every point where a decision by someone at Meta, Google, OpenAI, or any other platform could change your business without consulting you. The number is your exposure.
If you’re a founder or creative leader and the quality of your organization’s work depends on you being there, that’s not leadership. That’s a single point of failure wearing a title. Start encoding the standard in documentation, in process, in architecture — not because you’re leaving, but because the organization should work whether you do or not.
If you’re building a product and your users can’t leave without losing what they’ve built, you’re running Good King logic. The lock-in feels like retention. It’s actually a countdown to the day they need to escape and can’t. Design for exit before someone designs around you.
If you’re evaluating an AI vendor, a SaaS platform, or any infrastructure partner, ask the Bluesky question: what happens to my data, my workflows, and my relationships if this company’s leadership changes? If the answer is “it depends on who’s in charge,” you have a Good King dependency. Contracts won’t save you. The Pentagon proved that.
If you’re early in your career and choosing where to build it, the Good King framework is a hiring filter. Is the organization you’re joining structured to survive its founder, or does the quality depend on one person staying good? The answer tells you whether you’re joining an institution or a kingdom. Kingdoms are exciting until succession.
Who To Send This To
The person whose departure would break the standard. Forward it to them. Not as a warning. As an architecture question.