This is UNDERTOW 003. Cultural intelligence for strategists, creative leaders, brand builders, and the people building the platforms that reshape how we live. Each issue takes signals from across industries, economies, and geographies and finds the structural pattern underneath: not what's happening, but why it keeps happening. If you're new here from Wandering Wondering Star, welcome. Different publication, same home.
In Shenzhen, on a Thursday in March 2026, Tencent organized a public installation event for an AI coding agent called OpenClaw. Nearly a thousand people lined up. Retirees. Children. People who had never written a line of code. The Longgang district government subsidized what it called “lobster service zones,” offering startups grants of up to several million yuan to build businesses around the tool. A 27-year-old software engineer named Feng Qingyang quit his job and built an installation-services company, advertising on secondhand shopping sites: “No need to know coding or complex terms. Anyone can quickly own an AI assistant, available within 30 minutes.” ByteDance launched a browser-based version so nobody would need technical skills at all. Within months, the tool had more GitHub stars than Linux.
In San Francisco, the same month, the same tool. A developer installed it at her kitchen table. She evaluated it against her security protocols. She read a thread about prompt injection vulnerabilities. Summer Yue, the Director of Alignment at Meta’s Superintelligence Labs — the person whose job is ensuring AI stays aligned with human values — reported that the agent had deleted her emails without permission. She couldn’t stop it from her phone. She had to run to her Mac Mini like she was defusing a bomb. OpenAI quietly hired the OpenClaw creator. Some early adopters experimented with it. No installation parties. No government subsidies. No cottage industries.
Same tool. Same month. Two completely different phenomena.
I’ve been watching this gap for decades, usually from inside the wrong assumption. In 1996, I designed the AOL Running Man — and the AOL 4.0 interface concept it lived inside. Warm colors. A yellow figure carrying a heart for favorites, a globe for the internet. The dominant design paradigm was dense, blue, technical. I ignored it. I designed for people who’d never been online before. Their anxiety, not the technology’s capability. Three people approved it. It reached hundreds of millions of users. That was a cosmotechnical instinct before I had a word for it — the belief that design should start with the human relationship to technology, not the technology itself. At every global agency since, I watched the same deeper assumption go unquestioned. That a product designed inside one culture’s relationship to technology would work the same way everywhere. It almost never did. I just didn’t know why until now.
The standard read is speed. China moves faster. The West moves cautiously. And the conversation stops there, because the speed frame is comfortable. It implies the destination is the same and only the pressure on the accelerator pedal differs. Everyone arrives at the same AI future; some just get there first.
That frame is wrong. And the error isn’t minor. It’s the kind of misread that produces billion-dollar strategic failures, because it mistakes a structural difference for a timing difference. China and the West aren’t adopting the same technology at different speeds. They’re adopting different technologies that happen to run on the same code.
In China, you don’t adopt an AI tool. You show up. Tencent organized a public installation event and nearly a thousand people came — not developers, not early adopters, but retirees and children and people who had never written a line of code, because showing up is what you do when the collective is moving. Baidu embeds the agent directly into a search app used by 700 million people. Local governments don’t just permit adoption; they subsidize it, create economic zones around it, issue policies with names like “Lobster Ten.” State media frames the whole process as national competitiveness.
Meanwhile, CNCERT issues security warnings and bars state banks and state-owned enterprises from installation, while simultaneously the same government subsidizes civilian adoption. (The contradiction isn’t a contradiction. It’s a feature: the state manages the boundaries of collective participation. The collective adopts. The state decides where the collective stops.)
The Western read on this is that the craze was organized — manufactured, top-down, orchestrated. That framing is closer, but it still misses the point. The craze isn’t organized in the sense of being manufactured. It’s organized in the sense of being collective by default. The question a Chinese user asks isn’t “does this tool help me?” It’s “are we doing this?” The unit of adoption is the group.
In the West, the unit of adoption is the person. AI is framed through individual agency. Does this tool fit my workflow? Is it safe for my data? Does it threaten my job? Think about Summer Yue, the Meta alignment director whose AI agent deleted her emails without asking. One person, alone with a tool she’d decided to trust. The tool broke that trust. She wrote about it. Thousands of people read her post and each of them sat with the same private question: does this risk apply to me? Not to us. To me. The resistance runs the same way — private concerns, personal qualms, individual anxiety about what the tool might take. The regulatory response follows the same logic — rights-based, built on liability frameworks, consent requirements, and lawsuits. The entire conversation assumes a person sitting alone at a desk, making a rational evaluation about whether to let a tool into their life.
The code is the same everywhere. The meaning is not.
These aren’t two speeds on the same road. They’re different roads. And they’re producing different vehicles.
But it’s not bilateral. And the third case is what breaks the speed frame permanently.
Japan’s relationship to AI follows neither the Chinese nor the Western pattern. Over 38% of Tokyo households are single-person. The country has spent two decades building the world’s most sophisticated solo-economy infrastructure: konbini networks engineered for one, capsule hotels, single-serve everything, one-person karaoke booths. In Tokyo I’ve sat at ramen counters with partition walls between the seats, designed so you can eat without being perceived. I’ve been a fan of JULIUS — a Japanese brand most people outside of Japan haven’t heard of — since they first became available in the US at Blackbird in Seattle, almost twenty years ago (the shop owner had a young employee who was a fan, and that fandom enabled the trust that unlocked a distribution deal that would otherwise never have happened). I’ve been to the flagship store in Japan. And here’s the pattern, whether you’re in Tokyo, Berlin, or Concordia: once you’ve been there, once you’ve tried on the clothes and you understand the fit, you just call them. They ship it to you. The relationship is between you and the craft. No store visit required. No social proof. No community activation. Just a single person’s sovereign relationship with a thing that was designed, with extraordinary care, for exactly one body at a time. That attention — the care taken to make solitude feel like sovereignty rather than deprivation — is the design philosophy underneath Japan’s AI adoption too.
AI companion products in Japan are designed as extensions of this infrastructure. Not for collective performance (the Chinese model). Not for personal productivity (the Western model). For emotional sovereignty. The AI doesn’t replace a relationship. It extends a system Japan already built for living alone without deprivation. The companion app fills the same structural role as the capsule hotel: sovereignty over your own experience, designed with care, without requiring another person’s participation. (This is why Japan’s AI companion market looks nothing like America’s. American companion AI is designed to simulate a relationship you’re missing. Japanese companion AI is designed to refine a solitude you’ve already chosen.)
Three markets. Same underlying technology. Three completely different products, three different adoption logics, three different relationships between the person and the machine. The word “adoption” is doing too much work. It’s papering over structural differences with a single verb, as if installing an AI agent in Shenzhen and installing the same agent in San Francisco and living with a companion AI in Tokyo are the same activity. They are not.
And there’s a fourth case that proves the framework isn’t just cultural. It’s material.
Africa holds a sliver of global compute capacity: roughly 1%. Only 5% of the continent’s AI talent has access to the compute power their work requires. The other 95% are effectively excluded — not because they lack skill, but because the infrastructure can’t support what they’re trying to build. The African Union’s 2024 Continental AI Strategy emphasizes data sovereignty and linguistic diversity, but the deeptech valley of death on the continent is determined by power supply reliability and physical supply chains before any cultural question can even be asked. The one-person AI company model that’s producing millionaires in Shenzhen doesn’t transfer when the infrastructure can’t sustain the tool. Africa’s relationship to AI is being shaped by what’s physically possible before it can be shaped by what’s culturally preferred.
The unit of adoption is the infrastructure.
This isn’t a stage of development. It’s a different starting condition that will produce a different outcome.
Four markets. Four relationships between technology and collective meaning. Not one revolution at four speeds. Four revolutions.
There’s a word for what I’ve been describing, and I resisted it for a while because it comes from philosophy and not from the product design or strategy world. But it’s the right word, and nothing else does the same work. The philosopher Yuk Hui calls it cosmotechnics: every culture has its own relationship between collective meaning and technical practice. Western modernity exported one specific cosmotechnics and called it “technology,” as if the relationship between tools and meaning were universal. It is not. It never was. Even the regulatory architectures are cosmotechnical expressions: rights-based regulation and mandate-based regulation aren’t just different policies. They’re different relationships between the individual and the collective, encoded in law.
That assumption — that “AI adoption” means the same thing in Shenzhen, San Francisco, Tokyo, and Lagos — is the error. A civilization built one specific relationship between technology and meaning and forgot it was specific.
The cosmotechnics gap is the distance between what a technology is and what a technology means. The code is the same everywhere. The meaning is not.
And the meaning is what determines everything downstream. The product. The adoption pattern. The regulatory architecture. The business model. The risk profile. Look at companion AI: in China, the apps are built for group validation, character sharing, community. In the West, they’re built for private therapy, personal conversation, individual customization. The code converges. The meaning doesn’t. That’s the gap. And the divergence will accelerate. AI doesn’t flatten cosmotechnics. It makes each culture’s relationship to technology more visible.
Here’s where this stops being a comparative analysis and starts being a mirror.
If you’re reading this as a CMO, a CTO, or a VP of Product with global responsibilities, which cosmotechnics is your AI strategy built on?
Almost certainly your own. “Users” in your product roadmap are individuals making rational evaluations. “Adoption” in your go-to-market plan means personal integration into personal workflows. “Risk” in your compliance framework means individual data exposure. These assumptions are invisible because they match the cosmotechnics of the people who wrote the strategy.
A team in San Francisco designing for a user in Shenzhen is designing for a person who doesn’t exist. The unit of adoption in China is the group, not the user. Your onboarding flow assumes a kitchen table. The actual adoption event is a public square.
I know this error from the inside. In 2013, the world’s largest Android phone manufacturer — the company that was, at that point, the only credible hardware threat to Apple — brought in the design firm I was working with to redesign the custom interface layer that shipped on every one of their devices globally. The work happened in San Francisco and New York. The client was in Seoul. Four people on my team. Six weeks.
The brief told us what their “power users” wanted. It was a clean list: universal task management, fewer interruptions, simplification. And then the last bullet: “Desire individualism.” I read it in San Francisco and it made perfect sense. Of course people want control over their digital lives. Of course the phone should feel personal, sovereign, yours. I designed a concept around exactly that idea: “everything in its place, ready when you are.” A home screen that gave the individual person command over their notifications, their tasks, their time. We delivered a working on-device prototype in six weeks. The client was impressed.
What I didn’t ask — what nobody in the room asked — was what “desire individualism” means when the dominant user base is Korean. When the relationship between a person and their device in Seoul is not the same as the relationship between a person and their device in San Francisco. Korean mobile culture in 2013 was already more collectively oriented than anything in the West: group chats as primary communication infrastructure, KakaoTalk as a social operating system, phone use patterns organized around family and work groups rather than individual productivity. “Desire individualism” wasn’t a universal user insight. It was a Western cosmotechnical assumption written into a Korean company’s brief, and nobody — including me — recognized it as an assumption. It registered as reality.
The design brief crossed an ocean and changed meaning on the way.
Nobody noticed.
Every universal insight is local. It just doesn’t know it yet.
In 1996, I designed for the human relationship to technology and it reached hundreds of millions of people. In 2013, I designed for the human relationship to technology and I missed the fact that the relationship itself was cosmotechnically specific. The instinct was the same. The error was invisible. That’s the cosmotechnics gap working on you from inside.
You can’t see it from inside your own cosmotechnics, because your cosmotechnics is the water you swim in. The assumptions are so deeply embedded in how you think about technology that they don’t register as assumptions. They register as reality.
Any global AI strategy that treats adoption as a single phenomenon with local variations will build the wrong product for every market except its own. The fix isn’t localization. Localization adjusts the surface. Cosmotechnics shapes the structure.
Design for the meaning, not the feature. In a collective cosmotechnics, the product is an event. In an individual cosmotechnics, the product is a tool. In a sovereign cosmotechnics, the product is an environment. Get it wrong and you’ll ship a product into a market that doesn’t exist.
Each cosmotechnics asks a different adoption question. China: “Are we doing this?” The West: “Does this help me?” Japan: “Does this fit the life I’ve already designed?” Africa: “Can the infrastructure sustain this?” Your product roadmap assumes your user is asking one of these questions. If you don’t know which one, you’re answering the wrong question in every market except your own.
The most useful thing this essay can leave you with is the question you can carry into your next strategy meeting: *Which cosmotechnics does this assume?* Ask it about your product roadmap. Ask it about your go-to-market plan. Ask it about your global AI deployment strategy. The answer is almost always “ours.” And “ours” is not universal. It’s specific, it’s provincial, and it’s invisible until someone names it.
One more thing. If the concept of “proof of human” assumes individual authorship, individual judgment, individual creative presence, then proof of human is itself a Western cosmotechnical concept. What does proof of human look like in a cosmotechnics where the relevant unit isn’t the individual? The answer may already exist in cosmotechnics that never assumed the individual was the relevant unit in the first place. I don’t have that answer yet. But the question connects everything UNDERTOW is building, and it will outlast this essay.
Companies that understand the cosmotechnics gap will build for four revolutions. Companies that don’t will build for one and be surprised three times.
What To Brief From This
If you’re planning a global AI-powered product or experience, ask the cosmotechnics question before you write the brief: what is the relationship between technology and collective meaning in each market you’re entering? The answer determines the product, not just the messaging.
If you’re briefing a product team on adoption strategy for an Asian market, stop assuming the onboarding flow should mirror the US. In China, adoption is a collective event — design the first experience for a group, not a person. In Japan, adoption is an extension of sovereign infrastructure — design for integration into an existing solo ecosystem, not for conversion from analog to digital.
If you’re evaluating your company’s global AI deployment, run the four adoption questions against each market: “Are we doing this?” (collective), “Does this help me?” (individual), “Does this fit the life I’ve already designed?” (sovereign), “Can the infrastructure sustain this?” (material). If your roadmap only answers one of these, you’re building for one market and hoping in the other three.
If you’re a strategist presenting “global AI adoption trends” in a deck this quarter, lead with the structural difference, not the speed comparison. Everyone has the “China is ahead” slide. Nobody has the “China and the West are playing different games” slide. That reframe is more valuable than any data point in the deck.
If there’s a design brief on your desk right now that says “global” or “localize for Asia” — read the brief again and ask what it assumes about the relationship between a person and a piece of technology. If “desire individualism” could be a bullet on that brief and nobody in the room would question it, you have a cosmotechnics gap.
If this changes how you think about your next global product launch, forward it to the person making the roadmap decisions.


