UNDERTOW 010: Translation Capture
How frontier AI labs are running streetwear’s absorption arc in reverse — and why the labs are mid-arc on the same trajectory
UNDERTOW is an ongoing analysis of cultural production at the structural level — naming the conditions that produce the moves, not just the moves themselves. The concepts travel when readers use them in rooms I'll never enter. Each piece is honest to what I'm seeing right now.
It’s morning. I’ve done the things — supplements, coffee, bouldering at the gym, food, the small disciplines that are supposed to add up to a person who can handle the day. The shoulder is still talking back when I pull on it. I am the sharpest version of myself, and I still cannot keep up.
My desk is technically just an extra-deep shelf. Two 5K monitors don’t fit, so the second one is mounted on what any child would describe as a robot’s arm. Fifteen macOS desktops, each one a different project — UNDERTOW, the scanning project, client work, my own writing, the IFS workbook, climbing logs, financial spreadsheets, the diet plan from the functional nutritionist I haven’t opened in weeks. Thirty-ish browser windows. Stratechery. The Diff. r/LocalLLaMA. Twitter. Claude chats. A Substack draft I’ve been afraid to open since Tuesday.
The metaphor my body produces for what the screens feel like is a New York subway car, desks lining both sides stacked with folders and papers, covering the windows. I am inside the car. I can see all of it. I can’t possibly read all of it. I have been in this car for about a year. The car is on an express track and rarely stops.
I love this. The chase is the dopamine. I built tools to do this. And I cannot keep up.
The game I play with this room is Sherlock backwards. Find an intriguing artifact, run a trace, try to figure out what it means, what shifted in the world during the seven hours I was asleep. This morning’s catch is a Heron Preston interview in Jing Daily that buried itself three paragraphs deep before anyone caught it. Streetwear didn’t die, he says. Streetwear got absorbed.
“Streetwear didn’t die, he says. Streetwear got absorbed.”
He’s diagnosing the genre he helped build. Supreme drops people lined up around blocks for. Stüssy and Bape on the same wall as the Carhartt double-knee. Nike SB at the moment when sneaker culture stopped being a culture and became a category at Foot Locker. Behind those storefronts: Been Trill, the DJ-and-graphics collective formed around 2010 by Virgil Abloh, Heron Preston, Matthew Williams, and Justin Saunders, who scattered after 2013 to build separate empires. Most of the cohort went to the luxury houses (Abloh to Off-White and Louis Vuitton, Williams to Givenchy). Saunders went the other direction with JJJJound, the only one who refused absorption outright. Preston went a third way: his brand was absorbed into the New Guards Group holding company, then bought back in July 2025, a buyback marked by a “FREE AT LAST” T-shirt.
Preston was eulogizing the scene he came up in. He went further:
“Streetwear isn’t dead, it’s just no longer a subculture, globally consumed at a mass level. It got absorbed into the broader language of fashion. It can’t function in the same disruptive way anymore. What made it powerful was its connection to real communities and moments. So what’s the next authentic expression of culture? That’s where the energy will come from.”
What’s the next authentic expression of culture?
Two days later the question was everywhere. Fashion editors at Highsnobiety and Hypebeast were chasing answers. Brand strategists were posting takes on LinkedIn. Cultural-analytics shops were writing decks for clients. Trend forecasters were filing reports. They were all answering the question as if it had one answer — as if Preston had pointed at a door and they just needed to figure out what was behind it.
They were looking for a door. There wasn’t one. The energy that used to pool in streetwear hadn’t gone somewhere. It had gone... everywhere. Fragmented across continents and formats the way the internet fragmented how we read the news, into a dozen scenes that don’t share a center.
It went to amapiano in Joburg, where the log drum became the diagnostic — you can hear a track and pin it on the producer-genealogy map blindfolded. It went to São Paulo’s favelas, where funk bruxaria producers shaped tracks for 40-second attention spans and refused to apologize for it. It went to r/LocalLLaMA, the nearly 700,000-member subreddit running open-weight models you can download and run on your own machine, structurally hostile to the labs the rest of this essay is about. It went to Saudi Arabia’s underground music scene, the URX collective in Riyadh putting on warehouse parties inside Vision 2030’s massive entertainment-sector buildout, scenes that look subcultural and are subcultural and also can’t say certain things on the record. None of these scenes is the next streetwear. Together, they’re what comes after the question.
Among them, one set of subcultures is uniquely legible — because it publishes its own balance sheets.
The frontier AI labs.
The labs have aesthetics — Anthropic’s research-paper register doesn’t sound anything like xAI’s swagger and you can tell which lab a tweet came from before checking the handle. They have hierarchies — you know who Karpathy is and you know who Sutskever is and the difference matters. They have artifacts — leaked memos, hedged research papers, Slack messages that reach The Verge by lunch. They disagree with the mainstream loudly enough to fill regulatory hearings. And they produce signifiers faster than the mainstream can absorb them.
That’s a subculture. The objection that subcultures resist from below while the labs operate from above is a power-map objection — subculture is a behavior, not a position. The labs are doing the behavior at unprecedented scale.
Subculture is a behavior, not a position.
But they’re a specific kind of subculture, and the kind matters. They’re funded.
Most subcultures start somewhere small and self-organizing — kids in basements, scenes in clubs, communities making things for each other before anyone outside notices. The money arrives later, if it arrives at all, and the subculture is already itself by the time it shows up. The frontier labs ran the sequence backward. The money came first. The subculture got built inside it.
Call them Sponsored Subcultures. Real subcultures, doing real work, inside money that wants something from them. What the money wants draws the line around what the subculture can say out loud.
Saudi underground music produces real music inside Vision 2030’s massive entertainment-sector buildout. Heron Preston’s UNIFORM produced real workwear inside the New Guards Group holding company that owned his name until he bought it back. The frontier labs produce real subcultural artifacts inside venture-capital architectures that set the boundaries of what they’re allowed to disagree with on the record.
The constraint isn’t dishonesty. The constraint is that the register — the tone, the vocabulary, what a subculture can say in public and what it can’t — selects what can be said.
Three labs make the case. Anthropic carries most of the weight. OpenAI is the contradiction case mid-stride. Meta is the failure case where ads-funded money tried to buy what talent wouldn’t accept.
On Monday April 13, 2026, OpenAI’s Chief Revenue Officer Denise Dresser circulated a four-page internal memo accusing Anthropic of inflating its annualized run-rate by counting compute-revenue gross instead of net. The memo leaked to The Verge within 24 hours.
The accounting dispute is real, and the stakes are billions of dollars. Anthropic’s compute costs flow through deals with AWS and Google Cloud. When a customer runs Claude on AWS, AWS bills the customer the full amount and pays Anthropic a share. The fight is over which side gets to count the original dollar in its revenue. Counted Anthropic’s way, Claude usage on AWS is Anthropic revenue. Counted OpenAI’s way, it’s AWS revenue with Anthropic taking a cut. Same dollar, two stories. Anthropic’s $30B run-rate becomes roughly $22B under OpenAI’s preferred convention — the same company reading as a $30B story to investors or a $22B story to auditors, depending on which side of the contract you count from. Companies leak memos over numbers like this because companies live and die on numbers like this.
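The gross-versus-net mechanics reduce to a few lines of arithmetic. The split below is illustrative only — the actual direct-versus-cloud revenue mix and the revenue-share rate are not public; these numbers are chosen purely so the $30B/$22B gap at the center of the memo fight falls out.

```python
# Illustrative only: the real direct/cloud split and revenue-share rate
# are not public. Numbers chosen so the gross/net gap matches the
# roughly $30B vs $22B figures in the dispute.
direct_revenue_b = 10.0   # hypothetical: usage billed by Anthropic directly
cloud_gross_b = 20.0      # hypothetical: customer bills routed via AWS/GCP
anthropic_share = 0.60    # hypothetical revenue-share rate on cloud channels

# Anthropic's convention: count the full customer bill on cloud channels.
gross_run_rate = direct_revenue_b + cloud_gross_b

# OpenAI's preferred convention: count only Anthropic's cut of those bills.
net_run_rate = direct_revenue_b + anthropic_share * cloud_gross_b

print(f"gross: ${gross_run_rate:.0f}B, net: ${net_run_rate:.0f}B")
```

The only variable separating the two stories is whether the cloud partner’s slice of the bill counts as Anthropic revenue or as AWS revenue.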
The dispute isn’t really about accounting. It’s about posture.
The memo lands at a moment when OpenAI’s own run-rate, calculated under the same conventions Anthropic is using, looks structurally worse. OpenAI is projecting a $14B loss in 2026 against rising compute commitments — over $600B in committed infrastructure spend, with positive free cash flow not projected until 2029 at earliest. Anthropic operates at roughly 40% gross margin on inference — meaning the money customers pay to run Claude covers compute costs with significant margin, even as the company reinvests aggressively in training the next model. The company is not yet profitable. It projects positive free cash flow in 2027. Both labs are in the same race. Only one is in the position to publish the gap.
The memo is the artifact. The artifact is the move.
Now read the document the way an accountant would.
You sit with it. You ask why it was written. You ask why it leaked, and on what timeline, and to which publication. You ask who would feel reassured by reading it. You ask whose contract is up for renewal in the quarter after it lands. The document is making an argument the document does not contain, and the argument the document does not contain is the brand. The audience for the leak isn’t The Verge’s readers. The audience is OpenAI’s enterprise customers — the CIOs and CTOs signing two-year contracts — who needed to be reminded that Anthropic’s economics aren’t as durable as Anthropic’s brand register suggests. The memo is a brand asset. It just doesn’t say “brand” in the metadata.
Call it the Accountant’s Reading. Four kinds of artifacts: the memo, the hire, the Slack message, the accounting convention. Four operating decisions written by people who don’t think of themselves as making brand decisions. They think of themselves as writing memos, signing offer letters, posting in #general, picking which line of GAAP applies. The memo is the brand. The hire is the brand. The Slack message is the brand. The accounting convention is the brand.
You can read a lab off its operating artifacts more accurately than off its marketing, because the operating artifacts are written by the people who actually make the decisions. Marketing is written by the people who arrive after the decisions are already made.
Anthropic’s strongest move is the donation. The other artifacts — the operating stack, the papers, the philosopher, the Senate testimony — are evidence that the donation move is not an accident but a posture.
Start with the donation. Anthropic created MCP, the Model Context Protocol, and donated it to the Agentic AI Foundation — a Linux Foundation directed fund — on December 9, 2025. MCP is the standard that lets AI models connect to data sources and tools — the protocol that allows Claude to open a Figma file, work inside Adobe Creative Suite, or pull from your Google Drive without anyone having to custom-build the connection. The economic story is that MCP is a standards-setting move, the way TCP/IP was a standards-setting move — set the rails, then run on them. The brand story is identical and stronger. Anthropic is the company most fluent in the standard because Anthropic wrote the standard and gave it away. The donation does not weaken Anthropic. It strengthens it.
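For readers who haven’t touched the protocol: MCP rides on JSON-RPC 2.0, and a tool invocation is just a message naming a tool and its arguments. The sketch below shows the rough shape of a `tools/call` request — a simplification of the spec (a real client first negotiates capabilities via `initialize` and discovers tools via `tools/list`), and `read_file` here is a hypothetical tool name, not part of the standard.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the rough shape MCP uses for
    tool invocation. Simplified sketch: real clients handshake with
    `initialize` and enumerate available tools via `tools/list` first."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name; the point is the message shape, not the tool.
msg = make_tool_call(1, "read_file", {"path": "design/spec.fig"})
```

The donation means every lab and every tool vendor now speaks this message shape — and Anthropic wrote the grammar.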
Apply the Accountant’s Reading to the donation. Why was it made? Because Anthropic could not become the standards-setter without first relinquishing ownership of the standard. Why was it made when it was? Because the agentic-tool layer was solidifying and the donation locked Anthropic’s authorship into the public record before any rival could fork or replace it. Who would feel reassured by reading it? Enterprise CIOs weighing multi-year inference budgets — the people who needed to know they weren’t committing to a lab whose interoperability layer might disappear behind a paywall in eighteen months. Whose contract is the donation about? The next one.
The move is the protocol-layer version of Heron Preston’s UNIFORM. In 2016, Preston collaborated with the New York City Department of Sanitation: upcycled DSNY uniforms reworked into a streetwear collection, debuted at the Spring Street Salt Shed, proceeds donated to the Foundation for New York’s Strongest. The collection didn’t expose hidden labor. Sanitation workers were always visible to themselves and to the city. What the collection did was translate existing labor into a register fashion could hear — and Preston became the designer most fluent in the translation.
MCP is the same shape. Developers connecting AI to data sources were always visible to themselves. The donation translated existing infrastructure work into a register enterprises could trust. Anthropic became the lab most fluent in the translation it gave away.
The translator gets to write the dictionary.
Call it Translation Capture. You take labor that already exists. You translate it into a register your funders can read. You become the entity most fluent in the standard you just gave away.
The donation is real. The capture is real. That’s the part most existing writing about Anthropic misses — it tries to choose between generous and strategic, when the move only works because it’s both.
Now read the surrounding artifacts. If the donation were an isolated move, you could call it lucky. It isn’t. The other operating decisions are doing the same work in different registers.
On Thursday April 23, 2026, Cat Wu, Anthropic’s Head of Product for Claude Code and Cowork, sat for an interview in Lenny Rachitsky’s newsletter, the most-read PM publication in tech. The interview ran 6,800 words and described Anthropic’s internal stack as Claude Code, Cowork (the company’s own agentic-work platform), and Slack. There is a particular pleasure in reading a product leader describe her company’s internal tooling like it’s furniture — like the thing you sit on rather than the thing you brand. The lab absorbs internally what most companies rent. It’s a four-person product organization at one of the most valuable private AI companies in the world. Two of the three tools they use to build the product are made in-house. They’ve refused the rented infrastructure everyone else uses. The choice doesn’t appear in any marketing material. The choice is the marketing material, once you know how to read it.
Now read the papers. On April 2, 2026, a team at Anthropic published “Emotion Concepts and their Function in a Large Language Model.” The finding, in plain language: when you measure what’s happening inside the model while it processes emotional content, the internal patterns track human emotion ratings closely — correlated at 0.81 on the primary axis. That’s strong. Strong enough that you can’t dismiss it. Then the paper hedges, carefully: the math looking similar doesn’t mean the model is actually feeling anything. The hedge is the register. The paper is doing two things at once. It’s establishing that Anthropic’s models show emotion-shaped internal structure precise enough that you could build product features on top of it. And it’s pre-empting the headline a journalist would have written if the hedge weren’t there — Anthropic Says Its AI Has Feelings. The paper performs being interested in the question without committing to the answer. That’s not evasion. That’s brand discipline.
Then there’s the philosopher. Amanda Askell, lead alignment philosopher and primary author of Claude’s 30,000+ word constitution — published publicly under a license that lets any other lab copy it — has been described by colleagues as the person who “supervises what she calls Claude’s soul.” Sit with that for a second. The brand voice of Claude is being written, on an ongoing basis, by a philosopher. The job description doesn’t appear at OpenAI. It doesn’t appear at Meta. It barely exists outside Anthropic. The brand function at Anthropic doesn’t originate Claude’s voice. It operates around what the philosopher already wrote — probably the most accountability-dense brand role in the industry, one philosopher carrying the constitution for a character used by tens of millions of people, ongoing. Tens of millions of people are having an ongoing conversation with a character one person decided how to build.
And then — because the Accountant’s Reading is honest, not flattering — read Dario Amodei’s Senate testimony on AI risk. The testimony is sincere; it is also a brand asset, and a lab whose CEO testifies to Congress about the dangers of his own product is producing safety-credible-CEO as a position with real commercial value to enterprise buyers.
This is where the move gets harder to write and harder to read.
Anthropic is doing real subcultural production. The donation, the operating-stack rebellion, the philosopher-as-brand-voice, the carefully hedged papers — every criterion above, met. The energy is real. The artifacts are real. The signifiers travel.
And Anthropic is also a Sponsored Subculture.
Translation Capture is what Sponsored Subcultures do at scale. A regular subculture absorbs influence from its environment and metabolizes it into something the scene can use — the way streetwear absorbed Helly Hansen from Norwegian fishing boats into UK grime, and The North Face from mountain expeditions into 90s hip-hop, on its own terms, for its own audience. A Sponsored Subculture does that and runs the absorption in reverse — translating its own work back into the register its funders need to read. The donation move is the cleanest example. So is the philosopher’s constitution. So is Cat Wu’s furniture-talk about the internal stack. Every artifact does double duty.
The funding architecture sets which translations are possible and which aren’t. Multiple billions raised across multiple rounds, with multi-billion-dollar cloud commitments from Google and Amazon — each a cloud partner and a shareholder — draw the boundary of what the lab is allowed to disagree with on the record. Anthropic can disagree with OpenAI on safety. Anthropic can disagree with Meta on regulatory posture. Anthropic will, on the record, sue the United States government over autonomous-weapons restrictions and lose hundreds of millions in defense contracts to keep them.
Anthropic does not, in any public artifact I can find, disagree with Google or Amazon on a matter of substance.
The shape of the boundary is which side of the contract is paying.
The translator who writes the dictionary also defines what cannot be translated.
The donation is the dictionary. The dictionary is also the wall.
OpenAI is the lab in the middle of becoming something else, and you can read the transition off its accounting.
OpenAI’s enterprise share has gone from roughly 20% in 2024 to 40% in early 2026, with projected parity by year-end. ChatGPT — the thing even your mother has heard of — is still the cultural artifact, but the money is going somewhere else: API revenue, enterprise contracts, funded against a projected $14B loss and rising compute commitments. The brand register has not caught up. ChatGPT’s marketing is calibrated for the consumer who wrote a thank-you email with it — the enterprise contract is signed by the CIO who needs to know whether OpenAI will still exist in 18 months.
The contradictions are visible because OpenAI publishes at velocity. On Wednesday January 14, 2026, Mira Murati announced that Thinking Machines was firing Barret Zoph for unethical conduct — sharing confidential information with a competitor. Fifty-eight minutes later, OpenAI announced they had hired him. Fidji Simo’s public post on X confirmed the hire was “in the works for several weeks.” Two posture decisions inside two hours. The first posture: we hold our researchers accountable. The second posture: we hire fast, we hire aggressively, we don’t let HR slow us down. Both postures are real. They contradict each other.
Mark Chen’s June 28, 2025 Slack message — the “visceral feeling, as if someone has broken into our home and stolen something” message that Wired published the next day — is consumer language at an organization that no longer makes most of its money from consumers. There is a particular kind of disorientation in writing a Slack message you don’t yet know is going to be in Wired tomorrow. You are writing to your colleagues. The colleagues are reading it. Someone else is also reading it. The message gets one register. The audience gets three. The CIO making the procurement decision doesn’t want to read about OpenAI’s home. The CIO wants to read about OpenAI’s SLA.
From inside the building it doesn’t look like a contradiction. It looks like Tuesday. Mark Chen is upset, so he writes the Slack. Simo needs to explain Zoph internally, so she writes the memo. The marketing team is hitting consumer KPIs, so the consumer ad ships. Nobody at OpenAI has a job called make sure the consumer voice and the enterprise voice don’t pull against each other in public, and the people who do hold adjacent jobs find the artifacts fait accompli — leaking before the function had a chance to shape the register they leaked in. The drift only exists when somebody outside the building stacks the artifacts up. That somebody is now you.
From inside the building it doesn’t look like a contradiction. It looks like Tuesday.
The contradiction isn’t a sign of organizational confusion. It’s a sign of capital pressure — both registers exist because the company would die if either one stopped being maintained. The consumer voice raises the next round. The enterprise voice collects the receivables that pay for the compute. The next Chief Communications Officer at OpenAI doesn’t inherit a comms function that needs re-staffing. They inherit two registers running in parallel and the political authority to retire one. By April 2026 the contradiction has hardened into an observable register migration — OpenAI’s external comms now reads scrappy where it used to read prophetic, an attempt to escape the contradiction by changing voices.
The diagnosis is a self-portrait.
While dictating this paragraph into Superwhisper — the transcription tool I use to think out loud at the speed of speech — Superwhisper failed. Eight minutes of stream-of-consciousness about velocity overload, lost to velocity overload. Are you serious right now? I had to start over. The tool failed in the middle of the sentence about the tool failing. The essay is the artifact. The artifact is the essay. I am inside the thing I am describing and it is describing me back.
I am inside the thing I am describing and it is describing me back.
The labs would not say this about themselves. A funk bruxaria producer named DJ K, working out of Diadema in São Paulo’s outskirts, said it for them:
“When you analyze your YouTube data, people usually watch 40 seconds of the video, you know? Sometimes, they only listen to a minute of the song. So, in that one minute, in those 40 seconds, I try to make everything shine.”
The labs are doing the same thing. They just don’t get to call it that. Which means the velocity isn’t a side-effect of the work — it’s the work. You can’t keep up because you’re not supposed to.
Here’s the move you have to see for the rest of the essay to make sense. The labs aren’t publishing artifacts so the discourse can absorb them. The publishing is the absorption. Every memo, hire, paper, and Slack message is engineered for the 40 seconds during which the discourse is paying attention. You don’t read the artifacts; the artifacts read you, briefly, and then they’re gone, and the next one is already landing.
Which means every artifact you can’t track fast enough is also a tick toward the moment these labs stop being subcultural at all. Streetwear became readable to mainstream culture and lost subcultural status simultaneously. The labs are mid-arc on the same trajectory. The same velocity that makes them legible — the memos that leak, the interviews that land, the hires announced in 58-minute windows, the Slack messages that reach The Verge before lunch — is the velocity that ends their subcultural status. Legibility is absorption with extra steps.
Which means the labs are running on a clock they can’t see. The same velocity that makes them legible is the velocity that runs the clock down. What ends when the clock runs out is the part of the lab that was making the work feel like a movement instead of a vendor relationship.
The 58-minute Zoph rehire is a track engineered to make everything shine in the window during which the discourse is paying attention. Producers in São Paulo’s favelas and product leads in San Francisco are operating in the same attention regime, which is the only attention regime there is. The difference is that the producers know it.
Meta is the failure case.
Meta Superintelligence Labs spent more money than any frontier lab in the field. The Tulloch package — reported as up to $1.5B over six years for a single ML researcher, the steepest individual compensation ever attempted in the technology industry; Meta disputed the figure but did not disclose the actual one — was the headline. Meta has hired aggressively, paid extravagantly, and produced Muse Spark — the lab’s first new model since hiring Wang, released April 8, 2026, nine months after the lab’s formation. Reception was muted relative to the spend. Two weeks after launch, Meta announced 8,000 layoffs to fund the next phase of the AI pivot, with a second wave planned for the second half of the year.
The brand register of Meta Superintelligence Labs is absent.
Not bad. Absent. There is no Cat Wu interview. There is no donated standard. There is no philosopher-as-brand-voice. There is no lab director publishing carefully hedged emotion-concepts papers in a research feed. There is Yann LeCun’s January 2026 Financial Times interview, where he confirmed that Meta’s Llama 4 benchmark scores “were fudged a little bit” — the chief AI scientist of the company conceding, in his own words, that the public performance numbers had been adjusted. There is the same interview, where he described his refusal to be directed by Wang with “you don’t tell a researcher like me what to do.” There is the structural reason: Alexandr Wang, 28, became Meta’s Chief AI Officer post-Scale acquisition and LeCun’s organizational superior. (LeCun left to start his own world-models lab, AMI, at a $3B valuation within months.) There is the parade of $200M+ packages signed and announced. There is no register to read because the lab has not produced operating artifacts at the velocity the other two labs do.
The Accountant’s Reading explains why. Meta’s revenue is ads. Ads-funded money has no price signal that forces the lab to produce a brand register. Anthropic’s investors and customers care what Anthropic stands for, because the inference contracts are long-term and the differentiation is the register. OpenAI’s transition forces the contradiction, because the consumer brand and the enterprise brand are pulling against each other and the artifacts are the visible record of the pull. Meta has neither pressure. Meta has Instagram revenue that pays for the lab whether the lab produces a register or not. The talent figured this out. LeCun figured it out. The Tulloch package figured it out the moment it was attempted — $1.5B is what you offer when you cannot offer anything else.
You cannot buy a register. You can only produce one — and producing one requires constraint.
Meta has cash and no constraint. That’s the lab where Muse Spark ships nine months after formation, 8,000 people are laid off two weeks later, and the result still doesn’t have a register a brand strategist could describe in a sentence.
If Meta tried to buy back its name the way Preston bought his, there is no name to buy. The lab is unincorporated in the cultural sense. There is no FREE AT LAST T-shirt waiting in the wings, because the lab never became enough of itself to be absorbed in the first place.
There is one move left. The donation move. Meta has Llama, the open-weight model anyone can download and run on their own machine — Meta’s MCP, in some structural sense. But r/LocalLLaMA now scores at parity with Anthropic on every dimension of subcultural authenticity. The donation that should have produced a register produced instead a subculture that is structurally hostile to the lab that donated it. The subreddit reads Llama and does not credit Meta as the lab that built it; its members consider Llama theirs. Translation Capture requires a translator. Meta donated the dictionary without learning to read it.
DeepMind sharpens this. Same cash-cushion position as Wang at Meta — Google’s ad revenue means Hassabis doesn’t need inference revenue to justify training spend — but DeepMind has produced register anyway, on the strength of Hassabis’s scientific-institution discipline. Discipline can substitute for constraint. Meta has neither.
r/LocalLLaMA is the structural alternative — open-weight models on hardware you own, completely outside the venture-capital growth model that produces the Sponsored Subculture condition. They’re not at the frontier yet. That’s the trade. The alternative exists.
The labs are doing this to themselves. They’re also doing it to the people inside them. The researcher whose Slack message turns into a brand asset before she’s finished writing it. The philosopher whose system prompt becomes a tens-of-millions-user product whether she wanted that responsibility or not. The CRO whose memo is leaking before her draft is saved. The price of being inside a subculture that publishes at this velocity is that your work artifacts become other people’s evidence faster than you can finish making them. The price of the subculture ending is that the work stops being theirs at all — it becomes the company’s, and the company becomes its funders’, and the work that felt like a movement starts to feel like a deliverable.
That’s not new. Every subculture in peak production runs on the same condition. It’s worth naming because most coverage of the labs treats them as institutions and forgets that institutions are made of people in rooms.
Choose a frontier lab and you choose the register your organization will be speaking in eighteen months — whether you’re an agency briefing a campaign, a brand selecting an enterprise AI partner, a product team building on top of an API, or a strategist trying to figure out which lab’s vocabulary to absorb. The model your company adopts is also the philosopher who wrote the system prompt, also the operating stack, also the donated standard, also the CEO testifying to Congress, also the absence of a CEO testifying to Congress. The brand strategy decision was made before anyone called it a brand strategy decision. It was made by a CFO setting a revenue mix, a researcher accepting a hire, a philosopher writing a system prompt, a Chief Revenue Officer leaking a memo. The aggregate is the register. The register is what your organization will absorb. Translation Capture, accumulated and ongoing.
This is the part that costs money to be wrong about.
The labs will reach their absorption moment on a timeline I’m not going to pretend to know. AI is moving fast enough that the cycle that took streetwear fifteen years could compress to five, or two, or fold into something else entirely if world models or robotics or whatever comes next reorders the field. What I know is the shape of the arc, not the schedule. The energy will move. r/LocalLLaMA will have either won — open-weight models running on owned hardware as the diagnostic — or it will have been absorbed into the labs’ next product layer, which is the same outcome Preston diagnosed for the skate shop that became the Foot Locker SKU. Amapiano will have a Beyoncé feature or it will already have moved on. Funk bruxaria’s DJ K will have a major-label deal or he will have refused one and named the refusal. Saudi underground will have its co-option crisis or its disappearance. The labs will have their absorption moment, and the artifact that announces it will be a memo none of us are reading carefully enough yet. Probably one of the labs will buy its name back the way Preston bought his back from New Guards Group in July 2025 — and put on a FREE AT LAST T-shirt the way he did — and the discourse will catch the buyback two days late. Probably one of them won’t get the chance.
The energy that left streetwear didn’t go to one place. It went to many. To amapiano in Joburg, to funk bruxaria in São Paulo’s favelas, to r/LocalLLaMA on Reddit, to the URX warehouse parties in Riyadh, to scenes that don’t yet have names because the people inside them are too busy making them to label them. It also went to three frontier labs in San Francisco who are publishing at the velocity of a subculture that doesn’t yet know it’s one. By the time anyone reads this with hindsight, there will be three new labs that didn’t exist when this essay was written, four old ones that closed, and a subculture in Lagos or Dhaka or somewhere I am not yet reading from that will be doing what the labs are doing now. The balance sheet is the brand book. The accountant gets there first.
Tomorrow morning I will be at the desk again. The artifact I haven’t read yet will already be sitting in the third paragraph of an interview I haven’t found.
I have been one of the people inside the room where this gets made. The accountant gets there first because the accountant is me.


