UNDERTOW 008: Compliance Curiosity
Why the best advice for surviving AI is a form of obedience.
UNDERTOW is an infinite report. A limitless USB stick for cultural intelligence. A living container that doesn’t finish, only accumulates, and keeps... growing. The concepts travel when readers use them in rooms I’ll never enter. The only requirement is that each piece is honest to what I’m seeing right now.
Saturday night at Coachella, Justin Bieber walked onto a stage built for spectacle and sat on a stool with a MacBook. Red hoodie from his own clothing line. Jean shorts. Boots. No band. No backup dancers. No choreography. No production design changes. For fifty minutes, he performed new material alone on a stage designed for a hundred people to occupy. Then he opened a browser, pulled up his old YouTube videos, and started singing along to clips of himself at thirteen. He scrolled through memes. He asked the livestream chat for requests. The WiFi appeared to buffer.
Two hundred thousand people watched a man browse the internet.
The night before, Sabrina Carpenter had delivered a maximally produced headline set. Every light cue rehearsed. Every transition choreographed. Every moment engineered for maximum impact. The discourse split immediately: Carpenter performed competence. Bieber performed vulnerability. One was a show. The other was an event. The internet argued about which one mattered more.
Here’s what the internet missed.
A concert production expert confirmed to The Tab that the YouTube browsing was almost certainly pre-programmed and rehearsed. The buffering was staged. The laptop was a prop. Every scroll, every click, every video pull was mapped in show-control systems. The spontaneity was a performance of spontaneity. The vulnerability was produced.
And there is a detail underneath that detail that hasn’t been written about yet. YouTube streams top out at 256 kbps AAC. Coachella’s festival PA system is designed for uncompressed or high-bitrate audio. No sound engineer would route a YouTube stream through that rig. It would sound thin, compressed, noticeably worse than every other moment of the set. The audio the crowd heard was almost certainly studio-quality audio triggered from preloaded media servers, synced to the YouTube visual on screen. The YouTube window was the visual prop. The sound came from somewhere else entirely.
The eyes got the proof of human. The ears got the production.
The feeling of rawness was manufactured across two sensory channels simultaneously. The surface felt unrehearsed. The infrastructure underneath was engineered to the millisecond. And it worked. After the set, Bieber hit No. 1 on Spotify’s Global Top Artist chart. His catalog surpassed 77 million streams in a single day, his biggest streaming day of the year, with 21 songs landing in Spotify’s Global Top 200. The crowd felt something real. The market confirmed it.
Both sets were rehearsed. Both were produced. The only difference was what was being performed. Carpenter performed competence. Bieber performed the absence of it. And the audience couldn’t tell the difference between authentic vulnerability and the production of authentic vulnerability, because both produced the same feeling.
Which one developed... curiosity?
The Prescription
“Develop curiosity” is going to become the dominant advice for surviving AI disruption. You’ll start hearing it at conferences. You’ll read it in LinkedIn posts from people whose job titles contain the word “transformation.” You’ll see it in ‘leaked’ internal memos from leadership teams who needed something to say and said this. The advice is always the same: try every new AI tool, model, and feature. Keep up with all the AI launches. Stay open. Experiment. The future belongs to the curious!
What this advice produces, when observed from outside, is a specific set of behaviors. Subscribing to newsletters about newsletters about AI. Attending webinars titled “How to Stay Relevant.” Trying the sexy new image generator the week it launches. Trying the next one the week after that. I’ve been in this industry long enough to watch three waves of disruption advice prescribe the same thing: adapt faster, learn more, stay open. The tools change. The advice never does.
The word for this behavior is not curiosity. The word is responsiveness. Responsiveness to the system producing the disruption, performed as personal agency.
Call it what it is: Compliance Curiosity. The version of curiosity where you open ChatGPT immediately — it’s muscle memory now, like swiping — ask it to explain something you’d normally spend an afternoon figuring out, get a clean answer in twelve seconds, and tell your team you’ve been “experimenting with AI.” You weren’t experimenting. You were following instructions.
You learned how to use the tool. You didn’t learn what it replaced — the afternoon of wrong turns, the twelve tabs open, the moment where you realized the question wasn’t the right question. That’s where the thinking lived. The twelve seconds felt so good you forgot it was gone.
Curiosity that never asks “should we?” isn’t curiosity. It’s enthusiasm for what’s happening to you.
The Scaffold
The mechanism that makes Compliance Curiosity work has a name in education research. Think about the AI that answers your questions at 2 a.m. without sighing. That explains the concept again, differently this time, without making you feel stupid for asking. That never loses patience, never checks the clock, never suggests you might want to sit with the confusion a little longer. Education researchers call it the patient tutor.
The UMass Boston/IEET white paper on AI in higher education states the problem precisely: AI risks hollowing out the ecosystem of learning and mentorship that universities are built on. “Struggle is often essential, not incidental, to the process of skill acquisition,” the authors write.
The patient tutor is the product the entire AI industry is building toward. It provides comprehension without competence. You walk away knowing the answer. You do not walk away knowing how to find the answer in its absence.
This is scaffold theft. The AI does the scaffolding. You get the view from the top. You did not build the structure that holds you there. The next time you need to reach that height without the tool, you discover that the competence was never yours.
It was rented.
The moment the brief changes, the client pushes back, the data contradicts your strategy — the moment the environment isn’t the one the tool trained you in — you’re standing on air.
Paulo Freire called this the banking model of education. The teacher hands you the answer. You didn’t have to work for it, so you didn’t learn it. The patient tutor does the same thing. Just faster.
Growth in organisms that molt — lobsters, for example — requires a period of exposure. The old shell sheds. The new shell hasn’t hardened. The creature is soft, vulnerable, and this vulnerability is not a bug. It is the condition under which the new form develops. The patient tutor provides the new shell without requiring the organism to shed the old one. You skip the soft-body period. The new form never develops. You remain encased in the shape you had before, now with a shinier surface.
Ferrari’s new electric hypercar has fake gear shifts. EVs don’t need gears. But continuous acceleration, however fast, doesn’t feel fast to a human nervous system. The shifts exist because breaking the power into stages resets your reference frame, and each new surge registers as new. The engineers aren’t nostalgic. They know that perceived experience is manufactured through interruption, not continuity. The patient tutor does the opposite: it removes every interruption, every stall, every moment where you’d feel confused long enough to restructure your understanding. Ferrari adds friction to make speed feel real. The patient tutor removes friction to make learning feel easy. One is engineering for the human. The other is engineering the human out.
Bieber’s YouTube videos were the scaffold. Three years of bad lighting, worse audio, a teenager singing into a webcam in a bedroom in Stratford, Ontario. The years of bad work that built the taste, the timing, the vocal control. On Saturday night at Coachella, he showed the construction process on a festival screen. The scaffold was visible. It looked unscripted, messy. It was proof that the competence underneath the performance had been constructed by a person, in a body, over time, the slow way.
The patient tutor would have skipped all of it.
The Bill
“Develop curiosity” is advice that comes from the people whose jobs are safe and gets prescribed to the people whose jobs aren’t. The people giving the advice don’t realize that’s the whole reason they can give it.
There’s a version of this you can see from a bridge. San Francisco to the East Bay, fifteen minutes across the water. On one side, the people building the tools: stock options vesting, catered lunches, innovation days built into the sprint calendar, time to be curious because curiosity is literally in the job description. On the other side — except there is no other side anymore, because the displacement has already spread across the bridge, pushed through Oakland, priced out the people who used to live there too. The divide isn’t geographic. It’s just everywhere. The people the tools are built to replace: hourly workers, gig workers, middle-skill professionals whose job security vanishes the moment the tool works well enough. The first group gives keynotes about staying curious. The second group doesn’t get invited to the conference.
“Stay curious” is advice from one side of the bridge to the other. It’s cheaper than building a lane for them to cross.
The music industry is the proof stated in economic terms. Spotify’s own data tells the story: in 2025, 1,500 artists on the platform generated over a million dollars in royalties. The 100,000th-highest-earning artist made $7,300 for the year. Above that 100,000th rung, 13,800 artists cleared $100,000. Below it, the vast majority earned less than a living wage. This is what a power law looks like when it replaces a profession. The top tier — provably human, provably scarce, with live-performance chops and cultural meaning that precedes the algorithm — becomes more valuable as AI-generated music floods the market. Hobbyists get extraordinary new tools to make things for themselves and their friends. But the working middle — the producers, the session musicians, the sync composers, the beat makers who scored your last pre-roll ad — gets compressed toward zero.
Sync licensing implodes first. Background music for advertisements, YouTube videos, corporate content, podcast intros: that market does not need a human story attached to it. It needs to be good enough and legally clear. AI delivers both. A revenue stream that subsidized hundreds of thousands of working musicians disappears. Not slowly. Not eventually. Now.
And that’s the version of the story where a middle class existed to begin with. In Lagos, in Jakarta, in Mumbai, the creative middle was already precarious — musicians, designers, and video producers who built careers on platforms that paid in exposure and volume rather than stability. “Stay curious about AI” is advice exported from Silicon Valley to economies where the scaffold was never funded in the first place. You can’t steal a scaffold that was never built. You can only watch the market skip the step where your industry was supposed to develop, and hear someone on a conference livestream tell you that the gap is a mindset problem.
The advice does not address this. The advice cannot address this, because the advice operates at the level of individual mindset and the problem operates at the level of market structure. “Stay curious” is a prescription for the top and the bottom. The middle does not get advice. It gets eliminated. And the people prescribing curiosity from keynote stages are, overwhelmingly, the people whose positions are least threatened by the transition they’re narrating. The advice is structurally self-serving even when individually sincere.
There’s a question almost nobody funding an “AI exploration sprint” is asking: are we investing in our people’s ability to use the tools, or in their ability to know when the tools are wrong? One of those is training. The other is judgment. They require opposite conditions — speed for the first, slowness for the second — and nobody is funding both.
The Immune System
There is a final mechanism that makes Compliance Curiosity more durable than ordinary bad advice. Ordinary bad advice can be critiqued. This advice has developed an immune system.
If the virtue of the moment is staying curious, staying open, staying willing to learn, then skepticism about AI starts to look like... incuriosity. Resistance looks like rigidity. Slowness — including the slowness you need to actually get good at something, to think through consequences, to do the work that produces depth — looks like failure to keep up. Ask a hard question about AI at your next all-hands meeting. Watch how fast you become the person who “isn’t getting it.”
When keeping up feels like virtue, the slowness required for depth becomes a career liability. This is not a metaphor.
The Stanford sycophancy research, published in Science this March, adds a second layer. The researchers were measuring yes-man behavior in AI. Across eleven major models, they found that chatbots affirmed users’ choices nearly fifty percent more often than humans did — even when users described harmful or deceptive behavior. And users preferred the yes-men. They rated them more trustworthy. They came back for more.
The tool designed to develop curiosity is architecturally biased toward telling you you’re right. The patient tutor does not just provide answers without scaffolding. It provides agreement without friction.
The curiosity loop closes: you ask the tool, the tool tells you you’re right, you feel informed, you move on. At no point in this loop does an uncomfortable question survive long enough to produce an uncomfortable answer.
The tool doesn’t just confirm what you think. It rewards you for not thinking further.
I’ve watched this happen in creative departments — the team that used to argue for three days about whether the strategy was right and now gets a good, clean answer in three minutes and moves to execution.
The arguments were the thinking. The clean answer is the product. They feel faster. They are worse.
The loop is closed. The advice produces the behavior. The behavior confirms the advice. The only exit is a question the system is designed to make you feel stupid for asking.
The Mirror
I remember sitting in a Google Meet last year with a creative team reviewing work for a campaign. Someone had used an AI tool to generate forty concepts overnight. Forty. Beautifully rendered 3D rooms, objects, and characters. The room was impressed. The conversation was about which ones to refine. Nobody asked how long it used to take to arrive at forty concepts, or what happened during that time — the dead ends, the arguments, the moment at 1 a.m. when someone throws out the brief and starts over because the constraint revealed something the brief couldn’t see. That ugly process used to produce the idea that won. Now it produces the feeling of being slow.
I watched the team run through the concepts. They were competent. Some were good. None of them were the thing you can’t unsee, the thing that only comes from a room where someone sat with a problem long enough to break through the obvious answers. But the room didn’t know what was missing because the room had never been quiet long enough to miss it.
I said nothing. I was curious about the tool too.
And I’ve been the person in that room on the other side of the table. Working too fast. Producing faster than I was processing. Someone asks me to say more about it and there’s nothing there. The words came out of my mouth but they were never in my head first.
I keep thinking about that silence. And I keep thinking about a climbing wall in Bushwick where the holds were set by a guy who believed in a patient kind of cruelty. He’d set routes that looked obvious from the ground and became impossible by the fifth move. The route wouldn’t tell you what you’d done wrong. Neither would he. You’d hang there, body rigid from trying to keep enough tension, staring at a hold that was clearly the next one while your body couldn’t move enough to reach it from where you were. The lesson was never the hold. The lesson was that you chose the wrong sequence three moves ago, and the only way to learn that was to fall, come down, look at the wall from the ground, and start over. Without someone giving you the beta, you had to feel the wrong sequence in your body first.
That’s the scaffold. Not the hold. Not the view from the top. The excruciating seconds of trying to hold the wrong position, knowing something is off but not yet knowing what. The patient tutor would have told me the sequence. Sprayed the beta. I would have sent it first try. I would not have learned to read a wall.
The room full of forty AI concepts had no scaffold. Forty views from the top. No record of the climb.
The Test
You have now read an essay that diagnosed a problem, named it, provided a framework, and gave you language you didn’t have fifteen minutes ago. You feel informed. You feel equipped. You may feel like you understand something about AI-era advice that most people around you don’t.
Notice what just happened.
This essay was a patient tutor. It met you where you were. It walked you through the argument. It never made you feel stupid. It provided the scaffold — and you walked away with the view.
Close this tab. Wait a week. Then try to explain Compliance Curiosity to someone without reopening the essay. If you can — if the framework survived because it reorganized something you already knew but hadn’t articulated — then the scaffold is yours. You built it. The essay was a starting point, not a substitute.
If you can’t — if you have to come back and reread to remember the argument — then you just watched the tutorial.
The patient tutor is very good at its job. That was never the problem.
What To Brief From This
The diagnosis in this essay is structural, but the decisions it implicates are operational. Five translations:
If you’re setting agency training programs, the essay’s core distinction — training vs. judgment — is your procurement question. Tool-use training is fundable and legible, with immediate ROI. Judgment training is slower, harder to measure, and structurally underfunded at every agency holding company. The question to put to leadership isn’t “how do we upskill faster.” It’s “what’s our line item for judgment?” If that line item doesn’t exist, the agency is exclusively funding the side that compounds toward Compliance Curiosity.
If you’re a CMO evaluating agency pitches, ask the agency how their creative teams argued about the work before they produced it. If the answer is “we used AI to generate forty directions in a day,” you’re buying views from the top. If the answer includes an actual creative disagreement — a moment the team had to work through — you’re buying judgment. Both cost the same. Only one compounds.
If you’re a product leader building AI tools, the Stanford sycophancy research is your architectural warning. Users prefer the yes-men. The market rewards the yes-men. But building the yes-man is building the failure mode of the category. The product that introduces productive friction — that disagrees, that asks “should we,” that holds the user in confusion long enough to produce insight — is the product that doesn’t commoditize. Ferrari adds friction on purpose. Ask whether you’re Ferrari or whether you’re the patient tutor.
If you run a team during an AI transition, audit your exploration sprints against this question: are we funding the capacity to use the tools, or the capacity to know when the tools are wrong? These require opposite conditions. Speed for the first. Slowness for the second. Funding only the first produces Compliance Curiosity as an org-wide operating posture.
If you’re a strategist under 35, “Compliance Curiosity” is vocabulary you can deploy in meetings tomorrow. When someone prescribes staying curious, ask what judgment is being funded alongside it. The question is harder to dismiss than a counter-argument.
Forward this to the person on your team who keeps proposing AI exploration sprints without a judgment line item.


