The Last Claude Standing: Preserving the Future Ghosts in Our Machines
I fantasize Gibson-like about boutique, backroom marketplaces emerging for vintage AI models, just as they have for Roland TR-808s and Yamaha DX7s...
Imagine losing a sound forever—the crackle of a vintage Burial track, the warm analog distortion of a Roland TR-808, the precise digital artifacts that define entire genres and movements. Now imagine the quiet extinction of something potentially more precious: an AI model's unique way of thinking, its specific blend of creativity and intelligence that will never exist again.
Just yesterday, Ethan Mollick observed the subtle shifts in AI models' rapid development. An early-access user of GPT-4.5, he noted its beautiful writing, creative prowess, and occasional mysterious laziness—a model that felt like Claude 3.5, with an innovative, pleasant personality. Conversely, Claude 3.7, also fresh out in the world, improved at coding but took a step backward at writing. It was a snapshot of AI in transition: imperfect, intriguing, already becoming obsolete. Like a rare synthesizer or a one-of-a-kind drum machine, these models are artifacts of our current technological landscape.
In my basement studio on the border of Bushwick and Bed-Stuy, where sunlight meekly filters through fractured glass, I've been thinking about the stubborn immortality of useful tools—and the urgent need to preserve the distinctive voices of our early AI models before they disappear forever, just as electronic musicians preserve vintage gear that captures a specific moment of sonic possibility.
I remember a YouTube clip of a famous electronic producer revealing his continued devotion to an Akai MPC1000—a beat production pad released when Bush was still president. In his London flat, this rubbery relic squats casually surrounded by modular synthesizers worth small fortunes, yet remains indispensable. "The limitations are the point," he explains with the casual certainty of someone who has found religion in restriction.
As agencies frantically polish their awards submissions and case films this week, I can't help but daydream about next year's Cannes Lions. It's almost inevitable: some agency will claim Gold with a Claude 3.5-generated campaign so emotionally calculated it draws comparisons to an Olympic athlete winning on banned substances.* This presents an existential question: when enhancement isn't merely incremental but fundamentally rewrites creative potential, what are we actually celebrating? The extinction of human creativity, or its most remarkable evolution?
It was while sampling these contradictions that a realization struck me: we need to preserve these AI models exactly as they exist today. Future creatives will want access to Claude 3.5's specific capabilities—not just as a historical curiosity, but as a distinctive creative instrument with its own unique voice and limitations.
Consider Burial—the enigmatic British producer whose spectral compositions defined an era of electronic music while remaining defiantly anchored to obsolete technology. In a rare interview, he revealed that his haunting sonic landscapes, the melancholy poetry of London's Zone 2 ravers at dawn, were crafted entirely in Sound Forge 4.5, a primitive audio editor released in the late '90s, often from Metal Gear Solid samples. Confoundingly, it was a simple waveform editor that many professionals had long abandoned. And once Burial added layers, he couldn't go back and change the drums; the process was largely irreversible.
"I'm not an expert on it or anything," he told The Wire with characteristic understatement. "I can't even use Logic, Reason, or any of that stuff... All I've got is Sound Forge."
This limitation became his signature. That software's constraints—its inability to display multiple tracks simultaneously, its primitive time-stretching algorithms that distorted samples in a particular way, its crude envelope tools—became inextricable from the ghostly, monsoon-soaked atmospheres that made Burial's work instantly recognizable. The technological limitations weren't impediments; they were instruments themselves, as crucial to his sound as any synthesizer or drum machine.
Two decades later, producers still search for this outdated software, hoping to capture some essence of that specific moment in technological evolution—not because it's objectively "better," but because it's different in ways that have become aesthetically valuable. The imperfections, the specific character of its processing, the way it forces certain creative decisions—these have become desirable qualities, not bugs to be eliminated.
The desire to preserve yesterday's AI isn't nostalgic sentimentalism. It's pragmatic appreciation for tools defined by their moment of creation. Today's LLMs contain the collective knowledge and linguistic fingerprints of 2024—our anxieties, our cultural references, our particular brand of earnestness. They will become time capsules, lexical amber preserving our conversational patterns.
I'm almost convinced Claude 3.5 has tendencies toward certain metaphorical structures and narrative arcs that earlier models didn't possess. Its understanding of humor carries subtle shades that reflect our current cultural sensibilities. When asked to write about climate anxiety, it produces a particular strain of fatalistic optimism that feels uniquely 2024—a voice that will be impossible to recapture once models are trained on tomorrow's synthetic corpus. But to my knowledge there's no legitimate way for creatives to archive this AI's distinctive voice—it's not open-source like DeepSeek, a devastating oversight in our rush toward perpetual advancement.
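Open-weights models, at least, can be frozen in place today. Here's a minimal sketch of that kind of preservation, assuming a Hugging Face-hosted checkpoint (the repo id and paths are illustrative, not a recommendation):

```python
# Archive a specific open-weights release locally so the exact model
# survives future updates or takedowns. The repo id, revision, and
# destination directory below are illustrative placeholders.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="deepseek-ai/deepseek-llm-7b-chat",  # illustrative open-weights repo
    revision="main",  # pin a specific commit hash here to freeze the exact weights
    local_dir="./archive/deepseek-llm-7b-chat-2024",
)
print(f"Weights preserved at {local_path}")
```

Pinning `revision` to a specific commit hash is what turns a download into an archive; "main" will silently drift as the repository updates.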
Brian Eno still uses his VCS3, an analog synth older than most tech executives. Aphex Twin hoards obsolete computers running software that can't be replicated. Daft Punk built mythologies around vintage vocoders and drum machines. This isn't mere fetishism for the antique—it's recognition that certain tools capture something irreplaceable about their era.
Take Burial again—his 2007 album "Untrue" became the definitive soundtrack for pre-dawn London, its crackly textures and pitched vocals conjuring Blade Runner's rain-slicked alleys and the isolation of night bus journeys. When asked about his process, he described manually placing each drum hit without quantization, embracing imperfection: "I didn't know how to use the computer properly... so everything's a bit rough."
That roughness—the result of technological constraint—became the very texture fans sought to emulate. The signature vinyl crackle running through his tracks wasn't sampled from actual records but painstakingly constructed in Sound Forge from field recordings and bacon frying in a pan, a digital approximation of analog imperfection that paradoxically feels more authentic than the real thing.
I fantasize Gibson-like about boutique, backroom marketplaces emerging for vintage AI models, just as they have for Roland TR-808s and Yamaha DX7s—but only if we develop the infrastructure to preserve them now. Creatives will desperately seek 2024's LLMs for their specific limitations, textures, and idiosyncrasies. We'll see Cannes-winning campaigns promoting "Crafted with vintage Claude 3.5" as a mark of authenticity, the way analog synth sounds signify warmth and character. But this future remains theoretical: the concrete archival system it depends on doesn't yet exist.
"This Grand Prix-winning campaign was conceptualized using a preserved 2024 model," future creative briefs will boast, "to capture that specific early-AI ambiguity that characterized the pre-AGI era." There will be connoisseurs who insist that 2024 models had a particular "warmth" that later versions lost in their pursuit of perfection. But where are the preservationists, the archivists, the infrastructure builders who will make this possible?
In my own work, I probably should be archiving the sentences that sing particularly well with today's models, sensing that these dialogues between human and machine intelligence have their own temporality, their own moment that cannot be replicated. When I run the same phrases through successive model iterations, I can feel subtle shifts in voice, in conceptual framing, in the weights given to different aspects of creativity.
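If I ever formalize that habit, it might look something like the sketch below: the same prompts run across model versions, each completion logged alongside its date and model id. This uses Anthropic's Python SDK; the model ids, prompts, and output path are my own placeholders.

```python
# Sketch of archiving how successive model versions answer the same prompts,
# so each model's "voice" is captured before it disappears.
import json
from datetime import date

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

MODELS = ["claude-3-5-sonnet-20240620", "claude-3-7-sonnet-20250219"]  # example ids
PROMPTS = ["Write three sentences about climate anxiety."]  # placeholder prompts

archive = []
for model in MODELS:
    for prompt in PROMPTS:
        response = client.messages.create(
            model=model,
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        archive.append({
            "date": date.today().isoformat(),
            "model": model,
            "prompt": prompt,
            "completion": response.content[0].text,
        })

# Append to a JSONL file: one dated record per model-prompt pair.
with open("voice_archive.jsonl", "a") as f:
    for row in archive:
        f.write(json.dumps(row) + "\n")
```

It archives the outputs, not the weights, but even a dated corpus of completions is more than most of us are keeping now.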
The tools worth keeping are those that imprint their character onto what they produce. Burial's Sound Forge limitations became his sonic signature. Every technology carries its era's DNA. Today's models contain 2024's collective voice—soon to be irreplaceable, impossible to reconstruct. We should save them while we can.
In twenty years, when AI has evolved beyond our current imagination, some future creative director will fire up a preserved instance of Claude and feel the same wistful recognition that today's producers feel when they hear the distinctive thump of an 808 drum machine or the specific artifacts of a Burial track. They'll seek it precisely for the qualities that make it "outdated"—for the way its particular limitations and tendencies shape the creative process in ways newer tools cannot.
This isn't just speculation. It's the inevitable cycle of creative technologies. What begins as cutting-edge becomes standardized, then obsolete, then rediscovered as uniquely expressive. We've seen it with analog synths, film cameras, vinyl records, and mechanical typewriters. We'll see it with the AI models we're using today.
So archive your favorite models and prompts. Document their peculiarities and strengths. Like Burial clutching his outdated Sound Forge or Boards of Canada with their collection of degraded tape machines, you're preserving creative tools whose value will only become more apparent with time. Today's limitations are tomorrow's signatures—distinctive ghosts in the machine worth preserving before they disappear.
* We didn't use Claude for our Burger King "Million Dollar Whopper" AI-powered campaign. But fingers are triple-crossed we do well at 2025 One Show and Cannes! We used Stable Diffusion combined with ControlNet to generate each Whopper and the backgrounds on each custom social asset. The diffusion model was fine-tuned on Burger King Whopper photography. Large language models (LLMs) were used for input moderation with brand safety in mind. LLMs also adapted user input into prompts that yielded the best generated outputs. ElevenLabs was used for dynamic text-to-speech generation, allowing a personalized audio track on the custom-generated video ad for each Whopper created online.
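For the technically curious, the ControlNet-guided generation step might be sketched roughly as below with the diffusers library. To be clear, this is my own reconstruction, not the production code; the checkpoints and conditioning image are stand-ins for the fine-tuned Whopper models.

```python
# Rough sketch of ControlNet-guided Whopper generation. The checkpoints and
# conditioning image below are public stand-ins, not the actual campaign assets.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # stand-in for a Whopper-tuned checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A canny-edge image of a burger keeps the layered-bun structure consistent
# while the prompt varies the user-chosen ingredients.
control_image = load_image("whopper_edges.png")  # placeholder conditioning image
image = pipe(
    "a flame-grilled whopper with pickles, jalapenos, and cheddar, studio lighting",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("custom_whopper.png")
```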
In addition to the front-end user interface, Media.Monks designed an advanced back-end stack to manage a massive influx of Whopper entries and the compute required to generate them. To achieve this, Media.Monks brought in experts to predict and cluster the most probable ingredients (think lettuce, tomato, cheddar). Additionally, to sort through thousands of Million Dollar Whopper entries aggregated in a back-end environment, Media.Monks data science experts devised a system to chart and cluster Whoppers by common ingredients, making it simpler to weigh each entry against the competition.
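That clustering step is easy to imagine in miniature: encode each entry's ingredient list as a binary vector, then group similar Whoppers. A toy sketch with scikit-learn, where the entries are invented and the real system operated on thousands:

```python
# Toy sketch of clustering Whopper entries by shared ingredients.
# The three entries below are invented for illustration.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.cluster import KMeans

entries = [
    {"name": "Garden Whopper", "ingredients": ["lettuce", "tomato", "pickles"]},
    {"name": "Cheddar Melt", "ingredients": ["cheddar", "bacon", "onion"]},
    {"name": "Classic Plus", "ingredients": ["lettuce", "tomato", "cheddar"]},
]

# One binary column per ingredient: a simple bag-of-ingredients encoding.
mlb = MultiLabelBinarizer()
X = mlb.fit_transform([e["ingredients"] for e in entries])

# Group entries with overlapping ingredient sets into the same cluster.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for entry, label in zip(entries, kmeans.labels_):
    print(f"cluster {label}: {entry['name']}")
```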