Let's Dance, or Why We Shouldn't Let Algorithms Lead
Thinking about how something vital dies inside us when we outsource our decision-making to machines (and why that should scare us more than it does)
In ancient Greece, those seeking wisdom from the Oracle of Delphi didn't receive simple yes/no answers. Instead, they engaged in an elaborate ritual - a dance of question and interpretation, prophecy and meaning-making. Today's algorithmic oracles are more direct: they don't speak in riddles but in transactional vending-machine recommendations, delivered with the cold confidence of statistical certainty. "Watch this." "Buy that." "Go here." We've gained convenience but lost something crucial: the productive struggle of interpretation, the space where wisdom has the opportunity to emerge.
In Jorge Luis Borges' "The Library of Babel," an infinite library contains every possible book - every truth, falsehood, and prophecy ever written. The tragedy isn't just that its endless knowledge is meaningless without knowing which books are true. The deeper tragedy is that the library's visitors stop searching, convinced that perfect knowledge must exist somewhere in its stacks. We face a similar danger today: not that AI will give us wrong answers, but that we'll stop asking questions.
This creates an "Oracle's Bargain" - trading the messier work of interpretation for the clean certainty of algorithms that define our world, from how we spend our money to how we spend our time to what beliefs we hold close. The bargain is understandably irresistible: our own decision-making is demonstrably flawed, corrupted by cognitive biases and emotional impulses. AI systems promise to slice through this murky fog of human irrationality with pure statistical optimization. Trust us, they whisper, we know you better than you know yourself. But like all Faustian bargains, the price is higher than it appears.
When we surrender interpretation to algorithms, we create a world of self-fulfilling prophecies. If an AI system tells enough voters to support Candidate X, or enough shoppers to buy laptop Y, or enough companies to hire candidate type Z, its predictions become reality, through compliance rather than insight. The trusted oracle speaks, and in speaking, reshapes the world to match its words. This isn't prophecy - it's a kind of technological determinism that masquerades as choice.
What's needed isn't better predictions but a better dance. A24 darlings The Daniels' "Everything Everywhere All at Once" shows us what this might look like. In the film, characters face an overwhelming multiverse of possibilities - much like we face an endless stream of algorithmic recommendations. But instead of seeking the statistically optimal path, the characters embrace messy human connections: a daughter's need for validation, a husband's clumsy love, a mother's desperate attempt to understand. They follow emotional rather than computational logic. The result is a film that, while powered by cutting-edge special effects, never loses its human core. This is what we need from our AI systems - tools that enhance rather than replace our human capacity for meaning-making.
Imagine AI systems that operate less like controlling ballet masters in the mold of Balanchine, drilling dancers toward perfect replication, and more like skilled partners, teaching the delight of mutual discovery. They might lead at times, suggesting moves based on their vast knowledge of patterns, but they would also follow, responding to our improvisation, our intuition, our human capacity for the unexpected. This isn't a matter of adjusting interface design - it's a fundamental reimagining of the human-AI relationship.
Just as The Daniels use the tools of modern filmmaking to tell deeply personal stories, our AI interfaces could help us navigate life's complexity while preserving the essential strangeness of human choice. Instead of Netflix Novocaine (an endless stream of "recommended for you"), they could engage us in active exploration of why certain stories resonate. Rather than Amazon showing us what "customers like you" bought, they could help us articulate what makes us unlike any other customer. The goal isn't to obstruct efficiency but to preserve the warm, generative friction that makes choice meaningful.
This matters because decisions aren't just endpoints - they're how we discover who we are. Each choice we make (or surrender) shapes not just our future actions but our capacity to choose. When we outsource our decisions to algorithms, we don't just lose agency - we lose the ability to develop agency in the first place.
The stakes transcend individual choice. A world of pure algorithmic recommendation is a world without serendipity, without the productive accidents that drive innovation and cultural evolution. It's a world where statistical optimization gradually buffs and shines away the beautiful irregularities that make us human. We risk creating not just filter bubbles but airlocked filter vaults - sealed chambers of certainty where nothing unexpected can ever enter.
Yet there's hope in the very nature of AI itself. As these systems become more sophisticated, they could instead be designed to help us recognize what makes human intelligence uniquely valuable. Not our rate-limited, organic ability to process data or spot patterns - machines can do that better - but our incandescent capacity for metaphor, for holding contradiction, for finding meaning in the spaces between certainties.
The oracle still speaks. But perhaps it's time to stop treating its words as commands and start treating them as invitations to dance.