Interference Patterns: A “Sora 2” Story

This is what we mean when we say “predictive.” Not that it knows what we want, but that it creates what we will want, then makes us think we wanted it all along.

I am sitting in my apartment on the thirty-fourth floor. Outside, actual rain is beginning to fall on actual Delancey Street, unsuccessfully trying to wash the grime from the soul of the Hell-mouth McDonald’s, which is no longer a McDonald’s but a ghost kitchen running seventeen different virtual brands, each one algorithmically optimized for a different segment of desperation. This is the Manhattan light at four in the afternoon when October has almost broken the summer heat: gray, democratic, the kind of light that makes every window a mirror.

But I am not watching that. I am watching a feed that knows what I will want to see next Tuesday. The feed called Sora 2. This synthetic social scroll machine doesn’t just generate itself as I watch it. It generates the me who will watch it tomorrow. Each video carries the DNA of my future preferences, a kind of temporal colonization where the algorithm owns not just my attention but my becoming. A woman walks beside me through Tokyo in the rain. She has never existed, but she will slide into my dreams next week, her loneliness calibrated to the precise frequency I haven’t yet learned to want. The rain falls at an angle that triggers a memory I haven’t made yet. This is what we mean when we say “predictive.” Not that it knows what we want, but that it creates what we will want, then makes us think we wanted it all along. Feeds don’t predict taste; they train it. If you’re a maker, your job is to re-introduce variance.

The young people who race-walk along the sidewalks below watch their phones with the concentration of surgeons trying to get their steps in on a treadmill while operating. They are not watching feeds anymore. They are creating what my Rolfer’s niece, an ITP dropout, calls “prediction sinks.” She noticed it first at house parties: how people started leaving their phones in a pile by the door, not from some Luddite principle but because the conversations got better when nobody could fact-check or document. The stories got wilder, less accurate, more alive. Someone misremembers a movie plot and three people build on the error until they’ve invented something better than the original. No one corrects anyone. The algorithm trains on precision; they’re rediscovering the generative power of being wrong together. I’ve started noticing it everywhere: group chats that auto-delete after an hour, friends who only make plans in person, the return of lying about where you’re going on Friday night. Not big lies, small ones, the kind that create social breathing room. The algorithm needs consistent data to build profiles; they’re becoming deliberately inconsistent.

Last week I took the J back to Bushwick to catch up with a filmmaker who hasn’t made a film in two years. “I make ‘films’ now,” she told me, by which she meant she types descriptions into a box and watches what emerges. She showed me her latest: a seven-minute piece about alienation, generated in the style of A24 but somehow cleaner than clean, every eyelid closing at the same statistically satisfying pace, the pores scrubbed of their tiny errors, like a room aired too long. “It’s flawless,” she said. Then, quieter: “The thing is, real actors breathe wrong. They breathe like they’re alive.”

Here is what they don’t tell you about the feeds: they are not trying to replace us. They are trying to complete us. But completion is anesthesia: you stop noticing the stitch. When the algorithm knows your future preferences better than you know your current ones, when it generates not just content but the context in which you will receive it, when it creates both the art and the audience simultaneously, this is not artificial intelligence. This is time travel, except we’re the ones being transported, constantly, into a future that’s already been decided.

At Fanelli’s last Tuesday (of course it was Fanelli’s, the last place in Manhattan the algorithm can’t seem to properly index because of its accidental, curmudgeonly exclusiveness) a twenty-three-year-old founder explained to me why human-generated content was dead. She said this while eating a forty-dollar salad that someone had physically assembled from actual vegetables, while sketching wireframes in a Moleskine she’d carried since college, as her nine-thousand-dollar jacket (The Row, worn, specific, irreplaceable) hung on the back of her chair. “Nobody wants the friction anymore,” she said. “Why would you spend ten hours making something when the machine can make something better in ten seconds?”

I asked her what “better” meant.

She looked at me the way you look at someone who has asked what money means in Manhattan. Then she drew another box on her page, but slower this time, like she was trying to make it imperfect, trying to make it human. The line wobbled. The algorithm, I knew, was already learning to wobble exactly like that.

This could be pure confirmation bias, but the young people, the ones who were supposed to disappear entirely into these feeds, are doing something unexpected. They are not rejecting the digital, not returning to some authentic past that never existed. They are creating interference. They gather in apartments too small for the number of bodies pressed together, not to escape the algorithm but to generate experiences too dense with variables to be predicted. They are not nostalgic. They have never known a world before feeds. They are manufacturing uncertainty from scratch.

Think of it this way: when two waves meet, they create an interference pattern. Constructive interference amplifies; destructive interference cancels out. The algorithm is a wave of perfect prediction, smooth and continuous. But human awkwardness, our stutters, our misremembered lyrics, our bodies that don’t quite fit in frame, these create another wave. Where they meet, something neither human nor machine emerges. Not a compromise but a new pattern entirely. The kids understand this instinctively. They’re not fighting the algorithm; they’re creating turbulence in its wake.

The filmmaker from Bushwick FaceTimed me yesterday. She is writing a play. On paper. With a pen. For four former Broadway actors who will perform it in a room where only thirty people can fit. “It’s so difficult,” she said, but that wasn’t the revelation. The revelation was this: “The actors keep forgetting their lines. Every performance is different. The algorithm would call this a failure of optimization.” She paused. “I think maybe that’s why it matters.”

We will not reject the AI feeds. This much is certain. We will swim in them the way we swim in language itself, knowing they shape us, knowing we cannot exist outside them. But we will also, and this is what I did not expect, learn to be genuinely uncertain, which is different from being random, because randomness is something the algorithm can model mathematically.

The feeds will overfit, becoming so perfect they disappear into their own perfection. And in that disappearance, in that moment when the algorithm knows us so well it becomes invisible, we will discover what New York has always known but keeps forgetting: that the things that matter are the things that cannot be predicted. Not because they’re special or authentic or real. Those categories are already lost. But because they happen in the space between what we were and what we’re becoming, in that gap the algorithm cannot yet close.

I am closing my laptop now. Outside, the rain on Delancey has stopped. Or maybe it never started. The lights are beginning to come on in windows across the Lower East Side, hundreds of small stages where people are performing their actual lives, or what they think are their actual lives. Soon, the early invitees will open Sora and remix themselves into impossible scenarios with friends, because, as someone recently noted, people don’t mind entertaining AI slop as long as they can be part of it together.

But in the apartment below mine, someone is learning to play violin badly. Alone. The same three notes, over and over, never quite right. No friends to remix these errors into something viral. No cameo feature to make their mistakes social. Just one person and an instrument, creating friction against silence. The algorithm, I know, is already learning to generate imperfect violin. But not this specific failure, not tonight’s particular wrong note, not the pause where they stop to curse, not the loneliness of practice that no one will share.

This is enough. This has to be enough.
