
Before the algorithm, there was the sonnet

In the lobby of Manhattan’s Museum of Modern Art, just past the entrance on West Fifty-Third Street, a large screen mounted on the wall glowed in shifting fields of green, pink and blue. Text appeared, in multiple fonts, with a cursor blinking between phrases. Some of the words were blocky and pixelated, like the readout of an old terminal; others were rendered in what appeared to be cursive handwriting, except that, on closer inspection, the loops and flourishes resolved into strings of ones and zeros. The words accumulated, then dissolved, then began again in a different arrangement, as though the poem could not quite make up its mind. It could not, in a sense: no human had decided what it would say that day, or the day before.

Somewhere in the lobby, if you scanned a QR code with your phone, a voice would enter your earbuds: Sasha Stiles, a poet who has spent the past six years teaching artificial intelligence to write like herself, whispering fragments of earlier versions of the same poem over a low wash of ambient sound.

The installation, which closed at MoMA in March and opens in April at ZKM, the media arts center in Karlsruhe, Germany, is called “A Living Poem.” It rewrites itself completely every sixty minutes, drawing on a custom language model, fragments from MoMA’s collection of text-based art and a dataset built from years of Stiles’s own writing.

Stiles is making a different argument with her work: that language models are not merely efficiency tools with artistic side effects, but an extension of the oldest information technology humans ever invented, language itself. The most consequential question about AI, in her view, is not what it can replace, but what it might finally allow us to say to one another.

“Poetry is a technology that we created in order to communicate something that’s otherwise very difficult or maybe even impossible to say,” Stiles told IBM Think in an interview, suggesting that today’s language models may expand that expressive range in ways we are only beginning to grasp. “Before we had written language as a way to archive stories and memories externally, we had to rely on those poetic devices as a way of storing information, storing our human data.”

A childhood built for tech

Most writers who have grappled seriously with AI arrived at the technology through alarm. Stiles arrived through something closer to inevitability. Her father produced Carl Sagan’s PBS series Cosmos; her mother worked alongside him, making science documentaries. Growing up, she tagged along to shoots at the Jet Propulsion Laboratory and meetings at the Planetary Society, absorbing science as ambient atmosphere rather than foreign subject. She studied modernism and postmodernism in school and read Wired and Ars Technica for pleasure. She read authors like Arthur C. Clarke and Philip K. Dick alongside Ursula K. Le Guin and Octavia Butler. She followed tech philosophers Ray Kurzweil and Nick Bostrom the way other literary people followed the quarterlies.

“I grew up really interested in science fiction, and a lot of what I was gravitating toward was maybe speculative in nature, or was dealing with topics like alternative intelligences or parallel realities,” she said. “It’s just been a lifelong interest.”

But when the research paper “Attention Is All You Need,” from a team of Google researchers, circulated in 2017, introducing the transformer architecture underpinning modern language models, Stiles said she felt something shift. She had no computer science training and no background in machine learning. What she had was a poet’s sensitivity to language and a decades-long habit of reading across the line between the sciences and the arts.

“As someone who’s both really interested in language and literature, as a wordsmith and as someone who’s really interested in the poetics of technology,” she said, “both of these things were kind of colliding in this space of natural language processing.”

Alongside curiosity, the collision produced fear. She was not naive about what it might mean for a writer to encounter a system that could produce text that sounded like her. She sat with that discomfort rather than resolving it quickly, and eventually decided that the only productive response was close engagement. Many of her peers reached the opposite conclusion.

While Stiles was feeding her notebooks into GPT-2, twelve thousand writers were walking off the job. The Writers Guild of America went on strike in May 2023, and SAG-AFTRA actors joined in July of that year, in the first joint walkout between the two unions since 1960. AI was among the unions’ central concerns: the WGA’s final agreement with the studios established that no form of AI could be considered a writer, and that written material produced by AI could not count as literary material for purposes of credit and compensation. Actors secured protections for their digital likenesses. The strikes halted production across Hollywood for much of the year.

Stiles is not indifferent to these disputes. She has been working with AI since before most of the lawsuits were filed, and has thought carefully about what it means for a poet to feed her own work into a system and ask it to generate something new. She takes the legal and economic concerns seriously, but does not think they tell the whole story. For her, the more important question is how these systems are changing what it means to create and say something new.

“I mean, I definitely see both possibilities,” she said, when asked about the argument that AI dilutes creativity. “I see both things happening in real time. So it’s not necessarily one or the other.”

Fine-tuning the self

The curiosity eventually outran the fear. In 2019, she carried two hundred pages of draft poems to a machine and asked for analysis. One of her early inputs was the line “Are you ready for the future?”, run repeatedly through the system with different parameters. The results, she has said, ranged from sublime to misogynistic, the full spectrum of what the internet had deposited in the training data. She eventually curated thirty of hundreds of outputs into a small poetry cycle.

“I essentially approached the process of working with a fine-tuned LLM as taking all of my notes and ideas and drafts that were kind of spilling around in my head and putting them in one place so I could access all of them in a different way,” she said.

What she built was not a ghostwriter, or a replacement intelligence, but something closer to a unified field for her own thinking: a place where she could take threads scattered across notebooks and hard drives and access them simultaneously, combine them in different ways, examine them from angles she had not previously considered. She calls it a “prosthetic imagination.” The phrase is precise in a way that the cruder versions of the AI-creativity debate tend not to be: a prosthesis does not replace a capacity; it restores or augments one that was limited. It does not write for you; it changes what writing feels like.

“I really like tapping into the collectivity of a system and opening up space to be surprised by what comes back at me, allowing room for the outputs to do things that I would never have done myself,” she said. “It’s calibrating the space between using the systems to write in a way that still feels true to me as a human writer, but then also leaving room to explore what it means to be writing with this collective mind at my fingertips.”

She currently works most frequently with OpenAI’s tools, a relationship that traces back to those early GPT-2 experiments, and with Gemini, through a residency with Google’s Arts and Culture Lab in Paris. She uses voice-cloning technology to perform spoken-word pieces, and text-to-image and text-to-video tools to translate poetic metaphors into visual form.

For “A Living Poem,” Stiles built her dataset from two sources: her own body of work, accumulated and refined over years of collaboration with her AI alter ego Technelegy, a GPT-based system she has been fine-tuning on her poetry since 2018; and metadata drawn from MoMA’s collection of language- and code-based works. The system processes those seeds through layers of custom prompting and p5.js code, and GPT-4 surfaces them on the screen in the Agnes Gund Garden Lobby every hour in what Stiles calls a “transhuman epic.”
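The hourly cycle described above can be sketched, very loosely, in a few lines of Python. Everything here is an illustration rather than the project’s actual code: the seed fragments are invented, and `generate_stanza` is a stub standing in for the GPT-4 request and custom prompting layer.

```python
import random

# Two seed pools standing in for the real dataset: the poet's own
# corpus and metadata from the museum's collection (contents invented).
POET_SEEDS = ["the ache between syllables", "return the reader to breath"]
COLLECTION_SEEDS = ["untitled (text on wall), 1978", "binary lullaby, 2003"]

def build_prompt(poet_seeds, collection_seeds, n=2):
    """Sample fragments from both pools and fold them into one prompt."""
    fragments = random.sample(poet_seeds, min(n, len(poet_seeds)))
    fragments += random.sample(collection_seeds, min(n, len(collection_seeds)))
    lines = "\n".join(f"- {f}" for f in fragments)
    return "Rewrite the poem from these fragments.\n" + lines

def generate_stanza(prompt):
    """Placeholder for the model call; the real system queries an LLM here."""
    return prompt.splitlines()[-1].removeprefix("- ").upper()

def run_cycles(hours):
    """One fresh text per hour; a display layer (p5.js, in the original)
    would render each result on screen."""
    return [generate_stanza(build_prompt(POET_SEEDS, COLLECTION_SEEDS))
            for _ in range(hours)]
```

The point of the sketch is the shape of the loop: seeds are recombined, a prompt is assembled, and the model is asked to rewrite rather than append, so each hour’s poem replaces the last.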

Stiles published a prompt manual to accompany the work. One section reads: “Your task: Express what cannot be said, the ache between syllables. Slow the scroll, return the reader to breath.”

The audio component layers Stiles’s own whispered voice over an ambient soundscape. The visual track moves through a fixed palette of glowing greens, pinks and blues, with text shifting between standard fonts and Stiles’s own invention, Cursive Binary, a typeface that uses her handwritten cursive loops to form ones and zeros, collapsing the distance between the hand-drawn mark and the machine’s base language.
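The idea behind Cursive Binary — a word rendered as the ones and zeros a machine would store it as — can be illustrated in a couple of lines. The typeface itself is a visual design, not code; this only shows the underlying encoding.

```python
def to_bits(text: str) -> str:
    """Render each character as its 8-bit UTF-8 pattern, space-separated."""
    return " ".join(f"{b:08b}" for b in text.encode("utf-8"))

# to_bits("poem") → "01110000 01101111 01100101 01101101"
```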

Not everyone has been persuaded. A critic writing in the Brooklyn Rail earlier this year argued that the work is constrained, almost to the point of stillness—that its color palette and fonts are fixed, that the changes between one cycle and the next are “barely enough for the casual viewer to notice the underlying technological choreography,” and that the blinking cursor simulates thought rather than demonstrating it. The piece, the reviewer concluded, is “utterly devoid of the linguistic and imaginative liveliness implied in its title.”

Those are precisely the terms the debate about AI and creativity tends to get stuck in: whether the machine is really thinking, whether the blinking cursor is evidence of intelligence or its convincing imitation. Stiles’s interests lie elsewhere, shaped by an earlier collaboration that gave her a more technical vantage point. Working with BINA48, a humanoid robot developed by the Terasem Movement Foundation, she helped the development team create visualizations of how information moved through the robot’s neural networks.

“It gave me a really interesting look at the way that information kind of exists and moves in high dimensional space in the human mind, because it’s obviously what we’re trying to recreate with these AI systems,” she said.

Those visualizations sharpened a conviction she had been developing for years: that large language models (LLMs) are not alien intelligences, but mirrors held up to the entire archive of stories, observations and memories that human beings have amassed over thousands of years, compressed and made navigable for the first time.

“What we’re talking about when we talk about these large language models is systems that have ingested just quantities of human history and stories and memories and observations,” she said. “These models are sort of parsing all of that noise and helping to distill bits of wisdom that I think would pass us by otherwise.”

Beyond optimization

The standard enterprise argument for AI runs on the language of efficiency: tasks completed faster, costs reduced, headcount spared. Stiles, who spent years in brand strategy and communications before her art career took over and now advises companies on AI and innovation, finds that framing too small.

She does not dismiss AI’s utility for routine work. What she questions is whether optimization describes the ceiling of what the technology can do. “Once [the low-level tasks are] off your plate and you have bandwidth and headspace and mental energy to think about and do other things, what are you able to do that you weren’t able to do when you were bogged down by all the other stuff before?”

The deeper ambition, for Stiles, is empathy. AI, in her view, is a language technology before anything else, and language technologies serve, at their best, as technologies of human connection.

“We’re talking and we’re creating text, we’re creating all this noise, and we’re still so profoundly misunderstanding one another,” she said. “How can we use AI to get beyond the faulty interface of language and begin to deeply connect with and relate to one another in ways that are hopefully going to move us in a better direction?”

Whether AI can carry that weight is genuinely uncertain. The same technology Stiles deploys to meditate on human consciousness also powers the influence operations and automated disinformation campaigns that have made the noise problem measurably worse. The lawsuits that have filled federal dockets since 2023 suggest that the creative industries are not waiting for a philosophical resolution: they are drawing lines, negotiating contracts and demanding payment.

Stiles acknowledges all outcomes and does not claim that making art with a language model inoculates anyone against its misuse. What she argues is that the artistic frame reveals something the business and policy conversation tends to miss.

“All the writers I’ve really been drawn to my whole life have always wanted to break the conventions of literature at the time and do something a little different,” she said. “They wanted to use language in a way that it hadn’t really been used before, so they could say something different or evoke different kinds of experience. That’s what I’d like to be able to do with these tools.”

In Karlsruhe, the screen will glow again in its shifting greens and pinks and blues. The cursor will blink. The poem will begin.

Sascha Brodsky

Staff Writer

IBM
