WORDS TO LIVE BY

AI and the End of the Human Writer

If a computer can write like a person, what does that say about the nature of our own creativity?

The most nauseating, addictive thing about writing is the uncertainty—and I don’t mean the is-anyone-reading? or will-I-make-rent? kind. The uncertainty I’m talking about dogs the very act. This business of writing an essay, for instance: Which of ten thousand possible openings to choose—and how to ignore the sweaty sense that the unseen, unconceptualized ten thousand and first is the real keeper? Which threads to tug at, without knowing where they lead, and which to leave alone? Which ideas to pick up along the way, to fondle and polish and present to an unknown reader? How to know what sentence best comes next, or even what word? A shrewd observer will note that I am complaining about the very essence of writing itself, but that has been the long-held privilege of writers—and they enjoyed it in the secure comfort of their uniqueness. Who else was going to do the writing, if not the writers who grouse about writing?

Now along come these language engines, with suspiciously casual or mythopoeic names like ChatGPT or Bard, that suffer not an iota of writerly uncertainty. In what can only be called acts of emesis, they can pour out user manuals, short stories, college essays, sonnets, screenplays, propaganda, or op-eds within seconds of being asked for them. Already, as Naomi S. Baron points out in her book Who Wrote This?, readers aren’t always able to tell if a slab of text came out of a human torturing herself over syntax or a machine’s frictionless innards. (William Blake, it turns out, sounds human, but Gertrude Stein does not.) This unsettles Baron, a linguist who has been writing about the fate of reading for decades now. And it appears to be no lasting consolation that, in some tests, people still correctly recognize an author as artificial. Inexorably, version after version, the AIs will improve. At some point, we must presume, they will so thoroughly master Blakean scansion and a chorus of other voices that their output—the mechanistic term is only appropriate—will feel indistinguishable from ours.

Who Wrote This?: How AI and the Lure of Efficiency Threaten Human Writing
by Naomi S. Baron
Stanford University Press, 344 pp., $30.00

Naturally, this perplexes us. If a computer can write like a person, what does that say about the nature of our own creativity? What, if anything, sets us apart? And if AI does indeed supplant human writing, what will humans—both readers and writers—lose? The stakes feel tremendous, dwarfing any previous wave of automation. Written expression changed us as a civilization; we recognize that so well that we use the invention of writing to demarcate the past into prehistory and history. The erosion of writing promises to be equally momentous.

Literary Theory for Robots: How Computers Learned to Write
by Dennis Yi Tenen
W.W. Norton & Company, 176 pp., $22.00

In an abysmally simplified way, leaving out all mentions of vector spaces and transformer architecture, here’s how a modern large language model, or LLM, works. Since the LLM hasn’t been out on the streets to see cars halting at traffic signals, it cannot latch on to any experiential truth in the sentence, “The BMW stopped at the traffic light.” But it has been fed reams and reams of written material—300 billion words, in the case of GPT-3.5, the model behind ChatGPT—and trained to notice patterns. It has also been programmed to play a silent mathematical game, trying to predict the next word in a sentence of a source text, and either correcting or reinforcing its guesses as it progresses through the text. If the LLM plays the game long enough, over 300 billion or so words, it simulates something like understanding for itself: enough to determine that a BMW is a kind of car, that “traffic light” is a synonym for “traffic signal,” and that the sentence is more correct, as far as the real world goes, than “The BMW danced at the traffic light.” Using the same prediction algorithms, the LLM spits out plausible sentences of its own—the words or phrases or ideas chosen based on how frequently they occur near one another in its corpus. Everything is pattern-matching. Everything—even poetry—is mathematics.
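For readers who want to see the bare bones of that prediction game, here is a deliberately crude sketch in Python—my own toy, not anything drawn from Baron or Tenen, and nothing like the neural networks inside an actual LLM. It merely counts which word tends to follow which in a scrap of invented text and guesses accordingly; the sample sentences and the predict_next helper are made up for illustration.

```python
# A toy sketch of next-word prediction by counting -- an illustrative assumption,
# not how a real LLM works (those use neural networks trained on vast corpora).
from collections import Counter, defaultdict

# A tiny, invented "corpus" for demonstration purposes only.
sample_text = (
    "the bmw stopped at the traffic light . "
    "the truck stopped at the traffic light . "
    "the bus stopped at the traffic signal ."
)

# Build a bigram table: how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
words = sample_text.split()
for prev_word, next_word in zip(words, words[1:]):
    follow_counts[prev_word][next_word] += 1

def predict_next(prev_word: str) -> str:
    """Guess the continuation seen most often in the sample text."""
    candidates = follow_counts.get(prev_word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))      # 'traffic' -- the most frequent follower of 'the' above
print(predict_next("traffic"))  # 'light'
print(predict_next("danced"))   # '<unknown>': never seen, so no plausible guess
```

The point of the toy is only this: every guess comes from frequencies in the text it has seen, which is the pattern-matching logic, vastly scaled up, that the paragraph above describes.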

We still don’t know precisely how humans grasp language, although it isn’t the LLM way; no infant that I know of consumed 300 billion words before saying “Mama.” But in his slim new book, Literary Theory for Robots, Dennis Yi Tenen, an associate professor of English at Columbia University, proposes that the way we use language to create works bears some similarity to the way the machines do. “Thinking and writing happen through time, in dialogue with a crowd,” Tenen maintains. “Paradoxically, we create new art by imitating and riffing off each other.” Subconsciously or otherwise, a writer milks inspiration out of libraries and conversations, and draws assistance from dictionaries, thesauruses, and style guides. “We think with our bodies, with tools, with texts, within environments, and with other people.” A writer relies in less calculating fashion on the books she has ingested than an AI does, but they’ve made her into a writer all the same. It was always an error, Tenen writes, “to imagine intelligence in a vat of private exceptional achievement”—to buy into the fable of the writer in her lonely garret, manufacturing words and ideas de novo.

In this notion of distributed intelligence, there is something both democratizing and destabilizing—a sneaky but egalitarian mode of murdering the author. Tenen insists, though, that we shouldn’t agonize too much over the source of intelligence. Who cares if our thinking is closer to the synthesis of LLMs than to the divinely ordained originality held dear by the Romantics, as long as we have an effect upon the world? Certainly not Aristotle. “In the Aristotelian model,” Tenen writes, “intelligence is the GOAL of thought.” (The caps lock letters are Tenen’s, not mine or Aristotle’s.) It’s Plato who held intelligence to lie within the department of the interior—a private, nebulous thing that occasionally led to enlightenment. Pick your philosopher.

Even at the summit of literary creation, fiction writers yielded to the seeming inevitability of recombination. Tenen’s potted history of authorial hacks, the richest section of his book, begins with Georges Polti, an enterprising Frenchman who in 1895 published a book called The Thirty-Six Dramatic Situations, to help dramatists write new plays. Once you’d eliminated supplication, deliverance, vengeance, pursuit, disaster, revolt, and the other 30 symptoms of the human condition, he implied, what else was left? (Polti wasn’t afraid to get specific: Among the subtypes of the “pursuit” situation were “pursuit for a fault of love” and “a pseudo-madman struggling against an Iago-like alienist.”) “They will accuse me of killing imagination,” Polti wrote, but in fact, his primer aspired to free playwrights from the pursuit of mere novelty, so they could devote themselves to truth and beauty. Mark Twain invented a self-gumming scrapbook for authors, into which they might paste notes, newspaper snippets, and images, for subsequent inspiration. (His secretary once filled six scrapbooks with clips about the Tichborne trial in London, involving a no-name butcher who claimed the title to an English peerage. Twain concluded that the tale was too wild to be of use to a “fiction artist”—but it did form the basis of Zadie Smith’s latest novel, The Fraud.) Companies sold devices like the Chautauqua Literary File and the Phillips Automatic Plot File Collector, into which writers stuffed their reference materials, so that they could later pluck out a setting, a character, or the seed of a plot. It was ever thus, Tenen implies—the magpie approach to thinking, the collage as the modus operandi of writing. Why are we unnerved by LLMs following those same principles?

When I reached this juncture in Literary Theory for Robots, I let out a silent, screaming plea for our species. The art of the novel doesn’t lie in the combine-harvesting of details and plotlines. It lies in how a writer selectively filters some of them through her own consciousness—her deliberations, the sum of her life, the din of her thoughts—to devise something altogether different and more profound. This, and only this, makes any piece of writing meaningful to those who read it. The AIs of the future may meet other yardsticks for creativity. They may, say, grow aware of themselves as creators, satisfying the neurosurgeon Geoffrey Jefferson’s dictum that a machine will equal the brain when it not only writes a sonnet but also knows that it has written it. Their cogitations may seem as bleary and inscrutable as those of humans. (Already we are hard-pressed to say how precisely some hallucinations emerge from AIs.) But they will never have experiences the way we have experiences, I quarreled with myself. They can’t lose a friend to suicide, or feel the pain of a twisted ankle, or delight at their first glimpse of the rolling Caucasus, or grow frustrated in a job, or become curious about Dutch art. (And that was just my 2023.) Any texts they furnish will be intrinsically hollow; they will fail to hold us, like planets without gravity. Or so I contended.

But not very far into Baron’s Who Wrote This?, I realized I was being defensive—that I was arguing for a special exemption for writing and language because I consider them such immutable aspects of the mind, and of being human. Baron, with the dry eyes of an actuary, sets about deromanticizing writing. She presents classifications of creativity—ranging from the “mini c” creativity of personal satisfaction, where you tweak the recipe of a peach cobbler at Thanksgiving, through the “little c” rung of winning a county fair ribbon for said recipe, up to the cobbler-less “Pro C” of professional creations like the Harry Potter series and the “Big C” league of Shakespeare and Steve Jobs.

Baron invokes these distinctions in part to understand human creativity. But she is particularly interested in whether AI imperils the Big C. She points out that the high art of literary writing is merely a sliver of all writing turned out by humanity. Much of the rest is “everyday writing by everyday people,” and it includes grocery lists, birdwatching journals, emails, social media status updates, and office memos. Another subset—Baron loves her taxonomies—consists of writing for professional or financial gain. Here rest advertising copy, chemistry primers, white papers, earnings reports, and business case studies—texts to which we rarely look for deep meaning, “Big C” creativity, or personal connection. Not only will AIs be capable of producing these artifacts of writing, but a reader will feel no acute sense of loss in discovering where they came from. Tenen would note that, even today, such texts already repurpose previous writing to a large extent. To resent AIs for similarly relying on the work of others would be as fatuous as dismissing a novelist who employs a spellchecker to correct his usage of “who” and “whom.”

Both Tenen and Baron are cautious boosters of AI, saluting its potential to relieve us of many “lesser” forms of writing. But they also predict that more literary writing—Big C writing—will resist the encroachments of the machines. “It’s simply that, however effective or powerful, a muscular artifice for the sake of artifice isn’t that intelligent or interesting to me,” Tenen says. For truly human writing, an AI needs to gain a wider sense of the world, he adds. “But it cannot, if words are all it has to go by.” A machine cannot (as yet) watch a film to review it, and it cannot (also as yet; one must cover one’s rear) interview legislators to write a political feature. Anything that it produces in these genres must be confected out of reviews and interviews that have already been written. That lack of originality, Tenen would contend, will forever keep true creativity beyond the reach of AI.

Still, I remained unsure. One might argue that it is always the audience that creates meaning out of a text—that a book is merely a jumble of words until it provokes responses in a reader, that the act of reading summons the book into being. In doing so, we wouldn’t just be going back half a century, to reader-response theory and Roland Barthes’s essay “The Death of the Author.” More than a millennium ago, the Indian philosopher Bhatta Nayaka, in a literary treatise called Mirror of the Heart, reasoned that rasa—the Sanskrit notion of aesthetic flavor—resides not in the characters of a play but in the reader or spectator. “Rasa thus became entirely a matter of response,” the Sanskrit scholar Sheldon Pollock wrote in A Rasa Reader, “and the only remaining question was what precisely that response consists of.”

Bhatta Nayaka today, digesting the relationship between our AIs and us, would ask us an uncomfortable question. If, in a blind taste test, some readers are moved by a poem or a short story by ChatGPT, will we continue to prize their experience, and hold their response to be more important than anything else? It’s bound to happen, at some point—and the computers don’t even need to be sentient to get there. Alan Turing knew it. In his 1950 paper, when he proposed an inquiry into the question “Can machines think?” Turing swerved quickly into the question of whether machines could play the imitation game—whether they could merely fool human beings into concluding that they were thinking. The outcome, for all practical purposes, is the same—and the difference between moving us and fooling us isn’t as great as we’d like to believe. 

So much for readers. But what of writers? The twentieth century is cluttered with the vacated chairs and discarded uniforms of workers whose jobs have been automated. Human hands once stuffed sausages, riveted cars together, and transferred calls in telephone exchanges. Once again, it is tempting to claim an exemption for writing. “Because mind and language are special to us, we like to pretend they are exempt from labor history,” Tenen notes. But “intellect requires artifice, and therefore labor.” In the commercial sphere, a lot of writing is not so far removed from sausage-making—and the machines have already begun to encroach. Realtors use ChatGPT to pump out listings of houses. The Associated Press turns to AI models to generate reports on corporate earnings. Context, a tool owned by LexisNexis, reads judicial decisions and then offers lawyers their “most persuasive argument, using the exact language and opinions your judge cites most frequently.” When you consider that some judgments are now drafted by AI as well, the legal profession seems to be on the cusp of machines debating each other to decide the fate of human beings.

It won’t do to be snobbish and describe these kinds of writing work as thankless, because they have occupied people who have been thankful for the income. Roughly 13 percent of American jobs are writing-intensive, and they earn more than $675 billion a year. Many of these jobs are likely to evaporate, but when this is aired as a concern, the champions of automation have a standard lexicon of liberation. “Freed from the bondage of erudition, today’s scribes and scholars can challenge themselves with more creative tasks,” Tenen writes. If he’d been speaking that sentence, perhaps he’d have ended it with an upward, hopeful lilt? Because little about the modern economy suggests that it wishes to support even the creative writers who already live within it, let alone the thousands on the verge of being emancipated by AI.

However, there is supposedly freedom on offer for novelists and poets as well. In one of Baron’s scenarios, AI tools provide the divine spark: “Think of jumpstarting a car battery.” But cars start the same way every time, and they really just need to reach their destinations. For writers, trite as it sounds, it’s about the origin and the journey. In the cautionary parable of Jennifer Lepp, as narrated by Baron, the writer is cold-shouldered out of her own writing. Lepp, a one-woman cottage industry turning out a new paranormal cozy mystery every nine weeks, recruited an AI model called Sudowrite as an assistant. At first, Sudowrite helped her with brief descriptions, but gradually, as she let it do more and more, “she no longer felt immersed in her characters and plots. She no longer dreamt about them,” Baron writes. Lepp told The Verge: “It didn’t feel like mine anymore. It was very uncomfortable to look back over what I wrote and not really feel connected to the words or the ideas.”

Here, at last, is the grisly crux: that AI threatens to ruin for us—for many more of us than we might suppose—not the benefits of reading but those of writing. We don’t all paint or make music, but we all formulate language in some way, and plenty of it is through writing. Even the most basic scraps of writing we do—lessons in cursive, text messages, marginal jottings, postcards, all the paltry offcuts of our minds—improve us. Learning the correct spellings of words, according to many research studies, makes us better readers. Writing by hand impresses new information into the brain and sets off more ideas (again: several studies). And sustained writing of any kind—with chalk on a rock face, or a foot-long novelty pencil, or indeed a laptop—abets contemplation. An entire half-page of Baron’s book is filled with variations of this single sentiment, ranging from Horace Walpole’s “I never understand anything until I have written about it” to Joan Didion’s “I write entirely to find out what I’m thinking, what I’m looking at, what I see and what it means.” Sometimes even that is prologue. We also write to reach out, to convey the squalls and scuffles in our souls, so that others may see us better and see themselves through us. The difficulty of writing—the cursed, nerve-shredding, fingernail-yanking uncertainty of it—is what forces the discovery of anything that is meaningful to writers or to their readers. To have AI strip all that away would be to render us wordless, thoughtless, self-less. Give me the shredded nerves and yanked fingernails any day.