
Show, Don’t Tell: What AI Can’t Do


By Laura Hartenberger

Teaching writing at university sometimes makes me feel like an academic imposter. Compared to my students’ other college courses, with their weighty textbooks, weekly quizzes, and the expectation of all-nighters, my writing classes, I fear, seem too exploratory to be rigorous. Their focus on craft and process, not on content that can be memorized and tested via multiple choice, perhaps encourages students to think they’re not learning anything very substantial.

Generative AI tools like ChatGPT have amplified the impulse to turn my writing classes into arguments for the value of their own existence. Not only do I need to make the case that writing classes offer comparable value per credit hour to something like organic chemistry, but I also need to show students that writing via human brain, rather than by algorithm, has real worth. 

I begin with a simple task: Write a description of a place, using syntactical choices to add and echo meaning. 

Then, prompt ChatGPT to replicate the piece as closely as possible. 

In class, we discuss: What did you capture in your own work that the AI missed? What exactly made your version better?   

One student complains the program kept adding interpretation to the scene when she wanted it to stick to flat description. Another says he couldn’t get it to stop sounding like a salesman—the AI was promoting the idea of the place instead of simply describing it. Some report that it took longer to tweak their prompts than it did to write their own pieces. All express frustration with the tone of the AI’s output: it was cold, flat, boring, creepy.   

“But what makes it flat?” I ask. “What makes it boring?” 


“Show, don’t tell” is one of the few standard lessons taught in almost every introductory creative writing class. Both showing and telling are required for effective storytelling, but nuanced narratives tend to avoid articulating the significance of their scenes in explicit terms. Instead, they show the reader a scene and allow its meaning to be drawn out from clues embedded in the description, dialogue, and action. Rather than telling the reader, “The woman was secretly sad,” skilled writers convey the emotion through action: “The woman hid her tears behind sunglasses.” 

We advise student writers to minimize exposition, put plot points into dynamic scenes, and weave key contextual details into narratives rather than inserting long stretches of backstory. The writer should know why a character behaves a certain way, but the reader should be given space to figure it out for themselves without too much explanatory voice-over from the writer. 

Programs like ChatGPT excel at summary and exposition. One of ChatGPT’s most impressive features is its ability to synthesize long texts into concise takeaways. It can differentiate key points from minor ones fairly well, distill multiple texts into a coherent summary, and capture the essence of an expansive piece with decent accuracy in just a few sentences. If fed a long series of events, it can recap what happened in a succinct paraphrase. 

But it falls short in its ability to create detailed, layered scenes that convey meaning in subtler, less explicit ways. When I ask ChatGPT to “describe a peaceful day at a pond,” it begins: 

A peaceful day at a pond is a serene and tranquil experience, where nature’s beauty unfolds before your eyes. As you approach the pond, you feel a gentle breeze carrying the scent of fresh water and blooming flowers. The sun shines brightly overhead, casting a warm and inviting glow on the surroundings.

The first sentence summarizes the feelings one might have at the pond rather than describing the scene itself, omitting specific sensory details in favor of generalizations. If ChatGPT were a human student, a writing teacher would probably say, “Tell me more: What makes the day peaceful, serene, and tranquil? What exactly is so beautiful about the nature you’re witnessing?”

The third sentence also slips into summary, explaining what effect the sun shining overhead had on the surroundings rather than allowing readers to see it for themselves through the writing. A reader can’t picture the surroundings because they’re simply “surroundings”—what is it the sun is shining on? A gifted human writer might show us how the sun’s beams fall in terms that characterize the type of warmth they cast, exploring new ways to capture something that happens every day through specificity of observation.

The second sentence does a better job of placing the reader into the physical scene, although obvious, overly familiar phrases—“gentle breeze,” “blooming flowers”—detract from its impact. We expect breezes to be gentle and flowers to bloom; a more engaging line might choose more specific, surprising, or incongruous adjectives—a conciliatory breeze; arrogant flowers.

It’s nearly impossible to avoid overly familiar phrases when using generative AI, given the algorithm’s design, which works by predicting the most likely next words from patterns in its enormous corpus of training texts. When ChatGPT writes description, it remixes and regurgitates what has already been described, so it will rarely produce truly unexpected, ultranuanced, pleasantly divergent, or potently obtuse word pairings. “Gentle breeze” is so generic as to be empty, a line that could describe practically any breeze, in any setting, no more descriptive than “a breeze-like breeze.” Even when ChatGPT tries to show, rather than tell, the program can’t show with the kind of rich specificity that would engage the reader.

Human writers decide which details to include or exclude in their scenes. A description can’t address everything—a picture is worth a thousand words, after all. Writers need to select certain key details to guide the reader through the narrative, and the type and amount of detail changes the impact.

Elementary writers sometimes provide either too much detail or too little. At minimum, a certain base level of logistical information is needed to convey the gist of the story, such as the fact that we are at a pond on a nice day, and beginners sometimes fail to provide sufficient context to create a sense of place and time. Others provide too much, forcing the reader to wade through long passages of detail, unsure which ones matter.

Strong writers, however, select a careful balance of details that both feed the reader’s comprehension of the plot and add some kind of significance. For example, a writer might suggest, through a few carefully chosen words, an underlying “too-good-to-be-true” dynamic to the pond’s peacefulness that will become relevant later in the story, or fixate on the fleeting nature of the flowers’ blooms to provide implicit commentary on a character’s mood as she struggles with a miscarriage. We tend to think of well-written scenes as those that include only the details that matter for the story.

ChatGPT struggles to insert this kind of nuanced layering into its description. Its writing is functional—it gets characters from point A to point B. But it can’t do the moves that make writing great, the kinds of biting, oblique details that unlock a whole perspective or relationship. It can’t make intentional choices about which details to include or exclude to create an elegant, distilled, complex scene.  

This layering can come through the language itself, but it can also come through the omission of language. To create compelling dialogue, don’t let characters directly answer one another. If one asks, “How was your day?” and the other says, “I want a divorce,” the first character should not reply, “Oh my god, you do?” They should say, “Did you pick up milk and eggs?” The scene’s poignancy lies in the space between the characters’ lines. But AI is trained to generate phrase patterns; it’s not trained to generate silence.  

Why does ChatGPT default to telling over showing? Because it’s trying to do its job—it’s responding to a prompt, an imperative. If you ask it to convey something, it will answer as directly as possible, which means it will favor exposition; by design, it cannot be oblique or indirect, cannot let details speak for themselves. 

How do we teach students to do the work AI cannot do—to show rather than tell? Many teachers use sensory description exercises, sending students into the world or bringing objects into the classroom, asking students to smell, touch, taste, observe, and listen to things and write about them. And almost all teachers use personalized feedback and conversation with students. Given a scene like ChatGPT’s pond description, a writing teacher might ask questions to encourage the student to think critically and creatively about the details they might leverage to convey a more sophisticated and nuanced story.

What these activities have in common is physicality: there is contact between students and the world they are describing; there is contact between student and teacher. AI lacks the sensory perception to illustrate the action of a scene, so it defaults to summary; and it lacks the human connection needed to discuss a passage with another person in an open-ended yet guided way. It can tell us what we’ve already said, but it can’t tell us what to say.

In June of 2023, McKinsey released a report predicting that by 2030, fifty percent of our work might be automated by generative AI tools. As an example, the report proposed that postsecondary English language teachers will be able to outsource some of their work to generative AI, “perhaps initially to create a first draft that is edited by teachers but perhaps eventually with far less human editing required. This could free up time for these teachers to spend more time on other work activities, such as guiding class discussions or tutoring students who need extra assistance.” 

But McKinsey fails to recognize that teaching is not like business—instructors are not bosses who deliver edits that need to be implemented. When a teacher asks exploratory questions to help guide students’ writing, the teacher doesn’t know the answers; only the students do, and the teacher is helping them locate those answers within themselves. Our feedback is meant to encourage students to think deeply and critically, not conform to certain standards. It’s meant to invite them to reflect on their purpose in writing: What is it that they are trying to say? What’s specific to their own experiences, to their own ideas, that could differentiate their prose from a generic narrative—or, perhaps, from an AI-generated scene?

AI tools may make writing faster and easier, but by design, they cannot produce the kind of subtlety, nuance, and intentional obliqueness that characterizes our best prose. Perhaps, as Kurt Vonnegut said, most of our narrative writing can be reduced to only a few fundamental story shapes—basic plot arcs and conflict types that tend to recur in most storytelling. But it is the details along the way, the infinite variety of ways we illustrate these same few stories through showing rather than telling, that make us want to keep writing and reading.

Perhaps there are some limited uses of the technology for student writers. ChatGPT’s skill at summary and synthesis could be a useful tool for those struggling to figure out what they’re trying to say. Inputting a story and asking the program to articulate its main points in one sentence could be an interesting test for how clearly the plot is organized, or how well the message is coming through. It also could be interesting to feed ChatGPT your own passage and ask for a variation, to imitate and remix your own voice, so that you can understand the elements of your own writing voice through its caricature.

And perhaps the art of prompt engineering is useful as a writing practice on its own. Many of my rough story outlines look like a series of AI prompts: Add a scene where the protagonist is in the hospital and believes a nurse is trying to poison her. Include a conversation where the couple discuss whether to have a baby. My notes to myself help me map out basic plot points before going back and doing the hard work of building up the scene itself. A writer trying to get away from overly familiar descriptions or exposition-heavy scenes could conceivably learn from how thinly AI fills in such scenes: see what the program does, then do something completely different.

But these are the kinds of exercises that one might do once, as an experiment, when feeling stuck on a project; it’s hard to imagine using them as part of a regular writing practice. For the most part, AI’s inclination to tell the reader things, rather than show, prevents it from being a useful drafting tool for narrative writers. 


When I debrief with students about their ChatGPT experiments, I also ask: What did it feel like to use the chatbot? Unnerving. What would it feel like to pass off its writing as your own? Embarrassing. 

When we prompt ChatGPT, it prompts us back: What makes an author? Why should any one person write? What makes a writer sound like themselves? Where do new ideas come from? What makes a story original? What’s the value of new text when the old can be remixed ad infinitum?

If we tell students not to use ChatGPT in their writing, it’s unlikely to work. But if we show them its capabilities and limitations, we empower them to think more deeply about their writing practice and authorial voice. 

 


LAURA HARTENBERGER teaches in the Writing Programs at UCLA. Her writing has appeared in NOĒMA Magazine, Redivider, The Massachusetts Review, Hawai’i Review, subTerrain, CutBank Magazine, NANO Fiction, and other journals. Her writing won a Southern California Journalism Award from the Los Angeles Press Club and has been highlighted on Longreads, Literary Hub, and in The Best American Nonrequired Reading. Find her on Instagram @laurahartenberger.

 

Featured image by Frederica Galli, courtesy of Unsplash.