Hello! This started as a news roundup but turned into an essay on a rather sprawling topic: creativity, artificial intelligence, and content. I’ll be back later in the week with arts and culture news you can use.
Imagine this, suggests the writer Debbie Urbanski in an essay published in LitHub in December of 2023: “An AI writes a novel, and the novel is good.”
It is a hypothetical she posits to ultimately suggest that novelists, instead of fearing the encroachment of artificial intelligence into the humanities, should embrace this technology as a partner or, at the very least, a research assistant. It is a necessary hypothetical not only because AI has not—at least according to public record—written a good novel, but also because it never explains what, exactly, we might consider to be a good novel.
Still, it is a hypothetical that many seem to cling to. Is it not wise, after all, to go on the offensive, to embrace technology before it is forced upon the masses? As such, some writers have followed suit.
NaNoWriMo, a 501(c)(3) nonprofit organization that hosts its national novel-writing month each November (an activity that, surely, any person may do independently and on their own time), just updated its AI policy to say that “to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology, and that questions around the use of AI tie to questions around privilege.”
Condemning AI, it explains, is supposedly ableist and classist because 1. “Not all writers have the financial ability to hire humans to help at certain phases of their writing.” 2. “Not all brains have same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing.” 3. “All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain.” (It is unclear what AI has to do with this or how AI is supposed to help, especially with this final concern. It is of note that a current sponsor of NaNoWriMo is ProWritingAid, an AI-powered app that offers “comprehensive story critiques in seconds.”)
It is serendipitous, or rather ironic, that on Saturday, sci-fi author Ted Chiang published an essay in the New Yorker making his case for why AI cannot—and will not—make art.
A core part of Chiang’s argument is that in the creation of any work of art, humans have to make countless decisions. The process of creation through an AI model vastly cuts down those decisions and the work involved in making them:
The companies promoting generative AI programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception. It is a mistake to equate “large-scale” with “important” when it comes to the choices made when creating art; the interrelationship between the large scale and the small scale is where the artistry lies.
The fallacy of using AI in the attempt to make art is the assumption that an idea, or inspiration, is the most important and defining piece of a work of art. “Generative AI,” Chiang writes, “appeals to people who think they can express themselves in a medium without actually working in that medium.”
Some suggest that AI, then, should be a tool—rather than a replacement—for the artist themselves. That is where Urbanski lands in her argument, and what New York Times critic-at-large A.O. Scott posits in a piece that was published a few weeks after Urbanski’s. Writers have certainly used AI with some demonstrated success, Scott notes: the novelist Ben Lerner in a piece for Harper’s; the writer Sean Michaels in his novel Do You Remember Being Born?, which uses some AI-generated text for the “dialogue” of an AI model featured in the novel; and the writer Sheila Heti, who wrote her short story “According to Alice” with the help of a chatbot she herself customized. Visual artists, like the filmmaker Bennett Miller, have similarly used tools like the generative image AI system DALL-E 2 to create their own somewhat uncanny plays on the emerging and quickly advancing technology.
It is worth noting that these artists have engaged with artificial intelligence with an awareness that I am not sure less practiced creatives share. As the quote famously attributed to Michelangelo goes: “It is already there; I just have to chisel away the superfluous material.” A block of marble has one master; artificial intelligence, in inexperienced hands, may seem to have its own sovereignty.
That may be why we see AI frequently promoted as a helper—and potentially, an eventual replacement—for human skill rather than as a tool to be wielded by deft hands. The question remains: What help does it have to offer?
In a study published earlier this summer in the journal Science Advances, researchers aimed to see if OpenAI’s GPT-4 could make humans more creative in the act of writing short stories. The study measured participants’ inherent creativity by walking them through a task that required them to input 10 words that were as “different from each other as possible.” Then, participants were tasked with writing an eight-sentence short story on one of three topics: one-third of participants had to rely entirely on their own thoughts, one-third could receive a story idea from GPT-4, and the final third could receive up to five ideas from the AI model. Human reviewers judged the resulting stories by two measures: originality and usefulness (a story’s potential to be further developed).
The results found that GPT-4 didn’t make a difference in the story quality of participants who had already demonstrated high levels of creativity (even if the study’s means of testing that creativity is questionably constricted to a single measure). The large language model did “help” those who tended to be less creative. But AI ultimately reduced “the collective diversity” of the stories. Reliance on a model—trained with expansive but ultimately limited data—led to homogeneity.
AI models can be trained with more data than is possible for any human to retain at once; after all, as Cornell professor Laurent Dubreuil writes in the July issue of Harper’s magazine, computer scientist Alan Turing—widely considered a founding father of AI—suggested that the key difference between artificial and human intelligence was “storage capacity.” Not original thought.
It is strange, then, to insist that AI can dream up ideas and creations that humans can’t possibly imagine, as Scott seems to suggest regarding Heti’s chatbot: “Her narrative, which blithely contradicts itself, is nothing a human being would think to compose, and her voice—by turns playful, naïve, cold, vulnerable and obnoxious—exists in an uncanny valley of verbal expression. It doesn’t sound like anyone. And that’s the point.”
Wouldn’t a chatbot, trained on Heti’s own creative input—and potentially scores of other written scripts—have to sound like someone, in some regard?
That is what Dubreuil expresses in his Harper’s piece; although LLMs may be able to suitably mimic the style of a writer or even to craft metaphors that could seem to the unaware reader a flash of originality, it all comes back to the long history of the written word. “Undoubtedly, the GPTs of today are capable, and will become even more capable, of doing a pretty passable pastiche of the already said,” he writes. As a result, “we have already habituated ourselves to banality and mediocrity.”
Chiang is aligned with the sentiment. “The task that generative AI has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read,” he writes. He suggests that this is because the creation of generative AI is divorced from intention and decision-making. The screenwriter Charlie Kaufman, who fiercely opposed AI during last year’s WGA strike, similarly finds value in art because of its humanity. “If I read a poem, and that poem moves me, I’m in love with the person who wrote it,” he said last year. “And I can’t be in love with a computer program. I can’t, because it isn’t anything.”
This logic—that human-made creations are inherently more special or meaningful than those drafted by an LLM—isn’t necessarily incorrect. It is why, Chiang argues, the simplest messages—sentences like “I’m sorry” and “I love you”—have meaning in spite of their lack of originality.
But this stance—that intentionality is everything—neglects part of what makes creeping recommendations for AI as a support tool so dangerous. Decision-making does not define art because “[i]n art, we don’t begin from choice, but from cliché, from a position of determination and dependency,” writes one respondent to Chiang’s essay:

A defense of art-making in purely humanist-voluntarist terms will collapse under its own weight when placed under scrutiny, especially when we consider the extent to which so much writing produced by human beings, with human hands and human minds, is as automatic as that of LLMs. I’ve been hesitant to confront students about suspected AI-use because I simply cannot tell if the generic quality of their arguments and phrasing is the product of algorithmic flattening or of the quotidian automaticity of college-writing that has been around at least since Aquinas led disputations at the University of Paris.
To learn and grow as a writer, or an artist of any kind, a person must push beyond the limits of what is known and understood, and to do that takes ample work and study. Not only do LLMs serve their users material that has—on some level—been pre-processed, but they also eliminate the labor and education a person must undertake to develop their own thought and creation. Think of how the abstractionists studied the Dutch masters. Think of how the Beatniks studied the Romantics.
This is the thing: A person does not have to be a writer. They do not have to become an artist of any kind. They can simply choose not to do the work. Yet artificial intelligence has emerged as a tool that promises to speed up, to optimize, to enhance productivity in all areas of life—even intellectual and artistic life.
I do not think that it is a coincidence that the AI bubble has crept into the realm of art at a time when content is king. We have seen, at least for the past decade in media, a prioritization of quantity and speed—a need to produce more, and faster, in order to not drown amid increasingly competitive waters and mercurial algorithms.
More and more people—especially those belonging to younger generations—aspire to lives as content creators, a title that seems to connote independence from corporate rule and handsome pay. We see what content performs, and each time a permutation of an already successful model gains recognition, countless others follow its lead. We profess a desire for the new and the original, but in an ecosystem where discoverability favors those following the proven paths of creation, there is little short-term reward for those creators to sustain their pursuit of originality.
I’m not saying that there is no reward to originality, of course—just that we can’t be altogether surprised when we see, even on this platform, essays that resemble the same trending topics, embrace the same academic language, and land on the same vague or inconclusive theses, or lists designed for quick and easy digestion. But neither should we be quick to cast those creations in a negative light; rather, we should view them as a first or second step in a creative process. Content is not art. But it is sometimes the thing that a person develops in order to better understand their own creative process and interests—if, and only if, they are willing to commit to that education.
What makes this environment so precarious is that, for the past two decades of the internet’s existence, we have gained an intimacy with the creative process of those who see, understandably, little benefit in creating in the shadows. They need to build an audience, they are told, to make connections, to gain a following, and then their work may get its due.
This is a punishing illusion and one that can distort the intentionality of creation for the sake of publication. In such an environment, of course, those developing the technology that could, ostensibly, reduce the timeline of inspiration to production might hope to find an interested audience. “Lack of originality,” Dostoevsky writes in The Idiot, “everywhere, all over the world, from time immemorial, has always been considered the foremost quality and the recommendation of the active, efficient, and practical man.” ▲