Artificial intelligence is rapidly getting better at creating art and writing. This raises the question: will AI replace human artists? Can it?
I want to look at this question specifically as it applies to fiction writers. Can we expect to see AI-generated novels populating bookstores in the near future? What lies ahead for the industry of writing and publishing?
And as a certified young person, is my future as a writer at risk? With how fast this tech is developing, the entire industry will almost undoubtedly be transformed in the coming years and decades. The only questions are how, and where that will leave us human writers.
For the entire history of the world, humans have been the only creatures capable of constructing fictional stories. This has been a unique power of ours that’s been one of the keys to our monumental success—fictional stories have formed the basis of cultures and systems of cooperation for millennia, allowing us to communicate, exchange ideas, and collaborate in ways no other species is capable of.
With the dawn of artificial intelligence, is our position as Earth’s greatest storytellers in jeopardy?
Spoiler alert: no. Not at all. And here’s why.
Here’s the thing about AI: it’s not better at storytelling than we are, and it never can be. At least, AI in its current form can never supersede our collective storytelling ability. This is because generative AI, which appears to create new text and images and videos from thin air, has to draw upon a vast library of things we’ve made. It’s sort of like a funhouse mirror, reflecting different perspectives and mixes of our own creations back at us. It can’t create anything completely original.
Then again, one could argue that none of us can, either. Like AI, we rely on input from the world around us that we then synthesize and turn into art, something that’s hopefully a fresh remix of ideas, though it can’t be completely new.
But compare an AI generator to a human writer and you’ll see some major deficits in AI’s creative writing skills. An AI’s output is often only as good as its prompt, while a good human writer is far better at working with less.
For example, I prompted ChatGPT to write a sad six-word story. Here’s what it wrote:
Lonely gravestone, forgotten name, endless solitude.
Kinda sad, I guess, although endless solitude doesn’t sound so bad. But it’s pretty generic, and it’s a stretch to say it’s a story. There’s no revelation, no surprise, nothing clever.
Of course, there’s a well-known sad, six-word story by a human writer that’s much more successful at what it set out to do. It goes like this:
For sale: baby shoes, never worn.
According to legend, it was written by Ernest Hemingway on a bet, but that may not be true. Whatever the case, I do know it was written by a human being, and it’s arguably much more of a story, and much sadder, than what the AI produced.
Now, this is on a small scale. Given enough good prompting, an AI could probably come up with some pretty good short fiction. And as this technology improves, the gap between AI and human writers on a sentence level will continue to decrease.
Things get a little more interesting when you look at something like a novel.
One thing AI is terrible at when compared to human writers, at least right now, is writing long, cohesive pieces. AI can’t write the next Great American Novel because it doesn’t have the power to sustain a structured and consistent story through hundreds of pages of text. It may be good at writing on a micro scale, but not so much on a macro scale. Writing an entire novel requires a good memory of everything that’s come before in the story, and usually a good idea of what’s ahead, too. Current AI models, at least those available to the public, don’t have very long memories. ChatGPT, for example, has a context window of about 4,000 tokens, which works out to roughly 3,000 words. That’s maybe one chapter of a book? So it’s a pretty bad collaborator for a project as big as a novel. Imagine working with a cowriter or ghostwriter who kept forgetting what book they were working on.
I’m sure the memory of AI systems will continue to increase as the technology improves—for instance, GPT-4 has a context window of about 8,000 tokens—but we’re still a long way from an AI novelist. Expanding that window isn’t cheap, either: with the standard attention mechanism, doubling the context length roughly quadruples the computation required, so we might see a significant slowdown in memory advancements in the future. This memory deficit is especially bad if an AI is tasked with writing a series of novels. Remembering the vast amounts of information and context necessary for such a task would be quite a challenge.
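To put those context windows in perspective, here’s a back-of-the-envelope sketch. The word count and the words-per-token ratio are rough assumptions, not exact figures:

```python
# Rough arithmetic: how much of a novel fits in a context window at once?
# Assumptions: a typical novel runs ~90,000 words, and English prose
# averages ~0.75 words per token. Both are ballpark figures.

WORDS_PER_TOKEN = 0.75
NOVEL_WORDS = 90_000

novel_tokens = NOVEL_WORDS / WORDS_PER_TOKEN  # ~120,000 tokens

for name, window in [("4k-token window", 4_000), ("8k-token window", 8_000)]:
    share = window / novel_tokens * 100
    print(f"{name}: holds about {share:.0f}% of the novel at once")
```

Even the larger window holds well under a tenth of a typical novel at any one time, which is exactly why the model keeps “forgetting” earlier chapters.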
That’s one reason why human novelists don’t need to be all that concerned. If you write short stories… well, that’s too bad.
One of the problems this short memory contributes to is that AI is bad at structuring a story while writing. Sure, if prompted, it can write you a structure, but it doesn’t have the same capacity as humans to think ahead and implement narrative structure while actually writing the text of a story. A human writer can easily foreshadow something in Chapter One that won’t happen until Chapter Fifty-Five. An AI, however, is pretty limited in its scope of thinking ahead. Mostly, it’s predicting what follows each sentence, working word by word. In a sense, that’s how writers work, but we’re also capable of simultaneously forming the bigger picture, both in our minds and through the words we’re writing. AI may write a sentence with equal competency to a professional writer, but when you get to something at the scale of a novel or a series of novels, it gets left in the dust.
AI writes linearly. Humans don’t. Even the most linear human writers, even those who don’t outline or plan ahead at all, instinctively think about their story in a non-linear way, getting ideas for scenes that haven’t happened yet, thinking about different paths the plot could go down, or recalling a detail from earlier in the book that they want to go back and change. AI just writes one word after another, and doesn’t edit itself. If that sounds like a bad writing method… that’s because it is.
3. Context, Meaning, and Values
AI is not yet sentient. Hence… artificial intelligence. And ‘intelligence’ is perhaps a little hyperbolic, depending on your definition. While it’s not just a fancier version of autocomplete, it does function in a similar way. Generative AI is predictive, meaning it uses probabilistic data to figure out the most likely sequence of words in a given context.
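To make “predictive” concrete, here’s a toy sketch of the idea: count which word most often follows each word in a tiny corpus, then always predict the likeliest successor. Real language models are incomparably more sophisticated, but the underlying logic, picking probable next words from patterns in training text, is similar:

```python
# Toy "predictive text": learn which word most often follows each word
# in a tiny made-up corpus, then predict the likeliest next word.
# This is only an illustration of the idea, not how real models work.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often every other word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" followed "the" twice; "mat" and "fish" once each
```

The model has no idea what a cat is; it only knows that “cat” tends to follow “the” in the text it has seen. Scale that up enormously and you get something like the word-as-data behavior described below.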
But that context is often quite sparse. Novelists don’t live in a vacuum, fed a few words as a prompt for our next stories. If you give a human writer a prompt to write you a story, and give an AI the same prompt, the human will probably produce something you like better. This is because, ironically, we humans have far more data to go off of when it comes to prompts. We understand the context we’re living in, we know the person who’s delivering the prompt, and we remember everything that’s happened up to that point. An AI doesn’t know a particular user’s sensibilities and preferences unless specifically told.
Getting back to AI’s predictive abilities. Since it’s not sentient, since it doesn’t think the way we do, AI simply treats words as data. Data it’s well acquainted with, data it usually knows how to use appropriately, but not data it truly understands the meaning of. This is why AI can sometimes produce sentences that are so obviously wrong to humans. We may not have the vast catalog of training text that an AI can reference, but we have enough real-world experience and contextual understanding to recognize nonsense when we see it. We can infer ideas not explicit in text. We can make unlikely connections. We can deal with unexpected language far better than AI.
AI understands very well how words relate to one another. Most of the time, it’s pretty good at delivering solid pieces of text that make sense. But because it can’t actually understand what it’s saying, because it can’t infer anything meaningful from text, it fails spectacularly at things humans find easy. Even simple questions of logic and math can confuse AI. I wouldn’t expect it to fare very well when tasked with writing something with subtext or subtle thematic resonance. Those kinds of things tend to require someone who fully understands the deeper meanings of words, and their context in a broader story.
And AI is a soulless, godless, amoral system. That’s not an insult; it’s a fact. AI has no values, unless it’s been specifically told what values to have. Even then, since it can’t understand the deeper meaning of words, AI isn’t always that great at following any values it’s been given. And a lot of AI has been trained on vast swaths of text from the internet, which, as a whole, isn’t exactly known for its high moral caliber. It reflects the same biases humans have. AI is by no means objective; it’s basically tossing the vast spectrum of human opinions into a blender and serving up a smoothie of questionable consistency and color. AI doesn’t understand what’s offensive and what’s morally wrong. It can be trained to recognize patterns and avoid certain kinds of words and phrases, but that can only do so much. Especially when it comes to subtleties and gray areas, AI doesn’t understand as much as humans can. So unless you happen to agree with all of the wildly different moral and ideological stances taken by millions of internet users, you should probably determine the moral message of your book yourself.
That said, AI does have content limits. At least, the most widely used AI does. Since most large AI systems are developed by companies with certain standards and policies and a platoon of lawyers trying to avoid getting sued, they’re subject to restrictions that humans don’t inherently have. This is a good thing, for the most part, but it also means AI can’t go to the dark places an author might want or need to go in their story. That isn’t much of a problem if you’re writing children’s books, but it makes AI less well-suited to darker topics. And since it doesn’t have the same complex understanding of morality and ethics that humans do, it doesn’t know quite as well the difference between pushing the limits and stepping over the line.
The result of AI’s lack of inherent morals and its imposed content limits is that you might end up with stories that either spread a bad message or are so inoffensive and timid that they don’t have a message at all. Humans are a lot better at threading the needle.
4. Artistic Influences and Motives
Another important point regards influences. Having lived our own lives, and possessing memories of our own experiences, we’re aware to some extent of our influences as writers. And anything and everything can be an influence, from your childhood home to the last movie you watched to a dinner you had three years ago. AI’s influences are so incredibly vast as to be practically unknowable. While this theoretically means it’s less likely to plagiarize, it’s also much harder to check. You’re familiar with every piece of media you’ve consumed, and so it’ll be pretty obvious if you’re copying one of them.
Here’s a thought experiment. Say you come up with a very specific premise that you think is original, but was actually used in a film you’ve never heard of. If you prompt an AI system to come up with a story based on your premise, there’s a pretty good chance it’ll recognize the striking similarities to the existing film and copy much of its output from that very film. After all, it matches perfectly, and the AI’s predictive abilities tell it that the details of this film are the most likely to follow from the premise you gave it. Use the AI’s story, and you could inadvertently copy something you didn’t know existed.
On the other hand, even if the premise was identical to an existing story, developing the specific contours of the fleshed-out story yourself would guarantee some differences. You’ll pull from different sources and influences, producing something uniquely yours. AI can produce nothing that’s uniquely its own, since it doesn’t have an individual identity.
Additionally, almost all human writers have a goal when they write something, especially something as long and in-depth as a novel. We have a message we want to convey, an idea we want to explore, a change we want to enact, or a meaningful story we want to share. An AI has no goal, other than to follow instructions given to it by… humans. We write for our own reasons. AI also writes for our reasons, not its own.
5. Individuality and Sentience
Also, let’s not discount the fact that writing is fun for humans. AI systems, no matter how complex they are, don’t have feelings. Not yet, at least. So no matter how good AI gets at writing, it won’t replace human writers, if only because we’re too stubborn to stop writing our own stuff. Other professions that have been supplanted by technology have disappeared because no one wants to keep doing those jobs. But writing isn’t something we can automate for everyone and give up on doing ourselves. Narrative is the lifeblood of humanity; it has been throughout history.
For the writer, writing allows for a kind of self-discovery and deeper understanding of the world that can’t be found any other way. Certainly not by reading AI-generated stories. It’s the creation process itself, far more than the output, that differs when comparing human writers to AI. A human and an AI may produce similar work, but the AI experienced nothing to create what it made. It has no memories, no feelings, no journey it went on in the process of creating the work.
Individuality also separates writers from AI. AI is built on a conglomerate of information collected from a vast web of data. We are individuals equally connected to a complex world, but with a unique perspective of it. In that sense, it’s the holes in our knowledge that may give us an advantage over AI. We have more space to imagine. More questions.
And let’s not forget that we humans tend to like fellow humans. We like to support real artists, not machines. If anything, human-generated content may gain a sort of prestige when compared to AI-generated content. The rawness, the rough edges, the imperfection of human art may be valued more than ever before. And sure, AI can imitate those aspects of art as well. But simply knowing that something was created by a real human with a real life and real memories and real feelings is powerful, philosophically and emotionally.
We’re sentient, and AI isn’t. I may like something it produces, even be blown away by it, but I won’t respect or admire an AI the same way I would a human artist. After all, if content like that is so easy for AI to produce, why should I care about it? The struggle that goes into the human creation of art is part of its beauty. When you appreciate the context of a piece, it resonates more deeply. And the context of any AI piece, which generally amounts to, “This was produced by an algorithmic model engineered by a multi-billion dollar corporation and trained on vast amounts of human input and probabilistic data…” well, it’s not exactly moving, is it?
6. Author Personas
One thing you might notice if you’ve tried talking to an AI chatbot is that they’re pretty boring. Compared to a real person, with idiosyncrasies and an individual perspective, who might have a sense of humor and a distinctive voice, AI chatbots are bland, at least before they’ve been given thorough prompting that forces them to adopt certain traits. I don’t find myself turning to AI for a fascinating conversation or interesting thoughts. That’s not necessarily a bad thing, since AI is arguably better at this point for more practical uses.
But think about real-life authors. By and large, authors are interesting people. A lot of them are pretty weird. They’ve got quirks and habits and unique philosophies. Often, readers pick up a book not because of the story itself but because of the name on the cover. Even if it’s not a famous author, readers can feel a personal connection to that writer, a connection that wouldn’t exist if the book were written by a faceless AI.
Yes, AI can write, but does that make it a writer? We don’t read books in a vacuum. Much of the time, even if the story is incredibly immersive, we are aware of the real-world context of what we’re reading. We have favorite genres and books, but we also have favorite authors, and not necessarily simply because they wrote books we like. Personally, I’d rather support a real person by reading their book than support a disembodied AI, even if the AI can technically write better.
And let’s not forget—AI is built on the work of humans who don’t receive credit or compensation for their contribution. This has raised some ethical questions, especially in the area of AI art generators, where the style of particular artists can be quite clear. That same issue can easily arise in AI text generators. You can ask AI to write something in the style of a certain writer, dead or alive, and then publish it as if it’s your own.
At the end of the day, I think we’ll always prefer real human writers. Sorry, AI.
7. AI’s Opinion
Rather than endlessly pontificating over the question of whether AI will replace novelists, let’s go straight to the source, shall we? I asked ChatGPT for an answer, and it replied:
It is highly unlikely that AI will completely replace novelists. While AI has the ability to generate text and even entire stories, it cannot replicate the human experience of creativity and emotional depth that makes a novel truly compelling.
Some serious self-deprecation here.
AI-generated writing is often formulaic and lacks the subtle nuances that make a story unique and relatable to readers. While AI can generate text based on patterns and trends, it lacks the empathy and emotional intelligence to create truly impactful and meaningful stories.
Sounds like someone needs to go to therapy.
When I asked it about potential roadblocks in the way of AI replacing human novelists, it listed creativity, emotional intelligence, originality, voice and style, and feedback and revision. This last point is one I haven’t gone into, but it is indeed another roadblock in AI’s ascent to literary stardom. AI isn’t as good at understanding and synthesizing feedback about its own work and editing what it’s written. Human writers often have to discern between conflicting feedback or find deeper problems at the root of certain surface-level criticisms. AI gives all feedback equal weight, and since it doesn’t understand context, it doesn’t know what to ignore or whose feedback to prioritize. It could struggle to read between the lines and find underlying problems that aren’t directly addressed in feedback.
So AI itself seems to be skeptical of its own abilities. Technically, it’s just reflecting information it’s gathered from human sources, so it could be wrong. But if ChatGPT does manage to become a successful novelist, at least it can’t say, “I told you so.”
8. AI as a Tool
So even if AI one day becomes as good as human creators, I don’t think it’ll replace us. Instead, I think it’ll be more useful as an assistant, a tool used by producers rather than a producer itself. Don’t get me wrong; people will use AI to create complete pieces of content that they’ll put out into the world with no added human effort—and that’s already happening—but I think the best art will always be made by real, human artists.
This might seem arrogant—and, as a real human myself, I admit I’m a little biased—but I think it’s more arrogant to think that we can create, in a matter of decades, artificial technology that’s superior in every way to the immensely complex minds that millions of years of evolution have endowed us with.
One thing AI doesn’t really do yet is create things unprompted. It still requires human input to create anything coherent. The better the input, the better the result. In this way, it’s just like any other tool, from a paintbrush to the internet.
AI is only as good as its users. Maybe a little better, in some cases. But it doesn’t create art spontaneously, because why would it? That’s our thing. As long as AI remains a tool and not a competitor to humanity, I think it can do a lot of good.
Much like technological revolutions of the past, there’s a lot of hype, and a lot of potential, and a lot of promise in AI. And much like its predecessors, I think AI will evolve into a useful tool that works alongside artists instead of replacing them.
Already, there are many ways writers can use AI to enhance and assist their writing process. I’ve got a video on the way where I’ll explore and test out AI’s uses as a tool for writers, so subscribe if you don’t want to miss it.
I look forward to hearing your thoughts on AI and writing in the comments. If you want to watch the video versions of these blog posts, head over to my YouTube channel here, and subscribe for more videos on writing and publishing.
– Grayson Taylor