AI is all the rage right now, owing largely to ChatGPT and Midjourney. While enthusiasts laud the capabilities of these new tools and call artists and authors “Luddites” for not adopting it, I think there are some very good reasons to steer clear of them right now.
Here’s why I, as an author, won’t be using artificial intelligence to help me “write faster”—and why I think other (would-be) authors should think twice too.
Copyright Doesn’t Apply to AI Generations
I’m going to start with what should be the biggest concern for anyone thinking they can make a quick buck off generations created with AI tools.
The copyright situation isn’t clear.
Back in the fall of 2022, an author applied for copyright on a graphic novel that used AI-generated imagery created with Midjourney. The US Copyright Office granted the registration in September 2022.
In December 2022, though, the office walked that decision back, and more recently it rescinded the original certificate and issued a new one.
Copyright protects text and other parts of the graphic novel, because a human created them. The imagery, though, has no protection, because a human didn’t make it.
This is an enormous problem for anyone who wants to use tools like ChatGPT or Midjourney to create materials. It means businesses can’t necessarily claim copyright on articles they generate (hi, CNET). It means copyright doesn’t apply to AI-generated “art.” And it certainly means an AI-generated book doesn’t have copyright protections.
For an indie author like myself, who uses Amazon’s Kindle Unlimited (KU) program, this is a nightmare situation. To enroll my books in KU and get the highest royalty rate, I have to claim copyright on the book. If I don’t hold exclusive copyright, Amazon can TOS me for infringement. I can claim the work is public domain and still publish with KDP, but then I only get the lower-tier royalty rate. And I cannot use KU.
Why Can’t You Use KU for PD Works?
Kindle Unlimited relies on exclusivity. Amazon demands that, for the duration of your KU enrollment, your work cannot appear anywhere else. If they find it anywhere else—even on a pirate site—you might get TOS’d from the program.
You cannot guarantee exclusivity of a public domain work, because it’s … well, public domain. Since it’s PD, anyone can take it and use it.
That’s why you can’t put it in KU—it’s not exclusive—and it’s why Amazon doesn’t offer the hefty royalty rate.
More concerning for authors, I think, is the confusion over copyright once AI becomes involved. If AI is involved, anyone could claim the work belongs to the public domain.
That’s right, AI bros, I could take your work and repost it myself and make all them sweet, sweet dollars that you want to make. In theory, anyone could do that.
Copyright exists to protect creators so we can make money off our creations during our lifetimes. AI effectively destroys those protections.
AI Will Force Copyright Law to Evolve
I say this knowing the likely outcome here is a string of court cases whose results will redefine and refine copyright law. Some AI use may be permissible, so many of these suits will focus on where the line falls between “machine-generated” (not eligible) and “human-generated” (eligible). In turn, the scenarios in which authors can use AI will become clearer.
For the moment, though, I’d steer clear of it. If you use it today, your copyright is in jeopardy until the legal system works out all the bugs.
Speaking of Lawsuits …
The copyright question around AI is based in current case law, particularly the “monkey selfie” case. But that’s not the only legal mire facing AI tools right now.
The other thing authors need to be watching is a string of lawsuits against AI art-generation tools. So far, two suits have been filed, one of them by media giant Getty Images.
The complaints in both suits are similar: these AI tools scraped copyrighted works from the internet to build their large training datasets.
It’s important here to understand that these “artificial intelligence” tools are not actually intelligent. They are complex algorithms that take user input (prompts) and create output based on a very large body of previously ingested material—the training dataset.
This is not to say the capabilities of these programs aren’t neat, but the point is that AI is not actually creating anything new. It’s just smashing together data points from its existing dataset and spitting something out. And for every “cool” thing we get, we get a lot more crap.
This is part of the argument against copyright protection—no “intelligent design” happened here. The other issue, of course, is that the training datasets themselves are just huge pits of copyright infringement.
Creators Did Not Consent to Inclusion of Their Works
Most artists never gave explicit consent for their copyrighted works to be added to these training datasets. Basically, the AI platforms took their data without consent—which is a huge no-no in privacy circles these days.
That’s also copyright infringement. I can’t take photos off Getty’s site and do whatever I want with them, even if I pay for rights to use the image. These AI tools never negotiated rights licensing. Artists weren’t compensated for their work being reused and now endlessly recycled within the database.
If that seems like a huge problem, that’s because it is.
It’s worth noting here that Getty is an enormous company with pretty deep pockets, so they can really hash this out in court if they want. And they have reason to: part of their business model is rights licensing, with exclusive images you can’t get access to anywhere else. Getty also hosts images from a lot of professional artists, photographers, and so on—so them going to bat here is also important for individual artists.
Plagiarism vs. Copyright and the Ethics of AI
Even if scraping these works isn’t straight-up copyright infringement, it is plagiarism. Plagiarism is less serious within the legal system, in that you can’t sue someone for plagiarizing you. You can sue over copyright infringement, though, and plagiarism often veers into that territory.
Plagiarism is a big deal professionally, though. You can get kicked out of university for plagiarizing. Book publishers are leery of it. And for professional writers, most of our clients will tell us “no plagiarism!”
So, what to do about AI, which is basically nothing but plagiarized materials?
The problem here is the way the training datasets were formed. Since we have copyright infringement, we also have plagiarism; and even if we didn’t have infringement, we would still have plagiarism, because the source material is reused without credit.
In the end, then, using AI is simply unethical. Accusations of plagiarism are often career kryptonite—so why court them by using AI?
Case in point: a recent Medium article that went viral shortly after the article it plagiarized did. Even though the text had been run through AI, long chunks were still recognizably lifted from the original.
If you’d been caught doing that in days of yore, your career would be over. Why is it suddenly okay to steal from your fellow authors just because you used a tool that itself stole from many, many authors?
It’s Crap, Plain and Simple
I’ve seen a few (short) pieces generated with AI. Leaving aside copyright issues and ethics, it holds its own pretty well when it comes to doing things like generating form letters.
Ask anything else of it, and the wheels start to come off. The few AI-generated kids’ books I’ve looked at have about zero soul—there is no authorial voice. Again, that’s great for business forms. It’s less great when the execution of an idea is the sole reason people want to read something.
I don’t want to see what it does with anything longer; it seems to lose the plot relatively quickly and descend into chaos.
AI was likely trained on short pieces, public domain works, blog posts, and even fanfic. Some fanfic is notoriously bad. A lot of fanfic writers lose the plot; it doesn’t shock me that AI also loses the plot on a regular basis.
Let’s add to this the fact that readers seem to be able to clock AI-generated articles a mile away. CNET got in trouble when readers noticed egregious factual errors in the text of some articles. Readers immediately recognized the Medium article as AI. Many KU users have complained about an increase in incoherent drivel on the platform—and pondered the likelihood of AI use.
So, sure, you can get a draft together quickly, but the critical question remains: is it any good?
The answer seems to be no—again, not surprising, because a lot of books written by humans aren’t very good! A lot of very bad writing likely populates those training datasets, and that means AI is learning from those “bad” examples.
Using AI Is a Disservice to Everyone, Even Ourselves
The desire to use a tool to create more books on an accelerated schedule is, perhaps, understandable. Readers are voracious and KDP’s algorithms—plus social media ones—seem to demand this kind of brutal pace from authors. Unless you publish every month or every other month, you’re lost in the machine.
In that environment, AI becomes tantalizing. Imagine being able to put a whole book together in a single weekend!
But, again, the question of quality remains. And at the current moment, AI is not going to put out quality works. That means we still have to spend time editing and rewriting and finessing—time and effort we could have used to write something original anyway.
Some people will skip that stage and rush to the next. Much like publishers complain that NaNoWriMo encourages people to rush out a novel (they get NaNo drafts dumped on them every December and January), AI encourages authors to just shit out whatever’s next, quality be damned.
That’s a disservice to readers. The fact we’re using tools that steal from our peers and colleagues makes it worse. And the fact that we’re willing to skip the joyful part of storytelling—crafting the narrative—suggests we’re doing a disservice to ourselves and our craft as well.
These are the things we should be thinking about, rather than how these incredibly imperfect tools can crap out the shoddiest manuscripts as fast as possible so we can bilk money from our readers, without regard for anything—including the legal ramifications of using them.
Anyway, that’s why I won’t be using AI in my writing any time soon.