In discussions of AI, hallucination refers to an AI system presenting nonsensical or false information as fact. The Turkish Journal of Psychiatry has examined how common hallucination is: “In a study aiming to evaluate the frequency of AI hallucinations in research proposals entirely prepared by ChatGPT, it was shown that out of 178 references generated by ChatGPT, 69 were not Digital Object Identifiers (DOIs) and 28 references did not appear in Google searches or have an existing DOI (Athaluri et al. 2023)” (Mahmut). A DOI is a unique identifier assigned to an article or dataset that links it back to its source on the internet. When a reference has no DOI and cannot be found in a search, it is clear that ChatGPT invented the source and therefore hallucinated its existence. The same article, “Is Artificial Intelligence Hallucinating?”, also states that AI can get trapped in hallucination: “generative AI can continue to produce incorrect content sequentially once it has produced incorrect content, a behavior known as the snowball effect of hallucination (Zhang and Press et al. 2023)” (Mahmut).
Because of this, AI cannot be used to create documents or even answer questions without research to verify its output. In 2024, a lawyer in Massachusetts was caught using AI because the artificial intelligence had cited cases that never happened; they were completely fictional. This is the danger of using AI. The Maryland State Bar Association says this in an article about the case: “there is nothing wrong with using reliable AI technology for assistance in preparing legal documents. However, the ethical and professional rules that govern all attorneys require them to ensure the accuracy of their filings. This particular case highlights those obligations” (MSBA). The problem with this statement is that AI has proven itself, especially in this case, to be unreliable. It does, however, reinforce my argument that AI needs to be supervised: using AI without human supervision leads to consequences, such as, in this instance, a $2,000 fine. So why does the publishing industry want to use it that way?
As evidenced by his quote from the New York Times in the introduction, Mr. Malaviya wants to use AI to publish more titles without hiring more people. That is Penguin Random House’s stated goal for using AI, and it is unlikely they are the only company, especially in publishing, to think of AI as a cost-cutting measure. But can it actually cut costs? Evidence against that assumption appears in the BBC article “I’m being paid to fix issues caused by AI,” about a copywriter and her experience with AI. To quote the article, “Ms Skidd spent about 20 hours rewriting the copy, charging $100 (£74) an hour. Rather than making small changes, she had to redo the whole thing” (BBC). This undercuts Malaviya’s central point and desired goal for AI: to reduce the number of people working while raising productivity. Hiring more people to fix issues caused by AI does not lower workforce needs. To that end, AI is not the solution. Other professionals have spoken on the quality of AI writing. Steve Almond, a New York Times bestselling author, touches on this in his book Truth is the Arrow, Mercy is the Bow: “AI programs mimic. They don’t create. Ask one to compose a sonnet and it will spit out fourteen lines of doggerel. That’s not a person making word choices. It’s a machine performing a task” (Almond 108). He also touches on how readers are looking for unexpected writing, but AI cannot create unexpected writing; it only draws on what has already been created. Keeping the limitations of AI in mind, how does Penguin Random House’s CEO, Nihar Malaviya, expect to make it “easier to publish more titles without hiring ever more employees”? What exactly will AI do?