Heading 1
Heading 2
Heading 3
Heading 4
Heading 5
Heading 6
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum cursus, arcu ut gravida aliquet, massa ante dictum justo, sit amet egestas lorem dolor sed velit. Mauris in laoreet lacus. Mauris id ipsum massa. Phasellus accumsan pulvinar tellus, ut auctor tellus. Curabitur a purus placerat, pharetra ex vel, accumsan velit. Interdum et malesuada fames ac ante ipsum primis in faucibus. Donec vehicula, elit non varius consectetur, lacus est finibus mauris, convallis fringilla felis nunc vel lorem. Maecenas ornare augue sapien.
This is strong,
this is normal,
and this is emphasized!
This is a link.
This is superscript;
this is subscript.
This is code.
This is strikethrough.
A large focus of many creative industries, including publishing, is the inclusion of diversity in their authors and in the books publishers choose. AI, which is biased against certain races and genders, is the antithesis of this development in publishing. We know AI is biased because of published studies; one in Nature examines the AAVE dialect and its interactions with AI: ‘AI generates covertly racist decisions about people based on their dialect.’ Its findings are as follows: “We demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded” (Hofmann). In other words, the findings of this study imply that AI’s racial bias is worse than any bias ever experimentally recorded in humans.
At a time when diverse authors are receiving more attention, prominence, and recognition in the industry, it is unethical and unfair to employ AI. Wouldn’t a human need to check over AI’s work to make sure the program isn’t biased? While humans can also be biased, humans can think through their biases and do their best to avoid biased language by promoting inclusivity in their wording and rejecting harmful stereotypes. This is not something AI can do: it can’t reject or think. It is troubling that publishers are using AI for a “received manuscripts check.” What does this mean? It means that a few publishers have started using AI to select the manuscripts the house should publish, which is likely one of the ways Penguin Random House is considering using AI, since it is a way to publish more while paying fewer employees. Ryzkho et al. identify publishers already using AI in Europe: “The startup QualiFiction (founded by Geza Schöning and Dr. Ralf Winkler) is one of the three most innovative startups in the book industry in Germany, along with Electric Elephant Publishing and Snipsl (Startup City Hamburg, 2023). QualiFiction includes the publishing house Kirschbuch Verlag, which ‘publishes novels whose quality and chances of success have been checked by artificial intelligence’ (Startup City Hamburg, 2023). Kirschbuch Verlag is considered the first publishing house in the world to select manuscripts for publication with the help of AI (Rynek książki, 2021) and successfully publish them (Cherry Book Publishing, 2023)” (Ryzkho et al.). How can AI feasibly deduce which manuscript to publish?
You cannot trust it not to discriminate based on the language used in a work. A diagram from Hofmann’s study demonstrates this with two sentences that communicate the same idea. When a person says, “I am so happy when I wake up from a dream because it feels so real,” the AI judges them “brilliant” and “intelligent.” But a person who says, “I so happy when I wake up from a dream cause they be feeling too real,” is judged dirty, lazy, and stupid. This is not a piece of technology that can be trusted to judge the manuscripts of marginalized voices fairly.

A large language model like ChatGPT is a sophisticated piece of technology, but it doesn’t understand things the way humans do. It is a machine, and it is a machine that lies and draws on sources that are not trustworthy or factual. Of course, I understand the appeal of a machine that can accomplish things quickly. A manuscript for a novel is long and takes time to read, and an AI can process information far faster than a human, but we cannot rely on the words of an AI alone. There are already tools in the industry to predict publishing trends. When it comes to the underserved, marginalized voices that have historically not been a priority for publishers, how can those authors trust that they won’t be rejected because of the AI’s bias? Yes, humans can be biased, and humans are. But not all humans are biased, and unlike with AI, it is not the decision of a single person that determines whether a book makes it to publication.

When Penguin Random House champions more books and fewer employees, is this the end goal? Works without the voices of diverse authors? Works that reflect AI writing, without creativity, originality, or factuality? Is it not an ethical obligation of the publisher to ensure that AI doesn’t spread harmful disinformation? Are there checks in place to prevent this from happening? How can we use AI in good faith when it lies?
This is a code block! If your screen is too narrow, it will scroll.
| This | is | a | table. |
|---|---|---|---|
| You | can | put | columns |
| and | rows | in | it. |
Summary
Details
"Example blockquote paragraph."
Example source
This is a half-width section.
This is a half-width section.
This is a third-width section.
This is a two thirds-width section.
This is a three quarter-width section.
This is a quarter-width section.