AI is not an author.
AI text looks enticing to the untrained eye.
AI is excellent as an assistant for mundane writing tasks such as emails, day-to-day administration, short-answer tests or quizzes, and coding. However, AI is not an author with the critical thinking capabilities required for scholarly communication.
Publishers, examiners, and scholars have nearly all concluded that an AI model is not an author and, therefore, cannot take responsibility for its content in the way a human can. Indeed, the guidelines of the International Committee of Medical Journal Editors (ICMJE) state that authors must agree to be accountable for all aspects of their manuscript.
AI can be untrustworthy and lack transparency
High-quality work requires human judgment and oversight to ensure the content is sound and trustworthy. AI systems, however, lack transparency.
Each large language model (LLM) generates text through internal logic that is obscured in a ‘black box’. This raises serious ethical concerns. For example, most AI systems, such as ChatGPT, do not disclose their sources. For these reasons, LLMs cannot be held accountable for all aspects of the work, including ensuring that questions about accuracy and integrity are appropriately investigated. LLMs have no free will and, as such, bear no responsibility for their actions.
AI mimics human intelligence
AI tries to mimic human intelligence. This idea is not new. In 1950, Alan Turing proposed a test to determine whether a computer program could think like a human being: a program could be said to show artificial intelligence if its responses were indistinguishable from a human’s. In the 1960s, people could sit at their computers and have simple conversations with an early chatbot called ELIZA. A few years later, the film 2001: A Space Odyssey featured HAL 9000 (or simply Hal), a spaceship computer that controls the ship’s functions and interacts genially with the astronaut crew, until it malfunctions.
LLMs can produce biased content
LLMs naturally produce biased output, stemming from both the training data and the algorithms, which are usually obscured in a ‘black box’. A large amount of content cannot be used as training data because it is not publicly available online. The result is that some voices are amplified while others are marginalised.
LLMs can hallucinate
LLMs have a habit of hallucinating when they do not have the answer to a prompt. AI rarely admits that it does not know and seems incapable of dealing with the unknown. Instead, LLMs hallucinate: they produce output that looks correct or ‘factual’ but is, in fact, false.
Hallucinations happen because LLMs are built to predict the most statistically likely next word, not to check facts. This is why AI writing looks distinctly like, well, AI, with predictable patterns that appear authoritative and convincing to the average reader. This can produce what is sometimes called the ‘halo effect’, where content appears polished and factual but is, in fact, entirely nonsensical.
LLMs have no sense of timeliness
LLMs are trained on vast amounts of internet text, including outdated material, up to a fixed cut-off date. Merging that older text with modern-day language can produce output inconsistent with current social norms and practices.
AI slop
Offloading cognitive work to machines can produce lazy, poor-quality ‘slop’. The word ‘slop’ has taken on a new meaning: low-quality or inaccurate content generated by AI. Humans can produce slop too, but AI can produce it in far larger quantities while making it look high-quality on the surface.
If you would like to know more about how Wise Directions Copyediting can assist you, please contact us today.