AI Chatbots Have Thoroughly Infiltrated Scientific Publishing

https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/

One percent of scientific articles published in 2023 showed signs of generative AI’s potential involvement, according to a recent analysis

By Chris Stokel-Walker | Published May 1, 2024

[Figure: line chart showing various words, including “noteworthy” and “intricate,” increasing in usage over time. Credit: Amanda Montañez; Source: Andrew Gray]

Researchers are misusing ChatGPT and other artificial intelligence chatbots to produce scientific literature. At least, that’s a new fear that some scientists have raised, citing a stark rise in suspicious AI shibboleths showing up in published papers.

Some of these tells—such as the inadvertent inclusion of “certainly, here is a possible introduction for your topic” in a recent paper in Surfaces and Interfaces, a journal published by Elsevier—are reasonably obvious evidence that a scientist used an AI chatbot known as a large language model (LLM). But “that’s probably only the tip of the iceberg,” says scientific integrity consultant Elisabeth Bik. (A representative of Elsevier told Scientific American that the publisher regrets the situation and is investigating how it could have “slipped through” the manuscript evaluation process.) In most other cases AI involvement isn’t as clear-cut, and automated AI text detectors are unreliable tools for analyzing a paper.

Researchers from several fields have, however, identified a few key words and phrases (such as “complex and multifaceted”) that tend to appear more often in AI-generated sentences than in typical human writing. “When you’ve looked at this stuff long enough, you get a feel for the style,” says Andrew Gray, a librarian and researcher at University College London.


LLMs are designed to generate text—but what they produce may or may not be factually accurate. “The problem is that these tools are not good enough yet to trust,” Bik says. They succumb to what computer scientists call hallucination: simply put, they make stuff up. “Specifically, for scientific papers,” Bik notes, an AI “will generate citation references that don’t exist.” So if scientists place too much confidence in LLMs, study authors risk inserting AI-fabricated flaws into their work, mixing more potential for error into the already messy reality of scientific publishing.

Gray recently hunted for AI buzzwords in scientific papers using Dimensions, a data analytics platform that its developers say tracks more than 140 million papers worldwide. He searched for words disproportionately used by chatbots, such as “intricate,” “meticulous” and “commendable.” These indicator words, he says, give a better sense of the problem’s scale than any “giveaway” AI phrase a clumsy author might copy into a paper. At least 60,000 papers—slightly more than 1 percent of all scientific articles published globally in 2023—may have used an LLM, according to Gray’s analysis, which was released on the preprint server arXiv.org and has yet to be peer-reviewed. Other studies that focused specifically on subsections of science suggest even more reliance on LLMs. One such investigation found that up to 17.5 percent of recent computer science papers exhibit signs of AI writing.
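
Conceptually, this kind of buzzword hunt is a document-frequency comparison: for each indicator word, what share of papers in a given year contain it, and how does that share shift over time? Dimensions has its own query tools, so the following is only a minimal sketch of the idea on a toy corpus; the word list comes from the article, but the data and the helper function are illustrative assumptions, not Gray’s actual pipeline.

```python
import re
from collections import Counter

# Indicator words named in the article; Gray's full list is longer and is
# paired with "control" words to separate chatbot style from normal drift.
INDICATOR_WORDS = {"intricate", "meticulous", "commendable", "noteworthy"}

def word_rates(abstracts: list[str], vocab: set[str]) -> dict[str, float]:
    """Share of abstracts in which each word of `vocab` appears at least once."""
    hits = Counter()
    for text in abstracts:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        for word in vocab & tokens:
            hits[word] += 1
    n = max(len(abstracts), 1)
    return {w: hits[w] / n for w in vocab}

# Toy corpora standing in for yearly batches of abstracts.
abstracts_2020 = [
    "We measure adsorption energies on metal surfaces.",
    "A survey of grid-scale battery storage methods.",
]
abstracts_2023 = [
    "This paper offers a meticulous and commendable analysis.",
    "We examine the intricate interplay of several noteworthy factors.",
]

rates_2020 = word_rates(abstracts_2020, INDICATOR_WORDS)
rates_2023 = word_rates(abstracts_2023, INDICATOR_WORDS)
for word in sorted(INDICATOR_WORDS):
    print(f"{word}: {rates_2020[word]:.1%} -> {rates_2023[word]:.1%}")
```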

[Figure: line charts showing how scientific publishing volume and usage of various AI-associated and “control” words changed from 2015 to 2023, per the Dimensions database; bar charts comparing year-over-year percentage change in usage of these words from 2022 to 2023. Credit: Amanda Montañez; Source: Andrew Gray]

Those findings are supported by Scientific American’s own search using Dimensions and several other scientific publication databases, including Google Scholar, Scopus, PubMed, OpenAlex and Internet Archive Scholar. This search looked for signs that can suggest an LLM was involved in the production of text for academic papers—measured by the prevalence of phrases that ChatGPT and other AI models typically append, such as “as of my last knowledge update.” In 2020 that phrase appeared only once in results tracked by four of the major paper analytics platforms used in the investigation. But it appeared 136 times in 2022. There were some limitations to this approach, though: It could not filter out papers that might have represented studies of AI models themselves rather than AI-generated content. And these databases include material beyond peer-reviewed articles in scientific journals.

Like Gray’s approach, this search also turned up subtler traces that may have pointed toward an LLM: It looked at the number of times stock phrases or words preferred by ChatGPT were found in the scientific literature and tracked whether their prevalence was notably different in the years just before the November 2022 release of OpenAI’s chatbot (going back to 2020). The findings suggest something has changed in the lexicon of scientific writing—a development that might be caused by the writing tics of increasingly present chatbots. “There’s some evidence of some words changing steadily over time” as language normally evolves, Gray says. “But there’s this question of how much of this is long-term natural change of language and how much is something different.”
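
That question, natural drift versus something different, boils down to a before-and-after test: does a word’s year-over-year change after ChatGPT’s November 2022 release exceed the drift seen in earlier years? Here is a minimal sketch of that comparison, with made-up counts rather than the article’s data (a real analysis would also normalize by total publication volume, as the “control” words in Gray’s charts do):

```python
# Yearly counts for one word (placeholder values, not the article's data).
counts = {2018: 900, 2019: 950, 2020: 1000, 2021: 1040, 2022: 1100, 2023: 2100}

def yoy_changes(series: dict[int, int]) -> dict[int, float]:
    """Year-over-year fractional change for each consecutive pair of years."""
    years = sorted(series)
    return {y: (series[y] - series[p]) / series[p]
            for p, y in zip(years, years[1:])}

changes = yoy_changes(counts)
pre = [v for y, v in changes.items() if y <= 2022]   # drift before ChatGPT
post = changes[2023]                                 # jump after its release
print(f"largest pre-2023 drift: {max(pre):+.1%}; 2023 jump: {post:+.1%}")
```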

Symptoms of ChatGPT

For signs that AI may be involved in paper production or editing, Scientific American’s search delved into the word “delve”—which, as some informal monitors of AI-made text have pointed out, has seen an unusual spike in use across academia. An analysis of its use across the 37 million or so citations and paper abstracts in life sciences and biomedicine contained within the PubMed catalog highlighted how much the word is in vogue. Up from 349 uses in 2020, “delve” appeared 2,847 times in 2023 and has already cropped up 2,630 times so far in 2024, a 654 percent increase over the 2020 count. Similar but less pronounced increases were seen in the Scopus database, which covers a wider range of sciences, and in Dimensions data.
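
Counts like these can be approximated against PubMed’s public E-utilities esearch endpoint, which returns the number of matching records as JSON. The article does not say exactly how its search was framed, so the query fields below, such as the [Title/Abstract] filter, are assumptions, and record counts are only a proxy for total uses. The sketch also makes the percent-change arithmetic explicit: (2,630 − 349) / 349 is roughly 654 percent, measured against the 2020 baseline.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(word: str, year: int) -> int:
    """Number of PubMed records whose title/abstract contains `word` in `year`."""
    term = f"{word}[Title/Abstract] AND {year}[pdat]"
    url = EUTILS + "?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json"}
    )
    with urllib.request.urlopen(url) as resp:  # note: NCBI rate limits apply
        return int(json.load(resp)["esearchresult"]["count"])

baseline = pubmed_count("delve", 2020)
for year in (2022, 2023, 2024):
    count = pubmed_count("delve", year)
    change = 100 * (count - baseline) / baseline
    print(f"{year}: {count:,} records ({change:+.0f}% vs. 2020)")
```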

Other terms flagged by these monitors as AI-generated catchwords have seen similar rises, according to the Scientific American analysis: “commendable” appeared 240 times in papers tracked by Scopus and 10,977 times in papers tracked by Dimensions in 2020. Those numbers spiked to 829 (a 245 percent increase) and 20,536 (an 87 percent increase), respectively, in 2023. And in a perhaps ironic twist for would-be “meticulous” research, that word doubled on Scopus between 2020 and 2023.

More Than Mere Words

In a world where academics live by the mantra “publish or perish,” it’s unsurprising that some are using chatbots to save time or to bolster their command of English in a sector where it is often required for publication. But employing AI technology as a grammar or syntax helper could be a slippery slope to misapplying it in other parts of the scientific process. Writing a paper with an LLM co-author, the worry goes, may lead to key figures generated whole cloth by AI or to peer reviews that are outsourced to automated evaluators.

These are not purely hypothetical scenarios. AI certainly has been used to produce scientific diagrams and illustrations that have often been included in academic papers—including, notably, one bizarrely endowed rodent—and even to replace human participants in experiments. And the use of AI chatbots may have permeated the peer-review process itself, based on a preprint study of the language in feedback given to scientists who presented research at conferences on AI in 2023 and 2024. If AI-generated judgments creep into academic papers alongside AI text, that concerns experts, including Matt Hodgkinson, a council member of the Committee on Publication Ethics, a U.K.-based nonprofit organization that promotes ethical academic research practices. Chatbots are “not good at doing analysis,” he says, “and that’s where the real danger lies.”

A version of this article entitled “Chatbot Invasion” was adapted for inclusion in the July/August 2024 issue of Scientific American.
