ChatGPT and other large language models are changing almost every writing-based task, and publishing journal articles is no exception. As academic authors grapple with the limits and abilities of this new technology, some major publishing platforms are instituting rules that define and, at times, outright ban the use of generative AI tools in writing and publishing journal articles. Below you’ll find a list of some of the major publishers that have instituted policies around the use of large language models. Please note that these policies may change over time as the use, opinion, awareness, and definitions of generative AI change.
Many of the authorship policies cite responsibility and accountability for a work as uniquely attributable to humans; because an AI tool cannot take such responsibility, it cannot be credited as an author.
A consistent theme throughout many ethics policies is trust and transparency in science and other scholarship: readers, reviewers, and editors must know precisely which aspects of the research were created or augmented by generative AI technologies.
Policies concerning figures, images, and artwork emphasize the importance of original artwork that respects the copyright of the original creator. Figures and images should be original, a requirement grounded in concerns over research integrity; however, some exceptions are made, such as for tools that merely adjust the brightness or contrast of a figure.
Nature and the portfolio of Springer Nature journals prohibit the submission of generative AI-created images, citing copyright regulations.
Nature has also allowed for a possible exception for articles that are directly about these tools, where AI-generated images are topically relevant.
Elsevier prohibits the use of generative AI or AI-assisted tools in the creation or manipulation of images in submitted manuscripts.
Elsevier also prohibits the use of generative AI or AI-assisted tools in the creation of artwork, such as art submitted for a journal cover.
Authors who use these tools but do not disclose their use may face retraction by the journal. Over time, Retraction Watch, a longstanding non-profit organization that tracks retractions, will likely note an increase in retractions due to undisclosed use of AI in manuscript preparation or scientific methods. Retraction Watch lists the reasons for each retraction in its database. It remains to be seen whether a reason will be added for misuse of AI, or whether these violations, which undermine trust in science, will be folded into existing reasons. Plausible existing reasons that AI misuse could fall under include Falsification/Fabrication of Image and Concerns/Issues About Authorship. However, it may be of interest to future researchers to separate out retractions related directly to generative AI.
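For researchers who eventually want to pursue that question, a minimal sketch of such a filter might look like the following. It assumes the Retraction Watch data has been exported to a local CSV file (hypothetically named retraction_watch.csv here) containing a Reason column of semicolon-separated reason labels; the file name, column name, and keyword list are illustrative assumptions, not the database's actual schema.

```python
# Minimal sketch: counting retractions whose stated reasons suggest AI misuse.
# Assumptions (not verified against the live database schema):
#   - the database has been exported locally as "retraction_watch.csv"
#   - it contains a "Reason" column of semicolon-separated reason labels
#   - the keyword list below is an illustrative guess at relevant phrases
import csv

AI_KEYWORDS = ["artificial intelligence", "generative ai", "chatgpt", "large language model"]

def looks_ai_related(reason_field: str) -> bool:
    """Return True if any listed reason mentions an AI-related keyword."""
    text = reason_field.lower()
    return any(keyword in text for keyword in AI_KEYWORDS)

with open("retraction_watch.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

ai_related = [row for row in rows if looks_ai_related(row.get("Reason", ""))]
print(f"{len(ai_related)} of {len(rows)} retractions mention an AI-related reason")
```

A keyword match on the reason field is only a rough proxy; a careful study would also inspect retraction notices themselves, since AI misuse may be recorded under broader categories like those noted above.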