This page provides your librarians' current understanding of the scholarly publishing community's consensus surrounding disclosure of the use of artificial intelligence tools, whether through transparency statements or citation. This information has been updated as of September 2025.
Organizations such as the ICMJE (International Committee of Medical Journal Editors) and policy-making bodies such as the European Parliament have updated their recommendations to cover artificial intelligence, broadly mandating transparency and disclosure of its use.
AI Transparency Statements are a good first step towards ethical use of AI. These are short statements in a document that share whether AI was used and how. They can include the tools used and links to prompts, which are akin to "methods" in a research study and help a viewer/reader understand your authorship and/or creative process.
Artificial Intelligence Transparency Statements state (1) whether AI was used at any stage of the creation of a work; (2) if so, which tool(s); and (3) how the creator used the tools and how their output is reflected in the submitted work.
Creators may also be asked to share a link back to their prompt and the tool's response. However, some AI tools make your chat or prompt public when you use a "share" button or link generator within the tool. For example, xAI -- the creator of the AI tool Grok -- exposed the chats of hundreds of thousands of users without their explicit consent or knowledge. This included not only the chats but also the documents and images users had shared. If users are asked to link back to their prompt, they should be made aware that doing so will make the prompt publicly available.
AI transparency statements serve several key purposes:
Some examples:
A 2024 thematic analysis of academic publisher guidelines on the use of artificial intelligence found that academic publishers overwhelmingly prohibit attributing authorship to generative AI tools. Another recurrent theme in the guidelines is transparency about the use of AI tools, disclosed in the presentation of scholarship.
Examples:
ICMJE, Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals (section II, no. 4, p. 3)
European Parliament, EU AI Act: first regulation on artificial intelligence (see "Transparency Requirements")
Major publishing organizations such as the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), and the JAMA Network prohibit crediting LLMs as authors. The journal Nature’s policy states:
Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”
Just as all style guides have unique approaches to documentation and citation, there is no universally accepted way to acknowledge the use of generative AI in academic writing. However, most organizations agree on one thing: Generative AI tools such as ChatGPT cannot be cited as an author of a publication.
The article How to Cite ChatGPT, from the APA Style blog, outlines current experiences with using ChatGPT and recommends that authors not only verify all sources generated by ChatGPT but also read the content of those sources to ensure the summaries are accurate.
APA’s current journal publishing policies on generative AI are:
View additional guidance for authors in the APA Journals Policy on generative AI: https://www.apa.org/pubs/journals/resources/publishing-tips/policy-generative-ai
Although the recommendations in the Chicago Manual of Style online are less definitive about whether ChatGPT can occupy the author position in a citation, they do acknowledge that this is an "evolving topic" and that users should expect future updates. For scholarly publishing, CMOS provides more structured guidance, deferring to language from the Committee on Publication Ethics:
"AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements."
Modern Language Association of America (MLA) style uses the concept of core elements and containers, "to give writers flexibility to apply the style when they encounter new types of sources" (MLA Style Center).
As with other style guides, MLA cautions against treating any AI tool as an author:
The MLA Style Center provides several examples of paraphrasing and quoting generative AI output on this page.