(31 Jan 2023) The recent release of AI technology that generates new text has raised serious questions in the research community. For one: “Can ChatGPT be named an author of a research paper?”
The resounding answer from arXiv’s leaders and advisors is, “No.” A computer program cannot, for example, take responsibility for the contents of a paper, nor can it agree to arXiv’s terms and conditions. Other organizations agree.
To address this issue, arXiv has adopted a new policy for authors regarding the use of generative AI language tools.
The official policy is:
1. continue to require authors to report in their work any significant use of sophisticated tools, such as instruments and software; we now include in particular text-to-text generative AI among those that should be reported, consistent with subject standards for methodology.
2. remind all colleagues that by signing their name as an author of a paper, they each individually take full responsibility for all its contents, irrespective of how the contents were generated. If generative AI language tools generate inappropriate language, plagiarized content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).
3. generative AI language tools should not be listed as an author; instead, authors should refer to (1).
The full announcement is available on the arXiv blog.