(14 Mar 2023) The rules are set out in the first AI ethics policy from Cambridge University Press and apply to research papers, books and other scholarly works.
They include a ban on AI being treated as an ‘author’ of academic papers and books we publish.
The move provides clarity to academics amid concerns about flawed or misleading use of powerful large language models like ChatGPT in research, alongside excitement about their potential.
The Cambridge principles for generative AI in research publishing include that:
- AI must be declared and clearly explained in publications such as research papers, just as scholars do with other software, tools and methodologies.
- AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge.
- Any use of AI must not breach Cambridge’s plagiarism policy. Scholarly works must be the author’s own, and not present others’ ideas, data, words or other material without adequate citation and transparent referencing.
- Authors are accountable for the accuracy, integrity and originality of their research papers, including for any use of AI.