Study finds the chatbot doesn’t acknowledge concerns with problematic studies
(15 Aug 2025) The large language model–based chatbot ChatGPT fails to highlight the validity concerns with scientific papers that have been retracted or have been the subject of other editorial notices, according to a new study.
The analysis, published in Learned Publishing on Aug. 4, examines whether GPT-4o mini recognizes the problems with 217 scholarly studies that have either been retracted or been flagged for validity concerns in the Retraction Watch Database.
The study authors asked GPT-4o mini, a smaller, text-focused version of the model that powers ChatGPT, to evaluate the quality of each of the 217 papers 30 times, yielding a total of 6,510 reports. In none of those reports did the tool mention that the paper under analysis had been retracted or had validity issues.



