(6 Apr 2026) On January 6, 2026, freelance journalist Alex Preston published his review of Jean-Baptiste Andrea's book Watching Over Her.
Though the review was largely positive, it became the source of controversy when, in March, readers noticed similarities between it and a review published in The Guardian by Christobel Kent in August 2025.
When confronted about the similarities, Preston admitted that he had used AI to draft the article and that the AI system he used had incorporated language from Kent's review. The New York Times then cut ties with Preston, citing multiple violations of its ethics policies.
At almost the same time, an AI-powered local news network named Nota abruptly shut down. The closure came after Poynter and Axios notified the company's CEO, Josh Brandau, of multiple examples of lifted language, copyright-infringing images, and other copied material.
All in all, eleven local news sites were shuttered. It's unclear whether the two part-time editors who were responsible for generating the sites have been let go.
As we discussed in February, if you don't disclose AI usage in spaces where human authorship is expected, you are committing plagiarism. But these cases take the issue a step further, with the AI plagiarizing the human authors it was trained on.
As these cases show, this is not a hypothetical risk. It is happening right now. If you're using AI, even with proper disclosure, you need to be aware of this risk and take steps to prevent it.
Read more here.