(22 Oct 2025) New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested.
The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools.
Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.
Key findings:
- 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst, with significant issues in 76% of responses, more than double the rate of the other assistants, largely due to its poor sourcing performance.
- A comparison between the BBC’s results from earlier this year and this study shows some improvement, but error levels remain high.
Why this distortion matters
AI assistants are already replacing search engines for many users. According to the Reuters Institute’s Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.
‘This research conclusively shows that these failings are not isolated incidents,’ says EBU Media Director and Deputy Director General Jean Philip De Tender. ‘They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.’
Peter Archer, BBC Programme Director, Generative AI, says: ‘We’re excited about AI and how it can help us bring even more value to audiences. But people must be able to trust what they read, watch and see. Despite some improvements, it’s clear that there are still significant issues with these assistants. We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.’
Next steps
The research team have also released a News Integrity in AI Assistants Toolkit to help develop solutions to the issues uncovered in the report, including improving AI assistant responses and strengthening users’ media literacy. Building on the extensive insights and examples identified in the current research, the Toolkit addresses two main questions: ‘What makes a good AI assistant response to a news question?’ and ‘What are the problems that need to be fixed?’
The full findings are available in the report Research Findings: Audience Use and Perceptions of AI Assistants for News.