(8 Jan 2026) A new report by HEPI and Taylor & Francis explores the potential of AI to advance translational research and accelerate the journey from scientific discovery to real-world application.
Using Artificial Intelligence (AI) to Advance Translational Research (HEPI Policy Note 67), authored by Rose Stephenson, Director of Policy and Strategy at HEPI, and Lan Murdock, Senior Corporate Communications Manager at Taylor & Francis, draws on discussions at a roundtable of higher education leaders, researchers, AI innovators and funders, as well as a range of research case studies, to evaluate the future role of AI in translational research.
Key findings
The report finds that AI has the potential to strengthen the UK’s translational research system, but that realising these benefits will require careful implementation, appropriate governance and sustained investment.
Key findings include:
- AI could accelerate translational research by enabling faster analysis of large and complex datasets, supporting knowledge synthesis and improving links between disciplines. However, the availability and quality of such datasets remain uneven, limiting the ability of AI tools to support research translation in some fields.
- Access to AI skills and expertise is increasingly important, and building this access into interdisciplinary frameworks will be a key component of driving translational research.
- AI can improve the accessibility and visibility of research, including through plain-language summaries, semantic search (search that matches concepts and ideas rather than simply keywords, giving more accurate results) and new formats aimed at audiences beyond academia.
- There are clear risks associated with AI use, including challenges around reproducibility, bias, deskilling, academic integrity, intellectual property and accountability.
Recommendations
To ensure AI supports high-quality and responsible translational research, the report makes recommendations for research funders, institutions and publishers, including:
- Setting clear expectations for the responsible use of AI, including alignment with guidance such as the UK Research Integrity Office’s Embracing AI with Integrity.
- Investing in trustworthy and ethical AI, including work to improve transparency, reduce bias and support reproducibility.
- Strengthening support for interdisciplinary research, including better recognition of team-based work and clearer routes to access AI expertise.
- Supporting shared and open AI research infrastructure to reduce duplication and make researcher-developed tools more widely available.
- Encouraging data sharing and reuse, alongside investment in infrastructure that supports secure and responsible access to data.