(18 Nov 2025) Traditional evaluation methods rely primarily on structured data and statistical indicators. While useful, they provide only a partial view of research outcomes and face persistent challenges, including:
- Long timelines: outcomes may emerge years or decades after initial funding.
- Fragmented data: the relevant data sit in silos, making it difficult to link funding decisions to real-world applications across different sectors.
- Invisible pathways: they miss the intricate knowledge flows through which research actually drives societal change, including indirect contributions from seemingly “unsuccessful” projects that nonetheless provide valuable insights for future breakthroughs.
The misalignment between research impact timelines and policy evaluation cycles creates a fundamental challenge: policymakers need evidence-based insights on cycles far shorter than those over which research impact typically unfolds, yet traditional metrics cannot capture the subtle, long-term pathways through which knowledge creates value.
To address these limitations, this study presents a framework combining artificial intelligence capabilities with human domain expertise. The methodology is guided by four key principles that enable more effective research impact assessment.
- 360-degree view of data, integrating diverse sources including publications, patents, clinical trials, company websites, and policy documents. This holistic perspective captures multiple stages of the research lifecycle, from initial funding through commercialisation and clinical application (a data-gathering sketch follows this list).
- A modular, end-to-end workflow that uses machine learning and natural language processing (NLP) to extract and categorise relevant entities, and applies semantic similarity analysis to identify connections between different research outputs. These techniques structure the information into knowledge graphs linking research topics to stakeholders, funding programmes, and translational applications (see the second sketch after this list).
- Expert-in-the-loop paradigm recognising that AI-generated outputs require human review and domain contextualisation to ensure validity and policy relevance. This approach balances scalable automation with interpretive accuracy.
- Openness and transparency, building on Open Science infrastructure like the OpenAIRE Graph to ensure reproducibility and enable others to adapt the methodology for different research domains.
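To make the first principle more concrete, here is a minimal Python sketch of one ingestion step: querying the OpenAIRE publication search API and yielding raw result pages. The endpoint, query parameters, and paging values are assumptions based on OpenAIRE's public HTTP API rather than a description of the project's actual pipeline; the other sources listed above (patents, trials, policy documents) would need their own collectors and a shared record schema.

```python
# Minimal sketch of the data-gathering side of the "360-degree view":
# pull publication metadata for one keyword from the OpenAIRE search API.
# Endpoint, parameters, and paging values are illustrative assumptions.
import requests

OPENAIRE_SEARCH = "https://api.openaire.eu/search/publications"  # assumed endpoint

def fetch_publication_pages(keyword: str, pages: int = 2, page_size: int = 50):
    """Yield raw JSON result pages for a single keyword query."""
    for page in range(1, pages + 1):
        response = requests.get(
            OPENAIRE_SEARCH,
            params={"keywords": keyword, "format": "json",
                    "page": page, "size": page_size},
            timeout=30,
        )
        response.raise_for_status()
        yield response.json()

# Other sources (patent offices, clinical trial registries, company websites,
# policy repositories) would be fetched by analogous collectors and mapped
# onto one common record schema before any linking takes place.
```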
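The second principle can be illustrated with a short sketch that chains three off-the-shelf components: spaCy for named-entity recognition, sentence-transformers for semantic similarity, and networkx for a small knowledge graph. The model names, similarity threshold, toy records, and graph schema below are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of the entity-extraction / semantic-similarity / knowledge-graph
# workflow. Library choices, model names, the 0.6 threshold, and the toy records
# are illustrative assumptions.
import itertools

import networkx as nx
import spacy
from sentence_transformers import SentenceTransformer, util

nlp = spacy.load("en_core_web_sm")                 # lightweight NER model
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # sentence-embedding model

# Toy research outputs; in practice these come from the integrated sources.
outputs = [
    {"id": "pub:1", "text": "A photonic biosensor for rapid sepsis diagnosis, funded by Horizon 2020."},
    {"id": "pat:1", "text": "Patent covering an optical sensing chip for point-of-care diagnostics."},
    {"id": "trial:1", "text": "Clinical trial of a point-of-care sepsis test at a university hospital."},
]

graph = nx.DiGraph()

# 1) Extract and categorise entities, linking each output to what it mentions.
for record in outputs:
    graph.add_node(record["id"], kind="research_output")
    for ent in nlp(record["text"]).ents:
        graph.add_node(ent.text, kind=ent.label_)   # e.g. ORG, GPE, PRODUCT
        graph.add_edge(record["id"], ent.text, relation="mentions")

# 2) Semantic similarity between outputs to surface likely translational links.
embeddings = encoder.encode([r["text"] for r in outputs], convert_to_tensor=True)
for (i, a), (j, b) in itertools.combinations(enumerate(outputs), 2):
    score = util.cos_sim(embeddings[i], embeddings[j]).item()
    if score > 0.6:                                 # assumed threshold
        graph.add_edge(a["id"], b["id"],
                       relation="semantically_related", score=round(score, 2))

print(graph.number_of_nodes(), "nodes /", graph.number_of_edges(), "edges")
```

The resulting graph is the kind of structure the workflow builds at scale: research outputs connected to the organisations, programmes, and applications they mention, with similarity edges suggesting candidate links for expert review.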