UK Universities Quietly Adopt Generative AI to Assess Research Quality, Report Finds
National study reveals growing but uneven use of AI in REF preparation and calls for urgent governance and sector-wide standards
A new national report led by the University of Bristol has revealed that generative artificial intelligence tools are already being used across parts of the UK higher education sector to help evaluate research quality for the next Research Excellence Framework (REF).
The findings show that AI is being deployed quietly but increasingly, even as many academics express deep scepticism about its suitability for such an influential assessment process.
The report, based on evidence from 16 universities and nearly 400 staff, shows that institutions are experimenting with GenAI in markedly different ways.
Some are using AI to gather evidence of research impact, refine case studies and streamline administrative preparation, while others have developed internal tools to help review and score research outputs.
These practices remain largely undeclared, with the report noting that expectations of AI use by REF panellists are already high despite the absence of formal national guidance.
Lead author Professor Richard Watermeyer said AI could significantly reduce the cost and burden of REF preparation, which reached an estimated £471 million during the previous cycle.
He warned, however, that without oversight the technology risks widening inequalities between well-resourced institutions capable of building powerful AI systems and those reliant on basic public tools.
Survey responses reflect this divide: between 54 and 75 per cent of academics strongly opposed using AI for the REF, while professional services staff and staff at newer universities reported greater openness to adoption.
Senior leaders interviewed for the study expressed mixed views.
Some described AI integration as inevitable and essential for future competitiveness, while others cautioned that the sector is in the midst of an "AI bubble" and lacks clarity on the limitations of current tools.
Many highlighted persistent mistrust and uneven familiarity with AI among staff, reinforcing the need for training, clear disclosure rules and strong human oversight.
The report recommends that every university publish a policy on AI use for research and REF activities, implement appropriate security and risk-management measures, and provide comprehensive training for staff.
It also calls for national oversight, including a sector-wide governance framework and the creation of a shared, high-quality AI platform accessible to all institutions.
Without such measures, the authors warn that fragmented adoption could undermine fairness, transparency and confidence in one of the UK's most significant research-funding mechanisms.
With comparable national assessment frameworks recently discontinued in Australia and New Zealand, the report argues that the UK has an opportunity to lead global reform of research evaluation.
It concludes that while GenAI is not a complete solution, it cannot be excluded from the future of national research evaluation in an increasingly data-driven era.