British universities have been warned to “stress-test” all assessments after new research revealed “almost all” undergraduates are using generative artificial intelligence (genAI) in their studies.
A survey of 1,000 students – both domestic and international – found there had been an “explosive increase” in the use of genAI in the past 12 months. Almost nine out of 10 (88%) in the 2025 poll said they used tools such as ChatGPT for their assessments, up from 53% last year.
The proportion using any AI tool surged from 66% in 2024 to 92% in 2025, meaning just 8% of students are not using AI, according to a report published by the Higher Education Policy Institute and Kortext, a digital textbook provider.
Josh Freeman, the report’s author, said such dramatic changes in behaviour in just 12 months were almost unheard of, and warned: “Universities should take heed: generative AI is here to stay.”
“There are urgent lessons here for institutions,” Freeman said. “Every assessment must be reviewed in case it can be completed easily using AI. That will require bold retraining initiatives for staff in the power and potential of generative AI.
“Institutions will not solve any of these problems alone and should seek to share best practice with each other. Ultimately, AI tools should be harnessed to advance learning rather than inhibit it.”
Students say they use genAI to explain concepts, summarise articles and suggest research ideas, but almost one in five (18%) admit to including AI-generated text directly in their work.
“When asked why they use AI, students most often find it saves them time (51%) and improves the quality of their work (50%),” the report said. “The main factors putting them off using AI are the risk of being accused of academic misconduct and the fear of getting false or biased results.”
One student told researchers: “I enjoy working with AI as it makes life easier when doing assignments; however, I do get scared I’ll get caught.”
Women are more worried about these factors than men; men show greater enthusiasm for AI, as do wealthier students and those on science, technology, engineering and maths (Stem) courses.
According to the report, half of students from the most privileged backgrounds used genAI to summarise articles, compared with 44% from the least privileged backgrounds. “The digital divide we identified in 2024 appears to have widened,” the report concluded.
Students generally believe their universities have responded effectively to concerns over academic integrity, with 80% saying their institution’s policy is “clear” and 76% believing their institution would spot the use of AI in assessments. Yet only around a third (36%) of students have received training in AI skills from their university.
“They dance around the subject,” said one student. “It’s not banned but not advised, it’s academic misconduct if you use it, but lecturers tell us they use it. Very mixed messages.”
Dr Thomas Lancaster, a computer scientist at Imperial College London who researches academic integrity, said: “Students who aren’t using generative AI tools are now a tiny minority.
“I know some students are resistant to AI, and I can understand the ethical concerns, but they’re really putting themselves at quite a competitive disadvantage, both in education, and in showing themselves as ready for future careers.”
A spokesperson for Universities UK said: “To effectively educate the workforce of tomorrow, universities must increasingly equip students to work in a world that will be shaped by AI, and it’s clear progress is being made.
“But they need to balance this with the challenges posed by a rapidly developing technology. This survey shows that universities and students are alive to the potential risks posed by AI tools in the context of exams and assessment.
“All have codes of conduct that include severe penalties for students found to be submitting work that is not their own, and they engage students from day one on the implications of cheating.”