Nigel Francis of Buckinghamshire New University and Parakram Pyakurel of NMITE discuss why the current debate over AI use in assessment may be missing the point.
The arrival of Generative AI triggered a sense of panic across UK Higher Education, with the sector’s response split mainly into two camps. One scrambles to find ways to ‘AI-proof’ the traditional take-home essay, an increasingly futile exercise. The other, more concerningly, advocates a retreat to the perceived safety of the old fortress: the three-hour, closed-book exam. Students, too, are divided on the use of AI in their coursework.
A flawed response
But what if both responses are flawed? What if our frantic attempts to solve the ‘AI problem’ are blinding us to the fact that we are merely reinforcing an assessment system that was already failing? Perhaps GenAI isn’t the crisis we need to solve, but rather the catalyst that forces a conversation we should have started a long time ago about the real purpose of assessment.
For too long, the essay and the exam have been the default options in our assessment repertoire. Yet we know their limitations. The generic essay often devolves into a formulaic performance, rewarding students who can master a particular style of writing rather than those who can genuinely think critically. The timed exam, meanwhile, primarily assesses memory and performance under pressure, two attributes that are increasingly less relevant in a world of open-book challenges. Neither method is a rich environment for developing the suite of interconnected competencies that we proudly list in our programme handbooks as ‘graduate attributes’. Essays and timed exams also map poorly onto employment competencies: few jobs require employees to sit quietly for three hours, without speaking to anyone, working through a set of problems in the way students are trained to do. Surveys already point to a widening gap between the skills students acquire in higher education and the skills employers need.
A better goal
The true goal of a university education isn’t to create graduates who are good at writing essays or passing exams. It is to cultivate individuals capable of complex problem-solving, who can collaborate effectively, exercise personal and professional judgement in ambiguous situations, self-reflect and, increasingly, demonstrate digital and AI literacy. These attributes are not a checklist of isolated skills; they are an integrated set of competencies developed holistically. We would argue, further, that these attributes are also needed for a cohesive, functioning society. You cannot cultivate authentic adaptability in a rigid exam hall, nor can you measure collaborative skill through an individual essay.
This is where authentic, competency-based assessment comes in. Consider an example from the biosciences: a lab-based practical. Here, a student isn't just asked to recall information; they are required to do something. They must demonstrate technical proficiency with equipment, adapt their method if an experiment yields unexpected results, critically analyse the data they generate, and communicate their findings clearly. In this single, meaningful task, they are developing and demonstrating that entire suite of graduate attributes in an integrated way. This type of assessment is largely resistant to AI shortcuts and is also intrinsically more valuable for the student’s development. It assesses process and capability, not just a polished final output.
Consider another example from engineering: building a physical artefact or prototype in a team. Here, students must work collaboratively, reach consensus among competing design options, choose materials, select a fabrication method and build the prototype. Universities may use this as a one-off final-year project assessment, but mini-projects could be embedded throughout the curriculum.
The real challenge
Framed this way, AI ceases to be a threat and instead has the potential to become a powerful ally. It has rendered worthless the proxies for learning we have relied on for too long, forcing us to focus on what truly matters: assessing the skills that will empower our graduates in their professional lives. The panicked retreat to closed-book exams is a failure of imagination, an attempt to rewind the clock when we should be designing the clock for the future. It signals a prioritisation of institutional convenience over pedagogical purpose.
As we stand at this assessment crossroads, the question for each of us is not simply, ‘How do we stop students from using AI?’. The real, more challenging question we must ask ourselves is this:
Am I preparing my students for the future they will actually face, or just the one I would like them to have?