# A review of the University of Roehampton's AI Guidance (2025)

*07 Sep 2025* #AI #Assessment #LLM #Final

## Introduction

In preparation for the start of the 2025/26 academic year, the University of Roehampton has issued [guidance](https://library.roehampton.ac.uk/ld.php?content_id=35968467) on "using AI the right way" (University of Roehampton, 2025, p. 1). This document stipulates that assessments will fall into one of three categories: those where AI use is "impossible or irrelevant" (*ibid.*, p. 2); those where its use is possible but must be acknowledged; and those where it is necessary. The guidelines also articulate several general principles for responsible engagement with AI, including a welcome and practical emphasis on security measures such as Multi-Factor Authentication (*ibid.*, p. 4). However, this analysis will argue that, without significant structural changes to move assessments into the first category, these guidelines function as mere *discursive changes*, ultimately proving insufficient to address the fundamental challenges AI poses to assessment validity.

## Discursive vs. structural changes

Corbin and colleagues (2025b) offer a crucial conceptual distinction for evaluating institutional responses to AI. *Discursive changes* are modifications that "rely solely on the communication of instructions, rules, or guidelines to students" (*ibid.*, p. 5), leaving the mechanics of the assessment task unchanged and making their success entirely dependent on student compliance. In contrast, *structural changes* "directly alter the nature, format, or mechanics of how a task must be completed" (*ibid.*, p. 6), thereby building validity into the assessment architecture itself, independent of student volition.

Of course, guidelines are by their nature discourse, and reading them alone does not tell us what structural assessment redesigns may or may not be taking place in actual courses. More problematic is the central edict of the middle assessment category (under which all existing assessments will fall by default): "AI should not do the assessment for you" (University of Roehampton, 2025, p. 2), an instruction that compels students to define its meaning for themselves.

Having "AI" as the subject of the sentence shifts agency away from the student, where it should lie, in a manner reminiscent of the *nominalisation* noted by Hayes (2015) in the discourses of educational technology in policy documents, where abstractions ("technology", "the strategy") are cast as agents of change. The statement requires students to engage in *boundary work*: the task of defining an ethical boundary for oneself, based on the intersection of available information, social context, structural incentives, and personal values (Gieryn, 1983, 2022).

## "\[A\]n absurd line" (Corbin *et al.*, 2025a, p. 1)

The ambiguity of such an edict is the focus of Corbin and colleagues' (2025a) investigation into how students and educators navigate the boundaries of acceptable AI use. Their research reveals that, in the absence of clear, enforceable, and structurally embedded boundaries, students are forced to construct their own individual ethical frameworks. This boundary work is a source of significant cognitive and emotional burden, creating anxiety and uncertainty. Vague institutional guidelines, rather than alleviating this pressure, exacerbate it by establishing what one participant termed an "absurd line" that is impossible to consistently interpret or enforce.
Putting the onus on the student to "declare" or "cite" their usage relies on a degree of compliance that cannot be expected given the pressures and incentives under which they operate. Gonsalves's (2025) research into non-compliance with AI use declarations found that a majority of students fail to declare, driven by factors including fear of academic repercussions, ambiguous guidelines, and peer influence.

Perhaps a more pedagogically focused formulation of "AI should not do the assessment for you" (University of Roehampton, 2025, p. 2) could be "You should not use AI to complete the assignment without meeting the learning outcomes". The "work" of assessment should be in service of learning, not an end in itself; if students are expected to monitor their own use, they could benefit from a framing around learning outcomes.

## Constructive alignment

The integrity of assessment is underpinned by the principle of *constructive alignment* (Biggs, 1996). This framework posits that for deep learning to occur, a course's intended learning outcomes, its teaching and learning activities, and its assessment tasks must be explicitly and coherently aligned. The assessment should be a valid measure of a student's achievement of the outcomes, which the teaching activities have facilitated.

AI presents a fundamental challenge to this model. It can defeat constructive alignment by enabling an assessment to be submitted, and potentially receive a passing grade, without the student having engaged in the cognitive activities required to achieve the learning outcomes, or certainly not to the degree presumed when those outcomes were written. The tool can effectively sever the link between the assessed product and the student's learning process.

## Further points of critique

Above, I have drawn from the literature to inform my analysis, but some of the guidelines are so self-evidently defective that a critique does not require this backing.

First, _"Avoid sharing your academic work or assessments in AI tools that do not guarantee data security"_ (University of Roehampton, 2025, p. 3). It is unclear what "data security" refers to: use of data willingly input into an LLM system is beyond the scope of the Data Protection Act (2018), hence the preceding guidance not to input personally identifiable information into such tools. Without a list of safe tools, or at least criteria for assessing their data safety, this seems to expect students to adjudicate the terms of service of global technology companies themselves.

Particularly frustrating is _"Use AI tools that are approved by your university"_, without further detail as to what those tools are. A student may rightfully expect *this very document* to list those tools, or to point to an official resource that does. These unclear clauses hint at boundaries to "the right way" to use AI, yet do not draw them, an uncertainty likely to cause anxiety.

Most objectionable to me, however, is the inclusion, in a paragraph reminding students to "Consider the Environmental Impact" (University of Roehampton, 2025, p. 3), of a mitigating sentence:

> As a student, you can make a difference by using AI thoughtfully and only when it adds real value to your work. *At the same time, AI can also support environmental solutions—helping researchers tackle climate challenges and improve energy efficiency.* Being mindful of your digital choices is part of being a responsible and sustainable learner.
(University of Roehampton, 2025, p. 3, emphasis added)

AI's potential to help in climate research is a point often made immediately after mentioning the considerable environmental harms of generative AI, in a prototypical example of *[False Balance](https://en.wikipedia.org/wiki/False_balance)* ("bothsidesism"). Climate and energy models are a completely different application of deep learning ("AI") from Large Language Models, with significantly smaller footprints in training and deployment. More importantly, the point has no relevance to *student use of AI* (for assessment). The triteness of this section and the irrelevance of this sentence made me suspect it could be unedited synthetic text; [quillbot.com](https://quillbot.com/ai-content-detector) deemed this paragraph, and the whole end of the document, AI-generated with moderate confidence.[^1]

![[Quillbot screenshot.png]]

As to "citing" AI, it is unclear whether this means an acknowledgement of use or a systematic tracking of the source of the prose, as illustrated by the formatting of this essay's conclusion, below. Note that square brackets indicate additions, not rephrasing.

## Final thoughts

"*The issuance of these guidelines is a necessary and understandable step in the higher education sector's ongoing effort to adapt to a new technological reality. It is clear that institutions, staff, and students are all navigating challenging and unfamiliar territory, and this document contains valuable principles for ethical and secure practice. It is a part of the solution. The hope, however, is that this is not the final word, but rather a preliminary measure that precedes* \[a resolution of the ambiguities this document creates, and\] *a deeper engagement with the structural redesign of assessment. Such* \[effort\] *is essential*\[, at Roehampton or elsewhere, to ensure assessments meaningfully verify learning outcomes, and by extension that\] *institutional degrees remain a valid testament to student knowledge and capability* \[(even)\] *in the age of* \[generative\] *AI*." (Google Gemini 2.5 Pro).

Without it, these guidelines will remain not only a discursive change but, more grievously, a *performative* one.

## Disclosure statement

I used Google Gemini to draft this essay, by inputting the full text of the references below (I assume Google is deemed to "guarantee data security", its [track record](https://www.cbsnews.com/news/google-ordered-pay-425-million-privacy-tracking-case) of [repeated](https://www.theguardian.com/technology/2023/sep/14/google-location-tracking-data-settlement) fines and [settlements](https://www.bbc.co.uk/news/business-67838384) for abusive data [capture](https://time.com/6233752/google-location-tracking-settlement-privacy) notwithstanding), as well as a sample (3k+ words) of my own academic writing for tone and diction. I also included in my prompt the desired structure, with specific instructions as to points to mention, including some sentences verbatim. I then verified the accuracy of the system's summary of the sources against my literature notes, and revised the text, although not as heavily as I would have, had I not provided so much context, or perhaps as I should have, looking at the balance of words in the conclusion. As a result, the final text is technically *mostly* synthetic output, in spite of maintaining a consistent tone. The full prompting exchange is available [here](https://gemini.google.com/app/dcfa3e2220515438). I hope a qualified reader can let me know whether AI has done the work for me, here.
## References

Biggs, J. (1996) ‘Enhancing Teaching through Constructive Alignment’, _Higher Education_, 32(3), pp. 347–364.

Corbin, T. _et al._ (2025a) ‘“Where’s the line? It’s an absurd line”: towards a framework for acceptable uses of AI in assessment’, _Assessment & Evaluation in Higher Education_, pp. 1–13. Available at: [https://doi.org/10.1080/02602938.2025.2456207](https://doi.org/10.1080/02602938.2025.2456207).

Corbin, T., Dawson, P. and Liu, D. (2025b) ‘Talk is cheap: why structural assessment changes are needed for a time of AI’, _Assessment & Evaluation in Higher Education_, pp. 1–11. Available at: [https://doi.org/10.1080/02602938.2025.2503964](https://doi.org/10.1080/02602938.2025.2503964).

Gieryn, T.F. (1983) ‘Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists’, _American Sociological Review_, 48(6), pp. 781–795. Available at: [https://doi.org/10.2307/2095325](https://doi.org/10.2307/2095325).

Gieryn, T.F. (2022) _Cultural Boundaries of Science: Credibility on the Line_. Chicago: University of Chicago Press. Available at: [https://www.degruyter.com/isbn/9780226824420](https://www.degruyter.com/isbn/9780226824420) (Accessed: 1 September 2025).

Hayes, S.L. (2015) _The political discourse and material practice of technology enhanced learning_. PhD thesis. Aston University. Available at: [https://publications.aston.ac.uk/id/eprint/26694/](https://publications.aston.ac.uk/id/eprint/26694/) (Accessed: 16 January 2025).

University of Roehampton (2025) _Student Guidelines on the use of AI_. Available at: [https://library.roehampton.ac.uk/ld.php?content_id=35968467](https://library.roehampton.ac.uk/ld.php?content_id=35968467).

[^1]: "LLMs analysing prose generated by LLMs" is the new "blind leading the blind".