AI in assessment and feedback
In the previous step, you looked at how students’ critical AI literacy can be developed. In this step, we’ll look at AI in assessment and feedback, which demands a literacy of its own for both teaching staff and students.
A brief history
Before focussing on how artificial intelligence is reshaping assessment and feedback in education, it’s worth understanding the historical context of automated assessment. For decades, educators have used various forms of automation to help manage the assessment workload. The most familiar example is likely the optical mark recognition (OMR) technology used to automatically grade multiple-choice tests. This technology, developed in the 1960s, uses specialised scanning equipment to detect marks made on standardised forms, dramatically reducing the time needed to grade objective assessments.
Similarly, many learning management systems have long offered capabilities for automated feedback on quizzes and tests. However, these systems typically rely on pre-written feedback that’s triggered by specific answers: essentially a sophisticated lookup table rather than artificial intelligence. The quizzes in this course use the same technique. While valuable, these tools are limited to situations where answers can be definitively categorised as right or wrong, and where feedback can be predetermined.
The real challenge has always been assessing qualitative work: essays, reports, projects, and other forms of open-ended assessment that require nuanced evaluation and detailed feedback. This is where modern AI tools offer intriguing new possibilities, though this also raises important questions about appropriate implementation.
AI assistance or AI judgement?
Attempts to use AI to help assess essays date back to the 1960s; in recent years, considerable effort has gone into applying machine and deep learning to this goal [1]. The focus here, though, will be on tools that have now become freely and commonly available to teachers who do not necessarily have the technical background or resources to develop or apply sophisticated, dedicated AI tools. As we navigate the integration of AI into assessment and feedback processes, it’s crucial to distinguish between AI assistance and AI judgement. This distinction lies at the heart of responsible AI use in education.
AI assistance
I am certainly an advocate of AI assistance, because it offers the potential to improve the consistency and quality of feedback without removing the human assessor from the loop. One approach I have used involves asking AI to generate initial feedback based on predefined rubrics or criteria. For instance, an educator might input a marking rubric into a system like ChatGPT, instructing it to produce boilerplate feedback for various grade bands. This approach harnesses AI’s efficiency while maintaining human oversight and personalisation. See this example: Using a marking rubric and ChatGPT to generate extended boilerplate (and tailored) feedback. Here, no student work is being analysed per se. Rather, the assessor’s judgements are fed via an AI chatbot pre-loaded with a rubric, so that generalised comments can be written to form the backbone of further, more nuanced feedback. This ensures greater consistency and removes some of the effort of writing similar feedback across multiple assignments.
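To make the rubric-to-boilerplate workflow above concrete, here is a minimal sketch of how a tutor might assemble such a prompt before pasting it into a chatbot such as ChatGPT. The rubric text, criterion names, and grade bands are illustrative placeholders, not part of the original course materials.

```python
# Sketch: assembling a prompt that asks a chatbot to draft reusable
# boilerplate feedback for each grade band of a marking rubric.
# All rubric content and band labels below are hypothetical examples.

RUBRIC = """\
Criterion 1 - Argument: clarity and coherence of the central argument.
Criterion 2 - Evidence: relevance and integration of sources.
Criterion 3 - Style: academic register, structure, and referencing.
"""

GRADE_BANDS = ["Distinction", "Merit", "Pass", "Fail"]

def build_feedback_prompt(rubric: str, bands: list[str]) -> str:
    """Return one prompt requesting general, band-level feedback comments."""
    band_list = ", ".join(bands)
    return (
        "You are helping a tutor prepare feedback. Using the rubric below, "
        f"write two sentences of boilerplate feedback for each of these "
        f"grade bands: {band_list}. Keep the comments general, so the tutor "
        "can personalise them for individual students.\n\n"
        f"Rubric:\n{rubric}"
    )

prompt = build_feedback_prompt(RUBRIC, GRADE_BANDS)
print(prompt)
```

Note that, in keeping with the "AI assistance" approach, no student work appears anywhere in the prompt: only the rubric and grade bands are shared, and the tutor then personalises the generated comments by hand.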
AI judgement
Conversely, AI judgement, which I would tend to advise against unless using a supported, institutional tool with permission, involves uploading student work directly to AI systems for evaluation. This raises significant privacy concerns and reliability issues. The nuances of student work, especially in higher education, often require human understanding and contextual knowledge that AI currently lacks.
As we will see elsewhere in this course, there is considerable drive to increase the amount of observed work used as assessment, along with an increasing emphasis on process and iteration. The very fair response to this is often: ‘Great! But where do I get the time to manage this? It isn’t scalable!’. I certainly understand this, and believe that such approaches can be made more scalable if we are open to modifying the ways in which we manage marking and feedback. We also need to be open to helping students optimise their use of these tools to generate feedback, and to ensure that, if they do so, they do it with criticality and NOT with the assumption that the ‘computer knows best’! One way many teachers are finding AI can support them is to audio-record feedback, then use machine transcription and AI summaries of those transcripts to provide depth while avoiding hours of typing.
Implementation considerations
When implementing AI in assessment and feedback, several factors warrant careful consideration:
- Pedagogical alignment: we are, of course, keen to find ways to streamline processes, but this should never be at the expense of our teaching goals and duties to our students.
- Human oversight: while AI can assist, human judgement remains crucial in assessment. The dystopian vision of a future where AI is marking AI-produced assessments is clearly not something we want to encourage!
- Privacy and data protection: we must address the ethical implications of AI use, especially regarding student data. One thing I say to colleagues is: NEVER upload student work to a third-party tool without explicit permission and, even then, think very carefully about what your goals are.
- Continuous development and evaluation: regular experimentation, careful piloting, and assessment of AI tools’ effectiveness and impact is essential if trust is to be established.
Issues of trust
One important factor is perceptions of trust. Tishenina argues that using AI to generate student feedback undermines the fundamental trust and authenticity in educational relationships, potentially causing more harm than student plagiarism, as it violates the mutual expectation of genuine engagement between educators and learners [2]. Given that many students will have been told they should not use AI, you might imagine the response if they believe their teachers have handed the responsibility of marking to a machine! On the other hand, recent research [3] shows that while students say they prefer human over AI feedback when the source is known, they in fact tend to prefer the AI feedback when it is not. This suggests we need to be open about use but also prudent in how we present it, ensuring students (and teachers) understand the processes, checks, and extent of the AI augmentation (if we decide to use it at all).
Looking ahead
The future of AI in assessment and feedback holds considerable potential. We might see AI playing a role in creating more authentic and accessible assessments, or in promoting student autonomy and self-regulated learning. However, as we explore these possibilities, we must remain vigilant about the ethical implications and continue to prioritise the human elements that are central to education. It may be that, as we develop fluency with tools increasingly integrated into everyday tasks, we adapt to AI use in a seamless and less visible way; or we might find ourselves using AI tools specifically designed for these tasks. For example, tools like FeedbackFruits are being piloted across universities, colleges, and schools in the UK [4]. We also need to be alert to the ways students are being presented with ‘real-time writing feedback’ by AI tools such as those integrated into Grammarly.
Now that you have completed this step, you should have a good understanding of emerging possibilities in the use of artificial intelligence to mark students’ work and give feedback. In the next step, you will see a more detailed process describing how some higher education teachers are experimenting with AI for essay marking.
References
1. Borade JG, Netak LD. Automated grading of essays: A review. In: Singh M, Kang DK, Lee JH, Tiwary US, Singh D, Chung WY, editors. Intelligent human computer interaction: 12th International Conference, IHCI 2020, Proceedings, Part I [Internet]; 2020 Nov 24-26; Daegu, South Korea. Switzerland: Springer Cham; 2021. p. 238-49. Available from: https://doi.org/10.1007/978-3-030-68449-5_25
2. Tishenina M. The broken pillar: AI for feedback generation and the erosion of students’ trust. 2024 Feb 12 [cited 2025 Feb 26]. In: British Educational Research Association (BERA). Artificial intelligence in educational research and practice [Internet]. United Kingdom: BERA. 2023 -. Available from: https://www.bera.ac.uk/blog/the-broken-pillar-ai-for-feedback-generation-and-the-erosion-of-students-trust
3. Nazaretsky T, Mejia-Domenzain P, Swamy V, Frej J, Käser T. AI or human? Evaluating student feedback perceptions in higher education. In: Ferreira Mello R, Rummel N, Jivet I, Pishtari G, Ruipérez Valiente JA, editors. Technology Enhanced Learning for Inclusive and Equitable Quality Education: 19th European Conference on Technology Enhanced Learning, EC-TEL 2024, Proceedings, Part I [Internet]; 2024 Sep 16-20; Krems, Austria. Springer Cham; 2024. p. 284-98. Available from: https://doi.org/10.31219/osf.io/6zm83
4. Moule T. FeedbackFruits pilot: Initial stages. 2024 Mar 27 [cited 2025 Feb 26]. In: Jisc. Artificial Intelligence: AI in universities and colleges [Internet]. United Kingdom: Jisc. 2021 Sep -. Available from: https://nationalcentreforai.jiscinvolve.org/wp/2024/03/27/feedbackfruits-pilot-initial-stages/
Try it out
Have a go at one or both of the following tasks:
- If you have a rubric to hand, try the processes described in this video by Martin.
- Choose an LLM, such as Claude AI, and present it with some writing; something you’ve written, or you can use this step of the course. Then, enter the following prompt:
“Offer me bulleted feedback on this article I have written. The first four bullets should focus on the structure and content. Comment on clarity of argument and completeness. The second four bullets should comment on language use, style, and readability.”
Join the conversation
One of the struggles faced by teachers is how much time and effort marking/grading student assessments takes. Yet, many teachers are reluctant to address the time and effort issue by exploring AI options.
Why do you think this is? What is your view on the use of AI for feedback?