
Can GenAI tools help with marking and feedback?

In this article, Dr Abdulla Sodiq and Dr Jayne Pearson demonstrate how GenAI can help to provide students with faster, more personalised feedback.
In the previous step, you saw how the notion of critical AI literacy not only aligns with core academic principles and values, such as critical and independent thinking, but also supports the development of academic skills. In this step, you will identify some ways that GenAI tools can be used to help students gain and use feedback.
Image: a classroom with students sitting at desks while an educator writes feedback for a student. Image produced using Midjourney, an AI imaging tool.
As academic circles begin to consider the benefits of AI tools such as ChatGPT in higher education, one area of innovative practice is the marking of students’ work and the provision of feedback.
One emerging practice is to enter the rubric for an assessment task (see the image below as an example) into ChatGPT (or a similar tool). The tool is then instructed to produce ‘personalised’ constructive feedback for a student achieving, let’s say, an overall mark of 54% (see Example 1.1 below).
Image: example marking rubric. To access an alternative format of the rubric table above, please find a PDF file at the bottom of this step, under ‘Downloads’.

Example 1.1 Educator’s request (prompt) in ChatGPT for feedback on students’ work.

This is a rubric for a take-home assessment – please generate some boilerplate feedback for a student achieving 54%. [1]
ChatGPT then produces, with impressive speed (within a matter of seconds), feedback or feedforward (areas students could do better in) in bullet-point form for each of the learning outcomes (see Example 1.2 below). A point to note is that the output is ‘personalised’ only to the extent that, for example, the feedback template starts with ‘Dear’ and leaves space for the name to be added, and uses the pronouns ‘you’ and ‘I’.
The tool can also be instructed to make the feedback positive in tone, in which case ChatGPT generates text using phrases such as ‘while your work is strong in…’ and ‘keep up the good work’. AI tools can also be used to provide samples of work at a given level for students to deconstruct, or to exemplify a specific piece of work at a given level. This can form part of the rubric or be used separately in class to develop students’ assessment literacy.

Example 1.2 ChatGPT generated feedback for a student scoring 54% overall.

Dear [Student Name],

Thank you for submitting your take-home assessment. After reviewing your work, I have assigned a grade of 54%, which falls within the “Fair” category. Below, I have provided feedback on the various aspects of your work, as outlined in the assessment rubric.

1. Response to question/task set (Fair): You have covered some of the main points in the indicative answer, but there were a few significant omissions or areas that focused on less relevant material. It would be helpful to concentrate more directly on the question to ensure a comprehensive response.

2. Understanding of taught material/concept (Fair): While you demonstrated a reasonable reproduction of taught materials and a rudimentary understanding of the concepts, there is room for improvement in this area. It is important to fully grasp the material and concepts covered by the course. [1]

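For educators who prefer to script this workflow rather than work in the ChatGPT web interface, a minimal sketch is given below, assuming the OpenAI Python library (openai>=1.0). The model name, system instruction, and function name are illustrative assumptions, not part of the practice described above.

```python
# A minimal sketch, assuming the OpenAI Python library; the rubric text
# is a placeholder for your own rubric and the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """(paste the assessment rubric here)"""

def boilerplate_feedback(overall_mark: int) -> str:
    """Ask the model for rubric-aligned boilerplate feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any chat-capable model
        messages=[
            {"role": "system",
             "content": "You are a supportive university marker. Ground "
                        "all feedback in the rubric provided."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\nThis is a rubric for a "
                        f"take-home assessment - please generate some "
                        f"boilerplate feedback for a student achieving "
                        f"{overall_mark}%."},
        ],
    )
    return response.choices[0].message.content

print(boilerplate_feedback(54))
```
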
ChatGPT goes further. You can ask it to customise the auto-generated feedback for a student with a good overall score (say, 60%) but a relatively lower performance in one of the learning outcomes (see Examples 2.1 and 2.2 below). In other words, the overall feedback can be tweaked within seconds with specific feedback or feedforward points on areas where students have scored lower (or higher).

Example 2.1 Educator’s request (prompt) for student feedback to be customised for specific outcomes.

Please now generate feedback for a student whose work was 60% across the whole piece but was only adequate in terms of evidence – please focus on suggesting development of supporting evidence use in the feedback. Make the feedback supportive throughout. [1]

Example 2.2 ChatGPT-generated feedback, customised for learning outcome 4.

Dear [Student Name],

Congratulations on achieving a grade of 60% in your take-home assessment. This places you in the “Good” category, which reflects your strong performance in several areas. Below, I have provided feedback on the various aspects of your work, along with suggestions for improvement, focusing on the use of supporting evidence.

1. Response to question/task set (Good): Your response directly addressed the question, with only minor omissions or areas that focused on less relevant material. Keep up the good work, and make sure to address all parts of the question comprehensively in future assignments.

2. Understanding of taught material/concept (Good): You demonstrated a solid understanding of the material and concepts covered by the course. Continue to engage with the course content and strive for an even deeper comprehension of the subject matter.

3. Engagement with the literature (Good): Your knowledge and understanding of the recommended reading were sound. To further strengthen your engagement with the literature, consider seeking out additional resources and recent papers relevant to the topic.

4. Evidence to support points or arguments (Adequate): While your overall work was strong, the use of supporting evidence could benefit from improvement. In future assignments, ensure that you thoroughly discuss evidence, justify your points with relevant sources, and synthesise information from diverse sources. This will help to enhance the credibility and persuasiveness of your arguments. [1]

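Continuing the sketch above, the same call can be parameterised so that the prompt targets one weaker learning outcome, mirroring the educator’s request in Example 2.1. The function, model name, and wording are again illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()
RUBRIC = """(paste the assessment rubric here)"""

def customised_feedback(overall_mark: int, weak_outcome: str) -> str:
    """Feedback weighted towards one weaker learning outcome,
    echoing the educator's prompt in Example 2.1."""
    prompt = (
        f"Rubric:\n{RUBRIC}\n\n"
        f"Please now generate feedback for a student whose work was "
        f"{overall_mark}% across the whole piece but was only adequate "
        f"in terms of {weak_outcome} - please focus on suggesting "
        f"development of {weak_outcome} in the feedback. Make the "
        f"feedback supportive throughout."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(customised_feedback(60, "supporting evidence"))
```
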
On reflection, it is not uncommon for educators to copy and paste templates of feedback expressions into students’ overall feedback. Schools have used pre-prepared statements in students’ yearly reports for some time, and in my own university-based teaching I have developed statements for use in marking various assignments. Where the ChatGPT facility differs, and is probably a significant time-saver, is that it produces a whole report for a given student, aligned to existing rubrics, within a very short period of time. For comparison, a sketch of such a statement bank follows below.

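A pre-prepared statement bank of the kind described might look like the following; the bands, outcomes, and wording here are invented for illustration.

```python
# A hypothetical statement bank: pre-prepared feedback phrases keyed by
# rubric band. All bands, outcomes, and wording are illustrative only.
STATEMENTS = {
    "Good": "Your response directly addressed the task, with only minor omissions.",
    "Fair": "You covered some of the main points, but there were significant omissions.",
    "Adequate": "Your work met the basic requirements but needs further development.",
}

def assemble_feedback(bands_per_outcome: dict[str, str]) -> str:
    """Stitch banked statements into an overall feedback report."""
    lines = ["Dear [Student Name],", ""]
    for i, (outcome, band) in enumerate(bands_per_outcome.items(), start=1):
        lines.append(f"{i}. {outcome} ({band}): {STATEMENTS[band]}")
    return "\n".join(lines)

print(assemble_feedback({
    "Response to question/task set": "Good",
    "Evidence to support points or arguments": "Adequate",
}))
```

Unlike such a bank, ChatGPT composes the connecting prose itself and adapts it to the rubric, which is where the time saving lies.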
Challenges of AI-generated feedback

There are some missing elements in the ChatGPT-generated feedback. For example, where it says, ‘make sure you address all parts of the question’, the subject-specialist educator can step in and add an example from the student’s work to help them grasp the point being made; alternatively, the marker could cross-refer to a particular page or point in the student’s work. Where the auto-generated feedback is positive, for example ‘You demonstrated a solid understanding of course concepts’, the marker could strengthen its personalised quality by adding examples of those concepts from the student’s work.

Another consideration is that, for the above ChatGPT approach to work sensibly, the original rubrics must make academic sense. For instance, within the rubrics, the knowledge/skills descriptors for the bands across a 0–100% grade range should demonstrate an increasing level of quality; inconsistent descriptors or band statements would not allow ChatGPT to produce feedback that makes sense to students. It is also possible to train students to use generative AI to ‘translate’ the trickier points within a rubric into language they find easier to understand.

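As an illustration of that ‘translation’ idea, a small helper along these lines could be offered to students; the function and prompt wording are assumptions, not a tested recipe.

```python
from openai import OpenAI

client = OpenAI()

def plain_language_rubric(rubric_text: str) -> str:
    """Ask the model to restate rubric descriptors in plainer,
    student-friendly language while keeping the band structure."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Rewrite the following marking rubric in plain, "
                       "student-friendly language. Keep every band and "
                       "learning outcome, but simplify the wording:\n\n"
                       + rubric_text,
        }],
    )
    return response.choices[0].message.content
```
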
Could we push the affordances of AI further, for instance into essay marking? Attempts have been made since the 1960s; have a look at the experimentation that began with Dr Ellis Page’s Project Essay Grade. However, a critical aspect that is hard to address is the uncertainty over the lengths a student might go to in order to manipulate an essay-marking AI tool. To me, that is a good enough reason to leave the state of play (in terms of essay marking) where it is, as Daisy Christodoulou suggests.

An important point to bear in mind is that, however we use AI tools, we should not be inserting students’ work into them, for data privacy reasons (see the General Data Protection Regulation, GDPR, 2018).

Conclusion

One important use of generative AI tools is for students themselves to generate dialogic feedback on their own work. As in the previous step, students can be scaffolded to use these tools throughout the process of completing an assessment, with the advantage that feedback from AI is immediate and dialogic.

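By way of illustration, one dialogic prompt a student might adapt is sketched below; the wording is a suggestion, not a prescribed formula.

```python
# An illustrative prompt a student might adapt when seeking dialogic
# feedback on their own draft (shared by their own choice); the wording
# is a suggestion only.
STUDENT_PROMPT = (
    "Here is the rubric for my assessment and one paragraph of my draft. "
    "Acting as a critical friend, ask me three questions about how well "
    "the paragraph meets the rubric, wait for my answers, and only then "
    "suggest improvements."
)
```
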
Overall, ChatGPT can produce structured feedback or feedforward whose quality is consistent with that of the existing rubrics. Equally important, the time saved by auto-generating marking feedback could be used wisely to further enhance its quality by adding the most important aspect of any feedback: personalisation.

Now that you have completed this step, you should have a good understanding of the emerging possibilities for using artificial intelligence in marking students’ work and giving feedback. In the next step, you will discuss detection and fairness, and the reliability of current AI detection tools.

References

  1. Compton M. Using a marking rubric and ChatGPT to generate extended boilerplate (and tailored) feedback. UCL MediaCentral. 2023 Apr 20.

Join the conversation

How might you use generative AI tools to generate rubrics that would be helpful for students? What prompts could students use to obtain immediate dialogic feedback as they work on the process of assessment?

© King’s College London
This article is from the free online course ‘Generative AI in Higher Education’.
