Can GenAI tools help with marking and feedback?
Example 1.1 Educator’s request (prompt) in ChatGPT for feedback on students’ work.
This is a rubric for a take-home assessment – please generate some boilerplate feedback for a student achieving 54%. [1]
Example 1.2 ChatGPT generated feedback for a student scoring 54% overall.
Dear [Student Name],
Thank you for submitting your take-home assessment. After reviewing your work, I have assigned a grade of 54%, which falls within the “Fair” category. Below, I have provided feedback on the various aspects of your work, as outlined in the assessment rubric.
1. Response to question/task set (Fair): You have covered some of the main points in the indicative answer, but there were a few significant omissions or areas that focused on less relevant material. It would be helpful to concentrate more directly on the question to ensure a comprehensive response.
2. Understanding of taught material/concept (Fair): While you demonstrated a reasonable reproduction of taught materials and a rudimentary understanding of the concepts, there is room for improvement in this area. It is important to fully grasp the material and concepts covered by the course. [1]
Example 2.1 Educator’s request (prompt) for student feedback to be customised for specific outcomes.
Please now generate feedback for a student whose work was 60% across the whole piece but was only adequate in terms of evidence – please focus on suggesting development of supporting evidence use in the feedback. Make the feedback supportive throughout. [1]
Example 2.2 ChatGPT-generated feedback customised for learning outcome 4 in bold.
Dear [Student Name],
Congratulations on achieving a grade of 60% in your take-home assessment. This places you in the “Good” category, which reflects your strong performance in several areas. Below, I have provided feedback on the various aspects of your work, along with suggestions for improvement, focusing on the use of supporting evidence.
1. Response to question/task set (Good): Your response directly addressed the question, with only minor omissions or areas that focused on less relevant material. Keep up the good work, and make sure to address all parts of the question comprehensively in future assignments.
2. Understanding of taught material/concept (Good): You demonstrated a solid understanding of the material and concepts covered by the course. Continue to engage with the course content and strive for an even deeper comprehension of the subject matter.
3. Engagement with the literature (Good): Your knowledge and understanding of the recommended reading were sound. To further strengthen your engagement with the literature, consider seeking out additional resources and recent papers relevant to the topic.
4. Evidence to support points or arguments (Adequate): While your overall work was strong, the use of supporting evidence could benefit from improvement. In future assignments, ensure that you thoroughly discuss evidence, justify your points with relevant sources, and synthesise information from diverse sources. This will help to enhance the credibility and persuasiveness of your arguments. [1]
On reflection, it is not uncommon for educators to copy and paste template feedback statements into students’ overall feedback. Schools have been using pre-prepared statements in students’ yearly reports for some time, and in my own university-based teaching I have developed statements for use in various marking assignments. Where ChatGPT differs, and is probably a significant time-saver, is that it produces a whole report for a given student, aligned to existing rubrics, within a very short period of time.
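For educators who want to reproduce this workflow at scale, the chat exchanges above can also be scripted. The sketch below is a minimal, hypothetical illustration using the OpenAI Python library (openai 1.x); the rubric text, function name and model name are placeholder assumptions rather than a recommended setup. Note that only the rubric and an overall score are sent, never the student’s work itself (see the data privacy point below).

```python
# Minimal, hypothetical sketch: scripting the rubric-to-feedback workflow
# with the OpenAI Python library (openai 1.x). Only the rubric and an
# overall score are sent to the model; the student's work stays local.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

RUBRIC = """\
1. Response to question/task set: ...
2. Understanding of taught material/concept: ...
3. Engagement with the literature: ...
4. Evidence to support points or arguments: ...
"""  # paste the real band descriptors here

def boilerplate_feedback(overall_score: int, focus: str | None = None) -> str:
    """Request rubric-aligned boilerplate feedback for a given score."""
    prompt = (
        f"This is a rubric for a take-home assessment:\n{RUBRIC}\n"
        f"Please generate some boilerplate feedback for a student "
        f"achieving {overall_score}%."
    )
    if focus:
        prompt += f" Please focus on {focus} and keep the feedback supportive."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Mirrors the requests in Examples 1.1 and 2.1 above.
print(boilerplate_feedback(54))
print(boilerplate_feedback(60, focus="development of supporting evidence use"))
```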
Challenges of AI-generated feedback
There are some missing elements in the ChatGPT-generated feedback. For example, where it says, ‘make sure you address all parts of the question’, the subject-specialist educator can step in and add an example from the student’s work to help them grasp the point being made; alternatively, the marker could cross-refer to a particular page or point in the student’s work. Where the auto-generated positive feedback says, ‘You demonstrated a solid understanding of course concepts’, the marker could strengthen its personalised quality by adding examples of those concepts from the student’s work.
Another point to consider, for the above ChatGPT approach to work sensibly, is that the original rubrics must make academic sense. For instance, the knowledge/skills descriptors for the bands within a 0–100% grade range should demonstrate an increasing level of quality; inconsistent descriptors or statements across the bands would not allow ChatGPT to produce feedback that makes sense to students. It is also possible to train students to use generative AI to ‘translate’ the trickier points within the rubrics into language they find easier to understand.
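To make the ‘translation’ idea concrete, here is a hypothetical sketch of the kind of request a student might script (or simply type into the chat interface); the descriptor text and model name are illustrative placeholders.

```python
# Hypothetical sketch: a student asks the model to 'translate' a dense
# rubric descriptor into plainer language. The descriptor and model name
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

descriptor = (
    "Evidence to support points or arguments (Adequate): limited synthesis "
    "of sources; assertions not consistently justified."
)  # replace with a real band descriptor

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "I am a student. Please rewrite this marking rubric descriptor "
            "in plain language and give one concrete example of what it "
            f"looks like in an essay:\n{descriptor}"
        ),
    }],
)
print(response.choices[0].message.content)
```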
Could we push the affordances of AI further, for instance into essay marking? Attempts have been made since the 1960s; have a look at the experimentation that began with Dr Ellis Page’s Project Essay Grade. However, one critical aspect that is hard to address is the uncertainty over the lengths a student might go to in order to manipulate an essay-marking AI tool. To me, that is a good enough reason to leave the state of play (in terms of essay marking) where it is, as Daisy Christodoulou suggests.
An important point to bear in mind is that, however we use AI tools, we shouldn’t be inserting students’ work into them, for data privacy reasons (see the General Data Protection Regulation, GDPR, 2018).
Conclusion
An important use of generative AI tools is for students themselves to use them to generate dialogic feedback on their own work. As in the previous step, students can be scaffolded to use these tools to help them throughout the process of completing an assessment, with the advantage that feedback from AI is immediate and dialogic.
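As a sketch of what such scaffolding might look like, the hypothetical snippet below keeps a running conversation so the feedback stays dialogic across drafting turns; the system prompt, example messages and model name are assumptions, not a tested setup.

```python
# Hypothetical sketch: a running conversation so a student's feedback stays
# dialogic across drafting turns. The system prompt, messages and model
# name are assumptions, not a tested setup.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "system",
    "content": (
        "You are a supportive tutor. Give formative, dialogic feedback "
        "against the assessment rubric, and end each reply with one "
        "question that prompts the student to revise."
    ),
}]

def ask(message: str) -> str:
    """Send one turn and keep the exchange in the shared history."""
    history.append({"role": "user", "content": message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Here is my draft introduction: [draft text]. Does it address the task?"))
print(ask("I have added a source for my second point. Is the evidence stronger now?"))
```

Because the full history is resent on every turn, each reply can respond to the evolving draft rather than treating each question in isolation, which is what makes the feedback dialogic rather than one-off.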
Overall, ChatGPT can produce structured feedback or feedforward that is consistent with the quality of existing rubrics. An equally important point is that the saved time from the auto-generation of marking feedback could be used wisely to further enhance the quality of the feedback by adding the most important aspect of any feedback – personalisation.
Now that you have completed this step, you should have a good understanding of emerging possibilities in the use of artificial intelligence in marking students’ work and giving feedback. In the next step, you will discuss detection, fairness and the reliability of current AI detection tools.
References
1. Compton M. Using a marking rubric and ChatGPT to generate extended boilerplate (and tailored) feedback. UCL MediaCentral; 2023 Apr 20.
Join the conversation
How might you use generative AI tools to generate rubrics that would be helpful for students? What prompts could students use to obtain immediate dialogic feedback as they work on the process of assessment?