CHINTHAKA BALASOORIYA: When I think about the most effective evaluation strategies, I would place feedback from students right on top. And here I would use both the formal and informal feedback that I get from students. So at the end of each course, I really carefully go through all of the comments that I get from students, and I try to see where I can build on my strengths and what areas need attention. In addition to this, I also use an informal method where, midway through each course, I use a very simple method, a simple piece of paper which has what’s working well and what can be improved.
And this is completely anonymous, which gives students an opportunity to tell me in what ways I can tailor my strategies to suit this particular group even better. So I find feedback in both these ways really useful, because it enables me to change and improve my strategies to better fit the preferences of each particular learner group. When I combine this with what I get through my formal CATEI feedback, I find that this gives me rich data that I can work with to continuously improve my educational practice.
ELIZABETH ANGSTMANN: So, I use a few different forms of evaluation. I mean, we have course representative meetings where we meet with a student who's been designated as the course representative. And that student gathers feedback from the other students in that course who may be hesitant to approach the lecturer with feedback, but they're happy to talk to another student, who then passes on the feedback. So I get a lot of feedback about how I'm going from those course representative meetings. I also get feedback from the concept inventory tests, because these tell me how much my students have learned. Then there are also the CATEI evaluations that the students fill in at the end of semester.
I find the comments in these useful, as they indicate directly what you can possibly work on to improve for next time.
PATSIE POLLY: Well, for example, the Adaptive eLearning Lab, so the virtual lab for Western blotting. Every lab we implement now has a survey that is delivered as part of the lab. And students are asked to evaluate their thinking, their learning, their experience in its entirety. So yeah, that's the way we normally do it. It's quite integrated, so it happens in the moment rather than, well, that seemed like a good one when I did it four weeks ago, but I can't really remember. Yeah, that lab was good, because it made me think this way. Or it wasn't very good, because it was not making sense.
But what I haven't mentioned is we've got a whole reflective process going on throughout the courses, through programmes. And what we see in those reflective blogs that are posted by students is their thinking and doing, and they're almost evaluating aspects of the course as they're evaluating their thinking and learning as they go through. So it's insightful for us to look at that beyond just a reflective piece that we grade or evaluate for depth or understanding. We look at that in terms of, hold on a minute. This is really working, because a majority of students are on target for this task. They seem to click with it.
So by implementing those assessment strategies, we have learned how to evaluate and get feedback as well.
VICTORIA CLOUT: I look at student feedback in terms of the comments that they gave in their evaluations. And sometimes, if I'm making some big changes like the flipped classroom and introducing different assessments, I might even do an informal polling kind of thing on Moodle. See how the students are reacting, talk to the students, get the students to tell me. They're a really good sounding board about whether they think it's a high workload or not a high workload, or that they're being challenged or not being challenged. I've become so known for the flipped classroom and fully blended learning that colleagues within the business school have approached me. And I've sat down with them and their ideas.
And I always want to share with them what I’ve done, and find out their opinion as well. So I think it’s really well-rounded. So talking to the students, talking to the staff on the course, and talking to other staff in the uni, or at other universities.
RACHEL THOMPSON: I would use the examination questions. Being very careful about how I follow up on the results of examinations has been very helpful, and has shown where I've actually improved my practice and where I still need to improve. The adaptive tutorials give you very clever feedback, so you can see student engagement and disengagement on every page, every slide, if you like, of that tutorial. And it shows you points where students have signed off, given up, or where they've spent too long. So you get a very rich readout with that. I also have a look at my quizzes.
So, the quizzes in Moodle. Looking at the quizzes in Moodle to see where students are still having problems. It does depend on the question; the questions are arranged from simple to difficult. But if it's a simple question that students are getting consistently wrong at the first try, there's a message there. So I do look at that, too.
Another form of evaluation has been a very simple questionnaire that we've uploaded, because for the medical programme, we've been blending over the past two or three years. And that's gone at the end of every blended object that we've introduced. And that's been very simple, similar to the Brookfield CIQ. It gives you some basic readout about what they liked, what they didn't like, and what they learned. But you also have some qualitative information. And looking through those has been extremely helpful too. I think it's a balance of qualitative and quantitative, and making sure that you're reading carefully about how the assessments are going. And those three things together can actually help.
PAUL EVANS: There are different kinds of evidence that we can use to evaluate whether we're teaching effectively. The most obvious one we can go to is our student evaluations at the end of the semester. We call it CATEI at the moment, and whatever system we adopt in the future will be similar. That can be a good source of information. But I think, by the time students have finished a course, they're not so interested in providing really detailed comments. And you might get the impression from that that students don't really know what good teaching and good learning look like. But actually, I think students have very good ideas about what good teaching is.
And so it's good, during the semester, to ask them. And I do that informally, one-on-one, during tutorials, or sometimes I'll use a more structured way of doing that. I'll issue a survey, which can be anonymous. Sometimes you just feel more comfortable answering anonymously. And I literally ask them, which activities in the course so far are you finding really effective for your learning, and what would you like more help with? And students are very good at providing feedback on your teaching as it's happening, particularly if you express the idea that you care about their feedback and you're really interested in knowing how you're going during the course.
So definitely ask students what they want of your teaching strategies, and also monitor the students' learning as well. For example, asking questions during a lecture: pausing in the middle, or even while wrapping up a topic, I'll ask a question based on the material.
STEPHEN DOHERTY: I draw upon a variety of information sources to test the effectiveness of my teaching. From the traditional approaches, such as looking at the end-of-semester student evaluations, to the ongoing feedback, both solicited and spontaneous, in the lectures and tutorials that I have with my students. There are also reflective elements embedded into the weekly assignments, so that students can indicate difficulties or other issues within the course, which lets me identify problems quite quickly and hopefully remedy them quite quickly.
Also, because I teach with up to 14 different tutors, I get quite a lot of feedback about how each tutorial goes or how a group of tutorials is progressing on a weekly basis. We're quite a closely-knit team. And then I also partake in courses such as [? Fault, ?] and use some of the peer observation, where I invite senior colleagues to sit in on some of my face-to-face lectures, both spontaneous and planned, for specific activities. For example, the flipped classroom activities, which I started using last year. I asked a senior colleague to come in and identify things that could be improved and things that worked well.
And I also then used the video lecture function to go back over lectures that I thought worked quite well, or didn't work in the way that I thought, to identify areas that I could improve upon or solicit advice from others. More formally, then, in enrolling in courses such as [? Fault ?] and in my own annual progression review, I'm able to work with a senior colleague who has expertise in learning and teaching to identify professional development, and to identify areas that I could improve upon for the next semester, and so on. And then most recently, using learning analytics, I've also been able to identify, on a much more systematic basis and with more objective measures, which activities or elements the students have visited more often. Or which elements, perhaps videos, didn't work so well, were not as effective, or were not used by the students. So I like to take quite a holistic approach to informing and testing the efficacy of my teaching.
ADAM HULBERT: In terms of collecting feedback, I do recommend an old-fashioned survey. What’s great about that, is the students have a sense that you’re actually really, deliberately trying to listen to what their experience is. It’s concrete, it’s in front of them. They know that this stuff is being collected. It’s also really valuable because you can get a sense of what’s going on on a wide scale, and also what’s going on at an individual scale, and you can use that data to really work through the course. The other reason it’s really useful, is that you have it written down. So if you forget, you just have to go back to that data, and you can have a sense of that.
So I really think that's a really valuable way of doing things. And I recommend doing it, personally, before the end of semester. That's because you can act on it earlier in the semester. I also strongly recommend talking to students. I think there's no reason not to do that. And I've often presented it to students by saying, I'm this old man. I haven't been a student for many years, so I don't know what the experience is like for you. So I have to have this conversation with you to get a sense of what it's like. And they appreciate that you're making this time to listen to their experience and develop the course based on that.