The elements of quality assessments

Linda McKee from AACTE discusses the importance of ensuring that assessments of teacher preparation programs are valid, reliable, and fair.
In the previous activity, you read about Linda Suskie’s four traits of quality assessments.
Another way of testing the quality of your assessment instruments, and avoiding any bias in them, is to think in terms of three key elements: validity, reliability, and fairness.

The first element is FAIRNESS. A fair assessment ensures all stakeholders are equally represented and protected. An assessment that's fair will embody the values of equity, diversity, and integrity. It's difficult to write an assessment such as a survey without introducing your own bias into the questions in some way. So when reviewing your assessments for fairness, try asking yourself the following questions. Have you asked multiple stakeholders the same questions? Is your assessment accessible to every user, both in terms of taking the assessment and receiving understandable results?
Does your assessment meet the needs of a diverse population? Have you checked the language in your assessment for cultural bias? Here's an example of language bias. In the US we use the term 'clinical experience' to mean a new teacher's experiences in a classroom during their training. But this term doesn't have the same meaning internationally. When I say 'clinical', you might think I'm referring to a medical or healthcare environment, but in the US we use that term to describe education training as well. So if we were to include a question about a teacher's 'clinical experience' in an assessment, we would be introducing a cultural bias against the rest of the world.
It would not be FAIR to take that same assessment instrument and use it in another country.

The second element is RELIABILITY. The challenge for any teaching program is collecting comparable measures over time. If you're running a program with a small number of candidates enrolled, your sample size in any assessment will be small, so your data will be less reliable. And if you make any changes to the assessment over time, you won't be able to compare the results to previous years, and you're back to square one. So reliability is about the consistent use of the tool across a large population, or over a long period of time.
Finally, VALIDITY. Reliability alone is not enough; an assessment also needs to be valid. By validity we mean the accuracy of the measurement tool itself: does it measure what it says it will? It can be difficult to prove this. Here's an example. If your bathroom scale consistently over- or under-measures your weight by 3 pounds, it will give you much the same reading every day. From that point of view, the scale is reliable, but its readings are not valid. So how do you test for validity? How would you know if your scale is off? Well, you can address the risk by triangulating multiple sources of data, as well as multiple uses of the same instrument.
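The scale example can be made concrete with a small numerical sketch. This is purely illustrative (not part of the course materials): we simulate a hypothetical scale with a fixed 3-pound offset and a little random noise, then check that its repeated readings agree closely with each other (reliable) while their average sits well away from the true weight (not valid).

```python
import random

random.seed(0)

TRUE_WEIGHT = 150.0  # hypothetical true value, in pounds

def biased_scale(true_weight, bias=3.0, noise=0.1):
    """A scale that is consistent (tiny random noise) but systematically
    off by `bias` pounds -- reliable, yet not valid."""
    return true_weight + bias + random.gauss(0, noise)

# Thirty repeated measurements of the same person.
readings = [biased_scale(TRUE_WEIGHT) for _ in range(30)]

mean = sum(readings) / len(readings)
spread = max(readings) - min(readings)

# Reliable: repeated readings agree closely with one another.
print(f"spread of readings: {spread:.2f} lb")
# Not valid: the average reading is about 3 lb from the true weight.
print(f"systematic error:   {mean - TRUE_WEIGHT:.2f} lb")
```

Triangulation works here because a second, independent instrument would expose the offset: consistent disagreement between two reliable tools signals that at least one of them is not valid.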
So, in summary, evaluating an assessment for validity, reliability, and fairness can feel quite onerous. There have been many books and seminars devoted to the topics of validity and reliability, but you might still find it difficult to demonstrate them in your context. Over the next steps, we'll review some academic papers that help us to define the key elements of fairness, reliability, and validity, and we'll share some useful case studies of assessments that demonstrate these three traits.
So far in Week 1, we’ve explored the different approaches to assessment at a national level, and we’ve discussed the benefits of developing local tools for measurement. Now let’s turn our focus to assuring the quality of any assessment you use.
The CAEP-commissioned report "Building an Evidence-Based System for Teacher Preparation" (Allen, Coble, and Crowe, 2014) describes increasing expectations for demonstrated teaching skill, measured through nationally normed assessments.
Hence, programs using locally developed instruments face more scrutiny and carry the burden to demonstrate soundness.
In this video, Linda returns to the notion of a three-part framework for quality assessment which we introduced earlier in the week.
Over the following three steps, we’ll explore each of these three elements in more detail.
Allen, M., Coble, C. and Crowe, E. (2014). 'Building an Evidence-Based System for Teacher Preparation'. Teacher Preparation Analytics. Available at
This article is from the free online course Designing Assessments to Measure Student Outcomes, created by FutureLearn.