In the previous activity, you read about Linda Suskie’s four traits of quality assessments. Another way of testing the quality of your assessment instruments - and avoiding any bias in them - is to think in terms of three key elements: validity, reliability and fairness. The first element is FAIRNESS. A fair assessment ensures all stakeholders are equally represented and protected. An assessment that’s fair will embody the values of equity, diversity, and integrity. It’s difficult to write an assessment like a survey without introducing your own bias into the questions in some way. So when reviewing your assessments for fairness, try asking yourself the following questions: Have you asked multiple stakeholders the same questions?

Is your assessment accessible to every user - both in terms of taking the assessment and also receiving understandable results? Does your assessment meet the needs of a diverse population? Have you checked the language in your assessment for cultural bias? Here’s an example of language bias. In the US we use the term ‘clinical experience’ to mean a new teacher’s experiences in a classroom during their training. But this term doesn’t have the same meaning internationally. When I say ‘clinical’, you might think I’m referring to a medical or healthcare environment, but actually in the US we use that term to describe education training as well.

So if we were to include a question about a teacher’s ‘clinical experience’ in an assessment, we would be introducing a cultural bias against the rest of the world. It would not be FAIR to take that same assessment instrument and use it in another country. The second element is RELIABILITY. A challenge for any teaching program is collecting comparable measures over time. If you’re running a program with a small number of candidates enrolled, your sample size in any assessment will be small, so your data will be less reliable. And if you make any changes to the assessment over time, you won’t be able to compare the results to previous years, and you’re back to square one.
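
To make the sample-size point concrete, here’s a minimal sketch in Python (the satisfaction ratings and cohort sizes are invented for illustration): the standard error of an average score shrinks as the number of respondents grows, so the same survey yields much steadier results in a larger program.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def standard_error(scores):
    # How much the mean score would wobble if we re-ran the
    # survey with a fresh sample of the same size.
    return statistics.stdev(scores) / len(scores) ** 0.5

# Hypothetical 1-5 satisfaction ratings from two cohorts.
small_cohort = [random.gauss(3.8, 0.9) for _ in range(8)]    # 8 candidates
large_cohort = [random.gauss(3.8, 0.9) for _ in range(200)]  # 200 candidates

print(f"n=8:   mean={statistics.mean(small_cohort):.2f}  SE={standard_error(small_cohort):.2f}")
print(f"n=200: mean={statistics.mean(large_cohort):.2f}  SE={standard_error(large_cohort):.2f}")
```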

So reliability is about the consistent use of the tool across a large population, or over a long period of time. Finally - VALIDITY. Reliability alone is not enough: an assessment also needs to be valid. By validity we mean the accuracy of the measurement tool itself - does it measure what it says it will? It can be difficult to prove this. Here’s an example - if your set of scales at home consistently over- or under-measures your weight by 3 pounds, it will give you the same reading every day. So from that point of view, the scales are reliable, but their readings would not be valid. So how do you test for validity? How would you know if your scale is off?
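
A quick numeric sketch of that scales example (Python; the true weight and daily readings are made up): the readings barely vary from day to day, so the scale is reliable, but their average sits about 3 pounds from the truth, so it is not valid.

```python
import statistics

true_weight = 150.0  # pounds (assumed for illustration)

# A scale with a consistent +3 lb bias: one reading per day for a week.
readings = [153.1, 152.9, 153.0, 153.2, 152.8, 153.0, 153.1]

spread = statistics.stdev(readings)             # small spread = reliable
bias = statistics.mean(readings) - true_weight  # ~+3 lb = not valid

print(f"day-to-day spread: {spread:.2f} lb (low, so the scale is reliable)")
print(f"systematic bias:  {bias:+.2f} lb (far from 0, so readings are not valid)")
```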

Well, you can address the risk by triangulating multiple sources of data, as well as multiple uses of the same instrument. So, in summary: assessing an assessment for validity, reliability, and fairness can feel quite onerous. Many books and seminars have been devoted to the topics of validity and reliability, but you might still find it difficult to demonstrate them in your context. Over the next steps, we’ll review some academic papers that help us to define the key elements of fairness, reliability and validity, and we’ll share some useful case studies of assessments that demonstrate these three traits.
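
As a rough illustration of what triangulation can look like in practice (Python; the instruments and scores are hypothetical), you can compare independent measures of the same outcome: if they converge, that agreement is evidence of validity, while a large disagreement flags a tool worth investigating.

```python
import statistics

# Hypothetical scores for the same cohort from three independent
# instruments, each rescaled to 0-100 so they can be compared.
sources = {
    "self-report survey":    78.0,
    "supervisor rating":     74.5,
    "classroom observation": 76.0,
}

scores = list(sources.values())
spread = max(scores) - min(scores)

# A crude convergence check: do independent measures roughly agree?
verdict = "converging" if spread <= 10 else "investigate further"
print(f"mean across sources: {statistics.mean(scores):.1f}")
print(f"max disagreement:    {spread:.1f} points ({verdict})")
```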

The elements of quality assessments

So far in Week 1, we’ve explored the different approaches to assessment at a national level, and we’ve discussed the benefits of developing local tools for measurement. Now let’s turn our focus to assuring the quality of any assessment you use.

The CAEP-commissioned report “Building an Evidence-Based System for Teacher Preparation” (Allen, Coble and Crowe, 2014) describes increasing expectations for demonstrated teaching skill, measured through nationally normed assessments.

As a result, programs using locally developed instruments face greater scrutiny and carry the burden of demonstrating their soundness.

In this video, Linda returns to the three-part framework for quality assessment that we introduced earlier in the week.

Over the following three steps, we’ll explore each of these three elements in more detail.

References
Allen, M., Coble, C. and Crowe, E. (2014). ‘Building an Evidence-Based System for Teacher Preparation’. Teacher Preparation Analytics. Available at: https://www.angelo.edu/content/files/21316-building-an-evidence-based-system.pdf

This video is from the free online course Designing Assessments to Measure Student Outcomes, by the American Association of Colleges for Teacher Education (AACTE).
