Disseminating the outcomes of evaluation
Some things to think about when planning your evaluation
Take time to explain
If you plan to collect data from students or to have a colleague observe you, make sure that you explain what you are planning to do and why. This is often not communicated clearly to students and can lead to disengagement (i.e. ‘oh, another survey to fill in…’).
If you take the time to explain to your students that you would like their help with a small evaluation exercise, which you are doing as part of a FutureLearn FULT course, many students will change how they perceive you and the activity, and may appreciate that you are taking steps to develop as a teacher. A simple explanation often leads to richer responses and more useful feedback.
If you decide to undertake a peer evaluation of your teaching, you should find the experience insightful. Once again, explain to your students that you are completing a FutureLearn course and ask them whether it is okay for a peer to sit in and observe the class.
After the evaluation activity, make sure that you close the feedback loop and communicate to your students the outcomes of the evaluation and how you are going to act on them.
If you decide on the option of analysing existing data, you will need to make sense of the available data and put it in context. For example, consider Moodle (a learning management system) data: if the course is fully online, all interactions between students, the material and the instructors are logged in the system. In a ‘blended learning’ situation, however, the data tells only part of the story.
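To make this concrete, the sketch below shows one simple way such log data might be summarised: counting logged events per student. The column names and sample rows are assumptions for illustration, not the actual format of a Moodle export, and in a blended course counts like these would undercount engagement that happens face to face.

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of an LMS activity log export; the field names
# and values below are illustrative assumptions, not real Moodle output.
SAMPLE_LOG = """\
student_id,event,timestamp
s001,viewed_resource,2017-03-01T09:00
s001,posted_forum,2017-03-01T09:05
s002,viewed_resource,2017-03-01T10:00
"""

def interactions_per_student(log_csv: str) -> Counter:
    """Count logged events per student from a CSV-formatted activity log."""
    reader = csv.DictReader(io.StringIO(log_csv))
    return Counter(row["student_id"] for row in reader)

counts = interactions_per_student(SAMPLE_LOG)
print(counts)  # e.g. Counter({'s001': 2, 's002': 1})
```

A summary like this is only a starting point: the same count of ‘3 events’ means something quite different in a fully online course than in one where most discussion happens in the classroom, which is why the surrounding context matters.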
Finally, if you do not have a teaching or practice session during this module, you can still draw on discussions with peers to plan an evaluation at the next possible opportunity. For the purpose of the FULT reflective e-portfolio, you can retrospectively revisit a past teaching session.
Moving from evaluation to scholarly publication
Evaluation and research are closely aligned concepts. For example, you may have observed that the stages of your mini-evaluation (Plan, Act and Reflect) align closely with a model of Action Research (Kemmis, McTaggart & Nixon, 2014). An evaluation process is usually specific to a context (Alkin, 2011), its data is delimited, and its findings relate to quality enhancement. In contrast, research aims to produce findings that may be generalisable to many contexts and may contribute to scholarship through dissemination as publications.
The focus of this course is on evaluation as a process for enhancing learning and teaching. Evaluation is essential to any quality process, and in this context an evaluation using secondary data would not usually require human ethics approval.
Secondary data includes data generated as a by-product of higher education administration, for example enrolments, LMS activity logs, and CATEI/MyExperience results. You must make sure that any evaluation process you undertake adheres to the relevant university guidelines and policies; these differ considerably across national and institutional contexts.
However, if you are collecting primary evaluation data as part of a research project with the intention of publishing the results, then you will need to obtain human ethics approval. Primary data may include assignments, surveys, observations, interviews, focus groups, or informal feedback from students. If you would like to learn more about human ethics, you can explore the links provided in the “Want to know more?” section of this step.
We ask you to check the protocols and processes regarding ethics approval in your institution (this is often dictated at a national level).
Want to know more?
If you would like to know more about the outcomes of evaluation, there are additional resources listed in the Want to know more.pdf for this step.
Alkin, M. C. (2011). Evaluation essentials: From A-Z. New York, NY: The Guilford Press.
Kemmis, S., McTaggart, R., & Nixon, R. (2014). The action research planner: Doing critical participatory action research. Singapore: Springer.
© UNSW Sydney 2017