Evaluation and monitoring

"" Damage in the city of Moron, Haiti, following Hurricane Matthew © Avi Hakam, CDC. CC BY 2.0. Selecting the image will take you to the original on a third party site.

Monitoring, evaluation, accountability and learning, known collectively by the acronym MEAL, have become an increasingly prominent element of disaster and humanitarian interventions in recent years. But as we’ve mentioned previously, these have existed and evolved as organisational processes outside of disaster and humanitarian programs since the 1970s.

Let’s look at the relationship between evaluation, monitoring and quality.

Evaluation

Organisational evaluation evolved out of the methodological approaches used by social scientists (sociologists, economists, educators and psychologists) as they applied their methods to real-world social policy settings (Alkin et al., 2006, p. 21).

Evaluation activities aim to improve programs through the assessment and analysis of the practices and processes undertaken by those implementing the interventions. Since state and organisational policies determine priorities, and therefore which programs are run and what they need to achieve, it follows that policies, programs and practice are intertwined foci of evaluation.

The word evaluation comes from the old French and Latin words for ‘value’, and can be interpreted in terms of the worthiness or importance of something in its context, or alternatively its value in more objective numerical or monetary terms.

Lincoln and Guba (1986, p. 550), key academics in the field of evaluation and its application to social policy interventions, define evaluation as:

…a type of disciplined inquiry undertaken to determine the value (merit and/or worth) of some entity – the evaluand – such as a treatment, program, facility, performance, and the like – in order to improve or refine the evaluand (formative evaluation) or to assess its impact (summative evaluation).

To judge the merit or worth of something, we ‘measure’ either how effectively it has been done or the impact it has had. In the case of effectiveness we tend to measure performance; in the case of impact we try to measure change relative to an initial baseline set of conditions.
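
To make this concrete, here is a minimal sketch in Python (illustrative only; the indicator names and values are hypothetical, not drawn from the course) of measuring impact as change against a baseline:

    # A minimal, illustrative sketch: measuring impact as change relative to a baseline.
    # Indicator names and values are hypothetical examples.

    baseline = {
        "households_with_safe_water": 0.42,      # proportion before the intervention
        "children_enrolled_in_school": 0.55,
    }

    endline = {
        "households_with_safe_water": 0.71,      # proportion at follow-up
        "children_enrolled_in_school": 0.63,
    }

    def change_from_baseline(before, after):
        """Return the change for every indicator measured at both points."""
        return {name: after[name] - before[name] for name in before if name in after}

    for indicator, change in change_from_baseline(baseline, endline).items():
        print(f"{indicator}: {change:+.2f}")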

Monitoring

Administrators have long monitored inputs and outputs: 5,000 years ago, the ancient Egyptians monitored outputs of grain and livestock; today, governments track expenditure, debt, revenues, staffing, and goods and services with varying degrees of accuracy and success.

Monitoring and associated evaluation approaches were largely quantitative in the 1960s and 70s, shifting in recent decades to a mixed-methods approach, combining quantitative and qualitative data, that better reflects the lived experience of social interventions.

In 2005, the OECD’s Development Assistance Committee (OECD/DAC) adopted the term Managing for Development Results (MfDR), and most current monitoring and evaluation practice is a form of Results-Based Management (RBM). RBM is a management strategy that takes a holistic approach to measuring performance, the achievement of outputs, the evaluation of outcomes, and the overall impacts of an intervention. We’ll come back to RBM later in the module.
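
As a rough, hypothetical sketch of the idea (not the course’s RBM framework, which is covered later in the module), a results chain might be represented like this, with indicators attached to each level:

    # An illustrative sketch only: the levels an RBM-style approach typically measures,
    # from outputs through to impact. All names and indicators are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class ResultLevel:
        """One level of a results chain, with the indicators used to measure it."""
        name: str
        indicators: list = field(default_factory=list)

    results_chain = [
        ResultLevel("outputs",  ["shelter kits distributed", "training sessions delivered"]),
        ResultLevel("outcomes", ["households living in adequate shelter"]),
        ResultLevel("impact",   ["reduction in displacement-related health risks"]),
    ]

    for level in results_chain:
        print(f"{level.name}: {', '.join(level.indicators)}")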

The development of standards

Monitoring and tracking performance, and then analysing the information generated in order to evaluate worth and value, requires more than just setting objectives (objective-based management). Standards are descriptions, whether qualitative statements or quantitative targets and metrics, of what should be achieved and how it should be achieved, usually expressed as minimum acceptable levels of performance and impact. Standards are the benchmarks against which interventions are measured.

As the field of evaluation has evolved and recognised the value of qualitative judgements of ‘worth’, standards have been used to remove some of the subjectivity involved and to facilitate more consistent (repeatable and comparable) analysis.
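
As a simple illustration, monitored values can be checked against minimum standards like this (a hypothetical sketch; the indicator names and thresholds are invented for illustration, not taken from any published standard):

    # An illustrative sketch only: comparing monitored indicator values against
    # minimum standards. Indicator names and thresholds are hypothetical.

    minimum_standards = {
        "litres_of_water_per_person_per_day": 15.0,
        "households_reached_per_month": 500,
    }

    monitored_values = {
        "litres_of_water_per_person_per_day": 12.5,
        "households_reached_per_month": 640,
    }

    def check_against_standards(values, standards):
        """Flag each indicator as meeting (True) or falling below (False) its minimum."""
        return {name: values[name] >= minimum
                for name, minimum in standards.items() if name in values}

    for indicator, met in check_against_standards(monitored_values, minimum_standards).items():
        print(f"{indicator}: {'meets' if met else 'below'} the minimum standard")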

Monitoring and evaluation in disaster interventions

Much of the evaluation done in the sectors of development, disaster and humanitarian action requires performance to be monitored over time and impact to be assessed at various points during and after implementation.

Having detailed information about the performance and impact of social interventions allows us to report the findings to stakeholders through evaluation reporting mechanisms (documents, presentations, videos, meeting discussions). The evaluation, and the data that underpins it, allow:

  • Internal and external quality control and management
  • Accountability
  • Learning, change and improvement

References

Alkin, M., Christie, C., Anderson, N., Chen, H., Cook, T. D., Reichardt, C. S., Cronbach, L., Ambron, S., Dornbusch, S., Hess, R., Hornik, R., Phillips, D., Walker, D., Weiner, S., Denzin, N. K., Lincoln, Y. S., Donaldson, S. I., … & Birkeland, S. (2006). Introduction: The evaluation of policies, programs, and practices. In I. F. Shaw, J. C. Greene, & M. M. Mark (Eds.), The SAGE handbook of evaluation (pp. 1-30). SAGE Publications Ltd. https://doi.org/10.4135/9781848608078

Lincoln, Y., & Guba, E. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. In D. D. Williams (Ed.), Naturalistic Evaluation. New Directions in Program Evaluation (pp. 73–84). Jossey-Bass.
