DR KRISHNA HORT: Measurement of quality of care is complex, as we have seen in considering the range of dimensions of quality of care, including safety, effectiveness, efficiency, patient responsiveness, and equity, and the different perspectives of different stakeholders. However, measurement is at the heart of quality of care. This can be seen in the quality triangle depicted in this slide. The triangle links together three key steps: defining quality, including setting the standards and indicators used to measure it; measuring quality against those standards; and then improving quality. Measurement lets us monitor progress towards achievement of a particular standard of care, and thus enables us to determine if effective coverage, the final step in the Tanahashi framework, has actually been achieved.
Defining quality of care begins with determining standards. Standards explicitly define what is required to achieve high-quality care, based on the evidence of what achieves the desired outcomes for any particular area of health care. Standards usually consist of broad statements of intent, which explain the rationale and supporting evidence, followed by criteria or quality statements that outline the elements required to meet the standard and form the basis of the indicators. They are usually set out in domains based on the underlying conceptual framework, which links the domains together to achieve health care outcomes. Here is an illustrative domain from the World Health Organisation maternal and neonatal health standards.
There are three quality statements, which describe measurable aspects of care or performance that can subsequently be used to develop indicators. Note that while global standards have been developed, standards should be adapted to the local context to ensure their applicability and to obtain the desired outcomes for mothers and newborns in different settings. An indicator is a measurement used to determine whether a standard is met. Indicators can be defined at each of three steps in the delivery of care, as you can see in this slide: inputs, processes, and outcomes. These steps were first established by Donabedian in 1966. Input is what needs to be in place for the desired care to be provided.
Skip to 2 minutes and 51 secondsFor example, the physical and human resources, the policies, and the guidelines. Process. How does the system work? What are the methods and procedures that are required to ensure the quality of services? And then outcome or output. What was the effect of the provision and experience of care on health and on people-centred outputs or outcomes? Defining indicators requires consideration of three aspects. Which stakeholder perspective, which aspects of care, and the expected linkage between structure, process, and outcomes to establish the validity of the indicator as a measure of quality of care. Quality assurance and quality improvement are two complementary but slightly different approaches to using measurement to improve quality of care. The two approaches can be compared in this slide.
Quality assurance has an overarching goal of calibrating the performance of a system to certain standards. Managers drive this change and enforce compliance as much as possible. Quality improvement, by contrast, tends to give attention to changes in one area of the system; it is less about routine measurement against normative standards and more about all participants teaching and learning from each other. The two approaches complement each other, as can be seen in this slide. After a period of regular monitoring through quality assurance, a quality improvement initiative raises the level of performance to a new zone of control, which then continues to be monitored through quality assurance. This table presents four commonly used methods for collecting quality of care data.
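The "zone of control" idea can be sketched with a simple control-chart calculation. This is a hedged illustration with synthetic numbers and a conventional 3-standard-deviation limit (an assumption, not something the lecture specifies): a sustained run of scores above the old upper limit suggests the QI initiative has shifted performance to a new level.

```python
# Sketch with synthetic data: monthly performance scores (e.g. per cent of
# cases meeting a standard), monitored against simple 3-SD control limits.
from statistics import mean, stdev

baseline = [72, 74, 71, 73, 72, 70, 74, 73]  # QA monitoring period before QI
after_qi = [82, 84, 83, 81, 85, 83]          # scores after the QI initiative

def control_limits(scores):
    """Centre line and +/- 3 SD limits derived from a monitoring period."""
    m, s = mean(scores), stdev(scores)
    return m - 3 * s, m, m + 3 * s

lo, centre, hi = control_limits(baseline)

# A sustained run above the old upper limit signals a shift to a new
# zone of control, which would then be monitored with fresh limits.
shifted = all(x > hi for x in after_qi)
print(f"baseline centre={centre:.1f}, limits=({lo:.1f}, {hi:.1f}), shifted={shifted}")
```

Real statistical process control uses more careful run rules than this single check, but the sketch captures the lecture's point: QA watches for deviation from a stable level, and QI moves that level.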
Each method focuses on a different aspect of service provision: the first two focus on inputs, and the second two more on process, with some outcome information in the fourth. While data collection on inputs is relatively straightforward, collecting data on and measuring process and outcomes is complex and subject to errors or biases. These errors include courtesy bias, where, for example, clients at exit interviews avoid reporting unfavourably on their experience; recall bias, where clients don't recall all the information or procedures that they received; and expected answer bias, where respondents being interviewed provide what they think is the expected answer.
Another potential bias is termed the Hawthorne effect: practices while being observed differ from the practices undertaken when the provider is not being observed. More recently, two new methods have been introduced. The first is the simulated client or mystery client method: an actor presents with rehearsed and predetermined symptoms or complaints to a provider who is blinded (that is, the provider doesn't know which clients are real and which are simulated), and the actor then reports on the procedures or practices the provider undertakes. The second is vignettes: written or acted depictions of symptoms and signs, where the provider knows that this is a test and then describes or acts out what they would do. However, these methods, too, are not straightforward.
Skip to 6 minutes and 29 secondsAnd measurement of process of care and outcomes remains challenging. Finally, different approaches are appropriate for different levels of care. For example, at hospital level, approaches include formal quality improvement processes, accreditation-- which is really a type of quality assurance. And the focus is more on process and outcomes in terms of safety. At a primary care facility, approaches include facility assessment, tests of provider competency, and use of simulated clients. The focus is more on readiness, availability and competence of providers, responsiveness, and client satisfaction. For community programs, the focus is more on the community perspective.
This means the responsiveness, competence and availability of community-level workers, particularly in regard to their communication skills and understanding of messages, and the extent of community involvement: in particular, the involvement of disadvantaged and marginalised community members, and the functioning of the community forums and committees which govern their involvement.
Approaches and tools to measure and monitor quality of care
Not surprisingly, the approaches to measuring quality vary in their complexity, depending on whether you are monitoring a quality assurance process within a health facility, implementing a quality improvement program, or trying to include quality within national coverage indicators. It becomes even more difficult when we are trying to measure new aspects of care that haven’t been part of our routine monitoring or evaluation in the past. Take, for example, the relatively recent focus on respectful maternity care.
How would you measure the level of respectful maternity care in a particular facility? You can review this paper for some different suggestions.
© Nossal Institute for Global Health at the University of Melbourne