Understanding monitoring for ROP

Monitoring is important to improve the accountability and effectiveness of ROP programmes
Stage 3 of a planning cycle is the project or programme implementation, including the monitoring phase. This is planned in detail at the beginning and then managed throughout the project or programme lifespan. Imagine that you have planned a journey from point A to point Z. In the plan, you know what car you will use, the routes you will travel, how many people will be travelling, and the amount of fuel needed for the journey. You also have an estimate of the travelling time. Once the journey starts - the implementation of the plan - every few hours you look at the fuel gauge, check the distance travelled, and perhaps even ask how the passengers are doing. This is known as monitoring.
You may have planned to assess this information at several points along the way. Here, you decide if the car needs to be refuelled or if a break is necessary to rest the passengers. Information on fuel, distance and how the passengers are doing are referred to as indicators. Indicators tell you how the journey is going. Monitoring is the continuous surveillance of the implementation of a programme or project. Monitoring activities check whether a project is proceeding according to the plan: are you doing what you said you would do? Monitoring is important for a number of reasons. What gets monitored is more likely to get done. If you don’t monitor performance, you can’t tell success from failure.
If you can’t see success, you can’t reward it. And if you can’t recognise failure, you can’t correct it. Finally, if you can’t demonstrate results, you can’t sustain support for your actions.
For any plan, achievement is aligned with the completion of objectives, and there are usually several activities that need to be carried out for an objective to be achieved. Each activity requires inputs, such as finance and resources, and specific tasks to be completed. This is known as the process. As a result of the process, activities and objectives are completed. This, in turn, leads to an outcome and an impact. Indicators measure what or how much has been done. Process indicators provide information on tasks done and inputs consumed as part of activities to achieve objectives. Process indicators are collected regularly, on a weekly, monthly, or quarterly basis.
Outcome indicators are used to assess if the path taken is working well and if changes need to be made to the plan’s objectives. These are collected over longer intervals, once or twice a year. An impact indicator is an indication of change that has resulted from a plan. Impact indicators are collected and reported on at the end of a programme cycle or reviewed on an annual basis. This provides an understanding of whether a programme or intervention has been effective. In any programme it is important to decide what information should be collected and when. Managers must decide on what could be monitored, what should be monitored, and what must be monitored.
For example, in the ROP programme in district X, one of the objectives is to increase screening for ROP in preterm babies, born at a gestational age of less than or equal to 34 weeks, from 30% to 75% within a year. The three neonatal intensive care units (NICUs) in the district carry out several activities to achieve this objective.
Two of the main activities are:

  • To establish a weekly list of all the babies eligible for ROP screening, and
  • For the dedicated ROP nurse to prepare the listed babies for screening by the technician within 30 days after birth.

To check that these two activities actually happen and to manage their progress, managers regularly collect data on several process indicators, including:

  • The number of eligible babies listed for screening
  • The number of babies actually screened, and
  • The number of babies not screened before 30 days, and the reasons why.

To see if the activities make a difference, managers review the data and calculate outcome indicators.
In our example, the managers calculate:

  • The percentage of eligible babies screened
  • The percentage of screened babies that required no further screening, and
  • The percentage that needed treatment or follow up.

Similar outcome data on treatment can also be obtained. Impact indicators are more complicated. Data for the whole year from all three NICUs needs to be looked at, along with additional information on how the team coped with the ROP activities and the challenges they faced.
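The outcome indicators above are simple percentages derived from the process-indicator counts. A minimal sketch of the calculation follows; the counts are hypothetical, for illustration only, and are not figures from the district X programme.

```python
# Hypothetical quarterly counts pooled from the three NICUs
# (illustrative numbers only, not from the programme data).
eligible_listed = 120              # eligible babies listed for screening
screened = 90                      # babies actually screened
no_further_screening = 60          # screened babies needing no further screening
treatment_or_follow_up = 30        # screened babies needing treatment or follow up

# Outcome indicators: percentages derived from the process counts.
pct_eligible_screened = 100 * screened / eligible_listed
pct_no_further = 100 * no_further_screening / screened
pct_treatment_or_follow_up = 100 * treatment_or_follow_up / screened

print(f"Eligible babies screened:      {pct_eligible_screened:.1f}%")
print(f"No further screening needed:   {pct_no_further:.1f}%")
print(f"Needed treatment or follow up: {pct_treatment_or_follow_up:.1f}%")
```

Note that the denominators differ: coverage of screening is calculated against the eligible list, while the clinical outcome indicators are calculated against the babies actually screened.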
Monitoring data is used to understand and improve the effectiveness of screening by ROP programmes:

  • Coverage – the percentage of NICUs which are providing ROP services
  • Access to screening – the percentage of babies who should have been screened for ROP who were screened
  • Quality of screening – the percentage of babies who received the correct diagnosis, and
  • Adherence to follow up – the percentage of parents who attended follow up appointments.

Important indicators for monitoring ROP treatment are:

  • Access to treatment – the percentage of babies with type 1 ROP who were treated, and
  • Quality of treatment – the percentage of babies treated who had good outcomes.
Golden rules for monitoring:

  • Do not collect too many monitoring indicators, or collect indicators too often
  • Use all the monitoring indicators you collect, and discard indicators that aren’t used
  • Use the monitoring indicators at the level that they are collected: process, outcome, or impact
  • Educate staff about the need to collect monitoring indicators, and
  • Don’t make things worse: don’t destroy a monitoring system that works.
Managing monitoring effectively requires a reliable system.
Managers need to decide:

  • Who will collect the indicators at each level
  • Once the data has been collected, who will extract, document and analyse the data, and where the reports on the findings will be sent, and
  • How the programme will act, based on the findings.

These are key details that must be managed by the ROP programme manager. Monitoring requires the selection and training of key staff to enable them to take on responsibility for monitoring. In our example, it is essential that the programme manager provides direct feedback every quarter to the ROP nurses in the NICUs on the outcomes, so that they can reflect and improve.
At a national level, it is important to know the number of neonatal units caring for preterm infants and the percentage providing ROP screening and treatment. This can be done every few years, to give a national picture of the coverage of ROP services.
The minimum data to monitor how well ROP services are being delivered in each neonatal unit are:

  • Number of babies eligible for screening
  • Number and percentage of eligible babies screened
  • Number and percentage of babies screened who required treatment
  • Number and percentage of babies who were treated
  • Birthweight and gestational age of babies needing treatment (this helps managers see whether the screening criteria are suitable), and
  • Outcome of treatment.

Online systems with real-time monitoring are the best. Collecting the right data is important to guide the programme and its success. There is a saying that if you put rubbish in, you get rubbish out, so managers must select indicators carefully.

In summary, to monitor the implementation of any programme it is critical to:

  • Identify the problems
  • Explore what happened and why
  • Take corrective action, and
  • Assess whether the problems have been addressed.
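The minimum unit-level dataset described above can be sketched as a small record type that carries the raw counts and derives the percentages from them. The field names and example numbers are illustrative assumptions, not a standard reporting format.

```python
from dataclasses import dataclass

@dataclass
class UnitROPData:
    """Minimum annual monitoring record for one neonatal unit.

    Field names are illustrative; a real system would also capture
    birthweight, gestational age, and treatment outcomes per baby.
    """
    eligible: int            # babies eligible for screening
    screened: int            # eligible babies actually screened
    required_treatment: int  # screened babies requiring treatment
    treated: int             # babies who actually received treatment

    @staticmethod
    def _pct(part: int, whole: int) -> float:
        # Guard against empty denominators (e.g. a unit with no cases).
        return 100 * part / whole if whole else 0.0

    def summary(self) -> dict:
        return {
            "% eligible screened": self._pct(self.screened, self.eligible),
            "% screened requiring treatment": self._pct(self.required_treatment, self.screened),
            "% requiring treatment who were treated": self._pct(self.treated, self.required_treatment),
        }

# Example record for one unit (hypothetical counts).
unit = UnitROPData(eligible=200, screened=150, required_treatment=20, treated=18)
print(unit.summary())
```

Storing counts rather than pre-computed percentages lets records from several units be pooled before the percentages are recalculated at district or national level.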
Monitoring is important because it improves accountability for the use of funds and resources, and it improves performance towards achieving outcomes.

Although blindness from ROP is almost entirely avoidable, preterm infants can still become blind, even in settings where services to detect and manage ROP are in place. The effectiveness of any intervention is measured by how well it improves the health of the population. This is often influenced by many factors.

To assess how effective an ROP service is in preventing blindness amongst preterm babies we need to consider these key factors:

  • Coverage: % of units with ROP services
  • Access to screening: % of babies who should be screened who are screened
  • Quality of screening: % of babies who receive the correct diagnosis
  • Adherence to screening: % of parents who adhere to screening appointments after discharge
  • Access to treatment: % of babies with sight threatening (Type 1) ROP who are treated, and
  • Quality of treatment: % with good outcomes.

In an ideal setting, all these factors would be functioning at 100%. This means that all eligible babies are screened well and on time, and all needing treatment receive excellent care. The net result would mean there is no ROP blindness in the population.

However, even if only one of these factors is sub-optimal, blindness can occur. Imagine a scenario where an ROP service has been put in place but is only provided in half of the neonatal units in a region or country, so coverage is only 50%. In addition, only 75% of the eligible babies are actually being screened in these units.

To estimate the effectiveness of this ROP service at the population level we multiply all the variables together. So, in this scenario the effectiveness is: Coverage (50%) x Access to screening (75%) x Quality of screening (100%) x Adherence to follow up (100%) x Access to treatment (100%) x Quality of treatment (100%) = 37.5%.

In this population, the ROP services will prevent less than half (37.5%) of all ROP blindness.
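The multiplication above can be sketched directly, using the factor values given in the scenario (coverage 50%, access to screening 75%, and all other factors assumed perfect):

```python
# Effectiveness factors from the scenario in the text, as fractions.
factors = {
    "coverage": 0.50,                # half of units provide ROP services
    "access_to_screening": 0.75,     # 75% of eligible babies screened
    "quality_of_screening": 1.00,    # assumed perfect in this scenario
    "adherence_to_follow_up": 1.00,
    "access_to_treatment": 1.00,
    "quality_of_treatment": 1.00,
}

# Population-level effectiveness is the product of all the factors.
effectiveness = 1.0
for value in factors.values():
    effectiveness *= value

print(f"Population-level effectiveness: {effectiveness:.1%}")  # 37.5%
```

Because the factors multiply, the weakest link dominates: a single factor at 50% caps the whole service at 50% effectiveness, no matter how well the other steps perform.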

Professor Clare Gilbert (global ROP expert and lead educator on this course) says: ‘I often talk to ophthalmologists who are screening, and ask them ‘What proportion of babies who should be screened in this unit do you think you are screening?’ Some look at me a bit oddly, and say ‘All of them, of course.’ This situation is far more likely if the neonatal team are not actively engaged in the screening process, and lists of eligible babies who should be screened every week are not provided. Under these circumstances the ophthalmologists will only screen the eligible babies who are present on the day and they think they are screening all who should be screened. What they do not appreciate is that eligible babies may have been discharged the day before, or they do not know whether babies are returning in the right numbers to be screened after discharge – they just screen those who attend.’
This article is from the free online course Retinopathy of Prematurity: Practical Approaches to Prevent Blindness, created by FutureLearn - Learning For Life.
