
Feasibility Studies – An Overview

Read the article by Professor David Wright, which provides an overview of feasibility studies.


Section 1: Introduction

Funders are increasingly reluctant to fund definitive trials unless they are reassured that all components of the research design have been fully tested and all assumptions regarding recruitment and outcome selection can be supported with evidence. They need to be confident that the trial will not fail due to weaknesses in design and that the intervention has the maximum chance of demonstrating a meaningful effect.

Feasibility studies are designed to provide this reassurance and are consequently designed to do the following:

  • Identify whether a sufficient population within which the idea can be tested exists
  • Develop and test processes for recruitment
  • Estimate available study population size
  • Estimate participant recruitment rates
  • Estimate participant retention rates
  • Test data collection processes for practicability
  • Check suitability of all outcome measures with respect to data quality
  • Support the identification of the most appropriate primary outcome measure
  • Ensure that costs associated with the intervention can be identified, measured and valued
  • Test practicability and acceptability of both the intervention and research processes
  • Develop and test plans for process evaluation
  • Test procedures for enhancing fidelity
  • Test whether the research or intervention processes cause any reactivity bias or contamination within the control arm

In summary, therefore, the objectives for a feasibility study would be to:

  • Estimate participant population size, recruitment and retention rates
  • Describe processes for enhancement of intervention fidelity
  • Describe suitability of data collection processes and outcome measures for inclusion in definitive trial
  • Describe the process for economic evaluation of the intervention
  • Define the process evaluation
  • Describe intervention and research process acceptability and practicability
  • Identify potential for contamination or reactivity bias within the control arm

It is important to note that within a study to test feasibility you are NOT determining the effect or impact of the intervention. This can only be done with the final definitive trial.

Section 2: Protocol

The main elements which would be expected to appear in a feasibility protocol and rationale for their inclusion are provided below.

2.1 Feasibility study management

The management of any complex intervention study should generally include the following:

  • Representatives from each health system stakeholder group affected by or involved in the intervention
  • At least two patient and public involvement representatives
  • Health economist, medical statistician, qualitative researcher, behavioural scientist depending on research design requirements

Usually chaired by the principal investigator, this group is there to provide input into the research design, delivery and dissemination processes. Consequently, having a range of expertise within the project management group enables you to both pre-empt and respond to problems efficiently and effectively.

This is the group who would be ultimately responsible for reviewing the data from the feasibility study and deciding what needs to change and how, prior to the piloting and definitive trial stages.

Frequency of meetings will be at the behest of the principal investigator; however, a balance must be struck between maintaining effective communication and managing workloads. Consequently, whilst meetings between those immediately responsible for current activities will need to be frequent, whole management group meetings can be less so, and every three months is often more than adequate.

Patient and public involvement

A key element of all research into complex health interventions is Patient and Public Involvement (PPI). Whilst this may have started by just including representatives in the management group, the reasonable expectation now is that they are equal partners in the process from idea conception to dissemination. There are no good reasons for not including PPI team members in the design of the research process, participant recruitment, data collection, data analysis, report writing and dissemination processes. Consequently, due to the potentially significant workload, projects now have small teams of PPI members involved in the research.

Cultural competence in research

It is important that all research is culturally competent i.e. the research is generalisable to the whole population within which the intervention is based and does not unconsciously disadvantage or preclude any groups from inclusion. It is important that all processes (consent, intervention, results dissemination) are culturally sensitive and unlikely to prevent people from different groups from actively engaging with the research at any stage.

Ensuring that your patient and public involvement group is appropriately culturally diverse is one approach to helping you achieve this, as they can review your planned recruitment documents and processes, your recruitment strategy (i.e. how you will identify your patients), your intervention design and your dissemination plans.

Other approaches to achieving cultural competence can be derived by a thorough review of the literature to identify any differences in requirements or expectations reported by any groups. It will also enable you to identify any differences in outcomes which you should be aware of and design your intervention to address them.

Underpinning theory

If you have used theory to inform intervention design this should be clearly stated at the outset, combined with your rationale for using this theory over others which may also be deemed suitable.

2.2 Logic model

Providing a logic model at the feasibility stage is always useful as it demonstrates that you have carefully considered your intervention design and outcome measures. At this stage however it should always be described as draft as you should plan to update it following the experience of your feasibility study. Logic models should be iteratively improved as the project develops.

2.3 Covid-19 and carbon footprint

If the pandemic has taught us one thing, it is that management groups no longer need to regularly meet physically face to face to work effectively. It has also made research teams review their data collection processes such that research associate visits to collect physical data are minimised and that qualitative work is now largely undertaken virtually. Increasingly approaches to collecting data digitally are being developed to enable researchers to collect data from across their study settings without leaving their research site.

Within all study protocols it is now usual to include a statement regarding research management and design modifications made as a result of Covid-19. Some funders also request reassurance regarding the carbon impact of the research. A movement to virtual working and data collection additionally addresses this agenda.

2.4 Feasibility study design

The first decision to make is whether your feasibility study is to include randomisation or not.

A ‘before and after study’ or ‘service evaluation’ design is one in which you provide the new service to all patients and measure outcomes before and after. Whilst it is simpler to undertake, it provides less useful information for the final RCT: you don’t know how randomisation may affect willingness to participate, whether there is potential for contamination from the intervention (i.e. the control arm learning from the intervention arm and changing their behaviour), or whether there is reactivity bias in the control arm, such as the ‘John Henry effect’, where the control arm changes its care as a result of being observed and to compensate for not being in the intervention arm. You also don’t know whether data collection in a control arm will be as effective. Randomisation at the feasibility stage will provide you with more confidence at the pilot stage but is not essential.

If you include randomisation within your feasibility study, you can also include an objective of ‘describe an effective process for randomisation’.


The intervention

You will be expected to describe the different elements within your intervention at the feasibility stage. As this is a feasibility study, however, you can describe those elements which you will be testing for exclusion or amendment.

Intervention fidelity

Complex interventions and related processes must be tested and designed such that another individual could deliver the intervention equally effectively in their own setting i.e. there is good intervention fidelity. Consequently, even at the feasibility stage it is usual for interventions to be tested in a variety of different settings with a range of individuals.

Fidelity is enhanced by having standard operating procedures, using intervention recording processes (e.g. software designed to ensure that all stages are completed) and providing appropriate training for the individuals delivering the intervention. Which of these you use is up to the project management team, but each can be tested at the feasibility stage.

Within a feasibility study we would want to implement our planned processes for enhancing fidelity and then receive feedback on them to enable revision prior to the pilot and definitive stages. This forms part of the process evaluation which is described later.

Study population

In performing any study, you need to carefully define your study population in order to minimise unnecessary heterogeneity. Consequently, you need to clearly state your inclusion and exclusion criteria. NB: these should not simply be opposites of each other, i.e. if your inclusion criterion is ‘aged 16 or over’ you do not need to state the exclusion criterion ‘under the age of 16’.

It is usual to exclude patients who are in other studies to prevent them from becoming ‘over researched’ as this will affect response rates to questionnaires and data collection.

Due to the small sample sizes included in feasibility studies, it is however reasonable to purposively sample your participants to ensure that your sample is sufficiently varied, thereby enabling you to test a wide range of contexts, systems and groups of patients.

Rationale for sample size

Feasibility studies should not be powered to detect a difference, and sample sizes are generally relatively small, i.e. fewer than 100 and sometimes as low as 30. You would traditionally justify your sample size based on what you want to learn and the precision you can accept in your final estimate.

If you wanted to estimate the attrition rate over 6 months, then a sample size of 60 will provide precision of ±10% around an attrition rate of 20%. NB: a sample size of 30 gives precision of ±14%.
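The precision figures quoted above follow from the standard 95% confidence interval for a proportion. A minimal sketch (assuming a normal approximation, half-width = 1.96 × standard error):

```python
import math

def precision_95(p, n):
    """Half-width of a 95% confidence interval for a proportion p
    estimated from a sample of size n (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Attrition rate of 20% (p = 0.2):
print(f"{precision_95(0.2, 60):.3f}")  # 0.101 -> roughly ±10%
print(f"{precision_95(0.2, 30):.3f}")  # 0.143 -> roughly ±14%
```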

A sample size can also be justified by the qualitative element, i.e. if 8 practitioners each provide the intervention to 5 patients, this will provide a reasonable sample of both practitioners and patients to interview to assess intervention acceptability and practicability and to understand intervention fidelity.

Funding bodies which fund the whole process from feasibility through pilot and to the definitive stage may ask the researcher to define their progression criteria at each point i.e. the criteria which must be met before moving to the next stage. If any criteria are not met then funding for the next stage would not be released.

Progression criteria from feasibility and pilot studies can be graded red (stop), amber (amend process before proceeding) and green (go). An example for intervention fidelity could be:

  • Green: ≥ 90% of medication reviews adhered fully to the protocol
  • Amber: 70%–89% of medication reviews adhered fully to the protocol
  • Red: < 70% of medication reviews adhered fully to the protocol
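As an illustration, this grading can be expressed as a simple rule; the 90% and 70% boundaries below are the example thresholds from the list above, not fixed values:

```python
def fidelity_grade(adherence_pct):
    """Map the percentage of fully protocol-adherent medication
    reviews to a red/amber/green progression decision, using the
    example thresholds above."""
    if adherence_pct >= 90:
        return "green"   # proceed
    if adherence_pct >= 70:
        return "amber"   # amend the process before proceeding
    return "red"         # stop

print(fidelity_grade(92), fidelity_grade(75), fidelity_grade(60))
# green amber red
```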

Researchers now recommend that your feasibility study sample size is based on the precision of your progression criteria estimates and suggest that between 60 and 80 patients are required in a randomised feasibility study. (See attached PDF for ref 1: Lewis et al. Determining sample size for progression criteria for pragmatic pilot RCTs: the hypothesis test strikes back! Pilot and Feasibility Studies, 2021;7(1):40.)

Consequently, to justify your sample size, discussions with your statistician and qualitative researcher are required. The statistician will also be able to help you to develop your progression criteria.


Randomisation

If you choose to use randomisation in your feasibility study, you can test the process in advance of the pilot and definitive trial.

One decision you need to make is the size of blocks to be used for the randomisation process. Using pure uncontrolled randomisation can result in very differently sized groups at the end of the study. Block randomisation is used to ensure that the numbers in both arms remain reasonably similar, e.g. within a block of eight there would be four allocations to the intervention arm and four to the control, but the order would be unknown. Therefore, if blocks were being used across a trial, the worst difference we would get between the two groups at the end would be four patients, and that would only occur if the final block was only half used and all of its first four allocations went by chance to the same group.

The smaller the block size the better chance you have of ending up with two similar sized groups at the end but also of the order being predictable to participants. The larger the block size the greater the chance of different group sizes at the end of the trial, particularly if you have a large number of sites all with their own blocks. There is no correct answer to this and therefore, again, this can be tested at the feasibility stage. NB: It is important that those recruiting to the study and involved in the randomisation process are blind to the block size.
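For illustration, a minimal sketch of permuted-block allocation (the function name and the default block size of eight are assumptions for this example):

```python
import random

def block_randomise(n_participants, block_size=8):
    """Allocate participants to two arms using permuted blocks.
    Each block holds equal numbers of 'intervention' and 'control'
    allocations in a random order, so the arms can never differ in
    size by more than half a block."""
    assert block_size % 2 == 0, "block size must be even"
    allocations = []
    while len(allocations) < n_participants:
        block = (["intervention"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        random.shuffle(block)
        allocations.extend(block)
    return allocations[:n_participants]

# 20 participants with blocks of 8: the first 16 are perfectly
# balanced, so the worst-case imbalance comes from the half-used
# final block (at most 4 participants).
alloc = block_randomise(20)
print(alloc.count("intervention"), alloc.count("control"))
```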

It is usual for randomisation to be undertaken automatically via computer. This requires programming with the set block size. Interaction with the system and ability to access it all need testing to ensure that it is practicable and acceptable to research associates.

Time zero

It is important to define time zero, as outcome measures are usually stated at a set time point after it, e.g. quality of life at 6 months (post time zero). It is usually set at the point of randomisation but can be moved depending on when the intervention is planned to start post-randomisation, e.g. if you have to train your healthcare professionals to deliver the intervention and this takes one month, then you can state that time zero will be one month post-randomisation. At the feasibility stage you can test the validity of this assumption, as you may find that you need more or less time than originally planned.

Outcome measures

We have already discussed the process for selecting outcome measures. At the feasibility stage you are testing them for acceptability, practicability and quality. It is helpful to separate measures of process, i.e. those things which you change which are predictors of better clinical outcomes, from clinical and humanistic outcomes. Clinical outcomes are those which are clinically measurable, e.g. blood pressure or HbA1c levels, whilst humanistic outcomes are the consequences of improvements in care which are important to the patient, e.g. satisfaction with care and quality of life.

Data collection

It is important to identify your sources for data collection. Frequently there may be more than one source available for the same data, e.g. both medical practice and care home records will record visits by the doctor to the resident and prescribed medications. At the feasibility stage, data can be collected from both sources to enable you to identify the most reliable source or to ensure that you have complete data by reconciling the two. You can also determine which is the most efficient from the perspective of the research associates.

During the data collection process, it is necessary to ask the individuals collecting the data to record how long data collection takes and to make notes regarding the process.

The response rate for any data collection tools completed by third parties, i.e. patients, carers or healthcare providers, should also be recorded.

Health Economics

At the feasibility stage, health economics is about ensuring that all costs associated with the intervention are identified and can be measured and valued. Exactly what is required from the definitive trial will be decided by the agreed approach to demonstrating the value of the intervention. Consequently, this section should be written by the health economist within the research team.

Data analysis

The objectives which can be met by your quantitative data analysis are:

  • Estimate participant population size, recruitment and retention rates
  • Describe suitability of data collection processes and outcome measures for inclusion in definitive trial
  • Describe the process for economic evaluation of the intervention

For the first objective you would state that you would analyse your data descriptively to provide the following numbers of patients:

  • identified as potentially suitable
  • excluded due to exclusion criteria
  • approached for consent, and reasons for not approaching patients (these may include ‘left the ward for an operation’ or ‘discharged before being approached’)
  • who consented to participate in the trial
  • who were retained in the trial at the end
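These counts then yield the recruitment and retention rates for the first objective. A sketch with invented figures (all numbers below are hypothetical):

```python
# Hypothetical feasibility study recruitment flow (all figures invented)
identified = 250   # patients identified as potentially suitable
excluded = 40      # met an exclusion criterion
approached = 180   # approached for consent
consented = 90     # consented to participate in the trial
retained = 72      # retained in the trial at the end

recruitment_rate = consented / approached
retention_rate = retained / consented

print(f"Recruitment rate: {recruitment_rate:.0%}")  # Recruitment rate: 50%
print(f"Retention rate: {retention_rate:.0%}")      # Retention rate: 80%
```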

To describe the suitability of data collection processes you would report the time taken for data collection for each outcome and summarise any field notes regarding completion, plus response rates for any third-party data collection forms.

To describe the suitability of outcome measures you would report the amount of data received for each outcome measure and the quality of the data received e.g. for proxy quality of life you report that 5% of carers did not complete at least one of the five domains meaning that the total score would be deemed to be missing.

Where any data are likely to be missing, discussion with your health economist or statistician is required regarding processes for imputation of missing data and their potential appropriateness.

If your feasibility study included randomisation then you will want to provide a demographics table for visual comparison of the two groups to see whether any stratification (change to the process to ensure that two groups are more likely to be equally represented in both arms) may be required or whether the block randomisation resulted in relatively equal group sizes.

As a feasibility study you would additionally state that no statistical analysis, beyond descriptive statistics, would be undertaken as the study is not designed to estimate effectiveness.

To describe the process for economic evaluation of the intervention you could report the costs identified for inclusion plus the proposed processes for measurement and valuation, providing evidence from the data within your feasibility study to justify these.

Process evaluation

The purpose of a process evaluation in a definitive trial is to understand the context within which the intervention was delivered, describe how it was delivered and understand how it did and didn’t work.

Whilst a feasibility study can provide some insight into these (which are discussed in detail in the next section), its main purpose is to inform and test the proposed plans for process evaluation. The results are then used to finalise the process evaluation protocol.

The objectives related to the process evaluation within a feasibility study are therefore:

  • Describe processes for enhancement of intervention fidelity
  • Describe intervention and research process acceptability and practicability
  • Identify potential for contamination or reactivity bias within the control arm
  • Define the process evaluation

To test the processes for enhancing intervention fidelity you should plan to obtain feedback from the individuals delivering the intervention and from those receiving it. You can plan to observe the process at the feasibility stage for these purposes but may not want to repeat this in the main trial due to concerns regarding reactivity bias.

Training is one of the most common approaches used to enhance fidelity. Feedback on the training immediately post completion can provide insight into how the session can be further enhanced. What is most important, however, is how well the training translates into intervention delivery. Consequently, observing a sample of events at the feasibility stage can provide better insight into which elements worked and which require further enhancement or focus. You can also survey service recipients for feedback on the service to identify which elements of the intervention they received and how effective they believed the intervention was. Qualitative work, interviews or focus groups, with a smaller number of service recipients will provide even greater insight. It is important, however, that service recipients are not asked these questions by those who delivered the service, as this is likely to result in social desirability bias, where respondents answer in the way they believe the questioner wants rather than honestly.

Surveys and qualitative research methods can also be used to obtain feedback on both intervention and research process acceptability and practicability.

If you have included randomisation, contamination within the control arm can be explored through qualitative work and by measuring changes in process measures before and after study completion, i.e. you may notice that the proportion of medication errors in the control arm reduced dramatically by the end of the study compared to the start and therefore may want to explore why this occurred.

Reactivity bias i.e. changes in behaviour in response to being observed (See glossary of terms step 1.3 for Hawthorne and John Henry effects), can be explored qualitatively with both providers and recipients, so that approaches to minimise this in the main trial can be introduced.

Within plans for a process evaluation, quantitative measures of process will be recorded. These can be:

  • Activities undertaken by the service provider
  • Changes made to care/treatment
  • Changes in patient behaviour
  • Changes made to systems

These things are predictors of improved clinical outcomes and therefore changes in them can help us to explain any changes in clinical or humanistic outcomes.

If we consider the medication review service in care homes, we could describe:

  • How often the service provider visited the home and for how long
  • What changes to medication were made as a result of the medication review
  • What changes to monitoring were made as a result of the medication review
  • What information was provided to the care home residents and staff
  • What changes were made to recording and ordering systems
  • Which medicines residents were prescribed at the start and end of the study

If we collect this information for each service provider, we can combine it with clinical and humanistic outcome data to help us understand why the intervention did and didn’t work. At the feasibility stage, however, we are testing our ability to collect these measures and ascertaining their appropriateness and helpfulness in understanding the process.

Progression criteria

If your funder has promised funding for the next stage(s) of the research i.e. pilot and/or definitive trial, then at the end of your protocol it is appropriate to state your progression criteria i.e. the standards you need to meet to be able to move forward to a definitive trial. As explained earlier this will be based on green, amber and red criteria. You can find the table: Examples of progression criteria attached as a PDF below.

The boundaries are for your team to decide and the funder to agree. When considering progression boundaries, it is important to recognise that they accumulate, i.e. a low recruitment rate combined with a high attrition rate will result in failure to achieve the target population size. Similarly, a full dose being delivered to only 80% of patients, combined with concerns regarding effectiveness from 20% of patients (who may not be the same patients who received a sub-optimal dose), can result in potentially only 60% of patients receiving the fully effective intervention.
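The worst-case arithmetic described above can be checked with a quick calculation (assuming the two shortfalls do not overlap):

```python
# Worst-case accumulation of progression shortfalls (illustrative)
full_dose = 0.80   # 80% of patients received the full dose
concerns = 0.20    # 20% of patients raised effectiveness concerns
# If the two groups do not overlap, the shortfalls add up:
fully_effective = 1 - (1 - full_dose) - concerns
print(f"{fully_effective:.0%}")  # 60%
```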

You don’t need to cover all criteria for progression. Recruitment rate, attrition rates and identification of a primary outcome measure are probably the minimum you can get away with. If your reach is less than 80% then the generalisability of your intervention falls into question.


Section 3: Summary

Feasibility studies enable researchers to develop processes and test many assumptions underpinning their plans for a definitive trial. Funders are frequently happy to fund feasibility studies as they significantly increase the chances of the final trial being effective and are a relatively small investment in resources given the potential cost of failure from a definitive trial.

Feasibility studies have very bespoke objectives which are different to those of either a pilot or definitive study. They are designed for reflective researchers who want to learn from mistakes and test ideas and different options in a safe manner i.e. without adversely affecting large numbers of patients, healthcare professionals or research team members.

Due to their relatively small size and the testing of a number of different elements, a pilot study is usually subsequently required to confirm that the final research design holds together and is likely to deliver as anticipated.

© Professor David Wright University of Bergen
This article is from the free online course Developing and Testing Complex Healthcare Interventions, created by FutureLearn.
