Which design is best suited to assess the effectiveness of an intervention?
Single-case experimental designs (SCEDs) are experimental designs that aim to test the effect of an intervention in a small number of patients (typically one to three), using repeated measurements, sequential (± randomized) introduction of the intervention, and method-specific data analysis, including visual analysis and specific statistics. The aim of this paper is to familiarise professionals working in different fields of rehabilitation with SCEDs and to provide practical advice on how to design and implement a SCED in clinical rehabilitation practice. Research questions suitable for SCEDs and the different types of SCEDs (e.g., alternating treatment designs, introduction/withdrawal designs and multiple baseline designs) are reviewed. Practical steps in preparing a SCED are outlined. Examples from different rehabilitation domains are provided throughout the paper. Challenging issues such as the choice of the repeated measure, assessment of generalisation, randomization, procedural fidelity, replication and generalizability of findings are discussed. Simple rules and resources for data analysis are presented. The utility of SCEDs in physical and rehabilitation medicine (PRM) is discussed.
Healthcare evaluation is the critical assessment, through rigorous processes, of an aspect of healthcare to determine whether it fulfils its objectives. A wide range of aspects of healthcare can be assessed in this way.
Healthcare evaluation can be carried out during a healthcare intervention, so that findings of the evaluation inform the ongoing programme (known as formative evaluation) or can be carried out at the end of a programme (known as summative evaluation). Evaluation can be undertaken prospectively or retrospectively. Evaluating on a prospective basis has the advantage of ensuring that data collection can be adequately planned and hence be specific to the question posed (as opposed to retrospective data dredging for proxy indicators) as well as being more likely to be complete. Prospective evaluation processes can be built in as an intrinsic part of a service or project (usually ensuring that systems are designed to support the ongoing process of review). There are several eponymous frameworks for undertaking healthcare evaluation. These are set out in detail in the Healthcare Evaluation frameworks section of this website and different frameworks are best used for evaluating differing aspects of healthcare as set out above. The steps involved in designing an evaluation are described below.
Steps in designing an evaluation
Firstly, it is important to give thought to the purpose of the evaluation, the audience for the results, and the potential impact of the findings. This can help guide which dimensions are to be evaluated – inputs, process, outputs, outcomes, efficiency, etc. Which of these components will give context to and help answer the question of interest, and be useful to the key audience of the evaluation? Objectives for the evaluation itself should then be set, following the SMART format: Specific (e.g. to effectiveness, efficiency, acceptability or equity), Measurable, Achievable, Relevant and Time-bound. Having identified what the evaluation is attempting to achieve, the following three steps should be considered:
1. What study design should be used?
When considering study design, several factors must be taken into account.
Study designs include:
a) Randomised methods
b) Non randomised methods
c) Ecological studies
d) Descriptive studies
e) Health technology assessment
f) Qualitative studies
2. What measures should be used?
The choice of measure will depend on the study design (or evaluation framework) used, as well as on the objectives of the evaluation. For example, the Donabedian approach considers quality of care in terms of structure, process and outcome, which can be mapped onto a programme's inputs, processes, outputs and outcomes.
The table below gives some further examples of measures that can be used for each aspect of the evaluation. Such an evaluation could measure process against outcomes, inputs versus outputs or any combination.
3. How and when to collect data?
The choice of qualitative versus quantitative data collection will influence the timing of such collection, as will the choice between carrying out the evaluation prospectively or retrospectively. The amount of data that needs to be collected will also affect timing, and sample-size calculations at the beginning of the evaluation will be an important part of planning. For qualitative studies, the sample must be large enough that enlarging it is unlikely to yield additional insights (data saturation) – e.g. undertaking another interview with a member of staff is unlikely to identify any new themes. Most qualitative approaches would, in practice, ensure that all relevant staff groups were sampled. For quantitative studies, the required sample size must be calculated, typically using statistical software packages such as Stata.
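As a minimal illustration of such a calculation (sketched here in Python rather than Stata, with illustrative example proportions), the standard normal-approximation formula gives the sample size per group needed to compare two proportions:

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided comparison of two
    proportions, using the standard normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Example: to detect a fall in an event rate from 10% to 5%
# with 80% power at the 5% significance level:
print(n_per_group(0.10, 0.05))  # 432 patients per group
```

Note how demanding higher power or detecting a smaller difference both increase the required sample size, which is why these inputs need to be fixed at the planning stage.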
If the evaluation is of a longitudinal design, the follow-up time is important to consider, although in some instances it may be dictated by the availability of data. There may also be measures which are typically reported over defined lengths of time, such as readmission rates, which are often measured at 7 days and 30 days.
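To make such windowed measures concrete, a small sketch (using hypothetical discharge records) of computing 7- and 30-day readmission rates:

```python
from datetime import date

# Hypothetical records: (discharge date, date of next admission or None)
records = [
    (date(2024, 1, 10), date(2024, 1, 15)),  # readmitted after 5 days
    (date(2024, 1, 12), date(2024, 2, 20)),  # readmitted after 39 days
    (date(2024, 1, 20), None),               # not readmitted
]

def readmission_rate(records, window_days: int) -> float:
    """Proportion of discharges followed by a readmission within the window."""
    readmitted = sum(
        1 for discharge, readmit in records
        if readmit is not None and (readmit - discharge).days <= window_days
    )
    return readmitted / len(records)

print(readmission_rate(records, 7))   # 1 of 3 readmitted within 7 days
print(readmission_rate(records, 30))  # still 1 of 3: the 39-day readmission falls outside
```

The choice of window changes which events count, so the reporting interval must be fixed before data collection begins.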
Trends in health services evaluation
Evaluation from the patient perspective has increasingly become an established part of working in the health service. Assessment of service user opinion can include results from surveys and external assessment (such as NHS patient experience surveys led by the CQC), as well as outcomes reported by patients themselves (patient reported outcome measures), which from April 2009 have been a mandatory part of commissioners' service contracts with provider organisations and are currently collected for four clinical procedures: hip replacements, knee replacements, groin hernia and varicose vein procedures.
What is the best design to study the effectiveness of an intervention?
Traditional study designs such as randomized controlled trials (RCTs) can be ideal for testing the efficacy or effectiveness of interventions, given their ability to maximize internal validity.
What type of study is strongest for testing effectiveness of interventions?
A true experiment or randomized controlled trial (RCT) is the strongest type of intervention study for testing cause-and-effect relationships.
How would you evaluate if an intervention is successful?
Once you have implemented a planned intervention, you can look at ways to evaluate its success. Evaluation relies on knowing the outcomes and goals of a project and testing them against results. Effective evaluation comes from measurable data and clear objectives.
What is the research design of effectiveness?
The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby ensuring you will effectively address the research problem; it constitutes the blueprint for the collection, measurement and analysis of data.