Notes
Article history
This report is the 9th essay of the HS&DR report “Challenges, solutions and future directions in the evaluation of service innovations in health care and public health” https://www.journalslibrary.nihr.ac.uk/hsdr/hsdr04160/#/abstract. The first version of the report was the output from a meeting in London in June 2015 and the subsequent management of the proceedings funded by The Health Foundation, The Medical Research Council, the National Institute for Health Research and Universities UK. It began editorial review in November 2015 and was accepted for publication in March 2016. The additional chapter is the result of a roundtable meeting in February 2017 funded by the National Institute for Health Research and the Medical Research Council. The authors of the whole report have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
This report presents independent work and the views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health.
Declared competing interests of authors
Matt Sutton reports grants from the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Greater Manchester and is deputy chairperson of the NIHR Health Services and Delivery Research (HSDR) Board. He is also a NIHR senior investigator. Graham Martin is part funded/supported by the NIHR CLAHRC East Midlands and is a member of the NIHR HSDR Evidence Synthesis Sub Board and the NIHR Health Technology Assessment (HTA) programme National Stakeholder Advisory Group. Stephen Morris is part funded/supported by the NIHR CLAHRC North Thames at Bart’s Health NHS Trust and is a member of the NIHR HSDR Board and the NIHR HSDR Evidence Synthesis Sub Board. Samuel I Watson and Richard J Lilford are both part funded/supported by the NIHR CLAHRC West Midlands. Mark Sculpher is a NIHR emeritus senior investigator and a member of the Medical Research Council Methodology Research Programme Advisory Group. Steph Garfield-Birkbeck is an assistant director at the NIHR Evaluation, Trials and Studies Coordinating Centre at the University of Southampton and is a member of the NIHR HSDR programme and HTA programme secretariats. Rachel Meacock is an associate member of the NIHR HSDR Board. Andrew Street served as a member of the NIHR HSDR board until January 2017.
Disclaimer
This publication is linked to a series of essays published by the National Institute for Health Research Journals Library in 2016 [Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res 2016;4(16)].
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2018. This work was produced by Sutton et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Foreword
In 2016, the National Institute for Health Research (NIHR) Journals Library published a series of essays, in which a range of UK and international experts in health services research identified current developments and future challenges in methods to evaluate health, social care and public health innovations. It was recognised that this volume did not comprehensively address the full array of methods. One such gap was the economic evaluation of service innovations. A meeting of recognised experts supported by NIHR Collaborations for Leadership in Applied Health Research and Care, NIHR Health Services and Delivery Research and the Medical Research Council was held in 2017 to address this area. Matt Sutton and colleagues draw together here the key insights from that meeting. There are distinctive methodological challenges in the economic evaluation of service innovations, not least that randomised controlled trials are not usually feasible, and costs and benefits are diffuse and variable across the system. The authors provide a clear and authoritative summary of solutions identified to date and an agenda of future challenges.
Ray Fitzpatrick
Professor of Public Health and Primary Care
University of Oxford
Oxford
UK.
Essay 9
Introduction
Some of the most important decisions that are made in health care are those concerning how services should be configured and delivered. These decisions include how resources should be distributed between areas, populations, programmes of care and settings; the location and accessibility of services; payment systems and incentives; the size, composition and skills of the workforce; methods to improve compliance with safety and effectiveness guidelines; and service specialisation, co-ordination and integration. Collectively, these decisions constitute ‘service and delivery interventions’. For brevity, we refer to these as ‘service interventions’ for the remainder of this essay.
Recommended methods for the economic analysis of service interventions are less well articulated than those for other types of interventions, and there are no comprehensive guidelines. The National Institute for Health and Care Excellence (NICE) produced an interim methods guide for developing service guidance in 2014,1 which provided recommendations on how services should be organised around clinical interventions that have been deemed clinically effective and cost-effective. Existing Medical Research Council (MRC) guidance on complex interventions2 and natural experiments3 is primarily focused on evaluating effectiveness and pays relatively little attention to economic issues. Other guides to economic appraisal are focused on high-level policy evaluations.4,5
Several essays within this collection discuss pertinent issues for the economic analysis of service interventions. Watson and Lilford (Essay 1 from Raine et al.6) explain how multiple forms of evidence along proposed causal pathways can be synthesised to link service interventions to outcomes. Barratt et al. (Essay 2 from Raine et al.6) highlight how trial methods have been developed to facilitate evaluation of complex interventions and large-scale transformations of services. Gillies et al. (Essay 3 from Raine et al.6) describe the battery of methods that can be used to model causal effects and address bias in observational data, many of which emanate from econometrics.
In this essay, we focus on additional opportunities and challenges specifically for the economic analysis of service interventions. We begin by highlighting the distinctive issues involved in the economic analysis of service interventions. Many of these issues are also germane to economic analysis of diagnostics, clinical interventions and public health initiatives, but loom larger for service interventions. We then describe some challenges that these distinctive issues pose for economic analysis. Following this, using a range of examples, we highlight recent methodological developments in the economic analysis of service interventions. We conclude by identifying the key challenges and priorities for future research.
Scope and role for economic analysis
Health economics covers a wide range of topics. A recent classification of topics in health economics is provided by Wagstaff and Culyer,7 based on an analysis of the main content of four decades of health economics papers (Box 1). These include papers focusing primarily on the measurement and valuation of health, methods for the economic evaluation of interventions, defining and measuring efficiency and equity, demand for health care, supply of health services, human resources and equilibrating mechanisms such as market mechanisms and waiting times. Many papers, of course, consider more than one topic. In tackling each of these topics, health economists have developed a broad set of techniques that could be drawn on for the economic analysis of service interventions.
Box 1 Topics in health economics (adapted from Wagstaff and Culyer7)

- Health and its value.
- Efficiency and equity.
- Determinants of health and ill-health.
- Public health.
- Health and the economy.
- Health statistics and econometrics.
- Demand for health and health care.
- Medical insurance.
- Supply of health services.
- Human resources.
- Markets in health care.
- Economic evaluation.
Overall, the purpose of such analyses is to guide decisions regarding efficient and fair resource allocation to achieve social objectives, including enhancing overall population health and its distribution. Based on the MRC framework for complex interventions,3 we can envisage four stages at which economic analysis can support decision-making. These are the:
- design stage
- implementation stage
- evaluation stage
- translation stage.
These stages will not necessarily be sequential and will often interact. In some instances, the evaluation stage may be very broad and involve implementation and translation elements, alongside the more standard evaluation elements of effects estimation, evidence synthesis, assessment of trade-offs and analysis of uncertainty.
At the design stage, economics is one of the disciplines that can contribute to the development of service interventions. There are, for example, interventions that derive primarily from economic theory, such as the form of health care financing8 and the design of financial incentives for care providers.9 Economic analysis can contribute to the setting of prices in pay-for-performance systems.10 Financing and payment design requires an understanding of the motivations of agents in the health-care system and how these agents will respond to incentives and constraints. Payment design also requires an understanding of the production process in health care, specifically which inputs are required, in what combinations and at what scale, to produce desired outcomes. To put it more broadly, a deeper understanding of the production process and the behaviour of agents in the care system should feed into the design of service and delivery interventions.
At the implementation stage, the focus is on how the intervention will be introduced initially. Economic analysis can contribute to clarifying the expected costs and benefits of the chosen intervention, identifying where the key evidence gaps are and where prospective measurement should focus, and evaluating the cost-effectiveness of different implementation strategies.11,12 At this stage, the ‘headroom method’13 of economic analysis can be used to calculate the potential value of an intervention.14 In addition, there are other approaches to providing an ex ante evaluation of the option value of a proposed intervention.
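To make the headroom logic concrete: the method caps the justifiable cost of a proposed intervention by valuing the most optimistic plausible health gain at the willingness-to-pay threshold. The following is a minimal sketch of that arithmetic; the threshold, QALY gain and downstream saving are invented for illustration and are not drawn from the studies cited.

```python
# Minimal headroom calculation (all figures hypothetical).
threshold = 20_000        # willingness to pay per QALY (GBP), assumed
max_qaly_gain = 0.05      # optimistic plausible QALY gain per patient, assumed
downstream_saving = 150   # assumed reduction in subsequent care costs per patient

# Headroom: the most that could be spent per patient on developing and
# delivering the intervention while remaining potentially cost-effective.
headroom = threshold * max_qaly_gain + downstream_saving
print(f"Maximum justifiable cost per patient: £{headroom:,.0f}")  # £1,150
# If the intervention could not plausibly be delivered below this figure,
# further development may not be worthwhile.
```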
At the evaluation stage, economic analysis can contribute to the specification and weighting of the elements that will determine the overall cost-effectiveness of the intervention and to the estimation of the cost and benefit consequences, and how these compare with opportunity costs (benefits that could be achieved through alternative uses of the same resources). This may involve translating impacts on intermediate end points into health gains [such as quality-adjusted life-years (QALYs)], incorporating other relevant consequences (e.g. for equity) and identifying and measuring cost consequences. It will also involve a formal assessment of uncertainty in the evidence and implications for decisions and the value of further research, to inform considerations of scaling up and rolling out.
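The formal assessment of uncertainty described here is commonly operationalised by Monte Carlo simulation over the model parameters. A minimal sketch follows, assuming purely illustrative parameter distributions; it computes the probability that the intervention is cost-effective at a given threshold.

```python
# Illustrative probabilistic assessment of decision uncertainty
# (parameter distributions are hypothetical).
import numpy as np

rng = np.random.default_rng(seed=1)
n_draws = 10_000
threshold = 20_000                               # GBP per QALY, assumed

qaly_gain = rng.normal(0.03, 0.01, n_draws)      # uncertain QALY gain per patient
incr_cost = rng.gamma(4.0, 100.0, n_draws)       # uncertain incremental cost (mean £400)

net_benefit = threshold * qaly_gain - incr_cost  # net monetary benefit per draw
print(f"Mean NMB: £{net_benefit.mean():,.0f}")
print(f"P(cost-effective): {(net_benefit > 0).mean():.2f}")
```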
At the translation stage, the focus is on the implications of study results identified in one context for service design in another context. The concern is with wider roll-out and embedding the intervention in other health systems. This will make use of evidence to assist in this extrapolation, such as baseline data and relative risks in the original and future contexts. It will also make use of information on the use of inputs in natural units, rather than resources valued at context-specific unit costs. More broadly, the purpose of economic analysis at this stage is to inform considerations of how the service intervention will affect, and be affected by, the wider care system in which it will be introduced when adopted by other sites or at different times. This will include considering resource constraints in practice, unintended consequences, spillovers onto other services and people, and the effects of changes in the use of inputs in other contexts.
Distinctive issues in the economic analysis of service interventions
It is tempting to envisage a spectrum based on the level and scale of intervention from clinical interventions to service interventions to policy interventions. This may suggest that the well-developed methodological guidance on economic evaluation for clinical interventions could be carefully adapted for service interventions. Economic analyses within clinical studies, for example, have included many of the features pertinent to service interventions, including analysing the determinants of costs and outcomes using regression analyses, eliciting preferences using economic approaches (such as discrete choice experiments) and investigating economic issues affecting implementation.
Several papers have considered the extent to which economic evaluations of different forms of intervention differ from a ‘typical Health Technology Assessment (HTA)’. These include public health,15 social care,16 antimicrobials,17 diagnostics,18 medical devices19 and genetics.20 Together, these papers highlight that there is no typical HTA and, instead, there is a spectrum of challenges facing any form of evaluation. No challenge is unique to service interventions and there are lessons that can be learned across the spectrum of challenges facing analysts focusing on different areas.
This highlights that there is no clear demarcation between clinical interventions, service interventions and policy interventions. But there are differences in emphasis and in the degree to which particular challenges are salient and have to be dealt with. A number of features appear distinctive for service interventions (Box 2) and affect the focus of the analysis required. Lilford et al.21 emphasise that, although some service interventions are focused on specific processes, more generic service interventions and policy interventions have the potential to have an impact on several processes and hence exhibit more diffuse effects across multiple outcomes. Watson and Lilford (Essay 1 from Raine et al.6) show how such causal chains can be modelled.
Box 2 Distinctive features of service interventions

- Intervention is more distal to the patient and acts through multiple processes.
- Multiple, small effects across a wide range of patient outcomes.
- Demand-side effects.
- Supply-side effects.
- Spillovers.
- Equilibrium influences.
- Heterogeneity between organisations in implementation and effectiveness.
- Impact depends on context.
- Impact varies over time.
- Fast-paced nature of decision-making and changing wider context.
These more generic service interventions are likely to have multiple effects on large patient populations; each effect may be small, but together they can aggregate to substantial impacts. The difficulties of detecting and measuring multiple small effects pose even more of a challenge for service interventions that affect multiple providers, such as network or system interventions. The difficulty of detecting and measuring effects for large populations often makes primary data collection prohibitively costly. As a result, there is often a reliance on observational data sets, with the concomitant challenges of attribution and causality and a need to synthesise multiple sources of evidence.
The consequences of service interventions at the level of the health-care provider are often more substantial than for clinical interventions. The associated changes in costs are therefore more complex. It is more frequently noted in the case of service interventions that they may free up resources for other use but do not reduce expenditure. This is exactly the rationale for considering opportunity costs in an economic evaluation rather than financial implications. Nonetheless, the usual assumption of using cost-weighted utilisation as a means of estimating relevant opportunity costs may be too simplistic for some service interventions. Average unit costs may not be accurate proxies for the implications of non-marginal changes in resource utilisation.
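A toy example may clarify why average unit costs can mislead when changes are non-marginal. Suppose, purely hypothetically, that a ward's costs are partly fixed; valuing freed-up bed-days at the average unit cost then overstates the resources actually released.

```python
# Hypothetical illustration: average unit cost vs. the resources actually
# released by a non-marginal reduction in utilisation.
fixed_cost = 500_000       # annual fixed cost of running the ward (assumed)
variable_cost = 200        # variable cost per bed-day (assumed)
bed_days = 5_000           # current annual utilisation (assumed)

average_unit_cost = (fixed_cost + variable_cost * bed_days) / bed_days  # £300

freed_bed_days = 1_000     # bed-days freed by the service intervention
naive_saving = average_unit_cost * freed_bed_days  # £300,000 (cost-weighted utilisation)
true_saving = variable_cost * freed_bed_days       # £200,000 if fixed costs are unaffected
print(naive_saving, true_saving)
```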
Any form of intervention is likely to have wider system impacts. They may divert resources away from other patients and/or free up resources for potential use by other patients. Beyond these direct effects on costs, interventions may affect costs and benefits indirectly. They may generate spillovers onto other people or other interventions if the tasks involved are substitutes or complements. They may also influence both the demand side and the supply side of the production process. On the demand side, it is necessary to understand patient preferences and how current and potential patients may respond to reconfigured services. On the supply side, it is necessary to consider the capacity of the system in terms of the availability of the required inputs. If it takes time for staff to be recruited or retained, there may be a period in which demand exceeds supply and a lag before the system comes to an equilibrium. We know relatively little about how different levels of labour input affect the capacity of the system in general. Moreover, the effects on some inputs, especially labour, are behavioural. Little is known about how staff will respond to increases or decreases in the demand for their input. There is a substantial literature on the interaction between supply and demand, often known as ‘supplier-induced demand’.22
A further set of distinctive features of service interventions relates to the heterogeneity of implementation, context and impact between places and over time. Heterogeneity exists in the implementation and delivery of the intervention, in what sort of care the intervention is designed to replace and in the context into which the service innovation is introduced. Although these heterogeneity issues are also relevant for clinical interventions, the narrower scope, tighter protocols and eligibility criteria for recruitment limit their pertinence. Service interventions are not always clearly defined, they often evolve over time and they may not use strict criteria to define their target population. Consequently, the extent of variation between organisations in how they implement service interventions is more substantial than for clinical interventions. Context is more important for service interventions as this affects impact. This may lead health-care organisations to negotiate a range of variations and prices with providers for a given type of intervention, as was found in a recent study of electronic prescribing systems.23 The changing environment in which service interventions take place, the dynamic nature of service interventions and the less formalised decision-making process for service interventions are also distinctive features. The other essays in this collection (especially Essays 6–8 from Raine et al.6) have also highlighted these issues.
Challenges for economic analysis of service interventions
Although randomised controlled trial (RCT) designs have been developed to evaluate complex interventions and large-scale transformations of services (Essay 2 from Raine et al.6), economic analyses of service interventions have tended to rely on non-experimental designs and observational data for practical reasons of implementation and prohibitively high data collection costs. Gillies et al. (Essay 3 from Raine et al.6) emphasise the challenges involved, including the requirements for risk adjustment and matching in comparative evaluations. Nonetheless, these approaches based on observational data have advantages of generalisability, more accurate reflection of routine practice and more comprehensive coverage than many RCTs. It is likely that combinations of experimental and observational data will be the most informative.24 Irrespective of study design, it is important to seek a comprehensive understanding of how an intervention works. Theoretical understanding can aid judgements concerning both the internal validity of findings and their applicability to other contexts.21,25 In order to acquire or enhance such theoretical understanding, it is desirable to collect information across a causal chain linking an ‘upstream’ intervention to its effect at the patient level ‘downstream’ (Essay 1 from Raine et al.6).
The issue of how to allocate scarce resources and the notion of opportunity cost are common to all forms of intervention, so economic evaluation should be just as fundamental to analyses of service interventions. The fact that studies of clinical interventions are more often undertaken using study designs with greater internal validity (e.g. multicentre RCTs) has enabled economists to focus on the comparison of costs and benefits. Because evaluations of service interventions have tended to involve more challenging non-experimental designs, economists have tended to focus on attribution and causality and have made important contributions to the robust estimation of impact. This is a matter of custom and practice, but also of capacity. Nonetheless, the questions of how service changes affect costs and patient benefits remain important and should be given more attention.
Evaluations of service delivery interventions often take place in a context in which the service as a whole is taking measures to improve the relevant aspect of care. The result is a secular trend or ‘rising tide’ that might obscure the effects of an intervention in an evaluative study.26 Promising interventions that produced null results in such circumstances include the Safer Patient Initiative in the UK,27,28 the ‘Matching Michigan’ study to reduce bloodstream infections29 and the Medical Early Response, Intervention and Therapy (MERIT) study to improve recognition of deteriorating patients on the ward.30 In all of these cases, there was evidence that the system as a whole was improving alongside the introduction of the service intervention.
Service interventions frequently have consequences for the demand side of the health-care system. Patient preferences, knowledge and constraints affect their choices and their behaviour. Service changes can affect what is offered to patients and the costs they incur in accessing care. These can have knock-on consequences for their use of services, for their families and carers31 and for their health behaviours.32–34 All of these considerations contribute to the impact and the costs of service interventions and affect the generalisability of evidence from one setting to another.
In considering the total cost of the intervention, it is important to distinguish the set-up costs, which are incurred just once and may include research-only costs, from the running costs that would be expected in perpetuity. Many service interventions, especially at the organisational level, incur high upfront or set-up costs. These costs pose challenges for evaluation because they would ideally be shared over all users, which means making decisions about the lifetime of the intervention and the number of patients affected over the long term. They also pose a challenge for implementation because, even if the average cost per patient is acceptable, commissioners or providers might not have the resources to meet the high upfront costs.
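One conventional way to share set-up costs over the lifetime of an intervention is to annuitise them at the discount rate and divide by annual throughput. A sketch of this calculation follows; the cost figures, discount rate and assumed lifetime are all hypothetical.

```python
# Annuitising a one-off set-up cost over an assumed intervention lifetime
# (all figures hypothetical).
setup_cost = 250_000       # one-off set-up cost (assumed)
running_cost = 40_000      # annual running cost (assumed)
patients_per_year = 2_000  # assumed annual throughput
r, lifetime = 0.035, 10    # discount rate and assumed lifetime in years

annuity_factor = (1 - (1 + r) ** -lifetime) / r
equivalent_annual_setup = setup_cost / annuity_factor
cost_per_patient = (equivalent_annual_setup + running_cost) / patients_per_year
print(f"Equivalent cost per patient: £{cost_per_patient:.2f}")  # ~£35
```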
There are also additional cost issues. The cost consequences may differ substantially between organisations, for example because different amounts of time from different types of staff may be devoted to the intervention. We may be interested in the impact that the scale of implementation has on costs. Finally, there are likely to be cost consequences across organisations (e.g. from secondary to primary care) and there may be differences between these organisations in their capacity to absorb additional costs.
Studies of service interventions often address questions other than whether or not the intervention is better than usual care. In many cases, the focus is on whether an evaluation of a service intervention will provide generalisable evidence about whether a change to inputs or organisation causes changes in delivery and/or outcomes. Thus, there is less demand for evidence on whether the intervention ‘worked’ and more for the identification of causal relationships that can inform the design and implementation of future interventions, as policy and practice will have moved on. This is in part because there is no requirement to prove cost-effectiveness prior to adoption, and less potential for roll-out of the same service in all contexts. Therefore, the focus is not on informing a discrete decision and providing evidence to support wider adoption, but on furthering understanding of the care system to inform future service changes.
As a consequence, there are two overall purposes for economic analysis of service interventions. As with other forms of intervention, economic analysis has an important role to play in providing a guide to decision-making by comparing costs and benefits and assessing relevant opportunity costs. But, in the case of service interventions, economic analysis also provides descriptive analysis alongside decision-making to help to understand processes rather than to evaluate decisions.
Recent developments
In this section we highlight some recent examples of methodological developments in the field of economic analyses of service interventions.
Ex ante modelling of expected costs and benefits is an important aid to decision-making,13 the design of service interventions and the design of future evaluations. Impact assessments were, for a period, routinely undertaken by the Department of Health.35 They were a useful part of the policy formation process but are, regrettably, no longer required. The analysis by Meacock et al.36 of the policy of introducing 7-day hospital services, Brown and Lilford’s37 evaluation of a government directive to wash hospital wards, and the evaluation by Yao et al.14 of a proposed service to improve handover of patients between hospital and home are examples of this approach. Such ex ante assessments should be produced more routinely for proposed service interventions.
Broadly speaking, there are two approaches to producing summative assessments of the costs and benefits of entire programmes. The first involves direct estimation of the summative impacts through exploitation of some experiment or other source of variation in implementation. The second involves modelling the causal chain through component processes to derive an aggregate measure where direct estimation is not feasible.
A recent example of the first, direct approach was undertaken for the reconfiguration of stroke services. Morris et al.38 used observational data and difference-in-differences techniques to examine the mortality and length-of-stay changes associated with reconfiguration of stroke services in London and Manchester. Hunter et al.39 constructed decision-analytic models using data from population-based stroke registers, audits and published sources to show that the service intervention reduced mortality at a reduced cost per patient, predominantly as a result of reduced hospital length of stay. Meacock et al.40 showed how a similar approach could be taken for the Advancing Quality pay-for-performance programme41 adopted in the North West of England. They translated the mortality reductions identified by Sutton et al.41 into gains in QALYs using a discounted and quality-adjusted life-expectancy tariff, and compared these with the costs to reach conclusions on the scheme’s cost-effectiveness. This was later developed with survival analysis to obtain more accurate estimates of the QALY gains.42
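The logic of the difference-in-differences estimator used in such studies can be sketched as follows. The panel below is entirely fabricated and the variable names are illustrative, not the specification used by Morris et al.38; the point is only that the interaction coefficient recovers the change in outcomes in reconfigured areas over and above the secular trend in comparison areas.

```python
# Minimal difference-in-differences sketch (fabricated data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "mortality":    [10.2, 10.4, 9.1, 9.3, 10.0, 9.8, 9.9, 9.7],  # % 30-day mortality
    "reconfigured": [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = area that centralised services
    "post":         [0, 0, 1, 1, 0, 0, 1, 1],  # 1 = period after reconfiguration
})
fit = smf.ols("mortality ~ reconfigured * post", data=df).fit()
# The interaction term is the difference-in-differences estimate.
print(fit.params["reconfigured:post"])  # -1.0 percentage points in this toy example
```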
Watson and Lilford (Essay 1 from Raine et al.6) explain how the second approach can be parameterised. This can involve evidence synthesis and analysis of large observational data sets to derive parameters to be plugged into economic models. As an example, Elliott et al.43 combined adherence improvements and intervention costs from a trial with Markov models for the diseases targeted by the New Medicines Service, tracking the effect of increased adherence on patient outcomes and health-care costs. Bayesian network approaches have also been developed.44 There is potential value in combining the direct and modelling approaches.21 Modelling outcomes of economic interest is attracting growing attention, and lessons will be learned from the ongoing advances in decision-analytic modelling being developed by the International Society for Pharmacoeconomics and Outcomes Research.45
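As a deliberately skeletal illustration of this second approach, the sketch below runs a three-state Markov cohort model of the kind into which trial-based parameters, such as an adherence effect, can be plugged. Every transition probability, utility weight and cost is invented for illustration; this is not the New Medicines Service model.

```python
# Skeletal Markov cohort model (all parameters invented for illustration).
import numpy as np

# States: 0 = stable, 1 = exacerbation, 2 = dead
P = np.array([[0.90, 0.08, 0.02],
              [0.50, 0.40, 0.10],
              [0.00, 0.00, 1.00]])       # annual transition probabilities (assumed)
utility = np.array([0.80, 0.55, 0.00])   # QALY weight per state per year (assumed)
cost = np.array([500.0, 3000.0, 0.0])    # annual cost per state in GBP (assumed)

state = np.array([1.0, 0.0, 0.0])        # whole cohort starts in the stable state
discount, qalys, costs = 0.035, 0.0, 0.0
for year in range(20):
    weight = (1 + discount) ** -year
    qalys += weight * state @ utility
    costs += weight * state @ cost
    state = state @ P                    # advance the cohort one annual cycle
print(round(qalys, 2), round(costs))
# An intervention that improves adherence would enter as a modified
# transition matrix, and the two runs would be compared.
```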
There have also been advances in the measurement of patient and professional preferences that can further our understanding of demand-side and supply-side responses to service interventions. Discrete choice experiments have risen in popularity in a variety of applications,46 including examining patient preferences, such as whether or not convenience matters,47 and professional preferences for practice location48 and other job characteristics.49 There is renewed focus on their external validity.50–52 An alternative is to model revealed preferences when these are available.53 In an innovative combination of data on stated preferences and revealed behaviour, Scott and Sivey54 have shown that a general practitioner’s strength of preference for income is correlated with their response to competition.
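To make the link from estimated preferences to demand-side predictions concrete, the sketch below converts hypothetical discrete choice experiment coefficients into logit choice probabilities and a marginal willingness-to-pay estimate. The coefficients and attribute levels are invented, not taken from the studies cited.

```python
# From hypothetical DCE coefficients to choice predictions and WTP.
import numpy as np

beta = {"wait_weeks": -0.10, "travel_mins": -0.03, "cost": -0.02}  # assumed

def utility(alternative):
    return sum(beta[attr] * level for attr, level in alternative.items())

clinic_a = {"wait_weeks": 2, "travel_mins": 40, "cost": 0}  # short wait, long travel
clinic_b = {"wait_weeks": 6, "travel_mins": 10, "cost": 0}  # long wait, short travel

v = np.array([utility(clinic_a), utility(clinic_b)])
choice_prob = np.exp(v) / np.exp(v).sum()         # logit choice probabilities
wtp_per_week = beta["wait_weeks"] / beta["cost"]  # £5 per week of waiting avoided
print(choice_prob.round(2), wtp_per_week)
```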
Recent work has begun to analyse how variations in historic data on care expenditure and population outcomes can help us to understand the value of resources that may be affected by service interventions. The recent work on the cost-effectiveness threshold by Claxton et al.,55 for example, has produced estimates of opportunity costs based on previous patterns of expenditure across the NHS in England. These supply-side estimates can be contrasted with the demand-side values proposed by NICE. Coupled with the analysis of variations in productivity across organisations by Castelli et al.,56 this has the potential to provide us with organisation-specific estimates of the opportunity costs of additional investments required for service interventions.
Another area in which there have been recent advances is the economics of implementation in health care. This literature is relevant to all interventions, but especially service interventions, which often incur high upfront costs and may not be straightforward to implement, even if shown to be cost-effective. For example, the literature on ‘policy cost-effectiveness’ argues that decision-makers should consider the costs and effectiveness of implementation as well as the cost-effectiveness of the innovation to be implemented.57,58 This was applied to the Quality and Outcomes Framework by Walker et al.59
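The arithmetic of policy cost-effectiveness can be sketched simply: the costs of the implementation activity are added to the costs of the care it induces, and both are set against the health gained through improved uptake. All figures below are hypothetical and are not taken from the cited studies.

```python
# Toy 'policy cost-effectiveness' calculation (all figures hypothetical):
# the implementation activity is appraised jointly with the innovation
# whose uptake it improves.
threshold = 20_000               # GBP per QALY, assumed

implementation_cost = 1_000_000  # cost of the roll-out programme (assumed)
uptake_gain = 0.15               # additional share of eligible patients treated (assumed)
eligible_patients = 50_000       # assumed
qaly_gain = 0.02                 # QALY gain per additionally treated patient (assumed)
treatment_cost = 120             # cost per additionally treated patient (assumed)

extra_patients = uptake_gain * eligible_patients
total_cost = implementation_cost + extra_patients * treatment_cost
policy_icer = total_cost / (extra_patients * qaly_gain)
print(f"Policy ICER: £{policy_icer:,.0f} per QALY")  # compare with the threshold
```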
Future challenges and priorities
In this section we highlight gaps in the current methods and suggest what we need to focus on in future research.
Economic evaluation is one of the key contributions of health economics to decision-making. Methods for the health economic evaluation of medical devices and technologies, including pharmaceutical products, are well developed and integrated into the process of clinical commissioning and decision-making through bodies such as NICE.60 However, economic evaluation has not been incorporated in this way for other important questions concerning health service expenditure, such as staffing policies, primary and secondary care organisation and integration, and patient safety interventions. It is debatable whether guidelines for the economic analysis of service interventions are feasible or appropriate. Regardless, it will not be a simple matter of adapting existing HTA methods to the evaluation of service interventions.
In Box 3, we set out an initial list of questions that analysts should ask when approaching service interventions. These affect the analyses that can be performed and whether or not the nature of the service innovation raises special issues that need to be considered. This list of questions could be developed into a more systematic list of considerations.
Box 3 Initial questions for analysts considering service interventions

- What is known about the causal chain from the intervention to patient outcomes?
- Are there likely to be impacts beyond the patients targeted by the service intervention?
- Is the service intervention likely to have an impact on other work undertaken by the care professionals involved?
- Is the service intervention sufficiently large to affect the structure of costs?
- Will the service intervention have an impact on costs across organisations?
- Does the introduction of the service intervention provide an opportunity to produce generalisable learning on the production process?
- Does the introduction of the service intervention provide an opportunity to produce generalisable learning on the behaviour of agents in the care system?
- What is known about the participation process and how might this inform the design of the estimation of programme impacts?
- How will different forms of analysis inform commissioners of services?
A challenge common to economic evaluation of all forms of interventions is whether or not we need to consider wider ‘outcomes’ beyond patient health, as captured by the QALY. Additional potential issues include access, quality, patient experience, sustainability, equity, capability and population engagement. It is widely accepted that these aspects are important, but it is less clear as to whether they are of value in themselves or because they affect patient outcomes, how they should be measured and how they should be traded off against health outcomes to derive a composite measure of programme benefit that can also be reflected in terms of opportunity cost.
Another common challenge is developing advanced methods for evaluating impacts in a non-experimental setting. This is the area of economic analysis of service interventions that is most advanced (Essay 3 from Raine et al. 6), but there remains a need for a better understanding of the assignment mechanism for programme evaluation. We need to understand why some organisations and professionals tend to participate and others do not. This is key to developing robust comparators for non-experimental evaluations. Economists should be involved at the earliest stage of decisions about the implementation (such as phased roll-out) of service interventions to ensure that the important end points are collected.
The key components of an economic evaluation are identification, measurement and valuation of the costs and benefits. The identification stage is intended to produce a comprehensive list of potential costs and benefits, but this has tended not to be systematic, with economists focusing primarily on the challenging issues of measurement and valuation. There is potential for complementary, parallel investigations to identify (1) variables that are likely to be affected, (2) potential impacts elsewhere in a complex system, (3) proposed mechanisms that might, for example, support the robustness of the approach to identify causal effects, (4) heterogeneity in engagement by different groups of patients and professionals, and (5) the boundaries of generalisability for a particular analysis. These will be best achieved by more active engagement in interdisciplinary research.
There is a belief that the potential for financial profit has driven the faster development of guidelines for the economic evaluation of medical devices and pharmaceuticals. The guidelines have sought to safeguard the NHS from products that are not cost-effective. A similar impetus may emerge for the evaluation of service interventions as the provider sector becomes more diverse.
There is a more basic challenge of how to capture the opportunity costs of service interventions, which are likely to comprise the costs of implementing the service intervention, the costs of delivering it and its impact on consequent care costs. Further challenges are that these costs may differ between organisations and over time, and that service interventions may be sufficiently large to cause non-marginal changes in resource use. There is therefore scope for methodological work on evaluating the impact of service interventions on costs.
More fundamentally, there is a need for economic analysis of service interventions to further our understanding of (1) the production process and (2) the behaviour of agents in care systems. These considerations should feed into how evaluations are undertaken, future ex ante analyses and how service interventions should be co-designed in the future. Service interventions provide an opportunity to generate additional evidence on these issues by offering a purposive source of variation in care delivery. Through these ‘experiments’, there is scope to develop and collate generalisable knowledge on the mechanisms operating within the care system.
Acknowledgements
This publication arises from a Health Economics and Health Services Research Roundtable chaired by Jo Rycroft-Malone [National Institute for Health Research (NIHR) Health Services and Delivery Research (HSDR) programme director, who is a member of the NIHR Journals Library Board] and supported by the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC), the NIHR HSDR programme, and the MRC Methodology Research Programme. The Roundtable was attended by the authors and Paul Bird (University of Warwick), Chris Bojke (University of Leeds), Martin Chalkley (University of York), David Crosby (MRC), Brendon Delaney (Imperial College London), Julie Hankin (Nottinghamshire Healthcare NHS Foundation Trust), Alec Morton (University of Strathclyde), Mark Lambert (NHS England), Stephen Lorrimer (NHS England), Bhash Naidoo (NICE), Carol Ozuak (NIHR), Sam Rowley (MRC), Jo Rycroft-Malone (Bangor University), Jon Sussex (RAND Europe), Charles Tallack (NHS England) and Louise Wallace (NIHR).
Contributions of authors
Matt Sutton (Professor, Health Economics) wrote the first draft based on the presentations and discussion at the Roundtable and produced the final version of the essay.
Steph Garfield-Birkbeck (Assistant Director, NIHR Evaluation Trials and Studies Coordinating Centre), Graham Martin (Professor, Health Organisation and Policy), Rachel Meacock (Research Fellow, Health Economics), Stephen Morris (Professor, Health Economics), Mark Sculpher (Professor, Health Economics), Andrew Street (Professor, Health Economics), Samuel I Watson (Senior Research Fellow, Health Economics) and Richard J Lilford (Professor, Public Health) edited the essay and provided additional ideas and material.
All of the authors approved the final version of the manuscript.
Data sharing statement
This essay does not report on any new primary or secondary data and, therefore, there are no data to be shared.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health.
References
- National Institute for Health and Care Excellence. Interim Methods Guide for Developing Service Guidance 2014. 2014. www.nice.org.uk/process/pmg8 (accessed 14 January 2018).
- Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, Medical Research Council Guidance. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655. https://doi.org/10.1136/bmj.a1655.
- Craig P, Cooper C, Gunnell D, Haw S, Lawson K, Macintyre S, et al. Using natural experiments to evaluate population health interventions: new Medical Research Council guidance. J Epidemiol Community Health 2012;66:1182-6. https://doi.org/10.1136/jech-2011-200375.
- Tan-Torres Edejer T, Baltussen R, Adam T, Hutubessy R, Acharya A, Evans DB, et al. Making Choices in Health: WHO Guide to Cost-Effectiveness Analysis. Geneva: World Health Organization; 2003.
- HM Treasury. The Green Book: Appraisal and Evaluation in Central Government. London: HM Treasury; 2011.
- Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res 2016;4(16).
- Wagstaff A, Culyer AJ. Four decades of health economics through a bibliometric lens. J Health Econ 2012;31:406-39. https://doi.org/10.1016/j.jhealeco.2012.03.002.
- Manning WG, Newhouse JP, Duan N, Keeler EB, Leibowitz A, Marquis MS. Health insurance and the demand for medical care: evidence from a randomized experiment. Am Econ Rev 1987;77:251-77.
- Moreno-Serra R, Wagstaff A. System-wide impacts of hospital payment reforms: evidence from Central and Eastern Europe and Central Asia. J Health Econ 2010;29:585-602. https://doi.org/10.1016/j.jhealeco.2010.05.007.
- Kristensen SR, Siciliani L, Sutton M. Optimal price-setting in pay for performance schemes in health care. J Econ Behav Organ 2016;123:57-77. https://doi.org/10.1016/j.jebo.2015.12.002.
- Fenwick E, Claxton K, Sculpher M. The value of implementation and the value of information: combined and uneven development. Med Decis Making 2008;28:21-32. https://doi.org/10.1177/0272989X07308751.
- Hoomans T, Severens JL. Economic evaluation of implementation strategies in health care. Implement Sci 2014;9:168. https://doi.org/10.1186/s13012-014-0168-y.
- Girling A, Lilford R, Cole A, Young T. Headroom approach to device development: current and future directions. Int J Technol Assess Health Care 2015;31:331-8. https://doi.org/10.1017/S0266462315000501.
- Yao GL, Novielli N, Manaseki-Holland S, Chen YF, van der Klink M, Barach P, et al. Evaluation of a predevelopment service delivery intervention: an application to improve clinical handovers. BMJ Qual Saf 2012;21:i29-38. https://doi.org/10.1136/bmjqs-2012-001210.
- Weatherly H, Drummond M, Claxton K, Cookson R, Ferguson B, Godfrey C, et al. Methods for assessing the cost-effectiveness of public health interventions: key challenges and recommendations. Health Policy 2009;93:85-92. https://doi.org/10.1016/j.healthpol.2009.07.012.
- Social Care Institute for Excellence. www.scie.org.uk/ (accessed 14 January 2018).
- Karlsberg Schaffer S, West P, Towse A, Henshall C, Mestre-Ferrandiz J, Masterton R, et al. Assessing the Value of New Antibiotics: Additional Elements of Value for Health Technology Assessment Decisions. London: The Office of Health Economics; 2017.
- Schaafsma JD, van der Graaf Y, Rinkel GJ, Buskens E. Decision analysis to complete diagnostic research by closing the gap between test characteristics and cost-effectiveness. J Clin Epidemiol 2009;62:1248-52. https://doi.org/10.1016/j.jclinepi.2009.01.008.
- Drummond MF, Griffin A, Tarricone R. Economic evaluation for devices and drugs – same or different? Value Health 2009;12:402-4. https://doi.org/10.1111/j.1524-4733.2008.00476_1.x.
- Buchanan J, Wordsworth S, Schuh A. Issues surrounding the health economic evaluation of genomic technologies. Pharmacogenomics 2013;14:1833-47. https://doi.org/10.2217/pgs.13.183.
- Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ 2010;341:c4413. https://doi.org/10.1136/bmj.c4413.
- Léonard C, Stordeur S, Roberfroid D. Association between physician density and health care consumption: a systematic review of the evidence. Health Policy 2009;91:121-34. https://doi.org/10.1016/j.healthpol.2008.11.013.
- Cresswell K, Coleman J, Slee A, Williams R, Sheikh A, ePrescribing Programme Team. Investigating and learning lessons from early experiences of implementing ePrescribing systems into NHS hospitals: a questionnaire study. PLOS ONE 2013;8:e53369. https://doi.org/10.1371/journal.pone.0053369.
- Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996;312:1215-18. https://doi.org/10.1136/bmj.312.7040.1215.
- Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf 2015;24:228-38. https://doi.org/10.1136/bmjqs-2014-003627.
- Chen YF, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf 2016;25:303-10. https://doi.org/10.1136/bmjqs-2015-004372.
- Benning A, Ghaleb M, Suokas A, Dixon-Woods M, Dawson J, Barber N, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ 2011;342:d195. https://doi.org/10.1136/bmj.d195.
- Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ 2011;342:d199. https://doi.org/10.1136/bmj.d199.
- Bion J, Richardson A, Hibbert P, Beer J, Abrusci T, McCutcheon M, et al. ‘Matching Michigan’: a 2-year stepped interventional programme to minimise central venous catheter-blood stream infections in intensive care units in England. BMJ Qual Saf 2013;22:110-23. https://doi.org/10.1136/bmjqs-2012-001325.
- Hillman K, Chen J, Cretikos M, Bellomo R, Brown D, Doig G, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365:2091-7. https://doi.org/10.1016/S0140-6736(05)66733-5.
- Al-Janabi H, van Exel J, Brouwer W, Coast J. A framework for including family health spillovers in economic evaluation. Med Decis Making 2016;36:176-86. https://doi.org/10.1177/0272989X15605094.
- Fichera E, Sutton M. State and self investments in health. J Health Econ 2011;30:1164-73. https://doi.org/10.1016/j.jhealeco.2011.09.002.
- Fichera E, Gray E, Sutton M. How do individuals’ health behaviours respond to an increase in the supply of health care? Evidence from a natural experiment. Soc Sci Med 2016;159:170-9. https://doi.org/10.1016/j.socscimed.2016.05.005.
- Fichera E, Emsley R, Sutton M. Is treatment ‘intensity’ associated with healthier lifestyle choices? An application of the dose response function. Econ Hum Biol 2016;23:149-63. https://doi.org/10.1016/j.ehb.2016.09.001.
- Shah K, Praet C, Devlin N, Sussex J, Appleby J, Parkin D. Is the aim of the English health care system to maximize QALYs? J Health Serv Res Policy 2012;17:157-63. https://doi.org/10.1258/JHSRP.2012.011098.
- Meacock R, Doran T, Sutton M. What are the costs and benefits of providing comprehensive seven-day services for emergency hospital admissions? Health Econ 2015;24:907-12. https://doi.org/10.1002/hec.3207.
- Brown CA, Lilford RJ. Should the UK government’s deep cleaning of hospitals programme have been evaluated? J Infect Prev 2009;10:143-7. https://doi.org/10.1177/1757177409106227.
- Morris S, Hunter RM, Ramsay AI, Boaden R, McKevitt C, Perry C, et al. Impact of centralising acute stroke services in English metropolitan areas on mortality and length of hospital stay: difference-in-differences analysis. BMJ 2014;349:g4757. https://doi.org/10.1136/bmj.g4757.
- Hunter RM, Davie C, Rudd A, Thompson A, Walker H, Thomson N, et al. Impact on clinical and cost outcomes of a centralized approach to acute stroke care in London: a comparative effectiveness before and after model. PLOS ONE 2013;8:e70420. https://doi.org/10.1371/journal.pone.0070420.
- Meacock R, Kristensen SR, Sutton M. The cost-effectiveness of using financial incentives to improve provider quality: a framework and application. Health Econ 2014;23:1-13. https://doi.org/10.1002/hec.2978.
- Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med 2012;367:1821-8. https://doi.org/10.1056/NEJMsa1114951.
- Meacock R, Sutton M, Kristensen SR, Harrison M. Using survival analysis to improve estimates of life year gains in policy evaluations. Med Decis Making 2017;37:415-26. https://doi.org/10.1177/0272989X16654444.
- Elliott RA, Tanajewski L, Gkountouras G, Avery AJ, Barber N, Mehta R, et al. Cost effectiveness of support for people starting a new medication for a long term condition through community pharmacies: an economic evaluation of the New Medicine Service (NMS) compared with normal practice. PharmacoEconomics 2017;35:1237-55.
- Lilford RJ, Girling AJ, Sheikh A, Coleman JJ, Chilton PJ, Burn SL, et al. Protocol for evaluation of the cost-effectiveness of ePrescribing systems and candidate prototype for other related health information technologies. BMC Health Serv Res 2014;14:314. https://doi.org/10.1186/1472-6963-14-314.
- Crown W, Buyukkaramikli N, Thokala P, Morton A, Sir MY, Marshall DA, et al. Constrained optimization methods in health services research – an introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force. Value Health 2017;20:310-19. https://doi.org/10.1016/j.jval.2017.01.013.
- Clark MD, Determann D, Petrou S, Moro D, de Bekker-Grob EW. Discrete choice experiments in health economics: a review of the literature. PharmacoEconomics 2014;32:883-902. https://doi.org/10.1007/s40273-014-0170-x.
- Higgins A, Barnett J, Meads C, Singh J, Longworth L. Does convenience matter in health care delivery? A systematic review of convenience-based aspects of process utility. Value Health 2014;17:877-87. https://doi.org/10.1016/j.jval.2014.08.2670.
- Gosden T, Bowler I, Sutton M. How do general practitioners choose their practice? Preferences for practice and job characteristics. J Health Serv Res Policy 2000;5:208-13. https://doi.org/10.1177/135581960000500404.
- Mandeville KL, Lagarde M, Hanson K. The use of discrete choice experiments to inform health workforce policy: a systematic review. BMC Health Serv Res 2014;14:367. https://doi.org/10.1186/1472-6963-14-367.
- Lancsar E, Swait J. Reconceptualising the external validity of discrete choice experiments. PharmacoEconomics 2014;32:951-65. https://doi.org/10.1007/s40273-014-0181-7.
- Krucien N, Gafni A, Pelletier-Fleury N. Empirical testing of the external validity of a discrete choice experiment to determine preferred treatment option: the case of sleep apnea. Health Econ 2015;24:951-65. https://doi.org/10.1002/hec.3076.
- Rakotonarivo OS, Schaafsma M, Hockley N. A systematic review of the reliability and validity of discrete choice experiments in valuing non-market environmental goods. J Environ Manage 2016;183:98-109. https://doi.org/10.1016/j.jenvman.2016.08.032.
- Geue C, Skåtun D, Sutton M. Economic influences on GPs’ decisions to provide out-of-hours care. Br J Gen Pract 2009;59:e1-7. https://doi.org/10.3399/bjgp09X394806.
- Scott A, Sivey PM. Motivation and Competition in Health Care. 2017. http://dx.doi.org/10.2139/ssrn.2905491.
- Claxton K, Martin S, Soares M, Rice N, Spackman E, Hinde S, et al. Methods for the estimation of the National Institute for Health and Care Excellence cost-effectiveness threshold. Health Technol Assess 2015;19(14). https://doi.org/10.3310/hta19140.
- Castelli A, Street A, Verzulli R, Ward P. Examining variations in hospital productivity in the English NHS. Eur J Health Econ 2015;16:243-54. https://doi.org/10.1007/s10198-014-0569-5.
- Mason J, Freemantle N, Nazareth I, Eccles M, Haines A, Drummond M. When is it cost-effective to change the behavior of health professionals? JAMA 2001;286:2988-92. https://doi.org/10.1001/jama.286.23.2988.
- Thompson C, Pulleyblank R, Parrott S, Essex H. The cost-effectiveness of quality improvement projects: a conceptual framework, checklist and online tool for considering the costs and consequences of implementation-based quality improvement. J Eval Clin Pract 2016;22:26-30. https://doi.org/10.1111/jep.12421.
- Walker S, Mason AR, Claxton K, Cookson R, Fenwick E, Fleetcroft R, et al. Value for money and the Quality and Outcomes Framework in primary care in the UK NHS. Br J Gen Pract 2010;60:e213-20. https://doi.org/10.3399/bjgp10X501859.
- National Institute for Health and Care Excellence. Guide to the Methods of Technology Appraisal 2013. 2013. www.nice.org.uk/process/pmg9/chapter/foreword (accessed 14 January 2018).
List of abbreviations
- HSDR: Health Services and Delivery Research
- HTA: Health Technology Assessment
- MRC: Medical Research Council
- NICE: National Institute for Health and Care Excellence
- NIHR: National Institute for Health Research
- QALY: quality-adjusted life-year
- RCT: randomised controlled trial