Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 12/5002/18. The contractual start date was in October 2013. The final report began editorial review in February 2016 and was accepted for publication in July 2016. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
none
Disclaimers
This report contains transcripts of interviews conducted in the course of the research and contains language that may offend some readers.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2017. This work was produced by Wilson et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Context
The NHS is facing severe funding constraints both now and in the medium term. A funding gap of up to £30B has been forecast by 2020–1. 1 In challenging times, innovation is increasingly advocated as crucial to the long-term sustainability of health services, and the greatest potential for savings may be found by increasing efficiency and reducing variations in clinical practices. 1,2 However, it is important that the NHS takes steps to ensure that only the most effective, best-value health-care interventions and service improvements are adopted and that procedures and practices that have been shown to be ineffective are no longer used.
To do this well, commissioners need to be fully aware of the strength of the underlying evidence for interventions and new ways of working that promise to deliver more value from the finite resources available. The Health and Social Care Act 20123 has now embedded research use as a core function of the commissioning arrangements of the health service. The Secretary of State for NHS England (previously the NHS Commissioning Board) and each Clinical Commissioning Group (CCG) must now, in the exercise of its functions, promote (1) research on matters relevant to the health service and (2) the use in the health service of evidence obtained from research.
NHS commissioners therefore have a key role in improving uptake and use of knowledge to inform commissioning and decommissioning of services, and there is a substantive evidence base on which they can draw. In the UK there has been significant and continued investment in the production of research evidence on the effectiveness and cost-effectiveness of interventions to inform health-care decisions and choices. However, uptake of this knowledge to increase efficiency, reduce practice variations and to ensure best use of finite resources within the NHS is not always realised. This is in part through system failings to fully implement interventions and procedures of known effectiveness. 4,5 There has also been rapid, sometimes policy-driven, deployment of unproven interventions despite known uncertainties relating to costs, impacts on service utilisation and clinical outcomes, patient experience and sustainability;6 the NHS has also been slow to identify and disinvest in those interventions known to be of low or no clinical value. 7
Despite advances in the conduct and reporting of systematic reviews and recognition of their importance in health-care decision-making,8,9 their potential impact on processes is not yet realised. Although it is widely acknowledged that different sources of knowledge combine in evidence-informed decision-making10 and that the process itself is highly contingent and context dependent,11 a number of challenges have undermined the usefulness of systematic reviews in decision-making contexts. 8,12–17 These barriers include difficulties in locating and appraising relevant reviews; the review reports’ lack of timeliness or user-friendliness; and the real or perceived failure of reviews to address relevant questions, contextualise the findings, or make actionable policy recommendations.
One way in which these barriers can be overcome is through the provision of resources that adapt and present the findings of systematic reviews in a more directly useful form. Three types of review-derived products (summaries of systematic reviews, overviews of systematic reviews and policy briefs) aimed at policy-makers and other stakeholders have been postulated. 18 Summaries encapsulate take-home messages and add value by, for example, assessing the findings’ local applicability. Overviews of systematic reviews identify, select, appraise, and synthesise all known systematic reviews in a given topic area. Policy briefs identify, select, appraise and synthesise systematic reviews, other research studies, and context-specific data to address all aspects of a policy question. Alongside presentational issues, it has also been proposed that efforts should focus on the environment within which decision-makers work. 14 It is recognised that structural supports and facilitated strategies are required to ensure the capacity to acquire, assess, adapt and apply evidence obtained from research in decision-making. However, the best way to deliver this may be context specific, and evidence of effectiveness of interventions and strategies is lacking.
Public health specialists have traditionally supported and facilitated the use of research evidence in a commissioning context. 19,20 Those trained in public health and working in commissioning were more likely to report using empirical evidence than other senior commissioners, who were more likely to use colloquial evidence generated locally. 20 With the relocation of the specialty to local authorities, public health input now has a more limited role in commissioning processes. CCGs will need access to a variety of different evidence sources and expert involvement to ensure that evidence obtained from research continues to be incorporated into decisions made for their populations. 20 However, who is responsible for ensuring the absorptive capacity for research use,21,22 and that CCGs recognise and understand valuable research-based knowledge, is less clear. Although the Health and Social Care Act 20123 outlines research use as a statutory duty, operational guidance to commissioners also appears to significantly underplay the potential of research, and there are no explicit requirements relating to the use of evidence obtained from research. 23
An initiative aiming to enhance the uptake of evidence obtained from research in decision-making was developed as an adjunct to the implementation theme of the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for Leeds, York and Bradford. 24 The Centre for Reviews and Dissemination (CRD) developed a demand-led knowledge translation service aimed at NHS commissioners and senior managers in provider trusts. The service attempted to address known barriers to systematic review uptake and use and aimed to make best use of existing sources of synthesised research evidence to inform local decision-making. Rapid evidence briefings were produced in response to requests from local NHS decision-makers who required an independent assessment of evidence to inform a specific ‘real world’ decision or problem. The rationale for this demand-led service was that addressing real decisions or problems in collaboration with those directly affected should mean that research evidence is more likely to be used and have an impact on decision-making.
Development of the service was informed by a scoping review of existing resources,25 previous CRD experience in producing and disseminating the internationally renowned Effective Health Care and Effectiveness Matters series of bulletins and initial iterative interactions with decision-makers on a range of mental health topics. We sought to address a number of known content, format and communication barriers to research use. 8,12,13,15–17 We targeted answering policy-relevant questions, ensuring timeliness of response, and delivering non-technical summaries with key messages, tailored to the relevant audience. As interactions between researchers and decision-makers might be expected to facilitate the ongoing use of research knowledge in decision-making we also instigated a process of ‘linkage and exchange’. 26 Although evidence was lacking on how best to do this13 and the time and resource costs required for both sides was unclear, the benefit of interactions between managers and researchers was theoretically grounded. Specifically, ongoing positive intergroup contact27 can be effective at generating positive relations between members of two parties when there is institutional support, equal status between those involved, and co-operation in order to achieve a common goal. 28 Contact has most benefit if those involved identify both with their own group and the overarching organisation to which they both belong. 29
The evidence briefing service adopted an approach that was both consultative and responsive and involved building relations and having regular contact (face to face and e-mail) with a range of NHS commissioners and managers. This enabled the team to discuss issues and, for those that required a more considered response, formulate questions from which contextualised briefings could be produced and their implications discussed. In doing so, we utilised a framework designed to clarify the problem and frame the question to be addressed. 30 Each evidence briefing produced would summarise the quality and the strength of identified systematic reviews and economic evaluations, but go beyond effectiveness and cost-effectiveness to consider local applicability, implications relating to service delivery, resource use, implementation and equity.
The evidence briefing service had some early impacts, notably including work to inform service reconfiguration for adolescent eating disorders and enabling commissioners to invest in more services on a more cost-effective outpatient basis. 31 Later work that assessed the effects of telehealth technologies (use of communication and information technologies that aim to provide health care at a distance) for patients with long-term conditions informed a decision to disinvest from a costly and much criticised technology deployment. Full details of the early briefings produced under the auspices of the NIHR CLAHRC for Leeds, York and Bradford can be found at www.york.ac.uk/crd/publications/evidence-briefings/.
Although feedback from users was consistently positive, the evidence briefing service had been developmental and no formal evaluation had been conducted. The service as constituted was also a resource-intensive endeavour and made use of the considerable review capacity and infrastructure available at the CRD. As such, we needed to establish how much value was added over alternative or more basic approaches. This was especially important as passive dissemination of systematic review evidence can have impact particularly when there is a single clear message and there is awareness by recipients that a change in practice is required. 15
As part of our developmental work we conducted a systematic review of products and services aimed at making the results of systematic reviews more accessible to health-care decision-makers. 25 This highlighted a lack of formal evaluation in the field. Indeed, most identified evaluations focused on perceived usefulness of products and services and not on actual impact. This study therefore aimed to address a clear knowledge gap and to help clarify which elements of the service were of value in promoting the use of research evidence and may be worth pursuing further.
This research was also timely because of the current and future need to use research evidence effectively to ensure optimum use of resources by the NHS, both in accelerating innovation and in stopping the use of less effective practices and models of service delivery. It therefore addressed a problem that faces a wide variety of health-care organisations, namely how to best build the infrastructure it needs to acquire, assess, adapt and apply research evidence to support its decision-making. For CCGs, this includes fulfilling its statutory duties under the Health and Social Care Act 2012. 3
Chapter 2 Methods
Primary research question
Does access to a demand-led evidence briefing service improve uptake and use of research evidence by NHS commissioners compared with less intensive and less targeted alternatives?
Secondary research questions
Do evidence briefings tailored to specific local contexts inform decision-making in other CCGs?
Does contact between researchers and NHS commissioners increase use of research evidence?
This was a controlled before-and-after study involving CCGs in the North of England. The original protocol is available online (see www.nets.nihr.ac.uk/projects/hsdr/12500218) and has also been published in the journal Implementation Science. 32 There were three phases:
-
phase 1 – pre intervention: recruitment and collection of baseline outcome data (survey)
-
phase 2 – intervention: delivery of study interventions
-
phase 3 – post intervention: collection of outcome measures (survey) and qualitative process evaluation data (interviews, observations and documentary analyses).
Setting, participants and recruitment
Nine CCGs from one geographical area in the north of England were the original focus of this study. The recruitment process is presented as a flow diagram in Figure 1.
When designing the study, we had anticipated that we would invite nine or ten CCGs from the geographical area based on the 2012/13 primary care trust (PCT) cluster arrangements. By the start of the study, some consolidation in the proposed commissioning arrangements had occurred in the transition from PCTs to CCGs and so the Accountable Officers of the resulting seven CCGs were contacted, told the nature of the study and invited to participate. Of these, six agreed to participate. One CCG declined, intimating that it could not participate in any intervention. No CCG asked for financial reimbursement for taking part in the study.
Clinical Commissioning Groups that agreed to participate were asked to provide details of all governing body and executive members, clinical leads and any other individuals deemed as being involved in commissioning decision-making processes. These individuals were then contacted by the evaluation team and informed of the study aims.
We had originally intended to randomly allocate CCGs to interventions. However, a combination of expressed preferences (one CCG indicated that it would like to be a ‘control’) and the prospect of further consolidation in commissioning arrangements meant that this was not feasible. Taking these factors into account, two CCGs were allocated to receive on-demand access to the evidence briefing service, three coterminous CCGs (who were likely to merge) received on-demand access to advice and support from the CRD team and one to a ‘standard service’ control arm.
After the initial allocation, we were approached by a research lead from a CCG in a neighbouring geographical area who had heard about the study and indicated that he and colleagues in other CCGs were also keen to participate.
The research team then had discussions with representatives of five CCGs at two research collaborative meetings. At these meetings, we explained that any CCGs willing to participate would be recruited as ‘standard service’ controls, but would be offered the opportunity to receive on-demand access to the CRD evidence briefing service after the follow-up phase was complete. Three CCGs agreed to participate. A fourth CCG initially agreed to participate but failed to provide contact details for any personnel involved in commissioning processes, despite repeated requests from the research team to do so. As we would therefore be unable to collect baseline data, rather than delay the start of the intervention phase, the team informed the CCG that it would have to be excluded from the study.
Characteristics of participating Clinical Commissioning Groups
In total, nine CCGs agreed to participate and were able to provide contact details for personnel involved in commissioning processes.
A1
The CCG covers a population of around 150,000 with 27 member practices. The CCG is strongly aligned to the local authority, with which it is coterminous, and also works closely with a range of other organisations such as NHS England, local NHS providers and neighbouring CCGs.
It is in one of the 20% most deprived local authorities in the country with considerable inequality between the most and least affluent areas within the borough; deprivation is, therefore, higher than the England average. Average life expectancy is also lower than the England average. Around 23% of children and 26% of adults are classified as obese. Rates of recorded diabetes, alcohol-related hospital stays, smoking-related deaths, early cardiovascular deaths and early cancer deaths are higher than the England average.
The CCG is the lead commissioner for the local NHS trust, which provides general hospital services and hosts many community services for a wide geographic area. Many specialist hospital services are provided by general and teaching hospitals outside the district. The CCG is small, as it has delegated most of its commissioning functions to the local Commissioning Support Unit (CSU). The CCG nonetheless demonstrates an interest in extending its commissioning reach, as it has taken on joint commissioning responsibility for primary medical care with NHS England from 2015/16. This is intended to give greater commissioning power to the CCG and will help to drive the development of new integrated models of care, such as multispecialty community providers and primary and acute care systems. The CCG is also a pioneer site for developing integrated care.
The CCG has worked in partnership with the local authority and third-sector providers to complete the Better Care Fund plan, which identifies four key transformation schemes. It has received over £12M in Better Care funding for 2015/16 to assist in delivering greater integration of services.
A2
The CCG covers a population of around 300,000, and has 45 member practices. Deprivation is lower than the England average and average life expectancy is lower than the England average. Around 17% of children and 26% of adults are classified as obese. Rates of recorded diabetes, alcohol-related hospital stays, smoking-related deaths and early cancer deaths are higher than the England average. Early cardiovascular deaths are slightly lower than the England average.
The CCG is coterminous and works closely with the local authority, as demonstrated by a partnership agreement for the management of continuing health-care patients. This reflects a stated aim about the need to join up patient care not just in health, but also in social care. CCG plans are also closely aligned with the priority areas of the Health and Wellbeing Board, and a Joint Health and Wellbeing Strategy has been developed with partners. The CCG has been involved in overseeing commissioning of a Specialist Emergency Care Hospital, the first purpose-built emergency care hospital in England, which opened in June 2015.
In 2015, the CCG began to cocommission primary medical care through a joint commissioning arrangement with NHS England. In addition, the CCG is part of a NHS vanguard site that is testing the new integrated primary and acute care systems. The CCG also received £22M in Better Care funding in 2015/16 to support the integration of health and social care.
B1–B3
During the course of the study, three participating CCGs merged to form a single statutory body with > 60 member practices. The new CCG covers a population of around 500,000. Deprivation is higher than the England average and average life expectancy is lower than the England average across these populations. In part of the locality, 23% of children and 22% of adults are classified as obese; rates of alcohol-related hospital stays, smoking-related deaths, early cardiovascular deaths and early cancer deaths are higher than the England average. Rates of recorded diabetes are lower than the England average. In a second locality, 22% of children and 23% of adults are classified as obese; rates of recorded diabetes, alcohol-related hospital stays, smoking-related deaths, early cardiovascular deaths and early cancer deaths are higher than the England average.
The strategic aim of the CCG is to improve the health and well-being of the population through a range of measures underpinned by the key principles of prevention. These include early intervention, integrated and co-ordinated primary, community, secondary and social care services, and timely access to secondary care services for those requiring hospital admissions. The CCG is the host commissioner for a large teaching hospital trust, which provides general hospital services, prescribed specialised hospital services and community-based services. The CCG is also host commissioner for a second hospital trust, which principally provides hospital services.
The original constituent CCGs received a combined £35M in Better Care funding in 2015/16: one CCG (B1) also received £2M in the second wave of funding from the Prime Minister’s Challenge Fund for improving access to general practice. The CCG now shares joint commissioning responsibility for primary medical care with NHS England.
C1
The CCG covers a population of > 250,000 and is made up of 51 member practices which cover five localities. The CCG faces challenges including a growing ageing population with escalating health needs, poor health compared to the rest of the England and excess deaths, particularly from heart disease, cancer and respiratory problems. The local community is affected by lifestyle factors such as obesity, smoking and alcohol abuse which pose a major risk to health and well-being.
Deprivation is higher than the England average and average life expectancy is lower than the England average. Twenty-one per cent of children and 27% of adults are classified as obese. Rates of recorded diabetes, alcohol-related hospital stays, smoking-related deaths, early cardiovascular deaths and early cancer deaths are higher than the England average.
The CCG works closely with the coterminous local authority and aims to tackle jointly identified local needs by working closely with the local community and engaging with a wide range of local partners to ensure the very best health and social care. To this end, the CCG also sits on the local Health and Wellbeing Board.
The CCG is one of the largest for its population size, having chosen to discharge the bulk of its commissioning responsibilities in-house, with a minority being undertaken by the CSU. The CCG is host commissioner for a large district general hospital, and a specialist eye hospital, which between them also provide many prescribed specialised services that are commissioned by NHS England. The vast majority of the CCG’s expenditure on hospital services is within the local health-care system.
The CCG received £22M in Better Care funding to support the integration of health and social care. Under cocommissioning arrangements, the CCG has assumed full responsibility for commissioning general practice services.
C2
The CCG covers a population of > 250,000 made up of 40 member practices. The CCG covers a large and diverse geographical area, which includes some of the most deprived communities in England and some of the most rural areas of the country.
In one locality within the CCG, the average life expectancy for both men and women is lower than the England average. A large proportion of the population is aged ≥ 50 years and this is set to rise. Meanwhile, rates of coronary heart disease, hypertension and depression are higher than the England average. There is a similar picture in another locality with regard to ageing and life expectancy, although there are higher rates of coronary heart disease, hypertension and obesity. This is also mirrored in a third locality, which also has greater deprivation, as 74% of lower super output areas are in the 30% most deprived nationally and 30% are in the 10% most deprived.
Under cocommissioning arrangements, the CCG has assumed full responsibility for commissioning general practice services and therefore has delegated responsibility for commissioning. A key element of the CCG’s 2-year operational and 5-year strategic plan is the Better Care Fund, which sees a single pooled budget across the CCG and other key stakeholders, including the local authority. The CCG received £21M in Better Care funding in 2015/16.
C3
The CCG covers a population of around 300,000 with 40 member practices. The CCG is coterminous with two local authorities. Deprivation is higher than the England average and average life expectancy is lower than the England average. Twenty-one per cent of children and 31% of adults are classified as obese; rates of alcohol-related hospital stays, smoking-related deaths, early cardiovascular deaths and early cancer deaths are higher than the England average; rates of recorded diabetes are equivalent to the England average. In one locality, 21% of children and 26% of adults are classified as obese; rates of alcohol-related hospital stays, smoking-related deaths, early cardiovascular deaths and early cancer deaths are higher than the England average; rates of recorded diabetes are lower than the England average.
Under cocommissioning arrangements, the CCG jointly commissions general practice services with NHS England. The CCG also draws on the CSU to provide a wide range of functions to enable delivery on priorities. The CCG works as part of the Health and Wellbeing Board for each local authority. The CCG recognises the importance of collaboration as highlighted by local action plans for single pooled budgets for health and social care services as part of the Better Care Fund, funding for which amounted to £19M in 2015/16.
C4
The CCG covers a population of around 300,000 with 46 member practices. The CCG is coterminous with two local authorities. Deprivation is higher than the England average and average life expectancy is lower than the England average. In one locality 23% of children and 24% of adults are classified as obese and in a second locality, 23% of children and 28% of adults are classified as obese. Rates of recorded diabetes, alcohol-related hospital stays, smoking-related deaths, early cardiovascular deaths and early cancer deaths are higher than the England average.
The CCG aims to tackle health inequalities and ensure that everyone has the right access to care at the right time, regardless of where they live in the area. There is recognition that this requires collaborative working and relationships are being developed with local partners including member practices, local authorities, Healthwatch and local third-sector providers. A key priority has been the development of a joint vision to improve services for the vulnerable and elderly.
The CCG received £20M in Better Care funding in 2015/16. The CCG jointly commissions general practice services with NHS England.
Baseline and follow-up assessment
We collected data for our two primary outcome measures (perceived organisational capacity to use research evidence and reported research use) at baseline (phase 1) and again 12 months after the intervention period was completed (phase 3).
Main study Clinical Commissioning Groups
The survey instrument (see Appendix 1) was the means by which we collected these data. It was designed to collect four sets of information that assess the organisations’ ability to acquire, assess, adapt and apply research evidence to support decision-making. Section 1 was based on a tool originally devised by the Canadian Health Services Research Foundation33,34 and then modified by the SUPPORT (SUPporting Policy relevant Reviews and Trials) Collaboration. 35 The SUPPORT Collaboration included additional domains designed to assess the extent to which the general organisational environment supported the linking of research to action;36 specifically the production of research, efforts to communicate research findings (‘push’), and efforts to facilitate the use of research findings (‘user pull’).
Section 2 was based on a modified version of a tool designed to be administered as part of a planned trial evaluating the effects of an evidence service specifically designed to support health system policy-makers in finding and using research evidence. 37,38 This Canadian tool was itself based on the theory of planned behaviour, a widely used theoretical framework for understanding and predicting behaviours. 39 We used this to assess the intentions of individual CCG staff to use research evidence in their decision-making. The theory of planned behaviour is useful for examining intentions and behaviours of CCG decision-makers as it provides a (validated) model of how the social action involved in using research is shaped by three key variables: attitudes (i.e. beliefs and judgments), subjective norms (i.e. normative beliefs and judgments about those beliefs) and perceived behavioural control (i.e. the perceived ability to enact the behaviour). These three variables drive intentions to behave, which in turn shape future behaviour. 40–42 Lavis et al. 37 and Wilson et al. 38 highlight a compelling rationale for the utility of the theory of planned behaviour as an explanatory framework for at least some of the variability (in the influence on intentions and behaviour) in health care professionals and – in theory – policy-makers:
-
About 39% of the variance in intention and about 27% of the variance in behaviour can be explained by theory of planned behaviour constructs.
-
Producing valid and reliable measures of key theory of planned behaviour constructs for use with health-care professionals is feasible.
-
The proportion of the variance in health-care professionals’ behaviour explained by their intentions was similar in magnitude to that found in the broader literature.
-
The agency relationship – between health-care professionals and patients – is not dissimilar to the agency relationship between policy-makers and others.
It was clear from preliminary discussions and our previous contact with CCG decision-makers that they were aware of the desirability of using research and often expressed an intention to use research (indeed, this was one of the principal drivers for our research), but that other mediating factors impacted on their ability to enact these intentions. Using the theory of planned behaviour allowed us to model an important proportion of at least some of the drivers for any eventual behaviour reported or observed.
Section 3 was designed to evaluate the changes to the nature of the (proposed) interactions, both within the participating sites and between commissioners and researchers. Participants are asked how much contact they have had with researchers in their job (quantity), and how successful the interaction (quality) had been, using an existing modified measure. 43 This section included questions regarding the extent to which the interactions were perceived as friendly and co-operative, and as helping to achieve the goals of both managers and researchers. The extent to which those involved in the interaction are perceived as being on an equal footing, without either group dominating, and the extent to which the contact is perceived as being supported by the CCGs, and the NHS more generally, was examined. Participants were also asked to indicate the extent to which their status as a NHS manager/lead is important to them (in-group identification) and to what extent they see themselves and researchers as part of one overarching group committed to achieving the same things (superordinate identification). In addition, we included measures of perceptions of researchers in general using a generalised intergroup attitude scale. 44
Section 4 captured information on individual respondent characteristics, which was collected to help understand any variation in responses.
The language used in all sections was adapted to match the NHS commissioning context and readability was first piloted with the study advisory group. The sections were ordered by importance beginning with the primary outcome measure, the organisational use of evidence. The instrument was then piloted to assess ease of completion, time to complete, appropriateness of language and face validity with a small group of commissioning staff from outside the study setting. Feedback suggested that the questionnaire was comprehensive but feasible, especially as its administration would be solicited rather than unsolicited. As a result of the feedback and in anticipation of some fall in responses as a result of fatigue, we deliberately chose to prioritise the primary outcome measure as the first section on the questionnaire.
National survey of Clinical Commissioning Groups
A second survey instrument that included only the questions from Section 1 in the main case site survey was used to collect data from other CCGs across England. This was delivered at baseline and then again post intervention.
Survey administration: main sites
Each participating CCG supplied a list of names and e-mail addresses for potential respondents. These were checked by a member of the evaluation team and where inaccurate or missing details were identified, these were sourced and corrected. Survey instruments were sent by personalised e-mail to identified participants via an embedded URL. The online questionnaire was hosted by the Survey Monkey website (www.surveymonkey.com). Reminder e-mails were sent out to non-respondents at 2, 3 and 4 weeks. A paper version of the questionnaire was also posted out and telephone call reminders were made by the research team. In addition, the named contact in each CCG sent an e-mail to all their colleagues, encouraging completion.
Survey administration: national Clinical Commissioning Groups
As CCGs were new and evolving entities at the time of the study, we needed to be able to determine if any changes viewed from baseline were linked to the intervention(s) and were not just a consequence of the development of the CCG(s) over the course of the study. To guard against this maturation effect/bias, and to test the generalisability of findings, we administered the instrument to all English CCGs to assess their organisational ability to acquire, assess, adapt and apply research evidence to support decision-making. The most senior manager (chief operating officer or chief clinical officer) of each CCG was contacted and asked to complete the instrument on behalf of their organisation. For the national survey we used publicly available information (NHS England and CCG websites) supplemented by telephone calls to CCG headquarters to construct our sampling frame consisting of every CCG in England.
Interventions
Participating CCGs received one of three interventions aimed at supporting the use of research evidence in their decision-making.
-
Intervention A: contact plus responsive push of tailored evidence.
-
Intervention B: contact plus an unsolicited push of non-tailored evidence.
-
Intervention C: unsolicited push of non-tailored evidence (‘standard service’).
Intervention A: contact plus responsive push of tailored evidence
Clinical Commissioning Groups in this arm received on-demand access to an evidence briefing service provided by research team members at the CRD. In response to questions and issues raised by a CCG, the CRD team synthesised existing evidence together with relevant contextual data to produce tailored evidence briefings to a specified time scale agreed with the CCG. Full details of the evidence briefing production process are presented in Chapter 3. Based on developmental work undertaken as part of the NIHR CLAHRC for Leeds, York and Bradford, the project was resourced so that the team could respond to six to eight substantive issues during the intervention phase.
The CRD intervention team was formulated to provide regular advice and support on how to seek solutions from existing evidence resources, commissioning question framing and prioritisation. Advice and support was to be delivered via telephone or e-mail or face to face. As this was planned as a demand-led service CCGs in this arm could contact the intervention team at any time to request their services. Contact initiated by the CRD intervention team was made on a monthly basis and was expected to include discussion of progress on ongoing topics, identification of further evidence needs and discussion of any issues around use of evidence. The team also flagged any new systematic reviews and other synthesised evidence relevant to CCG priorities.
The evidence briefing team also offered to provide training on how to acquire, assess, adapt and apply synthesised existing evidence. Training (which was dependent on demand/uptake) would depend on the needs of the CCG but it was anticipated that this could cover question framing, priority setting, identifying and appraising systematic review evidence, assessing uncertainty and generalisability.
Intervention B: contact plus an unsolicited push of non-tailored evidence
Clinical Commissioning Groups allocated to this arm received on-demand access to advice and support from the CRD as those allocated to receive on-demand access to the evidence briefing service. However, the CRD intervention team did not produce evidence briefings in response to questions and issues raised but instead disseminated the evidence briefings generated in the responsive push intervention.
Intervention C: ‘standard service’ unsolicited push of non-tailored evidence
The third intervention constituted a ‘standard service’ control arm; thus, an unsolicited push of non-tailored evidence. In this, the CRD intervention team used their normal push-and-pull processes to disseminate the evidence briefings generated in intervention A and any other non-tailored briefings produced by the CRD over the intervention period.
The intervention phase ran from the end of April 2014 to the beginning of May 2015. As this study was evaluating uptake of a demand-led service, the extent to which the CCGs engaged with the interventions was determined by the CCGs themselves.
Quantitative analysis
The primary analysis measured the impact of study interventions on two main outcomes (perceived organisational capacity to use research evidence and reported research use) at two time points: baseline (pre intervention) and 1 year later (post intervention). The key dependent variable was CCG-perceived organisational capacity to use research evidence in their decision-making as measured by Section 1 of the survey instrument (see Appendix 1). We also measured the impact of interventions on our second main outcome of perceived research use (see Appendix 1, Section 3) and CCG members’ intentions to use research (see Appendix 1, Section 2). These were treated as continuous variables and for each we calculated the overall mean score, any subscale means, related standard deviations and 95% confidence intervals (CIs) at two time points pre and post intervention.
Secondary analysis assessed any relationships between the model of evidence briefing service (intervention) received and three further continuous independent variables measuring individual demographic characteristics (e.g. job role, clinical or other qualifications) and the quality and frequency of contact with researchers on the two outcome measures.
In our original protocol we (rather optimistically) held out the possibility that the data might allow for a more complex multivariate analysis, which would take into account clustering effects associated with CCGs or NHS Regions. There were insufficient data of sufficient quality to allow for such an analysis. When measures were non-normal, we transformed the data (logarithmically) where necessary and possible. Analysis was undertaken using IBM Statistical Product and Service Solutions (SPSS) Statistics, version 22.0 (IBM Corporation, Armonk, NY, USA) and Stata statistical software version 14 (StataCorp LP, College Station, TX, USA).
We undertook a number of statistical comparisons:
Chi-squared tests of independence were performed to examine the relation between the model of evidence briefing service received and the biographical characteristics of respondents.
To examine the hypothesis that CCGs would differ in their capacity to acquire, assess, adapt and apply research evidence to support decision-making as a result of receiving one of the interventions, we undertook a factorial analysis of variance [(ANOVA) SPSS, version 22.0, general linear model procedure], comparing the main effect of a single independent variable (CCG status) on a dependent variable (capacity to acquire, assess, adapt and apply research evidence to support decision-making) ignoring all other independent variables (i.e. the effect ignoring the potential for confounding from other independent factors). Thus, we assessed the main effects of time and intervention received and the interaction effect (effects of all independent variables on a dependent variable) of both time elapsed and of the intervention on domain scores. Thus, we had one independent variable (the type of intervention) and one repeated measures variable [the total score and domain subscore(s) at baseline and 1 year later].
To examine the hypothesis that the intervention would impact on CCG’s collective intention to use research evidence for decision-making, a factorial ANOVA using the SPSS (version 22.0) general linear model repeated measures procedure was conducted to compare the main effects of time and evidence briefing service received and the interaction effect of time and evidence briefing on intention to use research evidence (using a measure derived from the theory of planned behaviour – see the ‘intention’ component of the study questionnaire, Qs 41–43). As the theory of planned behaviour (in the context of this study) predicts that intention to use research evidence for decision-making will be positively correlated with attitude, group norms and perceived behavioural control in the CCGs according to the intervention it received, we also examined the main effects of time and evidence briefing service received and the interaction effect of time and evidence briefing on these variables.
To examine the effects of (1) perceived contact and (2) the amount of perceived contact with the evidence briefing service, (3) institutional support for research, (4) a sense of being equal partners during contact, (5) common in-group identity, (6) achievement of goals and (7) perceptions of researchers generally, we undertook a mixed 3 (intervention: A vs. B vs. C) × 2 (time: baseline vs. outcome) ANOVA using SPSS (version 22.0), with the intervention as a between-subjects independent variable, and repeated measures on the second factor, time.
Missing data
Missing data and attrition between baseline and 1-year follow-up were issues. Although only ≈16–20% of questionnaires had missing data at baseline, at follow-up more than half the responses were missing or incomplete. As analysing only the data for which we had complete responses would have led to potentially biased results,45 and as anticipated at bid and protocol stages, the use of multiple imputation techniques was required. 46 SPSS multiple imputation processes were used. We assumed that data were missing at random (visual comparison of original versus imputed data and significance testing of response and non-response data impact on outcome variables – see Chapter 4). Five imputed data sets were created and the data imputed were the dependent variables of the capacity score derived from Section 1 of the survey instrument, theory of planned behaviour variables and the measures of perceived quality and quantity of contact with researchers.
We used guidance on interpreting effect sizes in before-and-after studies to examine the clinical/policy significance of any changes. 47
Blinding
Baseline and follow-up assessments and the qualitative aspects of the research were undertaken by a separate evaluation team. The CRD evidence briefing team members were blinded from both baseline and follow-up assessments until after all the data collection was complete. The CRD evidence briefing team were made aware of baseline and follow-up response rates. Participating CCGs were also blinded from baseline and follow-up assessments and analysis.
Qualitative evaluation
To internally (within the context of the local health economy) validate the self-reported data collected in phases 1 and 3 and to explore the decision-making processes within each case site, we collected qualitative data. This was also an opportunity to explore CCGs’ experiences of working with the CRD intervention team and to feed back directly on the service it received. The qualitative data collected via observations and interviews were used to address the following questions:
-
What do commissioners consider to be ‘evidence’?
-
How is research evidence used in the commissioning decision-making processes in CCGs?
-
What is the perceived impact of a demand-led evidence briefing service on organisational use of research evidence?
-
What were commissioners’ experiences of the evidence briefing service?
-
How could the evidence briefing service be improved?
Change to protocol
Part of our original plan (outlined in the study protocol) was to collect and analyse documentary evidence of the use of evidence in decision-making using executive and governing body meeting agendas, minutes and associated documents. This component aimed to capture reported actual use of research evidence in decision-making, whereas our primary outcome measures focused on intention to do so. This was to be supplemented with interviews to explore perceived use of evidence and any unanticipated consequences.
Early in the intervention phase, it became apparent that this approach may not be feasible. With a few exceptions, we found a lack of recorded evidence of research use (a finding in itself), as executive and governing body meetings were mainly used to ratify recommendations and so would not tell us anything about sources or processes. With research use and decisions occurring elsewhere and often involving informal processes, we decided to undertake four case studies to explore use of research evidence in decision-making in the intervention sites. The case studies were three case site CCGs and one commissioning topic involving all CCGs across the region (low-value interventions). The Project Advisory Group approved this change in October 2014. Within the case studies, three types of data were collected: documents, observations and interviews.
Documents
Documentation was obtained from participating CCGs on request and through searches of publicly accessible documents on CCG websites. For the case studies, 55 policy documents, governing body papers and evidence documents supporting decision-making were sourced from CCGs. To understand how participants engaged with and used evidence in their decision-making, we utilised themes emerging from previous NIHR Health Services and Delivery Research (HSDR)-funded research examining ‘evidence’ use in commissioning processes. 48
Observations
In the absence of documentary evidence of decision-making, the aim of the observations was to capture the role and use of evidence in decision-making discussions and to identify topics to inform the subsequent interviews. One evaluation team researcher (KF) attended meetings at different stages of the decision-making process for one commissioning topic (low-value interventions) that cut across all CCGs. Relevant meetings were identified by key contacts within each organisation and included only formal decision-making contexts. Observation notes were taken during each meeting and non-participant observations were conducted with full knowledge and permission of attendees.
Interviews
To add richness and depth, in-depth qualitative interviews were undertaken with named contacts and key informants in participating CCGs. Interviews aimed to explore perceptions of the use of research evidence locally and experiences of interacting with the evidence briefing service, as well as any unanticipated consequences of the work. A topic guide was devised to explore engagement with the CRD intervention team and to capture aspects of influencing theories. This guide was piloted with general practitioner (GP) commissioners in a different CCG for feedback on language and operability of the guide. Feedback was positive and indicated that the guide was suitable for the purposes of the study. Interviews took place at the end of the intervention phase. The purposive sampling criteria included commissioners (board or executive team members and commissioning managers) who had had contact with the evidence briefing service.
Interview participants were invited to interview initially via e-mail and they received a participant information sheet electronically. In the case of non-response, e-mails were followed by telephone calls to the participant or via their personal assistant (where appropriate). A second e-mail was sent to those who could not be contacted by telephone. Participants were given the opportunity to ask questions about the research before agreeing to participate. Two evaluation team researchers (KF and CT) conducted the interviews face to face. All interviews were digitally recorded and transcribed by an external transcription company. Interviews were scheduled to last 1 hour.
Informed consent was obtained at the start of interviews. Participants were offered the opportunity to ask any questions about the process (but the researcher did not answer any questions relating to the evidence briefing service itself in order to avoid bias) prior to giving consent. Interviewees were given the opportunity to view direct quotations (and their immediate context) prior to publication.
In total, 39 participants were contacted and invited to participate. Of these, 21 agreed to participate, one delegated participation to a colleague and four agreed to discuss participation but despite repeated attempts were unable to schedule a time to do so. Seven participants declined (one no longer worked at the CCG, two declined because of time commitments and lack of knowledge of the evidence briefing service, one because of a job change and four gave no reason). The remaining six participants did not respond to repeated invitations.
Analysis and data integration
This was a mixed-methods study using a sequential explanatory strategy. Initial integration was of the three forms of qualitative data. Data from interview, observation and documentary analysis were uploaded into analysis software and combined to generate a descriptive account of the use of evidence in decision-making within each case. The primary point of data integration was the analysis stage in which themes generated by qualitative analysis were used to help us to understand variation in quantitative outcomes.
Qualitative analysis
Analysis was by constant comparison and used the qualitative data analysis software package NVivo, version 10 (QSR International Pty Ltd, Melbourne, VIC, Australia) to organise and manage the data. Our analytical approach was both deductive (developing themes from the research questions and survey instruments employed) and inductive (new themes emerging from the accounts of key informants). This process was iterative, the researcher returned to the original data several times, reviewing codes and revising each case-study narrative. During this process, data were integrated in three ways. First, interviews were categorised according to the intervention received and differences in the themes generated by each interview were compared and contrasted across each case. Once all data had been collected, one researcher (KF) developed a coding framework based on initial readings of the interview data without grouping by case. Cases were coded systematically with categories emerging from the data itself as well as from the research questions and theories and research literature relating to evidence-informed decision-making. These categories were reviewed by members of the research team (CT, ML, PW and KF) in order to focus the next iteration of coding. KF then reviewed and recoded all transcripts grouped as case studies. At the same time KF conducted text searches of all documentation and observation notes (text searches and manual review of observation notes) to understand the role of evidence obtained from research in decision-making. Identified terms were examined individually to understand the textual context of its use. Finally, themes generated by interviews were compared with those arising from documentary evidence to identify any conflict or consistency between local perceptions of the use of evidence and recorded use of evidence. Analysis of each type of data was integrated into case summaries for each of the three CCG case studies. For example, evidence of the use of research in documentation was used to explore support or refute descriptions given in interviews. Transcripts were randomly selected for review by CT to identify additional themes and to challenge conclusions made by KF.
Summaries describing the characteristics of each case and the local health economy were developed by two researchers (ML and LB). These were used to set the context of the case study and to inform discussion. Some themes were identified in advance from the research questions and theories and research literature relating to evidence-informed decision-making, others emerged from the data during analysis. The researcher was also alerted to concepts and themes while observing meetings during the intervention period. These were explored or reignited during the interview analysis period. The researcher sought confirmation or deviation from these concepts in transcripts and by revisiting notes from observations. Case summaries were developed that drew on data from all sources. Once these had been created, KF returned to the original data to identify any deviation from the narratives created. The second point of data integration was the analysis stage in which themes generated by qualitative analysis were used to help us to understand variation in quantitative outcomes.
Ethics and governance
This study was granted research ethics permission by the Department of Health Sciences, University of York Research Ethics Board. Appropriate research governance approval was also obtained.
Organisation-level consent granting permission to contact staff was obtained from each participating CCG. Individual participants had the opportunity to discuss any aspect of the study and their involvement in it with the research team at any stage of the study. Completion of questionnaires was anonymised and CCGs were informed of response rates but not of individuals’ participation. Interview participants and those present at observed meetings gave informed consent to their participation. None of the interventions involved any direct risks or burdens to the CCGs involved.
Patient and public involvement
The primary focus of this study was interventions targeted at NHS staff undertaking core roles within CCGs, so the active involvement of the public or service users in the design of this project was not sought. Patient and public involvement was provided through lay representation on the Project Advisory Group and through the development of the Plain English summary. We also committed to produce a summary of our findings in plain English and to ensure that these are shared with lay members of governing bodies in all of the participating CCGs.
Chapter 3 The evidence briefing service
The evidence briefing service was provided by team members at the CRD, University of York. In response to CCG requests, the team followed a well-established methodology to produce summaries of the available evidence together with the implications for practice within an agreed time frame. This chapter describes the introduction of the service to the intervention arm of the study, production of the briefings and the topics covered, including detailed examples.
Introducing the service
For the five participating CCGs allocated to receive contact via interventions A and B, we offered to attend the next available executive team meeting to explain the nature of the evidence briefing service and the aims of the study. Three of the five CCGs accepted the offer. Face-to-face meetings were arranged with representatives of the remaining two CCGs (who were also two of the three CCGs likely to merge). At each meeting, we outlined the aims of the study and highlighted the free advice and support for evidence-informed commissioning being made available from the CRD. Specifically, we offered help with clarifying issues and formulating questions, and advice on how to make best use of the research evidence relevant to the commissioning challenges each CCG faced. Recent work on telehealth undertaken for a CCG outside the study setting was used to illustrate how the evidence briefing process worked and what the CCG could expect in terms of a response to any questions it raised. At the meetings, we emphasised that participation in the study would help the CCG to fulfil its statutory duties under the Health and Social Care Act 2012,3 but also stressed that, as this was a demand-led service, the extent to which the CCG engaged with it was entirely at its own discretion.
After each meeting, a personalised e-mail was sent to all Executive Team members, clinical leads and commissioning managers within the CCG restating the aims of the study and the nature of the offer from the CRD.
For co-ordination purposes, we suggested that each CCG nominate a senior person with whom we could liaise and who could act as the conduit for all CCG requests. Once named contacts were identified, they were invited to discuss areas of interest with their colleagues and to get in touch with the evidence briefing team to discuss their needs. We then met each named contact individually, face to face, to discuss the evidence briefing process and the nature of the support being offered, and to identify any initial CCG priorities.
Producing the evidence briefings
The process for producing evidence briefings followed that developed as part of the TRiP-LaB (Translating Research into Practice in Leeds and Bradford) theme of the NIHR CLAHRC for Leeds, York and Bradford and by the CRD as part of its core contract under the NIHR Systematic Reviews Programme. 30
On receipt of each request, an attempt was made to define the research question to be addressed in terms of population, intervention, comparator and outcome. 49 This was done via discussion with the named contact and/or the individual(s) making the request. Discussions rarely involved more than three named individuals, as decision-making processes were found to be largely informal and rarely involved minuted meetings or gatherings of CCG staff. Most interactions around priority topics and questions were either telephone or e-mail based (> 500 e-mails relating to the formulation of questions and the production and dissemination of briefings were received or sent over the course of the study). Relevant contextual information, in particular the background to the request being made, was also sought from the individuals making the request.
In some instances, interest in a topic was identified but a specific research question could not be framed initially. In such cases, we produced evidence notes, which aimed to provide a quick scope of the available evidence in the area. This then helped to frame the question(s) to be explored by subsequent, more focused, evidence briefings.
Identifying the content
As with our earlier developmental work,30 the evidence briefings were based on existing sources of synthesised, quality-assessed evidence and applied to the local context. Searches for relevant systematic reviews and economic evaluations were performed by the researchers responsible for each briefing. The core sources searched for evidence were:
-
Database of Abstracts of Reviews of Effects (DARE)
-
NHS Economic Evaluation Database (NHS EED)
-
Health Technology Assessment (HTA) database
-
International Prospective Register of Systematic Reviews (PROSPERO)
-
Cochrane Database of Systematic Reviews (CDSR).
During the course of the study, NIHR funding for the production of two databases, DARE and NHS EED, ceased. The CRD continued to conduct weekly searches for systematic reviews and economic evaluations until the end of December 2014. From January 2015 onwards, when searching for systematic reviews, the briefing researchers undertook additional searches of PubMed using the ‘Review’ filter and NHS Evidence using the ‘Systematic review’ filter.
For topics that were likely to be impacted by national guidance, we searched the National Institute for Health and Care Excellence (NICE) website. Additional sources were also searched for relevant policy reports and for other grey literature. These included the websites of The King’s Fund, Nuffield Trust, Health Foundation, Nesta, NHS England and the NIHR Journals Library. If systematic review evidence was limited, recent primary studies (published from 2010 onwards) were identified by searches of PubMed.
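Purely as an illustration of what such a supplementary search might look like if scripted, the following sketch uses Biopython’s Entrez wrapper around the PubMed E-utilities. This is an assumption for illustration only: the query term and e-mail address are placeholders, and the report does not state that searches were scripted rather than run through the PubMed web interface.

```python
# A hedged sketch only: scripting a PubMed search for reviews published from
# 2010 onwards. The query term and e-mail address are placeholders; they are
# not taken from the report.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # NCBI requires a contact address

query = '"social prescribing" AND Review[ptyp] AND ("2010"[PDAT] : "3000"[PDAT])'
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"])    # total number of matching records
print(record["IdList"])   # PubMed IDs of the first 20 matches
```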
Data extraction and quality assessment
We stored the literature search results in a reference management database [EndNote (Thomson Reuters, CA, USA)]. One researcher screened all titles and abstracts obtained through the searches for potentially relevant content. Two researchers then independently made decisions on the content most relevant to the questions to be addressed. Once selected, data were extracted into summary tables by one researcher and checked by another. Throughout this process, discrepancies were resolved by consensus or, where necessary, by recourse to a third researcher.
Systematic reviews and economic evaluations included in DARE and NHS EED meet basic criteria for quality, and a significant number have been critically appraised in a structured abstract. Where a critical abstract was not available, or a review was identified through other sources, we applied the well-established CRD critical appraisal processes for DARE and NHS EED (see www.crd.york.ac.uk/crdweb/HomePage.asp). For systematic reviews, the specific aspects assessed were the adequacy of the search; the assessment of risk of bias in included studies; whether or not study quality was taken into account in the analysis and differences between studies were accounted for; any investigation of statistical heterogeneity; and whether or not the review conclusions were justified. When systematic review evidence was limited and primary research was identified, quality was assessed using the appropriate Critical Appraisal Skills Programme tool for the study design. 50 We included only evidence from primary studies that were judged to be well conducted. Quality assessments were performed by one researcher and checked by a second; discrepancies were resolved by consensus or recourse to a third researcher where necessary.
Presentation and dissemination
The presentational format for evidence briefings was based on our previous experience of producing the renowned Effective Health Care and Effectiveness Matters series of bulletins (www.york.ac.uk/crd/publications/archive/)51,52 and on CRD guidance on disseminating the findings of systematic reviews. 49 With the exception of the independent appraisal of the evidence underpinning the proposed policies for musculoskeletal (MSK) procedures, evidence briefings took the following format:
-
front page bullet point summary of key actionable messages
-
background section describing the topic and the local context
-
evidence of effectiveness: a summary of systematic review findings (or primary studies if necessary); critical appraisal of the strength of the evidence; assessment of generalisability
-
evidence of cost-effectiveness: summary of economic evaluations and their findings; critical appraisal
-
implementation considerations based on the evidence, for example, implications for service delivery, patient and process outcomes, and health equity
-
references.
Evidence briefings were formatted using InDesign (Adobe, San Jose, CA, USA) desktop publishing software and were reviewed and edited by a second researcher and the principal investigator before being approved for circulation. Once approved, evidence briefings (and evidence notes) were e-mailed as an attachment to the named contact and to the individual(s) who made the initial request. The e-mail included the headline messages from the briefing, a request to circulate and an offer both to discuss the findings further (either by telephone or face to face) and to respond to any questions or clarifications that readers may have. Each evidence briefing was also e-mailed to the named contacts at other CCGs using the same format.
Each evidence briefing was published (with metadata) on the CRD website, and a record added to the HTA database. HTA database records contain full bibliographic details, hyperlinks and contact information for the organisation publishing the report. Indexing on the HTA database increased the likelihood that anyone searching for related terms on linked platforms such as The Cochrane Library, NHS Evidence, TRiP (Turning Research into Practice) Database and The Knowledge Network of NHS Scotland would identify any relevant evidence briefing as part of their search.
Questions addressed by the evidence briefing service
Over the course of the study we addressed 24 questions raised by the participating CCGs, 17 of which were addressed during the intervention phase (see Table 1). The majority of requests focused on options for the delivery and organisation of a range of services and ways of working rather than on the effects of individual interventions. Vignettes for each topic addressed are presented in Appendix 3. The evidence briefings are available at www.york.ac.uk/crd/publications/evidence-briefings/ (accessed 9 June 2016).
Types of evidence use
Requests for evidence briefings from the CCGs served different purposes. Four broad categories of research use have been proposed. 8,53,54 Conceptual use is when new ideas or understanding are provided and, although not acted on in direct and immediate ways, these influence thinking about options for change. Instrumental use is when evidence directly informs a discrete decision-making process (e.g. a yes/no or invest/disinvest decision). Symbolic (or tactical) use refers to those instances in which research evidence is used to justify or lend weight to pre-existing intentions and actions. The final category, imposed use, arises when there are organisational, legislative or funding requirements that research be used.
For each evidence briefing and note produced, we employed these categories to classify the underlying purpose driving the type of research use. Although our interpretation is subjective, the classification presented in Table 1 is derived from a consensus-based approach. Most of the requests we received were categorised as conceptual. These were not directly linked to discrete decisions or actions but were requested to provide new understanding about possible options for future actions. Symbolic drivers for evidence requests included a pre-existing decision to close a walk-in centre, a successful challenge fund bid to implement self-care and decisions to implement GP telephone consultations. Questions categorised as instrumental related to explicit disinvestment or investment decisions. There were no instances that we considered to represent an imposed use of research.
Source | Topic | Question | Date asked | Output produced | Way in which the research was used |
---|---|---|---|---|---|
A1 | Urgent care services | Evidence for implementing an ‘urgent care hub’, consolidating out-of-hours provision on a single site adjacent to an A&E department, with front-door triage assessing patients for both facilities | November 2013 | Evidence briefing | Symbolic |
A1 | Supporting self-management: helping people manage long-term conditions | Rapid summary of the evidence relating to self-care | January 2014 | Evidence note | Symbolic |
All | Urgent care services | Evidence to inform urgent and emergency care systems | March 2014 | Evidence briefing | Conceptual |
A1 | Loneliness and social isolation | Interventions to reduce loneliness and social isolation, particularly in elderly people | April 2014 | Evidence briefing | Conceptual |
A1 | Supporting self-management: helping people manage long-term conditions | Self-care support for people with COPD | April 2014 | Evidence briefing | Conceptual |
All | Low-value interventions | Identify relevant recommendations from the NICE Do Not Do database | May 2014 | Evidence note | Conceptual |
A2, All | Low-value interventions | | July 2014 | Evidence briefing | Instrumental |
A1 | Community pharmacy minor ailments service | Identify evidence to inform a review of the community pharmacy minor ailments service | July 2014 | Evidence note | Conceptual |
A1 | Integrated community teams | Evidence for effects of integrated community teams including any examples of best practice | August 2014 | Evidence note | Conceptual |
A2 | Psychiatric liaison | Models of psychiatric liaison implemented in general hospital settings | July 2014 | Evidence note | Instrumental |
A1 | ‘One-stop shop’ screening model for diabetes | Does implementing a comprehensive one-stop shop annual review and screening model for diabetes have an adverse impact on either the quality or uptake? | September 2014 | Evidence note | Symbolic |
A2 | Frailty | What evidence/validated tools are there for frailty risk profiling in an A&E context? | October 2014 | Short e-mail note sufficient to address question. Later followed up with a related issue of Effectiveness Matters on recognising and managing frailty in primary care | Conceptual |
A2 | Unplanned admissions from care homes | What is the evidence for effects of interventions to reduce inappropriate admissions and deaths in hospital of patients from care homes | October 2014 | Evidence briefing | Conceptual |
A2 | Social prescribing | What is the effectiveness and cost-effectiveness evidence of social-prescribing programmes in primary care? | October 2014 | Evidence note, later updated into an evidence briefing | Conceptual |
A1 | Supporting self-management: helping people manage long-term conditions | What is the evidence for the effects of mobile telephone apps to help people to manage their own care? | November 2014 | Evidence note | Instrumental |
A1 | Supporting self-management: helping people manage long-term conditions | What is the evidence for the effects of interventions to promote shared decision-making? | November 2014 | Evidence note | Conceptual |
A1 | Supporting self-management: helping people manage long-term conditions | What is the evidence for interventions to promote patient-centred care-planning consultations? | November 2014 | Evidence briefing | Conceptual |
A1 | Supporting self-management: helping people manage long-term conditions | Evidence for lay-led self-care education programmes generally as part of creating an environment and culture that supports self-care | November 2014 | Evidence briefing | Conceptual |
A1 | Supporting self-management: helping people manage long-term conditions | An evidence-based steer on how to give patients the confidence and skills to effectively self-manage their long-term conditions | November 2014 | Evidence briefing | Conceptual |
A2 | Accountable care organisations | What is the evidence for accountable care organisations? | April 2015 | Evidence note | Conceptual |
A2 | Enhancing access in primary care | What is the evidence for extended hours, telephone consultation/triage, and role substitution in enhancing access to primary care? | June 2015 | Evidence briefing | Conceptual |
A2 | Telehealth for COPD | What lessons can be learned from previous evaluations of the implementation of telehealth for COPD? | July 2015 | Evidence note | Instrumental |
A1 | Participatory democracy | What is the evidence for different methods of patient/public engagement in decision-making | August 2015 | Evidence note | Conceptual |
All | Low-value interventions: existing hernia and hysterectomy policies | Independent review of evidence for existing hernia and hysterectomy policies | August 2015 | | Instrumental |
In addition to the evidence briefings and notes, we also circulated other CRD-generated content known to be of relevance and interest to participating CCGs. Effectiveness Matters is a short, four-page summary of research evidence about the effects of important interventions, aimed at practitioners and decision-makers in the NHS.
During the study period, a number of these bulletins were produced by the CRD in collaboration with the Improvement Academy of the Yorkshire and Humber Academic Health Science Network [www.york.ac.uk/crd/publications/effectiveness-matters (accessed 9 June 2016)]. When topics aligned with the stated areas of interest of the intervention CCGs, relevant issues of Effectiveness Matters were circulated to the named contacts for onward dissemination within the CCG. The issues of Effectiveness Matters that were circulated were as follows:
-
Dementia carers: evidence about ways of providing information, support and services to meet the needs of carers for people with dementia (May 2014)
-
Preventing pressure ulcers in hospital and community care settings (October 2014)
-
Preventing falls in hospital and community care settings (October 2014)
-
Recognising and managing frailty in primary care (January 2015)
-
Acute kidney injury: introducing the 5 ‘R’s approach (December 2015).
Other questions raised but not addressed
Other topics of interest were raised by CCGs around the beginning of the intervention period but were not addressed as individual evidence briefings or notes. Some were not pursued, as CCGs deemed other topics to be of higher priority, whereas others were constituent parts of other published or planned briefings. Details of questions raised are presented in Table 2.
Source | Topic | Covered by other outputs? |
---|---|---|
Urgent and emergency care | ||
A2 | Triaging minor ailments out of A&E | Covered by urgent care services briefing |
Elderly care | ||
A2 | Risk stratification for frail elderly | Covered by short e-mail note on validated tools for frailty risk profiling in an A&E context and Effectiveness Matters: Recognising and Managing Frailty in Primary Care (Spring 2015) |
B1–3 | Seamless falls service | Covered by Effectiveness Matters: Preventing Falls in Hospital and Community Care Settings (Autumn 2014) |
A1 | Falls pathway review | Covered by Effectiveness Matters: Preventing Falls in Hospital and Community Care Settings (Autumn 2014) |
A1 | Do regular reviews including an agreed care plan of management reduce unnecessary admissions and attendances and improve patient preferences for end-of-life care? | Circulated earlier CRD evidence briefing on advanced care planning |
Community-based care | ||
B1–3 | Multidisciplinary preventative community care including supported discharge, virtual wards and GP-led case management | Some aspects covered by unplanned admissions from care homes briefing |
Mental health | ||
A2 | Evidence to inform the delivery of new community-based care pathways for adult mental health services | Circulated earlier CRD evidence briefing on integrated pathways in mental health |
A2 | Child and adolescent mental health service early intervention and prevention | Not addressed |
A1 | Substance misuse liaison in urgent/emergency care | Not addressed |
Neurology | ||
A1 | For patients with a neurological diagnosis, does access to a local multidisciplinary hub help improve well-being and reduce unnecessary health-care attendances and long-stay admissions? | Not addressed |
Prescribing | ||
A2 | Reducing prescribing spend and waste | Not addressed |
A1 | Medicines management in care homes | Not addressed |
Training
The evidence briefing team offered to provide training on how to acquire, assess, adapt and apply synthesised evidence to those CCGs receiving interventions A and B. The formal offer was made at baseline to named contacts and separately to all staff. Separate informal offers were made to named contacts throughout the course of the intervention. Two CCGs (A2 and B3) expressed interest in receiving training on identifying and using research evidence but were then unable to respond to requests to suggest dates and times for the training to take place. The team therefore opted instead to devise and circulate a two-page guide for commissioners on using evidence to support decision-making, based on the process for developing evidence briefings (see Appendix 5). The guide was circulated in April 2015 and made available on the CRD website at www.york.ac.uk/media/crd/Process%20flowchart_expanded%20FINAL.pdf (accessed 9 June 2016).
What follows are three exemplars of the conceptual, instrumental and symbolic uses of research encountered during the study.
Conceptual use of research evidence: social prescribing
In October 2014 we were approached by the A2 CCG and asked if we could provide evidence on the effectiveness of social-prescribing schemes. The CCG provided a short briefing report, which outlined that it was considering introducing a pilot scheme in one locality to improve the health and well-being of people with long-term conditions. It was envisaged that people would be referred to community-based services that would complement traditional medical interventions and that these would help people to manage their conditions better by learning new skills in self-management and avoid costly interventions in specialist care in the longer term. The CCG recognised that partnership with the voluntary sector could provide increased choice and value for money and that services could be more closely tailored to the needs of the community.
The CCG was particularly interested in any evidence that social prescribing reduced primary, secondary and community care workloads and service utilisation, and in any evidence of cost-effectiveness (a specific request for cost per quality-adjusted life-year gained). After visiting a few high-profile schemes, the CCG was concerned that its plans to introduce small-scale social prescribing on a ‘shoestring’ might not be effective and/or sustainable. It hoped that a review of the evidence would help to justify the small investment needed to get the scheme off the ground and to ensure that the investment would pay back in the longer term.
A quick (10-day) response was requested, so we opted to compile a short evidence note rather than a full briefing. We searched the DARE, NHS EED and CDSR databases for relevant systematic reviews and economic evaluations. These initial searches revealed little relevant evidence, so we also conducted quick searches of MEDLINE, Applied Social Sciences Index and Abstracts (ASSIA), Social Policy and Practice, NICE, Social Care Institute for Excellence (SCIE) and NHS Evidence to locate details of any relevant guidance, case studies or service evaluations.
Overall, we found little supporting evidence to inform the commissioning of a social-prescribing programme. The identified evidence was characterised by brief descriptions of small-scale projects and failed to provide sufficient detail to judge either success or value for money. Rigorous evaluation of the cost-effectiveness of social-prescribing schemes was also lacking.
On feeding back these findings to the CCG, it highlighted that it knew of the existence of an evaluation of a scheme that it had visited but that was not included in the evidence note. The evaluation was of interest, as the scheme had shown areas of improvement and possible savings. We explained that, as we had not yet searched for grey literature, the report had not been identified, but that we would appraise it separately. Although detailed, the evaluation presented a number of significant limiting factors, which were in line with the overall findings of the evidence note. Specifically, the uncontrolled before-and-after evaluation failed to address the counterfactual, potential confounders and the issue of regression to the mean. The report also lacked detail about the types of patients included in the analysis – what conditions they had and what interventions they received. The CCG indicated that it found the additional information ‘very helpful’ and noted the absence of evidence. The CCG opted to proceed with developing a pilot social-prescribing programme in conjunction with the local authority.
Before circulating the findings more widely, we decided to convert the evidence note into an evidence briefing. As we were aware that we had missed an evaluation, we conducted updates of our initial searches and undertook systematic searches for individual studies and for grey literature. Plans to convert this work into a systematic review were also registered with PROSPERO (CRD42015023501). The evidence briefing was circulated to all participating CCGs in March 2015.
Once publicly available, the evidence briefing generated media interest, with the briefing’s headline message of a lack of evidence featuring in the Guardian newspaper. The team also received a number of enquiries from CCGs and Health and Wellbeing Boards located elsewhere in England and Scotland. All of the enquiries focused on evaluation, asking how, given the absence of evidence, the effects of social-prescribing schemes should be evaluated in ways that would improve the existing evidence base.
In July 2015, we were contacted by members of the public health team in a local council in the geographical area in which the study was based. They had been asked to summarise the latest evidence related to social prescribing and indicated that this was an area that the council in conjunction with B1 CCG was keen to explore further. They had found the evidence briefing through a search of NHS Evidence and were unaware that the briefing had previously been circulated to contacts within B1 CCG or of the plans of A2 CCG to develop a pilot programme.
They asked if we had any plans to update the briefing and if we would be willing to present the evidence base around social prescribing at a workshop being held for the Health and Wellbeing Board. The workshop took place in November 2015 and brought together local councillors, NHS organisations, third-sector agencies and representatives from social-prescribing schemes to explore if and how best to take social prescribing forward in the area. Our contribution was to present on ways to improve the measurement and evaluation of social-prescribing schemes. The Health and Wellbeing Board indicated that it intended to proceed with developing plans for a pilot social-prescribing programme in 2016.
Instrumental use of research evidence: low-value interventions
In early discussions with the named contact for the B3 CCG about their priority areas, low-value interventions were identified as a major area of interest for all CCGs in the region.
Low-value interventions are those treatments that are considered to be of no or low clinical benefit or that are not cost-effective compared with treatment alternatives. A region-wide list of low-value procedures was established in 2010. The aim of the value-based commissioning policy (VBCP) list was to provide local GPs with clear criteria for funding and referral, and to ensure that policies were applied consistently across all PCTs in the region. Each PCT had run its own individual funding request (IFR) process to handle any requests (on exceptionality grounds) that fell outside the commissioning criteria. The regional VBCP list included 39 procedures and was last reviewed and updated in 2012; this last update predated the transition of commissioning arrangements from PCTs to CCGs.
The initial exchange of e-mails focused on the practical challenges in identifying and implementing low-value policies, and a paper on experiences from a NICE and Cochrane project was circulated. 55 The named contact revealed that they were about to start heading up a project group, representing a cluster of seven CCGs, to examine procedures of limited clinical value being undertaken in secondary care. The aim of the collective work was to consider the inclusion of a wider range of procedures on the regional VBCP list. The group also planned to assess the usability of data monitoring reports and to look at how IFR policies were being implemented, with a view to developing strategies to ensure more effective implementation of policies. The CRD intervention team was invited to attend the meetings.
As a first contribution, we offered to identify any further policies that might be considered going forward. A systematic search of the NICE Do Not Do database and Cochrane Quality and Productivity topic reports yielded a list of 36 potential topics for consideration. These were presented to the group in summary form in July 2014.
At the September meeting of the VBCP Implementation Group, the lead for the A2 CCG presented an MSK resource pack they had compiled. They indicated that it had been developed as part of a cost-saving exercise and anticipated that its implementation would reduce the number of referrals as well as ensure the appropriateness of referrals from GPs. The resource pack included 16 policies, 14 of which were not included in the regional VBCP list. The pack had been compiled from existing policies identified at other CCGs across the country. It was also ‘sense checked’ by a consultant in public health who had been involved in the compilation of the original regional VBCP list. Discussions had taken place with colleagues in the local provider trust who, it was reported, had not expressed any concerns about the proposed policies.
The A2 CCG had already asked its member practices to implement the new MSK procedures in addition to the existing 39 procedures. However, a regional web-based system to manage IFRs from GPs in all CCGs had recently been introduced. This meant that any IFRs derived from this new list would have to be processed separately. GPs making IFRs from the new list were being asked to complete a paper-based checklist and incorporate this into the patient notes and referral request. The A2 CCG therefore hoped that all the other local CCGs would adopt the MSK policies and that these would become incorporated into the regional VBCP list and the web-based system. It asked if this could be taken forward for consideration by individual CCGs and if an indicative stance could be provided at the next meeting.
There was some discussion about the provenance of the resource pack and the lack of involvement of other CCGs in its development. To assist the deliberation process, the CRD intervention team offered to undertake an independent and systematic appraisal of the evidence underpinning the proposed policies for MSK procedures. It was agreed that a preliminary assessment would be presented at the next meeting.
As no established local process appeared to be in place for assessing the evidence for proposed policies, the CRD team devised a simple process to appraise the 14 policies not included in the regional VBCP list. Figure 2 illustrates the process. We searched DARE, the HTA database and CDSR for potentially relevant systematic reviews and conducted web searches to identify relevant NICE or national specialty guidance. Taking each policy in turn, we asked the following questions:
-
Is the proposed policy or change based on NICE or nationally recognised specialty guidance?
-
Is the guidance up to date?
-
Does the wording of the proposed policy or change match current evidence?
A one-page summary of the CRD preliminary assessments was presented to the VBCP Implementation Group at their next meeting. For clarity, we used a traffic light system to indicate the extent to which each proposed policy was supported by guidance and/or evidence from systematic reviews. Nine policies were rated green, indicating that they were supported by national guidance recommendations and/or good-quality evidence from systematic reviews. Five policies were rated amber, indicating that there was no explicit national guideline recommendation but that the proposed policy reflected current evidence (of low or moderate quality). None of the proposed policies received a red rating, which would have been used for any proposed policy that contradicted national guidance and/or was not supported by evidence from good-quality systematic reviews. A sketch of this classification logic is given below.
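The rating rules just described can be expressed compactly; the following minimal sketch encodes them with hypothetical field names (the CRD team applied the criteria by hand, not in software):

```python
# A minimal, hypothetical encoding of the traffic light logic described above.
from dataclasses import dataclass

@dataclass
class PolicyAppraisal:
    name: str
    guidance_support: bool   # explicit NICE/national specialty guidance recommendation
    good_sr_support: bool    # supported by good-quality systematic reviews
    reflects_evidence: bool  # consistent with current (low/moderate-quality) evidence
    contradicts: bool        # contradicts guidance and/or good-quality reviews

def traffic_light(policy: PolicyAppraisal) -> str:
    """Rate a proposed low-value policy as 'green', 'amber' or 'red'."""
    if policy.contradicts:
        return "red"
    if policy.guidance_support or policy.good_sr_support:
        return "green"
    if policy.reflects_evidence:
        return "amber"
    return "red"  # no supporting evidence identified at all

# Example: guidance support alone is enough for a green rating.
example = PolicyAppraisal("hypothetical policy", True, False, False, False)
print(traffic_light(example))  # -> green
```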
The preliminary assessments were well received by the CCG and the ‘York review’ was deemed to have provided reassurance. There appeared to be a consensus that moves to include these policies should be part of the natural progression of the regional VBCP list. The CCG also felt that the inclusion of the additional policies could potentially assist in waiting list management.
Individual CCGs would need to ratify the revised VBCP list. At the meeting there was recognition that the project group represented only seven of the 14 CCGs that would need to ratify the revised list if it was to be adopted. As a similar implementation group existed for the other seven CCGs, individual CCG ratification would be sought via that panel. The proposed revisions would also be sent to a geographically distant CCG that had been invited to join the group but could not attend. The team prepared a briefing including the traffic light indications and process flow diagram for circulation with the revised VBCP list.
At this point, moves to merge the two project groups into one also began. This revised grouping met in April 2015. The sign-off for the incorporation of the proposed policies into the regional VBCP list of commissioning intentions for 2016 occurred at the October 2015 meeting.
Symbolic use of research evidence: self-care
In 2013, the National Collaboration for Integrated Care and Support committed to support a number of local integration pioneers that would test new models of commissioning, organising and delivering integrated care and support at scale and pace. 56
Clinical Commissioning Group A1 was part of a successful pioneer bid that planned to implement a comprehensive 5-year programme promoting self-care through all health, care and community services. The local programme, described as complex and transformational, aims to deliver self-care as the accepted norm of practice across the whole system. The programme has three objectives: (1) to shift the culture from helping the public to helping the public to help themselves, (2) to help staff to support local people’s ability to better manage their long-term conditions and day-to-day lives and (3) to reduce over-reliance on statutory services. Four key performance indicators were to provide a focus for measuring the impact of the programme. These were:
-
proportion of people who use services who have control over their daily life
-
proportion of people feeling supported to manage their condition
-
proportion of pregnant women smoking at time of delivery
-
unplanned hospitalisation for chronic ambulatory care sensitive conditions.
The programme was in its early stages when the intervention phase started, and the A1 named contact asked for help in ‘providing evidence to support or contradict some of the assumptions we are making’ and whether or not the team could provide a ‘quick and dirty’ appraisal of the evidence relating to the following questions, as quickly and as briefly as possible:
-
Does self-care improve well-being?
-
Does self-care improve health outcomes?
-
Does self-care reduce demand on unplanned health services?
-
Does self-care reduce demand on unplanned social services?
-
What are the strategies for encouraging self-care among staff and the public?
Answers to the first four questions were deemed to be most crucial to help build up a case for change and to develop a vision. The CCG also highlighted a King’s Fund report that it felt would give an idea of what it was trying to achieve by involving social care and the third-sector agencies as well. 57
It also forwarded a Nesta report, ‘The Business Case for People Powered Health’,58 that it thought would probably help the evidence appraisal.
An initial search of DARE, NHS EED, the HTA database and CDSR and the websites of the Health Foundation, The King’s Fund and Nesta revealed a large number of potentially relevant reports, systematic reviews and economic evaluations. As a quick (10-day) response was requested, we opted to compile a short evidence note based on an appraisal of two overview reports, one from the Health Foundation and one from Nesta.
Our two-page summary highlighted that both reports made some attempt to systematically identify and appraise relevant evidence, although neither adopted a very rigorous or reproducible approach. Overall, there was reasonable evidence that self-management support and related interventions can produce improvements in outcomes for patients with long-term conditions (including most of the outcomes specified by the CCG). However, there was considerable uncertainty around the magnitude of benefits, the cost of interventions and which patient/population groups would benefit most. Detail about specific self-management interventions, their delivery settings and what was actually implemented was also lacking.
We suggested that rather than adopting a whole-systems approach to self-care from the outset, it may be more beneficial to focus on the groups and conditions that would benefit most. Priority should then be given to identifying the self-care interventions most likely to be effective in these groups and to considering ways of overcoming the documented barriers to implementation. The offer was made to discuss how best to further interrogate the evidence base once the CCG and Operating Group had had an opportunity to digest and discuss this initial sift.
Although the initial request can be viewed as using evidence symbolically to lend weight to an existing course of action, much of what followed in terms of request was more conceptual in nature. The CCG made a series of requests, which, although not directly linked to discrete decisions or actions, could be seen to influence their understanding and thinking on how best to deliver self-care.
We were aware that self-care for chronic obstructive pulmonary disease (COPD) was an area of interest for other participating CCGs (B1–3). Following on from this initial sift and further discussions with the A1 named contact about priorities, we offered to produce an evidence briefing focused on self-care support for people with COPD (separate work on loneliness and social isolation also emerged from these discussions). We suggested a scope that looked at multicomponent interventions (including elements such as education, telephone support and action plans) and pulmonary rehabilitation. This scope was agreed at a face-to-face meeting. The A1 named contact also asked that, rather than using our current briefing format, we consider a simpler summary format to aid group discussion. An infographic or pictorial representation (i.e. smiley faces) was requested, but we offered instead to produce a clearer one-page evidence summary as part of the briefing.
The evidence briefing was circulated to all participating CCGs in July 2014 with a headline message that there was consistent evidence that multicomponent interventions reduce respiratory-related hospital admissions and improve quality of life for people with COPD.
In November 2014, the A1 CCG named contact got in touch again to say that the Pioneer Programme had been generating interest among staff and the public in self-management through a series of workshops titled ‘Changing the Conversation’. He stressed that the programme had a wider focus than the management of long-term conditions and was aiming to encompass a spectrum of activities from promoting healthy lifestyles, to expert patient/shared decision-making through to self-care interventions that could be stratified according to population group. The named contact said that he was struggling to formulate a clear research question and was not sure what to do next in terms of giving patients the confidence and skills to effectively self-manage their physical, mental and social health issues. The CCG was aware of expert patient courses, but said that it could really do with an evidence-based steer on what sorts of programmes or structures it might commission to help people to manage their own care. The named contact stressed that the CCG wanted to avoid investing badly, but was especially interested in anything that could be ‘community led’.
One topic, the effectiveness of mobile telephone applications (apps), was identified as urgent, as the programme group were considering whether or not to commission an app and were meeting with developers the following week. We produced an evidence note based on three systematic reviews and one rapid scope of the literature. Our one-page summary highlighted that despite growing popularity and availability, much of the available evidence is small scale and focuses on development, user testing and feasibility, and that evidence is lacking on the effects of mobile telephone apps on health-related outcomes. After the meeting with the developers, the A1 named contact indicated that they had decided not to pursue mobile telephone apps as an intervention option.
After an initial sift of the evidence base, rather than producing one large briefing, we offered to scope the available evidence under the following broad themes:
-
self-management support for long-term conditions
-
provision of education and supportive (lay-led) interventions to increase patients’ skills and confidence in managing their own health
-
self-care interventions targeting frail elderly populations
-
self-care interventions generally
-
interventions that promote shared decision-making.
The A1 CCG named contact indicated that the staff were planning a brainstorming session to consider all possible interventions. He said that they had learned from our advice regarding mobile telephone apps and were going to consider how to target certain interventions to certain populations or parts of the system. He also mentioned that they had been running workshops to introduce the concept of supported self-management to GP staff and then introduce some of the skills needed by staff to promote it. They were considering using action learning sets of keen staff who wished to implement their learning and needed support in doing so, and would therefore be interested in the evidence for this approach (or for others) that would help create an environment and culture that supports self-care.
The first briefing we produced, in December 2014, focused on lay-led education programmes and was based on two systematic reviews and a scoping review. We highlighted that the evidence suggests lay-led self-care education leads to small, short-term improvements in self-efficacy, self-rated health, cognitive symptom management and frequency of exercise, but that there was no evidence of improved health-related quality of life or of reduced primary care and emergency department visits. On the last point, we were asked to clarify whether this meant no evidence of effect or an absence of evidence. We clarified that it was the former, but with the caveat that participants were relatively healthy and well managed at the outset, making detectable differences at 6–12 months less likely.
Supplementary comments were made in relation to the suggestion that men may want different things from women and that the programme should consider this, or it may inadvertently widen health inequalities. The public health consultant thought that it was really important to ensure that the programme did not widen the inequality gap, although she did not think that it was currently targeting according to need.
The nature of the study participants was also considered. The public health consultant noted that the underlying message appeared to be one of careful targeting. The recruits to the studies were already self-reporting as being in good health, so she asked whether those who reported less good health benefited more or less. She was keen to ensure that they did not just end up recruiting the worried well. The limited but potentially positive evidence in relation to health champions and similar roles was also noted. It was mentioned that this was part of the Every Contact is a Health Improvement Contact work by front-line staff, that it was being evaluated as part of an AHSN bid and that there may be some potential to scale it up.
We mentioned that we had not been able to identify any relevant evidence on action learning sets, but signposted a ‘how to’ toolkit from the Faculty of Public Health that might be helpful. The named contact asked if we could revisit action learning sets when we looked at staff-orientated interventions.
The named contact also mentioned that they were likely to adopt some interventions for which there was no evidence one way or the other, and asked whether the group could call on our help in evaluating them. We said that we could provide advice on what to measure and suggested a separate meeting focused on that.
The next evidence note, produced in January 2015, focused on interventions to promote shared decision-making. Based on five systematic reviews and one overview of reviews, it suggested that, if appropriately tailored, shared decision-making can have beneficial effects on patient-centred outcomes. We offered to look at some of the more successful interventions in more depth if the group wished.
We started work on an evidence briefing on interventions to support self-management in people with long-term conditions, and were asked to present key messages at a forthcoming Programme Operating Group development session. Two NIHR HS&DR-funded reviews formed the basis of the briefing. 59,60 The Reducing Care Utilisation through Self-management Interventions (RECURSIVE) review focused on the effect of self-management on health services utilisation and costs; the Practical systematic Review of Self-Management Support for long-term conditions (PRISMS) review summarised the key components of self-management and looked at issues around implementation. The named contact circulated the PRISMS review to the group while the CRD team were preparing the briefing.
At the meeting, further clarification was requested around who does and does not engage with self-management programmes, as well as more detail on the barriers to and facilitators of patient participation. Evidence relating to self-management in a social care context was also sought. The presentation, with its long-term conditions focus, also generated considerable discussion and concern within the wider programme group around whether or not there was to be a reassessment of the scope of the Pioneer project. Was it now going to focus solely on long-term conditions, or would the goal remain self-care across the full spectrum of public experiences, from those who have limited contact with services to those with regular and increasing contact? It was emphasised that the project remained committed to the latter, while also recognising that there is potential overlap with other initiatives (e.g. Change4Life61).
A marketing company also presented ideas for the Pioneer project at the meeting, and the A1 named contact asked about the effectiveness of public health mass media campaigns. After the event, the CRD researcher responded via e-mail to say that much of the evidence relates to smoking cessation-type interventions and that, although there appears to be reasonable evidence for raising awareness, the evidence is decidedly mixed for changing individual behaviours. At this meeting the named contact again mentioned how much the CCG preferred the shorter format for briefings.
Prior to a ‘shaping self-care’ event in March, the research team circulated the final self-care evidence briefing, on patient-centred consultations, which, we were informed, were being increasingly advocated by NHS England. We highlighted that there is consistent evidence that most interventions promoting patient-centred approaches lead to improvements in the patient-centredness of consultations and that investment in training and skills development for health professionals appears key.
After the intervention phase was complete, a local public health consultant informed the CRD team that, in September 2015, he had been asked to revisit and summarise all the self-care briefings produced for the CCG. He did this in an informal presentation to around 20 colleagues from the CCG and Programme Operating Group. The presentation highlighted the key messages from each briefing and included pictorial representations for value for money, reduced admissions to emergency care, patient satisfaction, reduction in inequalities and the quality of evidence. Four areas for future focus were also proposed, namely (1) staff culture, (2) patient choice, (3) collaborative action planning and (4) further exploration of COPD self-care intervention options.
Chapter 4 Clinical Commissioning Group capacity and intentions to use research
In this chapter, for the sake of brevity, we have abbreviated the intervention descriptions used in tables and reporting. Chapter 2 outlines the details of each intervention; in this chapter the following conventions are used:
-
intervention A = access to the evidence briefing service
-
intervention B = contact plus an unsolicited push of non-tailored evidence
-
intervention C = ‘control’: unsolicited push of non-tailored evidence.
Individuals from each participating CCG were asked to complete baseline and follow-up surveys (see Appendix 1) assessing their organisation’s ability to acquire, assess, adapt and apply research evidence to support decision-making. Each CCG supplied a list of e-mail addresses for potential respondents: 181 at baseline (A = 45; B = 61; C = 75) and 168 at follow-up (A = 43; B = 60; C = 65); none was undeliverable.
Any questionnaires not returned by 30 April 2014 (baseline) and 31 August 2015 (follow-up) were classed as non-responses.
Response rates
In total, 123 questionnaires were returned at baseline (A = 37; B = 54; C = 32), giving a response rate of 68%. Of these, 101 were completed, 13 were deemed to be incomplete (one section or fewer completed) and nine were from individuals declining to participate or indicating that they had departed the CCG.
At the 1-year follow-up, 76 questionnaires were returned (A = 23; B = 28; C = 25), giving a response rate of 44%. Of these, 71 were completed, two were deemed to be incomplete (one section or fewer completed) and three were from individuals declining to participate or indicating that they had departed the CCG.
Characteristics of respondents
Survey respondents reported holding a range of roles within the CCGs (Table 3). Most respondents were highly qualified, but only a minority reported prior experience of commissioning or undertaking research (Table 4). Sites with a lower response rate had a higher proportion of clinically qualified respondents [χ²(2, n = 53) = 6.15; p = 0.05]; this aside, there were no significant differences in the characteristics of respondents receiving the three interventions (an illustrative computation of this test is shown after Table 4).
Role | Frequency | Percentage |
---|---|---|
Executive team and/or directors | 44 | 33.8 |
Clinical lead and/or non-executive GP | 42 | 32.3 |
Commissioning manager | 15 | 11.5 |
Lay member | 5 | 3.8 |
Role not stated | 24 | 18.5 |
Total | 130 | 100.0 |
Characteristic | Response | Intervention A, n | Intervention B, n | Intervention C, n |
---|---|---|---|---|
Formal responsibility for doing or managing research in CCG? | Yes, doing and managing | 5 | 2 | 2 |
| Yes, managing | 3 | 3 | 7 |
| Yes, doing | 1 | 2 | 0 |
| Neither | 28 | 35 | 17 |
Highest educational achievement? | School level | 2 | 0 | 0 |
| Undergraduate | 17 | 27 | 12 |
| Master’s degree | 14 | 13 | 8 |
| Higher degree | 3 | 2 | 6 |
Clinical qualifications? | No | 16 | 8 | 6 |
| Yes | 21 | 34 | 20 |
Worked as a researcher in an academic context | No | 34 | 42 | 24 |
| Yes | 5 | 11 | 13 |
Commissioned research | No | 29 | 47 | 32 |
| Yes | 10 | 6 | 5 |
Been a coapplicant or advisor on a research project | No | 30 | 44 | 30 |
| Yes | 9 | 9 | 7 |
Been employed as a researcher | No | 35 | 49 | 32 |
| Yes | 4 | 4 | 5 |
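The χ² test of independence reported above can be reproduced with standard statistical software. The following is a minimal sketch; the 2 × 3 contingency table is hypothetical (per-site counts are not reported) and is chosen only so that n = 53 matches the text:

```python
# Minimal sketch of a chi-squared test of independence (clinical
# qualification by site). The counts below are illustrative, not study data.
from scipy.stats import chi2_contingency

observed = [
    [12, 9, 4],   # clinically qualified respondents, sites 1-3
    [5, 8, 15],   # non-clinical respondents, sites 1-3
]

chi2, p, dof, expected = chi2_contingency(observed)
n = sum(map(sum, observed))
print(f"chi2({dof}, n = {n}) = {chi2:.2f}; p = {p:.3f}")
```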
Missing data
The proportions of missing data at baseline and follow-up for the variables measuring capacity to use research in decision-making and intention to use research evidence are presented in Table 5. Individuals with missing data did not differ significantly in scores from those for whom complete data existed.
Variable (score) | Complete data, n | Missing data, n (%) |
---|---|---|
Acquire (staff) pre EBS | 109 | 21 (16.15) |
Acquire (sources) pre EBS | 109 | 21 (16.15) |
Assess evidence (staff) pre EBS | 108 | 22 (16.92) |
Assess evidence (external expertise) pre EBS | 108 | 22 (16.92) |
Adapt pre EBS | 107 | 23 (17.69) |
Apply (leadership) pre EBS | 107 | 23 (17.69) |
Apply (decision-making) pre EBS | 106 | 24 (18.46) |
Acquire (staff) post EBS | 61 | 69 (53.08) |
Acquire (sources) post EBS | 61 | 69 (53.08) |
Assess evidence (staff) post EBS | 61 | 69 (53.08) |
Assess evidence (external expertise) post EBS | 61 | 69 (53.08) |
Adapt post EBS | 61 | 69 (53.08) |
Apply (leadership) post EBS | 62 | 68 (52.31) |
Apply (decision-making) post EBS | 62 | 68 (52.31) |
Pre-EBS TPB intention | 105 | 25 (19.23) |
Pre-EBS TPB attitude | 102 | 28 (21.54) |
Pre-EBS TPB norms | 105 | 25 (19.23) |
Pre-EBS TPB PBC | 105 | 25 (19.23) |
Post-EBS TPB intention | 62 | 68 (52.31) |
Post-EBS TPB attitude | 61 | 69 (53.08) |
Post-EBS TPB norms | 62 | 68 (52.31) |
Post-EBS TPB PBC | 62 | 68 (52.31) |
Original and imputed means for the main variables used in the analysis of capacity for and intention to use research are presented in Table 6. As can be seen, the original and imputed means are similar; ANOVA of the means in the original and imputed datasets revealed no significant differences (an illustrative check of this kind is sketched after Table 6).
Score variable | Original mean | Original n | Original SD | Imputed mean | Imputed n | Imputed SD |
---|---|---|---|---|---|---|
Acquire (staff) pre EBS | 2.92 | 109 | 0.71 | 2.98 | 130 | 0.76 |
Acquire (sources) pre EBS | 3.20 | 109 | 0.70 | 3.16 | 130 | 0.74 |
Assess evidence (staff) pre EBS | 3.18 | 108 | 0.74 | 3.24 | 130 | 0.77 |
Assess evidence (external expertise) pre EBS | 3.32 | 108 | 0.76 | 3.28 | 130 | 0.76 |
Adapt pre EBS | 2.94 | 107 | 0.80 | 2.96 | 130 | 0.86 |
Apply (leadership) pre EBS | 3.38 | 107 | 0.62 | 3.33 | 130 | 0.65 |
Apply (decision-making) pre EBS | 3.45 | 106 | 0.61 | 3.47 | 130 | 0.62 |
Pre-CHSRF total score | 3.20 | 110 | 0.56 | 3.20 | 130 | 0.53 |
Acquire (staff) post EBS | 2.91 | 61 | 0.65 | 2.99 | 130 | 1.00 |
Acquire (sources) post EBS | 3.36 | 61 | 0.68 | 3.35 | 130 | 0.73 |
Assess evidence (staff) post EBS | 3.26 | 61 | 0.66 | 3.35 | 130 | 0.75 |
Assess evidence (external expertise) post EBS | 3.51 | 61 | 0.69 | 3.49 | 130 | 0.70 |
Adapt post EBS | 3.19 | 61 | 0.79 | 3.21 | 130 | 0.79 |
Apply (leadership) post EBS | 3.37 | 62 | 0.71 | 3.34 | 130 | 1.22 |
Apply (decision-making) post EBS | 3.49 | 62 | 0.66 | 3.52 | 130 | 0.97 |
Post-EBS CHSRF total score | 3.29 | 62 | 0.56 | 3.35 | 130 | 0.61 |
Pre-EBS TPB intention | 5.50 | 105 | 1.13 | 5.52 | 130 | 1.17 |
Pre-EBS TPB attitude | 6.18 | 102 | 0.80 | 6.19 | 130 | 0.78 |
Pre-EBS TPB norms | 5.13 | 105 | 0.97 | 5.18 | 130 | 1.04 |
Pre-EBS TPB PBC | 4.49 | 105 | 0.82 | 4.53 | 130 | 0.87 |
Post-EBS TPB intention | 5.51 | 62 | 0.97 | 5.47 | 130 | 1.20 |
Post-EBS TPB attitude | 6.11 | 61 | 0.73 | 5.98 | 130 | 1.00 |
Post-EBS TPB norms | 5.06 | 62 | 0.84 | 5.06 | 130 | 1.45 |
Post-EBS TPB PBC | 4.41 | 62 | 0.68 | 4.44 | 130 | 0.85 |
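As an illustration of the original-versus-imputed comparison summarised before Table 6, the following sketch runs a one-way ANOVA on simulated data; this is not the authors’ code, procedure or dataset:

```python
# Illustrative only: compare a variable's complete-case scores with the
# corresponding imputed-dataset scores, as a check that imputation has not
# shifted the mean. Values are simulated from the means/SDs shown in Table 6.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
original = rng.normal(3.20, 0.56, size=110)  # e.g. pre-EBS CHSRF total, n = 110
imputed = np.concatenate([original, rng.normal(3.20, 0.53, size=20)])  # n = 130

f_stat, p_value = f_oneway(original, imputed)
print(f"F = {f_stat:.2f}; p = {p_value:.3f}")  # a large p indicates no shift
```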
Benchmarking against the national picture
Section 1 of the survey instrument was used to collect benchmarking data from other CCGs across England (see Chapter 2 and Appendix 1). The most senior manager (Chief Operating Officer or Chief Clinical Officer) of each CCG was contacted and asked to consult with colleagues and complete the instrument on behalf of their CCG. At baseline we received usable responses from 79 CCGs (a response rate of 39%) and 1 year later at follow-up, we received usable responses from 31 CCGs (a response rate of 15%).
Table 7 illustrates that mean total scores indicated that CCGs had some capacity to make use of research, but were unlikely to be well equipped to do so, or to do so often. The total score (overall capacity) increased marginally, but not significantly, over the year. With the exception of the ability to ‘adapt’ research through summarising it in a more user-friendly way [baseline Canadian Health Services Research Foundation (CHSRF) score M = 3.07, SD = 0.65; 1 year later M = 3.57, SD = 0.58; t(13) = –2.7; p = 0.02], no other significant differences were observed. However, it is important to be cautious in interpreting this,62 as our 1-year follow-up rates were very low and a difference of this magnitude is unlikely to be behaviourally significant.
Baseline/follow-up | Total, mean (SD) | Domain subscale | ||||||
---|---|---|---|---|---|---|---|---|
Acquire, mean (SD) | Assess, mean (SD) | Adapt, mean (SD) | Apply, mean (SD) | |||||
Are we able to acquire research? | Are we looking for research in the right places? | Can we tell if research is valid and high quality? | Can we tell if the research is relevant and applicable? | Can we summarise results in a user-friendly way? | Do we lead by example and show how we value research? | Do our decision-making processes have a place for research? | ||
Baseline (n = 79) | 3.27 (0.53) | 2.90 (0.69) | 3.37 (0.63) | 3.17 (0.71) | 3.35 (0.74) | 3.07 (0.65) | 3.46 (0.65) | 3.44 (0.69) |
Follow-up (n = 31) | 3.34 (0.47) | 3.05 (0.71) | 3.47 (0.67) | 3.02 (0.75) | 3.58 (0.62) | 3.57 (0.58) | 3.15 (0.64) | 3.53 (0.43) |
Significance (p) of change in 12 months | 0.73 | 0.48 | 0.86 | 0.55 | 0.12 | 0.02 | 0.21 | 0.41 |
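The ‘adapt’ comparison above is a paired test on the CCGs that responded at both waves [t(13) implies 14 paired responses]. A minimal sketch, assuming a paired data file with illustrative column names (not the study’s code):

```python
# Minimal sketch of the paired t-test reported for the 'adapt' domain
# (baseline vs 1-year follow-up). File and column names are assumptions.
import pandas as pd
from scipy.stats import ttest_rel

paired = pd.read_csv("benchmark_paired.csv")    # hypothetical: CCGs in both waves

t_stat, p_value = ttest_rel(paired["adapt_baseline"], paired["adapt_followup"])
df = len(paired) - 1                            # t(13) corresponds to 14 paired CCGs
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
```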
Did the evidence briefing service improve Clinical Commissioning Groups’ ability to acquire, assess, adapt and apply research evidence to support decision-making?
We examined the hypothesis (Table 8) that CCGs would differ in their capacity to acquire, assess, adapt and apply research evidence to support decision-making as a result of receiving one of the interventions.
Domain | Intervention received | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
A (n = 39) | B (n = 53) | C (n = 38) | ||||||||||
Baseline | Follow-up | Baseline | Follow-up | Baseline | Follow-up | |||||||
Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | |
Total | 3.24 | 3.07 to 3.41 | 3.32 | 3.12 to 3.51 | 3.14 | 2.99 to 3.28 | 3.31 | 3.14 to 3.48 | 3.26 | 3.08 to 3.43 | 3.42 | 3.22 to 3.62 |
Acquire (staff) | 2.95 | 2.70 to 3.18 | 2.91 | 2.58 to 3.22 | 2.84 | 2.64 to 3.05 | 3.02 | 2.75 to 3.29 | 3.29 | 2.94 to 3.43 | 3.03 | 2.71 to 3.35 |
Acquire (sources) | 3.21 | 2.97 to 3.44 | 3.36 | 3.13 to 3.56 | 3.13 | 2.93 to 3.33 | 3.35 | 3.15 to 3.55 | 3.15 | 2.91 to 3.39 | 3.34 | 3.11 to 3.58 |
Assess evidence (staff) | 3.04 | 2.8 to 3.29 | 3.34 | 3.09 to 3.58 | 3.28 | 3.07 to 3.49 | 3.42 | 3.22 to 3.62 | 3.36 | 3.12 to 3.61 | 3.27 | 3.03 to 3.51 |
Assess evidence (external expertise) | 3.41 | 3.16 to 3.64 | 3.57 | 3.46 to 3.79 | 3.28 | 2.53 to 2.99 | 3.41 | 3.22 to 3.60 | 3.15 | 2.90 to 3.39 | 3.51 | 3.29 to 3.74 |
Adapt | 3.09 | 2.82 to 3.36 | 3.29 | 3.04 to 3.54 | 2.76 | 2.53 to 2.99 | 3.12 | 2.91 to 3.34 | 3.10 | 2.83 to 3.37 | 3.24 | 2.98 to 3.49 |
Apply (leadership) | 3.45 | 3.25 to 3.66 | 3.31 | 2.93 to 3.70 | 3.22 | 3.05 to 3.70 | 3.16 | 2.83 to 3.49 | 3.37 | 3.16 to 3.58 | 3.62 | 3.23 to 4.01 |
Apply (decision-making) | 3.53 | 3.33 to 3.72 | 3.46 | 3.16 to 3.77 | 3.44 | 3.28 to 3.62 | 3.43 | 3.17 to 3.69 | 3.43 | 3.23 to 3.63 | 3.72 | 3.40 to 4.02 |
Overall capacity to acquire, assess, adapt and apply research evidence to support decision-making
The total capacity to acquire, assess, adapt and apply research evidence to support decision-making appeared to improve slightly over time, both in our national survey (see Table 7) and irrespective of the intervention received (see Table 8 and Figure 3). The main effect of time in the factorial ANOVA yielded an F-ratio of F(1,127) = 4.49 (p < 0.05, ηp² = 0.034), indicating a significant increase over time in all three groups of CCGs’ total capacity to acquire, assess, adapt and apply research evidence to support decision-making. The main effect of the evidence briefing service received yielded an F-ratio of F(2,127) = 0.77 (p > 0.05, ηp² = 0.012). The interaction of time and the intervention was also not significant, yielding an F-ratio of F(2,127) = 0.213 (p > 0.05, ηp² = 0.003). Exposure to the intervention therefore had no significant effect on perceived CCG capacity.
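The analyses in this section are 2 (time: baseline, follow-up) × 3 (intervention: A, B, C) mixed-design ANOVAs reporting partial eta-squared (ηp²) effect sizes. A minimal sketch of one such analysis, assuming long-format data with illustrative column names (this is not the authors’ code):

```python
# Minimal sketch of the 2 (time) x 3 (intervention) mixed-design ANOVA used
# throughout this chapter. Data file and column names are assumptions.
import pandas as pd
import pingouin as pg

# Hypothetical long format: one row per CCG per wave, with columns
# ccg_id, intervention (A/B/C), time (baseline/follow-up), total_capacity
long_df = pd.read_csv("capacity_long.csv")

aov = pg.mixed_anova(data=long_df,
                     dv="total_capacity",    # CHSRF total score
                     within="time",          # repeated factor
                     between="intervention", # A, B or C
                     subject="ccg_id")
# Main effects of time and intervention, plus their interaction,
# each with F, uncorrected p and partial eta-squared (np2)
print(aov[["Source", "F", "p-unc", "np2"]])
```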
Impact of the evidence briefing service on Clinical Commissioning Groups’ capacity to acquire, assess, adapt and apply research evidence to support decision-making
Although there was no summary effect on capacity, we nonetheless hypothesised that the interventions may have had differential effects on different aspects of capacity.
Acquiring: capacity to acquire research
Neither the main effect of time [F(1,127) = 0.01; p > 0.05, ηp² = 0.01], nor the evidence briefing service received [F(2,127) = 1.07; p > 0.05, ηp² = 0.02], nor the interaction of time and the evidence briefing service received [F(2,127) = 0.88; p > 0.05, ηp² = 0.01] was significant. CCGs’ perceived capacity to acquire research therefore appeared unchanged.
Acquiring: capacity to look for research in the right places
Clinical Commissioning Groups’ perceived capacity to look for research in the right places appeared to improve over time (Figure 4). The main effect of time yielded an F-ratio of F(1,127) = 4.76 (p < 0.05, ηp² = 0.036), indicating a statistically significant improvement over time irrespective of any intervention. The main effect of the evidence briefing service received yielded an F-ratio of F(2,127) = 0.09 (p > 0.05, ηp² = 0.01). The interaction of time and the evidence briefing service was also not significant, yielding an F-ratio of F(2,127) = 0.05 (p > 0.05, ηp² = 0.01), indicating no statistically significant benefit from the form of intervention received.
Assessing: capacity to tell if research is valid and high quality
Neither the main effect of time [F(1,127) = 1.66; p > 0.05, ηp² = 0.01], nor the evidence briefing service received [F(2,127) = 0.91; p > 0.05, ηp² = 0.01], nor the interaction of time and the evidence briefing service received [F(2,127) = 1.48; p > 0.05, ηp² = 0.02] was statistically significant, suggesting that CCGs’ perceived capacity to discern research quality remained unchanged.
Assessing: capacity to tell if research is relevant and applicable
There was an apparent increase in the capacity to determine the relevance of research across the intervention groups (Figure 5). The main effect of time yielded an F-ratio of F(1,127) = 7.62 (p < 0.05, ηp² = 0.06), indicating that all three groups of CCGs had a statistically significant improvement in their perceived ability to tell whether research evidence is relevant and applicable to decision-making. The main effect of the evidence briefing service received yielded an F-ratio of F(2,127) = 0.9 (p > 0.05, ηp² = 0.01). The interaction of time and the evidence briefing service was also not significant, yielding an F-ratio of F(2,127) = 0.82 (p > 0.05, ηp² = 0.01), indicating that the intervention had not contributed to CCGs’ perceived improvement in their ability to identify relevant research.
Adapt: capacity for summarising results in a user-friendly way
There appeared to be a small increase in the capacity of CCGs to summarise research findings and adapt them to decision-making (Figure 6). The main effect of time in the ANOVA yielded an F-ratio of F(1,127) = 5.46 (p < 0.05, ηp² = 0.04), indicating that this improvement was statistically significant. The main effect of the evidence briefing service received yielded an F-ratio of F(2,127) = 2.62 (p > 0.05, ηp² = 0.04). The interaction of time and the evidence briefing service was also not significant, yielding an F-ratio of F(2,127) = 0.52 (p > 0.05, ηp² = 0.01), indicating that the evidence briefing service had not contributed to CCGs’ perceived improvement in their ability to summarise and adapt research to their own decisions.
Apply: capacity for leading by example and valuing research use
Neither the main effect of time [F(1,127) = 0.02; p > 0.05, ηp² = 0.01], nor the evidence briefing service received [F(2,127) = 2.09; p > 0.05, ηp² = 0.03], nor the interaction of time and the evidence briefing service received [F(2,127) = 0.92; p > 0.05, ηp² = 0.01] was significant, indicating that perceived capacity for leading and valuing research use remained unchanged and was unaffected by the interventions.
Apply: capacity of decision-making processes for research use
In all other aspects of applying research in decision-making, neither the main effect of time [F(1,127) = 0.49; p > 0.05, ηp² = 0.01], nor the evidence briefing service received [F(2,127) = 0.53; p > 0.05, ηp² = 0.01], nor the interaction of time and the evidence briefing service received [F(2,127) = 1.2; p > 0.05, ηp² = 0.02] was statistically significant, suggesting that CCGs’ systems and processes had not appreciably changed, irrespective of the intervention.
Summary
Over a 1-year period, all CCGs – regardless of the intervention received – showed a statistically significant increase in capacity to use research evidence for decision-making. However, this apparent effect should be interpreted cautiously for two reasons. First, overall capacity changed little; the increase does not represent a meaningful shift in the overall score. Second, the increase in perceived capacity observed in study CCGs was similar to that across the CCGs in the national survey, which received no interventions directly from the CRD. Overall, the evidence briefing service had no measurable impact on the overall ability to acquire, assess, adapt or apply research.
The increases in subdomains that were observed in perceived ability to look in the right places for research, to tell if research is relevant and applicable and to summarise results in a user-friendly way also occurred nationally. Again, although the changes are statistically significant, the magnitude of change is so small that it is unlikely to represent meaningful observable changes in CCGs’ acquisition, assessment, adaptation and application of research.
Did the evidence briefing service improve Clinical Commissioning Groups’ intentions to use research evidence to support decision-making?
As with the effect of the evidence briefing service on capacity to use research for decision-making, we also wanted to examine its effect on CCGs’ collective intention to use research evidence for decision-making.
Attitude towards use of research in decision-making was the strongest of these dimensions, and perceived behavioural control the weakest (Table 9). All intervention groups showed small, statistically non-significant declines in almost all of these theory of planned behaviour dimensions from baseline to follow-up.
Theory of planned behaviour domain | Intervention received | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
A (n = 39) | B (n = 53) | C (n = 38) | ||||||||||
Baseline | Follow-up | Baseline | Follow-up | Baseline | Follow-up | |||||||
Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | |
Intention | 5.61 | 5.22 to 6.00 | 5.41 | 5.07 to 5.76 | 5.31 | 5.00 to 5.61 | 5.42 | 5.08 to 5.76 | 5.72 | 5.33 to 6.11 | 5.59 | 5.17 to 6.02 |
Attitudes | 6.23 | 5.97 to 6.49 | 5.85 | 5.50 to 6.20 | 6.23 | 5.88 to 6.30 | 5.91 | 5.62 to 6.20 | 6.28 | 6.03 to 6.54 | 6.22 | 5.94 to 6.49 |
Group Norms | 5.18 | 4.85 to 5.52 | 4.77 | 4.24 to 5.29 | 5.03 | 4.77 to 5.30 | 5.02 | 4.60 to 5.44 | 5.39 | 5.02 to 5.76 | 5.43 | 5.08 to 5.78 |
Perceived Behavioural Control | 5.01 | 4.69 to 5.33 | 4.85 | 4.30 to 5.40 | 4.95 | 4.64 to 5.25 | 4.36 | 3.87 to 4.85 | 4.85 | 4.37 to 5.33 | 5.07 | 4.67 to 5.47 |
Intention
Neither the main effect of time [F(1,127) = 0.3; p > 0.05, ηp² = 0.01], nor the evidence briefing service received [F(2,127) = 1.09; p > 0.05, ηp² = 0.02], nor the interaction of time and the evidence briefing service received [F(2,127) = 0.596; p > 0.05, ηp² = 0.01] was statistically significant. This suggests that CCGs did not change in their intention to use research evidence in their decision-making, irrespective of any interventions applied.
Attitudes
All CCGs, regardless of intervention received, appeared to become slightly less positive about using research in their decision-making (Figure 7). The main effect of time yielded an F-ratio of F(1,127) = 4.28 (p < 0.05, ηp² = 0.01), indicating a statistically significant decline over time in all three groups of CCGs’ attitudes towards using research evidence to support decision-making. The main effect of the evidence briefing service received yielded an F-ratio of F(2,127) = 1.55 (p > 0.05, ηp² = 0.02). The interaction of time and the evidence briefing service was also not significant, yielding an F-ratio of F(2,127) = 0.72 (p > 0.05, ηp² = 0.01). Together, these confirm the initial impression of a decline in attitudes towards the use of research.
Group norms
Neither the main effect of time [F(1,127) = 0.69; p > 0.05, ηp² = 0.01], nor the evidence briefing service received [F(2,127) = 2.01; p > 0.05, ηp² = 0.04], nor the interaction of time and the evidence briefing service received [F(2,127) = 0.78; p > 0.05, ηp² = 0.01] was statistically significant. This suggests that there was no effect on the perceived group norms surrounding the use of research evidence in CCG decision-making from pre to post intervention.
Perceived behavioural control
Neither the main effect of time [F(1,127) = 1.27; p > 0.05, ηp² = 0.01], nor the evidence briefing service received [F(2,127) = 1.08; p > 0.05, ηp² = 0.02], nor the interaction of time and the evidence briefing service received [F(2,127) = 2.30; p > 0.05, ηp² = 0.04] was statistically significant. This suggests that there was no effect on the perceived behavioural control associated with the use of research evidence in CCG decision-making from pre to post intervention.
Summary
The evidence briefing service did not appear to have any effect on individuals’ intentions to use research evidence in decision-making, their perceptions of the CCG norms surrounding research evidence, or their perceived behavioural control around the use of research for decision-making. There was a statistically significant decline in attitudes towards research use; specifically, all CCGs were less positive towards research use for decision-making after 1 year. However, this difference is marginal in real terms; attitudes were broadly similar before and after encountering the intervention.
How do the Clinical Commissioning Groups view their contact with research and researchers?
At both baseline and follow-up, participants receiving all three interventions were asked questions that assessed the quality and quantity of contact with researchers, specifically:
-
perceptions of equal status between CCG members and the researchers they encounter
-
Clinical Commissioning Group support for that contact (both important aspects of contact quality)
-
Clinical Commissioning Groups and researchers seeing themselves as part of an overarching group with common goals
-
perceptions of researchers in general
-
whether contact with researchers is equally useful for both parties, more useful for researchers, or more useful for CCGs.
These measures were used to assess whether or not the evidence briefing service improves contact between CCGs and researchers and/or results in more positive perceptions of researchers in general. Strength of identification as a CCG was also assessed, as a potential moderator of any impact of the intervention, but the small sample size makes it difficult to analyse or interpret this measure as a moderator.
Did the evidence briefing service improve Clinical Commissioning Groups’ perceptions of intergroup contact?
Perceptions of contact appeared generally more positive from the start among respondents receiving intervention A (see Table 10) than in the other intervention groups. Across all intervention groups, however, the amount of contact stood out as having the most consistently negative rating, and it changed little from baseline to 1-year follow-up.
There were increases in most other dimensions of contact from baseline to follow-up across the groups, although none appeared to reach statistical significance (Table 10). The magnitude of these gains appeared a little smaller in intervention A than in interventions B and C.
Perceived intergroup contact | Intervention received | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
A (n = 39) | B (n = 53) | C (n = 38) | ||||||||||
Baseline | Follow-up | Baseline | Follow-up | Baseline | Follow-up | |||||||
Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | |
Amount of contact | 1.76 | 1.2 to 2.36 | 2.11 | 1.82 to 2.42 | 1.17 | 0.65 to 1.69 | 1.72 | 1.45 to 2.01 | 1.16 | 0.70 to 1.62 | 1.92 | 1.67 to 2.17 |
Quality of contact | 4.60 | 3.09 to 6.11 | 5.66 | 5.21 to 6.11 | 3.19 | 1.68 to 4.67 | 5.96 | 5.51 to 6.41 | 2.89 | 1.62 to 4.13 | 5.61 | 5.23 to 5.99 |
Institutional (CCG) support for contact | 4.60 | 3.45 to 5.67 | 5.12 | 4.48 to 5.75 | 2.68 | 1.63 to 3.74 | 4.61 | 4.01 to 5.20 | 2.56 | 1.63 to 3.50 | 4.79 | 4.26 to 5.32 |
Equal status during contact | 4.74 | 3.56 to 5.96 | 4.97 | 4.57 to 5.30 | 3.03 | 1.92 to 4.13 | 4.11 | 3.73 to 4.48 | 2.77 | 1.78 to 3.75 | 4.46 | 4.12 to 4.79 |
Common in-group identity | 3.88 | 2.83 to 4.92 | 4.44 | 4.06 to 4.81 | 2.68 | 1.70 to 3.66 | 4.34 | 3.99 to 4.69 | 2.60 | 1.73 to 3.47 | 4.54 | 4.22 to 4.85 |
Amount of perceived contact
We examined the hypothesis that the evidence briefing service would increase the amount of perceived contact between CCGs and researchers. Respondents reported low amounts of contact (see Table 10). No intervention had a statistically significant effect on respondents’ perceptions of the amount of contact with researchers (across a variety of formats: face to face, e-mail or telephone).
Perceived quality of contact
We examined the hypothesis that the evidence briefing service would improve the quality of contact between CCGs and researchers. No intervention had a statistically significant effect on the perceived quality of contact with researchers that CCGs experienced.
Perceived institutional support for contact
We examined the hypothesis that the evidence briefing service would improve perceptions that CCGs, and the NHS more generally, are supportive of NHS managers/leads and researchers working closely together. No intervention had a statistically significant impact on the degree of support for collaborative relationships between service staff and researchers.
Equal status during contact
We examined the hypothesis that the evidence briefing service would improve perceptions that researchers and CCGs recognise one another’s expertise, and that the CCG participants are perceived as having equal status in the contact. The interaction between intervention and time was not significant [F(1,57) = 1.61; p = 0.208], suggesting that the interventions did not have a statistically significant effect on perceptions of equal status.
Perceptions of a common in-group identity and superordinate goals
We examined the hypothesis that the evidence briefing service would improve perceptions that NHS managers/leads and researchers feel like part of one overarching team committed to achieving the same goals, rather than two separate groups. The interaction between intervention and time was not significant [F(1,57) = 2.24; p = 0.12], suggesting that there was no statistically significant impact on the development of a common team identity.
Did the evidence briefing service increase the perception that communication between Clinical Commissioning Groups and researchers helps achieve their goals?
The findings identified slightly more positive perceptions of both individual and common goals at baseline among those receiving intervention A (the evidence briefing service) than among those receiving the other two interventions (B and C). There was a small improvement in these perceptions among those in intervention A, and a slightly larger improvement among those receiving either of the other two interventions (Table 11).
Goals | Intervention received | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 (negative) to 7 (positive) | A (n = 39) | B (n = 53) | C (n = 38) | |||||||||
Baseline | Follow-up | Baseline | Follow-up | Baseline | Follow-up | |||||||
Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | |
CCGs’ goals | 4.28 | 2.96 to 5.61 | 4.85 | 4.26 to 5.44 | 2.40 | 1.12 to 3.68 | 4.86 | 4.29 to 5.43 | 2.38 | 1.30 to 3.45 | 5.04 | 4.56 to 5.52 |
Researchers’ goals | 4.50 | 3.14 to 5.85 | 5.57 | 5.00 to 6.13 | 2.20 | 0.89 to 3.50 | 5.40 | 4.85 to 5.94 | 2.42 | 1.29 to 3.46 | 4.95 | 4.49 to 5.14 |
CCG and researcher goals | 4.35 | 3.03 to 5.67 | 5.07 | 4.53 to 5.60 | 2.33 | 1.05 to 3.60 | 4.80 | 4.28 to 5.31 | 2.38 | 1.32 to 3.53 | 4.95 | 4.15 to 5.38 |
Clinical Commissioning Groups’ professional goals
We examined the hypothesis that the evidence briefing service would improve the perception that communication with researchers helps CCGs to achieve their professional goals. The interaction between intervention and time approached significance [F(1,47) = 2.99; p = 0.06].
Post hoc analyses demonstrate that communication with researchers was perceived as more valuable in achieving CCG goals at outcome (M = 4.87) than at baseline (M = 2.40) in those receiving intervention C [F(1,14) = 12.08; p = 0.004] [and in intervention B: follow-up M = 5.05; baseline M = 2.38; F(1,20) = 25.60; p = 0.0005]. In contrast, there was no change in this perception between baseline (M = 4.29) and outcome (M = 4.85) in those receiving the evidence briefing service (intervention A) [F(1,13) = 0.59; p = 0.48]. In summary, the hypothesis was not upheld: the evidence briefing service did not increase the perception that communication with researchers helps CCGs to achieve their professional goals.
Researchers’ professional goals
We examined the hypothesis that the evidence briefing service would improve managers’ perception that communication with CCGs helps researchers to achieve their professional goals. The interaction between intervention and time was not significant [F(1,47) = 2.45; p = 0.10], indicating that the intervention had no statistically significant impact on perceptions that communication helps researchers achieve their professional goals.
Clinical Commissioning Group and researcher goals
We examined the hypothesis that the evidence briefing service would improve managers’ perception that communication between the two parties helps both researchers and CCGs to achieve their professional goals. The interaction between intervention and time was not significant [F(1,47) = 2.37; p = 0.11], indicating that the intervention had no statistically significant impact on perceptions of achieving common goals.
Did the evidence briefing service improve Clinical Commissioning Groups’ perceptions of researchers?
We examined the hypothesis that the evidence briefing service would improve perceptions of researchers in general, using a ‘feeling thermometer’ measure in which participants reported perceptions of researchers on a scale of 0 (very negative) to 100 (very positive). At baseline, perceptions of researchers among respondents receiving intervention A were already positive, almost at the level of the post-intervention responses across all groups (Table 12).
Attitude | Intervention received | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
A (n = 39) | B (n = 53) | C (n = 38) | ||||||||||
Baseline | Follow-up | Baseline | Follow-up | Baseline | Follow-up | |||||||
Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | Mean | 95% CI | |
0 (negative) to 100 (positive) | 67.31 | 49.35 to 85.27 | 72.68 | 65.58 to 79.79 | 46.35 | 30.28 to 62.41 | 77.20 | 70.84 to 83.55 | 41.25 | 26.58 to 55.91 | 78.20 | 72.40 to 84.01 |
There was a significant interaction between intervention and time [F(2,57) = 3.29; p = 0.045]. Post hoc analyses demonstrate that perceptions of researchers in general were significantly more positive at follow-up (M = 77.20) than at baseline (M = 46.35) in intervention B [F(1,19) = 9.76; p = 0.006]. Similarly, perceptions of researchers were also significantly more positive at outcome (M = 78.21) than at baseline (M = 41.25) in ‘control’ intervention C [F(1,23) = 23.72; p = 0.0005]. In contrast, there was no change in attitude towards researchers between baseline (M = 67.31) and outcome (M = 72.69) in intervention A [F(1,15) = 0.36; p = 0.56]. In summary, the evidence briefing service did not change perceptions of researchers (in general).
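The post hoc tests reported here amount to simple-effects analyses: a separate repeated-measures ANOVA of time within each intervention group. A minimal sketch under the same assumptions about data layout as above (illustrative names, not the authors’ code):

```python
# Minimal sketch of the post hoc simple-effects analysis: a repeated-measures
# ANOVA of time run separately within each intervention group.
# Data file and column names are illustrative assumptions.
import pandas as pd
import pingouin as pg

long_df = pd.read_csv("thermometer_long.csv")  # hypothetical long-format file

for group in ["A", "B", "C"]:
    sub = long_df[long_df["intervention"] == group]
    aov = pg.rm_anova(data=sub, dv="thermometer",
                      within="time", subject="ccg_id")
    print(group, aov[["Source", "F", "p-unc"]].round(3).to_string(index=False))
```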
Summary
Exposure to the evidence briefing service did not increase perceptions that communication between CCGs and researchers helped CCGs achieve professional goals (or indeed, researchers’ goals). Nor did it lead to increases in what were already positive perceptions of researchers in general. Exposure to the evidence briefing service did not increase perceptions of the quality or quantity of contact between CCGs and researchers.
Table 13 presents a summary of the results of all the hypotheses tested in this chapter.
Hypothesis | Supported |
---|---|
Capacity | |
Access to an evidence briefing service will improve overall capacity to use research in commissioning decision-making | No |
Access to an evidence briefing service will improve CCGs’ abilities in acquiring (capacity to acquire research) | No |
Access to an evidence briefing service will improve CCGs’ abilities in acquiring (capacity to look for research in the right places) | No |
Access to an evidence briefing service will improve CCGs’ abilities in assessing (capacity to tell if research is valid and high quality) | No |
Access to an evidence briefing service will improve CCGs’ abilities in assessing (capacity to tell if research is relevant and applicable) | No |
Access to an evidence briefing service will improve CCGs’ abilities in adapting (capacity to summarise results in a user-friendly way) | No |
Access to an evidence briefing service will improve CCGs’ abilities in applying (capacity for leading by example and valuing research use) | No |
Access to an evidence briefing service will improve CCGs’ abilities in applying (capacity of decision-making processes for research use) | No |
Intention to use research evidence | |
Access to an evidence briefing service will improve CCGs’ intentions to use research evidence in their decision-making | No |
Access to an evidence briefing service will improve CCGs’ attitudes to using research evidence in their decision-making | No |
Access to an evidence briefing service will improve CCGs’ group norms around using research evidence in their decision-making | No |
Access to an evidence briefing service will improve CCGs’ sense of self-efficacy with regard to using research evidence in their decision-making | No |
Intergroup contact | |
Access to an evidence briefing service will improve the reported amount of contact between CCGs and researchers | No |
Access to an evidence briefing service will improve the reported quality of contact between CCGs and researchers | No |
Access to an evidence briefing service will improve the reported institutional (CCGs’) support for contact between CCGs and researchers | No |
Perceptions of researchers and the research relationship | |
Access to an evidence briefing service will improve CCGs’ perceptions of equal status between CCG members and the researchers they encounter | No |
Access to an evidence briefing service will improve CCGs’ support for contact with researchers | No |
Access to an evidence briefing service will improve CCGs’ perceptions that they and researchers are part of an overarching group with common goals | No |
Access to an evidence briefing service will improve CCGs’ perceptions of researchers in general | No |
Access to an evidence briefing service will improve CCGs’ perceptions that contact with researchers is useful for both parties | No |
Chapter 5 Case studies exploring uptake and use of evidence in Clinical Commissioning Groups’ decision-making
This chapter explores the question: does access to a demand-led knowledge translation service improve uptake and use of research evidence by NHS commissioners compared with less intensive and less targeted alternatives?
Four case studies are presented. These are the two CCGs receiving intervention A, one CCG receiving intervention B and a regional cross-case analysis based on the topic of value-based commissioning. The case studies are based on analysis of interviews, observations and documents. Interviews with two public health consultants were also included to assist understanding of local ways of working and the relationships within the local health economy.
Four themes are described and explored in the context of existing frameworks and what is already known:
-
local decision-making processes (and the use of evidence)
-
the types of sources identified as ‘evidence’
-
the organisation’s absorptive capacity
-
the relationship between and exchange of knowledge between commissioners and researchers.
We also capture the complexity of commissioners’ relationships with research evidence: in particular, the challenges involved in acquiring and applying research evidence under time constraints, limited resources (organisational, intellectual and translational) and specific local contextual factors.
Exploring uptake and use of evidence in decision-making in the A1 Clinical Commissioning Group
Decision-making processes
Interview data suggest that commissioners in the A1 CCG aspire to adopt a logical analytical approach by developing evidence-based options papers capturing risks and benefits:
The traditional approach really would be to go away, do lots of research, understand what’s happened in other areas, look at the evidence, see what works, write a specification, a service specification and then you’d go out to the market and you’d go out to tender and say ‘Actually this is the service that we want to deliver, who’s going to come forward and can deliver it for us?’.
P14
Evidence-informed decision-making is valued and built into the system by a requirement to confirm the use of evidence via ticking a box on the front cover of all documents examined by the executive board:
All papers are signed off by directors, and the director . . . or if the director doesn’t do it the exec[utive] should do it, should say, ‘Where’s the evidence behind this?’ and there is on the . . . again on that front cover sheet it says, are the proposals within this paper evidence based and are they referenced within the paper.
P18
This is supported at board level as the board requests additional evidence or queries the evidence presented to support options: ‘if the board don’t feel assured that that hasn’t been worked up adequately prior to the paper coming then they won’t make a decision on it’ (P14). In an ideal decision-making scenario, all questions regarding risk, cost and benefit would be answerable with evidence; however, information may be best sourced from people with practical experience, such as palliative care experts ‘thinking about going forwards, does a different model of how we look after people in the community and in care homes influence the number of specialist palliative care beds that you might need’ (P17).
There is consensus among CCG informants that decisions are made in different ways. Two participants (P18 and P14) suggested that policy decisions with significant impact (cost, mortality, etc.) are more likely to include what is described as an ‘evidence cycle’ (P18) in which evidence for different options is sought, acquired, assessed and adapted. The difference appears to relate to whether or not a decision will result in significant ‘change’ or impact. One participant (P14) suggested that the applied use of evidence was improving in the CCG but that it is not ‘necessarily the first port of call’. This structured process of evidence seeking and evidence-informed business case development does take place in some circumstances. For example, participant 18 described collecting advice from local experts on interventions, placing these in a grid and refining them through an iterative process of challenge by members of the team to develop an appropriate service. It is likely that decisions in which there are clearly specified and measurable goals require the more formal ‘evidence cycle’ approach, whereas less clearly defined decisions merit more informal approaches. 63
Despite organisational intentions, research evidence is not always sought to support recommendations presented in business cases. The emphasis on evidence is not always translated into practice even at the strategic level of the organisation: ‘. . . I don’t think we’re good at pushing back things at exec[utive] level that maybe don’t seem to have an evidence base which might have been a better thing to, do . . .’ (P14). Aspirations to seek and appraise evidence to inform options appraisal may also be discarded unintentionally:
We forget to do that [look for the evidence], and we just think it’s . . . is a good idea and it’s based on some case reviews, something we’ve read, or something we’ve heard about and we plod on, and then don’t involve them [public health] to the degree that we could, and in an ideal world we wouldn’t even ask them, they would be at the table hearing about the early discussions and they would say, ‘I’ll go and do some research about that, I think I know what you need here, you need to know whether this works or this works’.
P18
In other commissioning areas, commissioners may not always be able to apply processes that are intended to ensure evidence use, perhaps owing to a lack of research-related skills. For example, the box-ticking process indicates the presence of any evidence rather than an assessment of the quality of that evidence, and the value of this process depends on the individual member’s ability to judge the quality of evidence:
It’s about whether there’s any reference to evidence as opposed to what is quality and the quality of the evidence, the volume of evidence and even whether the evidence is directly relevant and supports what you’re saying, what you’re suggesting . . . So that [box on the front of documents] is meant to be a prompt, but you can tick yes and then have very flimsy evidence. So the quality of the evidence isn’t therefore systematically reviewed.
P18
In addition, what commissioners value as evidence varies; for example, those with a management background may value certain types of journal, whereas a clinical member would question trial conduct (P17).
The organisation’s capacity to adopt a rational decision-making process is partially limited by the perceived lack of availability of information about options. This is partly in recognition that high-quality research evidence may not be available to support commissioning decisions:
Well, coming from this other medical background, it tends to be evidence-based medicine that’s the sort of thing that pops up, which gets peer reviewed, randomised control trial evidence, there’s very, very little of that in commissioning.
P17
Executive members are reported to ask ‘is it evidence based?’ but the answer is often ‘there isn’t the evidence out there’.
Commissioners identified pressures that exert a greater influence than research evidence. One participant stresses the importance of listening to people:
. . . from our perspective, we’re delivering a day job which is about making sure patients out there get the best possible care and it seems lunacy to not say ‘Well why don’t you look at evidence in order to do that’, but it’s almost like we’re being more reactive around listening to what their needs are and trying to build services to meet those needs whereas the pro activity will come around that review of evidence and using that evidence to full effect.
P14
The absence of research evidence can lead to decisions that are influenced by the values of CCG leadership and acceptability to the local population rather than information on cost-effectiveness. For example, in the context of the Vanguard projects, decisions are based on what models would ‘best fit’ the local health and social care system and ‘which ones are likely to be palatable’. This appears to be influenced by the need for local buy-in to ensure that implementation is effective. There is no formal process for integrating organisational values into decision-making; it is an assumed process.
Table 14 compares the pressures on commissioners identified in the CCG with those documented by Wye et al. 48 The possibility that there may be limited time available for decision-making was, surprisingly, not extensively discussed by participants from this CCG.
Pressures on commissioners identified in A1 CCG | Pressures on commissioners from Wye et al.48
---|---|
Public health and local authority partners drive priorities that are public health issues (e.g. cancer) | Evidence purveyors
| National and regional performance managers
| The press
| Health-care providers
No significant pressures in terms of local priorities but perspectives are sought in the commissioning processes described by participants (e.g. self-care services) | The public
| Service users
Views are actively sought as part of an ‘innovative’ approach to commissioning. Expertise about what might work is used to design a new service. For example with self-care consultation events | Clinicians
In some cases, regardless of the evidence base, decision-making is influenced by what stakeholders/clinicians prefer – if they disagree with the evidence, it will be too challenging to implement effectively | Internal colleagues
Developing alternative processes in response to challenges
The context of a perceived lack of research evidence to support decisions encouraged participants to identify alternative sources of information as ‘evidence’ to inform their decision-making. Our observations of decision-making meetings and analysis of documentary evidence suggest regular use of forms of information other than research to inform decision-making.
Commissioners described seeking the experience of other CCGs that have already developed policies or activities:
They would always want that sent to check about well what’s happening in other areas, it’s you know, and what’s that learning? So rather than us having to learn for ourselves the experience of implementing a service, well if someone else has done it, what have they learnt and why have they either chosen to continue it or chosen to stop it?
P14
This experiential information is treated as evidence even though it is not research based and conflicts with the recognition of a ‘gold standard’ for high-quality evidence:
One of the big pieces of work that the CCG has been involved with is around an urgent care redesign and we looked around the country for other models and how they had been implemented and what their outcomes were, so a form of evidence, a low grade of evidence as a case review, but they’re often referred to within business cases that come to the exec[utive].
P18
Obviously he’s come across, you know evidence of other places where they’ve written up the outcomes of their services and so it’s almost like we’ve instructed the CSU to embark on a review of our MSK services based on that and around how you can integrate pain services with MSK.
P14
Board members also request information about what other CCGs are doing; this is treated as evidence even if effectiveness has not been robustly demonstrated.
Another driver of an innovative approach to policy-making in the absence of research is quality:
[we] have a responsibility of spending that money absolutely as wisely as we possibly can so even though we’re good at coming up with ideas the back stop is if it’s going to save money but be a more efficient way of delivering that service for the patient, bringing the services closer to the patient, then ultimately that’s going to help, you know, be prioritised probably over something that maybe might not save that money. But there is a strong quality thread through the work that we do as a CCG even though we are quite cash strapped, we don’t have a lot of spare money I would say to do like masses of innovation, I do think we’re good at prioritising what is needed for the population.
P14
There was an apparent connection between the notion of innovation and evidence in which the concept of ‘ideas’ was employed in lieu of research evidence:
We’re very good at coming up with ideas in the CCG using evidence and that evidence based approach . . . but I would say that it is not always our first port of call, we tend to use a lot of feedback and you know, just experience of the clinicians on the ground around how either current contracts are working or current services are working or people that are coming forward with more innovative ways of doing stuff because they’ve experienced that in other areas as it were.
P14
Service development based on stakeholder consultation was another example of an innovative approach to policy-making. Although decision-makers requested an evidence briefing on self-care, it was unclear how they intended to use the evidence in the service design. When no research evidence was found, the commissioners formally consulted with stakeholders to design an innovative service. The primary source of consultation to collect stakeholder views and experiences to inform the service took the form of a 1-day workshop event led by the clinical lead on long-term conditions. Rather than feeding into an options appraisal, this information is used to design a service or to express preferences for different services. The development of a stakeholder-led service design was guided by knowledge of what works locally and regionally but, as participant 17 makes explicit, is ‘perhaps not necessarily driven by the best evidence’ (P17). However, national documents from societies (e.g. the Palliative Care Society) are also fed into the process. These are acquired by the CSU because it works with more than one CCG and can share knowledge between these groups. This ‘evidence’ is described as forming the basis for an ‘options paper’ to be reviewed by the executive team, but practical constraints mean that the detail of risk included in this is not in depth.
Participants describe this type of service design as taking small risks to provide services that are innovative, but this has its own challenges. For example, some of the development of integrated teams was to an extent influenced by providers (P14). Although this was considered a risky strategy, it was perceived to have paid off (P14). It was also acknowledged that ‘nine out of ten times’ the service is not actually innovation, as it is likely to have been done elsewhere in the country (P14), and that even with ‘traditional’ (P14) ways of commissioning, stakeholders would be involved in the mapping of the current service before drawing on evidence.
I think we, yeah, but I suppose getting underneath the decision making process every, we’ve got quite a plethora both of experience and personalities around the board table and I think that’s a good thing. I think there’s always healthy discussion around, you know, innovative bids, I think from a board perspective they are quite up for innovation, you know, listening to, maybe doing something a little bit different and I think Integrated Care Team is a prime example of that because that was quite a risky strategy to adopt.
P14
Competing pressures
The adoption of alternative decision-making processes was justified by the competing pressures on commissioners:
Balance what’s desirable and what would be, you know, the gold standard way of doing things with actually what’s practicable and practical given the, you know, we’ve got a very tight running cost budget that we must adhere to, we can’t go on spending lots of money on running and making these decisions, so actually, often the decisions that we’ve got to make, the CCG has got to be a pragmatic, make the decision best we can with the information we’ve got available to us rather than higher into the [–]nth degree.
P17
The challenges of a perceived lack of evidence, and limited capacity to evaluate evidence, may prevent the CCG from making evidence-based commissioning decisions in every case. The response has three dimensions: seeking an alternative evidence base, focusing on innovation and incorporating organisational values into decision-making in addition to evidence.
Absorptive capacity
Some of the challenges to the practice of evidence-based policy-making relate to the organisation’s capacity to acquire, assess, adapt and apply research evidence.
Capacity to acquire research evidence
Acquisition limitations are both skills and resource based. As decision-makers have only limited capacity to evaluate and process information, the gold standard of evaluating risks and benefits through options papers is balanced against feasibility in terms of costs and resources. Participant 14 explicitly states that only limited resources can be devoted to making decisions (as distinct from the costs of the subsequent intervention):
The CCG has got to be pragmatic . . . because what we don’t have is a plethora of time to go away and do the evidence review and equally we don’t, like I’ve alluded to before, have the skills or knowledge or expertise around that, that’s a very specialist service, in order to do a good lit[erature] review of a particular area or do you know what I mean?
P14
Having the means to access evidence is not considered a particular challenge (P18); rather, the CCG is not in the habit of requesting evidence and, when it does, the type of evidence may not be strong enough as it often seeks case studies. There is some capacity within the CCG to acquire research evidence, but its relatively small size limits its internal capacity. It draws on external resources, the key one being the CSU:
CSU would do that on our behalf [draw on academic evidence] to be honest and that comes through in some of the background work up of the business cases and the papers that come to the exec[utive], that’s always part of a standard part of the reporting . . .
P14
Although the CSU provides evidence in the reports and business cases, there is no guidance regarding where this evidence should be drawn from, and it does not provide an evidence appraisal service owing to a perceived lack of appraisal and adaptation skills within the CSU [‘they haven’t got the in-house expertise to do it’ (P18)]. However, it was suggested that the CSU could do more to facilitate knowledge-sharing across the region:
A critique of CSU would be that they don’t have a central . . . and I think what we need to be better at is having a library of evidence because individual Clinical Directors I know, you know, keep to up to date with clinical protocols and guidelines and everything like that.
P14
The local public health team assists in acquiring some evidence but this takes different forms:
. . . they go off and they come back and sometimes they just give an e-mail or verbal report and say, ‘Yeah, you’re on the right track here’, or, ‘You’re not on the right track’, or sometimes we ask them to put something more in writing and write something similar to an evidence briefing.
P18
Other means of acquiring evidence include library services but these are an evidence-collating service and do not provide any degree of analysis or assimilation of the evidence found (P18 and P17).
Commissioners do seek evidence themselves, however, and their descriptions indicate a collective confirmation bias, in which individuals prefer pieces of information that support the preferred alternative,64 as they sometimes seek information from sources that they know share the CCG’s values:
I think there’s probably a degree of bias in terms of the health foundation stuff, very passionate about person-centred care, and there’s probably some reporting bias on their behalf to reinforce their message, but because it seems to be the right thing to do and we’re excited by that we probably look there and don’t look for evidence to contradict our views.
P18
We look for affirmation I think that we’re doing the right thing, as opposed perhaps don’t always look for, actively seek out evidence that would contradict what we’re doing. And if we do, if I’m being honest, I think when we do find it we say, ‘Well yeah but we’re doing something slightly different’.
P18
Some evidence-seeking behaviour is ‘informal’ horizon scanning of what goes on elsewhere (P18 and also P14). This can generate sources of evidence [e.g. about the ways in which the Health Board in New Zealand has reduced the burden on hospital care (P18)], but this process is not structured and identifying these sources is attributed to luck to a certain degree (P18).
Quality assessment of research evidence
Although there are formal processes for evaluating the quality of the evidence used, the example given above demonstrates that some executive members may lack the skills to do this adequately, for example focusing on the quantity of evidence in a report rather than its quality.
Participants observed that the nature of CCGs means that the executive and governing body teams have diverse levels of experience and degrees of clinical training, so have different training needs. Across the organisation, training needs also vary in terms of using, seeking, disseminating and understanding evidence. However, the presence of GP clinical commissioners and their background in evidence-based medicine means that it is likely that there are those with the appropriate skills to do this: ‘You tend to find that the clinicians are stronger at using evidence because they have to as part of their current role and almost their CPD [continuing professional development] to keep up to speed with their particular clinical interests in clinical areas’ (P14). These clinical skills are perceived as advantageous as they strengthen critiques of research that is brought to support decisions (P17).
Feasibility and cost were not explored as extensively – one participant suggests that the CCG does not explicitly attempt to estimate potential cost savings (P18). Appraisal of sources from other places does take place, although it is recognised that much of this is dependent on trust:
. . . we take a lot of . . . I suppose we do take a lot on trust, I mean they presented some fairly robust stats showing, you know, over a timescale of about 10 years what was happening in terms of where patients were accessing health and what was happening to their health outcomes, and that looked fairly robust.
P18
Public health doctors are better at considering the research basis. Some participants with a clinical background do demonstrate an understanding of robust high-quality research evidence: ‘Oh, like a systematic review of several areas which have actually been under trial conditions’ (P18); this is set against a description of ‘flimsy’ evidence: ‘like a case study, we’ve heard that some, they’re doing something like this so we think we should do the same, because they’ve seen some benefits in the short-term’ (P18). Participants described high-quality evidence as including randomised controlled trials (RCTs) and peer-reviewed studies. However, there was little explanation of why these sources are deemed to be of higher quality than others and these descriptions appear to reflect teaching in evidence-based health care.
Although there is potential to develop the organisation’s capacity, one participant (P14) argues that this should not be via training specifically. Previous reviews of critical appraisal skills training have indicated some benefit, but these findings were based on poor-quality evidence.
Capacity to adapt research evidence
National recommendations for priorities are judged by commissioners in terms of their applicability locally given the nature of the local organisations involved: ‘it’s not that we’re just . . . ignor[ing] them, but we discuss which ones are more likely to be palatable and which ones are more likely to be successful locally’ (P18). The adaptation of knowledge to the participant’s local context appears to be done at the group or clinical lead level rather than the board level, but this is unclear. The term ‘options’ is used on several occasions during interviews, with the board making a final evidence-based decision on the back of these and requesting further information.
Capacity to apply research evidence
As shown above, some participants assume that the CCG is good at applying evidence. However, there are few examples that demonstrate this, and there are mixed views on the CCG’s capacity to use evidence: ‘I don’t feel as a CCG we are great at using evidence’ (P14). ‘And at the moment there isn’t a sort of formal process or a cultural process within the organisation to do that (to integrate evidence in processes)?’ (P14). Although there is an intention to apply evidence to decision-making, ‘the application of evidence is not perceived as the main “warrant” for claims of knowledge, in part due to lack of skills in appraising evidence amongst non-clinical members’ (P14).
There is an informal process of looking at ‘data’ (as evidence) to identify local need (P14) and evidence is perceived as being used as a kind of retrospective sense-checking (P14), sometimes resulting in biased evidence-seeking behaviour: ‘[we] don’t look for evidence to contradict our views’ (P14).
Linkage and exchange
The relationship between commissioners, the public health team and the evidence briefing service does appear to be one of linkage and exchange in this case study. Some evidence transfer already takes place through the strong existing public health links between individuals in the CCG, and there are some cases of seeking evidence internally and via providers and stakeholders who have an interest in a particular field. Participants describe a relationship between commissioners and external organisations such as the CSU and the public health team that enables the transfer of evidence to support decision-making. This is driven in part by priority-setting processes and focuses on the relationship with the local public health team. However, the CSU is not perceived as facilitating knowledge sharing across the regions:
One of the frustrations we experience from a commissioning perspective is the fact that [the CSU] sometimes don’t share with us across the 13 CCGs, the differences or the learning that they’re getting around maybe a particular clinical project . . .
P14
There is perhaps a need for a ‘push’ of research evidence from research providers into CCGs, as described by one participant (P17), especially when it relates to possible changes that the CCG could implement but have not yet identified as a need (or solution). This, and observations of decision-making meetings, suggest that the presence of a researcher or public health clinical advisor in policy-development contexts could potentially help to identify points where evidence may assist decision-making; the so-called ‘blind spots’. The evidence briefing service facilitated a pull of information into the organisation by prompting and facilitating the executive team to seek evidence (P17). However, organisational culture may prevent the integration of knowledge into decision-making once it is acquired. The notion of ‘normalisation’, in which practices become routinely embedded into the organisational context, requires participants to have a shared understanding of the purpose and value of the information acquired. 65
There is a preference for a source of evidence to be situated within the decision-making system (i.e. in meetings) because the involvement at an early stage of people with access to evidence could benefit decision-making:
Our exec[utive], DPH [Director of Public Health] comes along, but maybe the consultants of public health who are much closer to the evidence would be better at the exec[utive], or somewhere else in the system, to challenge every decision we make.
P18
Locating representatives of the research community in decision-making situations may in part act as a reminder to decision-makers of the value of research. There is also value in members of the research community being embedded in the local context as this provides an understanding of the local challenges to decision-making:
I think the advantage of public health doing that is they are physically in the borough, they understand the systems that we have, they understand the population that we have, and all through the year they’re getting drifted what our issues are, so they come to it, you know, a bit of a running start . . . whereas using someone like Paul’s team every 3 months you see, ‘Oh we’ve got this new brainwave, can you help us answer some research questions’, he perhaps doesn’t really know what’s gone on, you know, what departments in the hospital are struggling and which parts of our population don’t seem to access health care . . .
P18
There is a perceived need for time to develop relationships with public health consultants before they can be used to full effect. The relationship between the evidence briefing team and participants illustrates this point, with increased discussion of questions once a rapport was established. The way in which questions are generated and negotiated with researchers is important because asking the ‘wrong’ questions, whether in internal evidence seeking or in discussions with external evidence providers, may result in no evidence being identified. The limited amount of research evidence used in CCG decision-making may in part be a product of commissioners asking the wrong questions, which may in turn generate demand for inappropriate or unavailable evidence.
Evidence briefing service
There has been a high degree of contact between individuals in the organisation and the evidence briefing service:
I’ve probably contacted them 6–12 times specifically to ask for help for something, I suspect that if you add up everybody else’s requests they come to a similar answer, but I’m not sure about that.
P18
On the whole, participants valued the evidence briefing service:
It’s all high quality, the language was good, they really were brief, they came with some conclusions, but the conclusions were often this is an area that’s not been robustly looked at, or we can’t really advise . . . you can’t really use the evidence to advise you to do this or not to do this.
P18
However, participants attributed any shortcomings to the types of questions being asked rather than to the quality of the service itself:
Sometimes it was useful and other times it was less useful and as I say I guess it’s less useful in that the questions we were asking were hard questions and were often answered with we don’t know the answer, there needs to be more work.
P18
One participant expressed a need for evidence that can directly inform decision-making and support risk/cost/benefit analysis, but acknowledged that this information is not available:
Yes, I suppose the sort of things that don’t get answered very easily are things like numbers needed to treat, impact in terms of if you do this in your population you’ll save 100 lives, or you’ll reduce your admissions by such and such, and this’ll be the cost, or this’ll be the savings, and but that’s probably because we won’t ask those questions, so we may just not have got into the habit of asking really good questions. So then we get an answer that still doesn’t help the people at the exec[utive] make a good decision.
P18
Some value lies in the nature of the evidence provided by the service:
And actually, you know, we’ve been a little bit assured in the fact that we haven’t found any absolutely double gangers of things that we potentially should have done in a certain way that we haven’t but I do think there’s always areas to improve, do you know what I mean?
P14
Perceived impact
Part of the perceived value of the evidence briefing service was not related to the use of evidence itself, but rather to the light in which it showed the CCG:
I think it was useful in the fact that it gave us reassurance that, you know, I mean we checked back, we’d said ‘Well is anyone doing that?’, and actually it also gave us the impetus to say ‘Well no one else is doing it’ and we’re leading the way and doing something different around diabetes . . .
P14
Another benefit was the impact on awareness and evidence-seeking behaviour in the organisation:
I personally may have learnt to be a little bit more specific with my questioning, thinking back to early questions which were just tell us about the evidence for telehealth which is a bit vague . . . it’s raised awareness of . . . I think in people’s minds of our decision-making processes, and how we make good decisions . . . there’s probably been a trend to reference more evidence in papers that come to the exec[utive] over the year or 18 months.
P18
One of the outcomes of engagement with the evidence briefing service appears to be a growing recognition of the lack of appropriate evidence available to support decision-making in the CCG. In one example (P18), the briefing did support the decision not to engage heavily with an existing service, owing to a lack of supporting evidence. However, in another case, the evidence provided by a briefing was ignored, partly because the course of action was supported by the people involved. This suggests that evidence is just one element of decision-making processes, which are also influenced by individual preferences and drive because ‘it’s assumed [to be] the right thing to do’ (P18).
Participants suggested a better structure for interaction with the evidence service to help members understand and use evidence:
We almost needed a bit more like a structure to hang around it like a bit of a training programme or an awareness programme around, you know, some sort of putting some practical things like if I, you know, and asking everyone like what does using good evidence look like?
P14
Such a structured programme was offered as part of the evidence briefing service, but was not taken up by the CCG.
There is a perceived need for change in the organisation’s culture so that use of evidence becomes an organisational norm. This should not be addressed through training alone, as this would reduce it to a tick-box exercise. Participant 17 described a need to change the culture towards evidence so that its use becomes normalised; this will require skills development and, to build the absorptive capacity of the organisation, evidence needs to become part of all conversations. Normalisation Process Theory suggests 16 criteria to assess the likelihood of an activity becoming assimilated into an organisation. 66 These include the extent to which individuals perceive a difference in ways of working, agreement with the purpose of an intervention, individual buy-in and organisational support for the intervention. 67 Evidence needs to become part of the whole organisational way of working:
From my perspective it would just, it’s like making it accessible to everyone, you know, it’s almost like the girls in the admin[istration] office and it’s not just certain types of people or certain levels of people within the organisation who should be doing it, it’s almost everyone should have that sort of minimum education about why it’s good to use evidence, where you can access it, what do we mean by evidence and things like that.
P14
There is potential value in developing a service that works with all organisations in the region because much of the work involves partnership working and integration:
Like we talked about relationships with public health and things like that and we are looking more, that integration is absolutely massive so why would we just look for that for help, it would be good to integrate that evidence service almost across health and social care as well.
P14
In this case study, the organisational intention is to acquire, assess, adapt and apply evidence in decision-making, taking a logical approach. However, limitations in the CCG’s capacity to acquire evidence affect its ability to consistently achieve its ideal commissioning processes, and lead to an emphasis on alternative sources of evidence and to other problems in the treatment of evidence, such as confirmation bias.
Summary
The CCG aspires to adopt a rational approach to decision-making in which options are identified and high-quality evidence is used to select the most appropriate option. Despite real-world limitations, this is achieved in some commissioning cases.
Commissioners value formal research evidence (such as RCTs, peer-reviewed science and clinical guidelines) and are able to incorporate these into clinical decision-making. However, because high-quality and relevant evidence may be unavailable, more diverse sources of evidence are used, such as ‘stakeholder views’ and local patient data. Decision-making processes are therefore more innovative, exploring new options developed by stakeholders, but without data on effectiveness.
There are two different decision-making processes in the CCG: the first applies an (intended) ‘evidence cycle’, in which evidence is sought and integrated into decisions and impact is then evaluated. The second relies on the generation of new service delivery ideas from stakeholders.
The CCG does have some capacity to acquire, assess, adapt and apply research evidence, but this is varied and limited by resources. It does draw on external resources such as the CSU and public health to do this.
Linkage and exchange takes place between the CCG and public health consultants and with the evidence briefing service, in particular, to address problems with identifying and answering appropriate and useful questions.
Exploring uptake and use of evidence in decision-making in Clinical Commissioning Group A2
Decision-making processes
The main catalyst for the use of evidence in this case study was the financial constraints specific to this CCG. As in CCG A1, there is a distinction between the way that commissioners here aspire to make decisions and the reality of decision-making processes. NICE guidance was used in part because of the cost-effectiveness element, but also as a bargaining or influencing tool when working with providers. Participant 16 gives the example of the medicines team that considers cost-effectiveness evidence to facilitate negotiation between consultants and the CCG. Other drivers include the availability of resources and the influence of factors other than research-based evidence on decision-making processes.
The intended model is evidence-based logical analysis of options:
We don’t debate stuff without knowledge, we say bring it back next time, and we ask public health to go away and do a, you know, really good piece of work on that really and, and come back again making recommendations based on clinical evidence and cost-effectiveness; and cost-effectiveness exercises have usually been done by other people, for example NICE, and if it hasn’t then we’ve got the local thing called [local treatment advisory group] which is like a mini NICE really, which takes on things that NICE haven’t done. They have a waiting list of stuff to be considered, but what we can do is defer or decline particular requests until [local treatment advisory group] have done a bit of work on it.
P02
This perspective was reflected by P03; yet she also suggested that evidence is ‘not bandied around the CCG much’.
Decisions such as those about priority setting are supported by benchmarking processes, although this is recognised as ‘not quite evidence-based’ (P03). Other decisions combine evidence [e.g. the Joint Strategic Needs Assessment (JSNA) is used for priority setting (P16)] with local need but also with local ‘appetite for change’ (P12). This ‘appetite for change’ highlights the influence that the likelihood for successful implementation (and the role of GPs in this) has on decision-making.
Challenges to the use of evidence in decision-making
There was little discussion of the practicalities of decision-making and, although descriptions are not consistent, participants identified two key challenges to the application of this evidence-based policy-making model.
Absence of evidence: some participants perceived that evidence or national policy on a topic does not always exist even where it has been sought. To fill this gap [‘what do we do then?’ (P12)], commissioners seek alternative and non-research-based information. This is a particular challenge for commissioning in relatively new fields such as social prescribing, where research-based evidence is not yet available (as demonstrated by the briefing provided by the evidence briefing service). The fluidity of research evidence, as well as of the CCG’s own priorities, can make it difficult to apply appropriate and timely evidence.
Financial constraints: given the financial context of this case, the limited evidence-based decision-making that is reported may be a reflection of the need to respond rapidly to financial pressures. Establishing ‘risks and cost–benefit analysis’ is in the interests of patient safety (P03), but the CCG’s strong performance also shapes its priorities:
I think for us, the priorities are, probably thankful in some respects that they’re money, and I say that, you know, much as it’s a massive headache, it is because we’re not so worried about quality or performance, we’ve got really good-quality performance metrics.
P09
Participants recognise the benefits of using research to make better decisions in the long run:
Decisions are made on financial pressures to give short-term reward, where if you delayed it slightly you’d probably get a bigger reward in the long run if it was evidence based than if it was, I just, but that’s hard when you’re in that situation.
P03
There is a lack of skills and resources within the CCG to make evidence-based decisions, although one participant recognised that if there was less need for a short-term response, ‘better’ decisions may be made, as those decisions could be informed by evidence (P03).
Absorptive capacity
Other challenges to the use of evidence in decision-making relate to the organisation’s capacity to acquire, assess, adapt and apply research evidence.
Capacity to acquire evidence
There is no formal process for the acquisition of knowledge in the CCG and there are more benchmarking exercises described than evidence-seeking exercises (P09). However, there are two clear processes captured in our data: (1) informal scoping by members of the CCG of information from other CCGs; and (2) pulling in research evidence via external agencies.
- Informal scoping: one participant described informally scoping the activity of other CCGs as ‘plagiarising’ via ‘low-tech’ scanning of the activity of others with similar objectives (P12). This involves an initial internet search [Google (Google Inc., Mountain View, CA, USA; www.google.com) was specifically cited as the search engine used], followed by a ‘review [of] the evidence’ (P02). The details of this process are unspecified, but participants describe reading policies and information from ‘evidence bodies’ such as The King’s Fund and other CCGs, and using them to create local policy guidance. If multiple CCGs have similar policies, this mass is perceived to contribute to the ‘robustness’ of the evidence. The emphasis given in this case-study CCG to practice from elsewhere is highlighted by descriptions of a service development that was weakened by a failure to look at what had worked elsewhere.
- Drawing on external services: individuals do not necessarily seek out evidence directly; rather, they commission it via external sources deemed ‘reputable’, indicating recognition of the need for robust information (P03). For example, commissioners may request that public health clinical advisors conduct a review of evidence, including cost-effectiveness. Attitudes towards the regional public health team are positive, but this appears to be based on the personalities of individuals in the team. Commissioners also draw on local treatment advisory group services to review evidence. In contrast, although it is used, support from the local CSU is referred to as ‘a tick-box exercise’ because of delays in receiving responses to questions. Drawing on public health team expertise is the usual means of acquiring evidence to support the low-value interventions work (P02), although the evidence briefing service provided some new and some updated evidence on MSK procedures.
One of the challenges to acquiring evidence is a lack of skills in the organisation: skills in considering evidence are perceived to be limited beyond GP commissioners’ own clinical skills (P03). Unlike other CCGs in the region, this CCG has an epidemiologist available internally who leads on JSNA data analysis to identify priorities and need and to plan services. However, it is acknowledged that this is based not on research evidence as such but on ‘key information’ (P09), and it is likely to be primarily a benchmarking process given the emphasis on JSNA data.
Capacity to assess research evidence
There is a distinction between participants’ descriptions of evidence-seeking activity (such as internet searches for other policies) and their understanding of evidence quality, which indicates some capacity to appraise evidence. Clinical participants do demonstrate some ability to appraise evidence, for example the understanding that RCTs are ‘high-quality evidence’, that evidence should come from reputable journals and that evaluations should include cost-effectiveness. One non-clinical participant described the process of seeking quality evidence as ‘not looking at Wikipedia [Wikimedia Foundation, San Francisco, CA, USA; www.wikipedia.org]’, but instead quoting evidence from ‘reputable’ organisations (P03). Good sources include NICE and the Royal College of Surgeons, and these appear to play an evidence transfer role: one participant collects peer-reviewed research only if it has been included in reviews by an external organisation such as NICE or the Royal College of Surgeons, rather than sourcing it himself. However, even clinical members have a mixed capacity to appraise ‘evidence’, as one described evidence as:
. . . basically any piece of information at all that can have relevance on what you’re, you’re looking to answer; suppose you’ve got a question to answer, you look at, you look at anything at all that can help answer that question, then you look at it irrespective of that evidence, I suppose, and gives it more robustness
P03
while also emphasising the importance of ‘respected journals’ in backing up decisions.
Capacity to adapt research evidence
Participants do not address the replicability of peer-reviewed trials of interventions, but there is an attempt to adapt evidence from other CCG policies: ‘no point in reinventing the wheel’ (P02). They describe an approach of selecting elements of another policy that they deem feasible in their own context.
Clinical Commissioning Groups with similar objectives are perceived as ‘independent advisors’ (P12) to help the CCG go through the process, with no recognition of their potential biases. One participant recognised the tendency for a CCG to promote its own activity even in the absence of empirical evidence (e.g. in the area of social prescribing); such qualitatively ‘compelling’ cases make that activity appealing to commissioners elsewhere (P16). For example, when the CCG team visited another CCG to explore its model of service delivery, it discovered that the impact had been overstated: ‘our teams went down to have a look at it and the difference between [the CCG] plus the control is not that big, it’s so it’s kind of how you present it’ (P12). This led to the case study B CCG not pursuing that model of care despite its initial appeal. On the whole, participants did not describe engagement with the evidence used to inform other CCGs’ policies: one participant stated that he only collates information and selects the best bits without assessing the quality of the evidence that might have been referenced: ‘I am assuming other areas have used it [evidence], it’s been factual’ (P13). Another perceived limitation of this type of evidence is the challenge of replicating it locally; in social prescribing, the impact demonstrated in other CCGs may not be replicated locally because of different populations (P13). Although this is also a challenge for the implementation of evidence from clinical trials,68 it is less acknowledged in narrative-based evidence.
In contrast, one of the perceived strengths of other models of care is the influence of ‘soft and passionate’ narrative-based evidence in the form of opinions and anecdotal evidence about the model [e.g. social prescribing (P16)]. This was the driving force behind seeking evidence about social prescribing from the evidence briefing service. This suggests that commissioners are influenced by narratives to develop services; in some cases this may prompt evidence-seeking behaviour, but this may not always be the case. 69 There was also a tension between a desire to learn from the perceived success of others (P12) and a reluctance to abandon a model of care (such as social prescribing) despite the lack of robust evidence for these services:
[The CCG] was saying that they’d saved money but again you can’t actually say it’s social prescribing, it could be just looking at new ways of looking at unplanned care, you know, putting it out there. So it’s a mixture and there’s no way to actually definitely say, but it’s still a benefit.
P13
In turn, the limited engagement with the research evidence that had informed policy development in other CCGs, the challenges of replicating policy locally and the lack of robust evaluations appear to affect how commissioners use this type of evidence: ‘. . . using those ones that are thought would fit in best with what we were looking at’ (P13, see also P12). To address this, some members of the CCG take a ‘pick-and-mix’ approach to policies from elsewhere, adopting parts of another CCG’s model but not others, which they ‘couldn’t or wouldn’t necessarily want to reproduce’ (P16).
Capacity to apply research evidence
Evidence, in its various forms, is applied to support decision-making in the CCG but research-based evidence is not always used instrumentally to directly influence decisions. CCG policies are applied to create new local guidance, for example around low-value interventions (P03), and non-empirical information gained from interactions with other CCGs is sometimes applied to the local context, for example in terms of ways of partnership working:
That’s [model from X CCG] really influenced my thinking when it comes to commissioning . . . so although that visit to X wasn’t applicable for what we went for . . . actually came away with a sea change in how we talked to our local providers about things.
P16
Although this demonstrates learning from activity elsewhere, it relates more to learning about the processes of commissioning rather than the interventions or services themselves. There is also some use of evidence as a confirmatory tool, for example a preference for research evidence that supports what commissioners ‘have been asked to do’ (P16).
In comparison with the A1 and B1 CCGs, there is less indication that the A2 CCG seeks ‘stakeholder’ and patient preferences to inform its decision-making, in line with key sources identified by other studies. 48 In its place, there is a greater emphasis on models of service delivery in other organisations and on cost-effectiveness information. Stakeholder preferences are collected, patients are represented on guidelines groups and the CCG also consults more formally with HealthWatch (HealthWatch England, Newcastle upon Tyne, UK; www.healthwatch.co.uk). However, there is a perception that patients tend to agree with the messages of existing research into patient preferences (P16), suggesting that some commissioners feel there may be less point in consulting locally when such research already exists.
Table 15 compares the pressures on commissioners identified in the CCG with those documented by Wye et al. 48
| Pressures on commissioners identified in case study B | Pressures on commissioners from Wye et al.48 |
| --- | --- |
| Evidence briefing service | Evidence purveyors |
| National and regional performance managers | |
| With regard to low-value interventions | The press |
| Yes, pressures (often evidence based) from consultants in secondary care to commission specific services or interventions | Health-care providers |
| Yes, to a limited extent | The public |
| Yes, to a limited extent | Service users |
| Yes: secondary care providers | Clinicians |
| Yes | Internal colleagues |
Linkage and exchange
Our data suggest that there is linkage and exchange between the CCG and external research-related organisations, including the evidence briefing service. This is predominantly the transfer of evidence reviews from research organisations into the CCG. Although the independence of research reviews is deemed important, their source is unimportant – public health and the evidence briefing service are equally respected sources as long as they are ‘stand-alone’ [which we understood to mean ‘independent’ (P02)]. Descriptions of the working relationship between evidence providers and commissioners were positive and the CCG draws on and has relationships with several sources of evidence:
Commissioning Support Unit: barriers to the relationship with the CSU included the administrative demands imposed (P02) and the long turnaround times for responses (P03). Services bought from the CSU are perceived as a transactional ‘tick-box exercise’ that enables the CCG to meet statutory requirements around research, but without engagement with researchers or a commitment to include evidence in policy-making (P03).
Public health: public health had a key involvement in decision-making around low-value interventions and the IFR panels (P02 and P03), and descriptions of the relationship with the public health clinical advisor (PHCA) are largely limited to this policy. This is partly because the lead on low-value interventions was keen for policy revisions to be evidence based, but it is also likely to be due to the presence of public health clinical advisors at early meetings of the Low Value Interventions Implementation Group (P02).
Local Treatment Advisory Group: this group provides a service around clinical guidance in areas where NICE has not produced guidance. There are some apparent capacity issues [there was a waiting list for work to be conducted (P02)]. The link between the public health clinical advisor and the Local Treatment Advisory Group is important: the PHCA acts as a knowledge broker because he sits on the Low Value Interventions Group and can channel requests into the Local Treatment Advisory Group.
Some aspects of linkage and exchange were more evident than others.
- Presence: descriptions highlighted the benefits of face-to-face contact between groups, as well as the importance of researchers [from both the evidence briefing service and public health] being present in decision-making fora. For example, the main evidence briefing service contact was described as ‘omnipresent’ in the Low Value Interventions Implementation Group, but not necessarily in other policy-making contexts (P02). Being present enabled PW to identify opportunities for evidence use rather than depending on decision-makers to do so. Reviewers need to be immersed in context in order to understand evidence requirements (interview with NHS England members). There is a sense of a need for a service to ‘handhold’ the CCG in the use of evidence, in part owing to a lack of skills and of knowledge about how evidence can be used (P03).
- Question generation: working with the evidence briefing service appears to have supported the generation of appropriate research questions via an ‘organic’ process of discussion between the CCG and PW (P12). Topics were initially generated by the lead evidence briefing service contact based on work done elsewhere and these were prioritised based on CCG needs and the contact’s suggestions. Over time, building this relationship has meant that commissioners have learned to present more tailored questions (P12). The conversation between the CCG and the evidence briefing service is perceived as positive because it is ‘iterative’ (P12), suggesting that an incremental approach to policy-making sometimes takes place in this CCG. There is greater recognition of the role of question generation in identifying evidence to support decision-making and the importance of asking the right question when seeking evidence (P03).
- Relationship and rapport: participants were positive about the relationship built with the evidence briefing service and recognised the need to invest time and energy into it rather than it being a passive process (P09). This ensured that the relationship was ‘not just one-way traffic’ (P09). Participants also suggested that the degree of linkage and exchange could have been increased with regular face-to-face meetings about recent topics (P09), taking an informal approach rather than formal presentations, with an opportunity to ask questions (P16). This would aid consolidation of the information more than reading a briefing paper would.
- Individual gatekeepers: one individual was deemed responsible for gatekeeping between the CCG and the evidence briefing service (P12). Briefings were shared with all of the senior teams, ‘the whole of the CCG’ (P12), and it was assumed that distribution of briefings to locality directors led to dissemination to all CCG members. Briefings were integrated into CCG activity: ‘Paul or one of his team shared with us papers on falls . . . either pressure ulcers or HCAIs [health-care-associated infections] or something, and those papers what I did with them, I was director sort of covering all the quality at the time as well, was send them to all of our providers and commissioners and then took it to our Quality Review Groups with them, and asked them to outline to us where they were delivering against the evidence bases’ (P12).
- Time frames: time is critical to CCGs pulling evidence: the need to turn things around quickly means that the CCG does not always draw on evidence services (P16).
- Shared understandings of objectives and values: shared understandings were perceived to be a positive aspect of working with PW. Similarly, the longevity of the CCG’s (and previously PCT members’) working relationship with individuals in public health is deemed beneficial to knowledge transfer (P02). High regard for some members of the public health team is derived in part from shared values (specifically that there is only one pot of money to be shared around and therefore services should be evidence based) as well as the PHCA’s skills.
The dominant direction of information transfer in this CCG was a push of evidence from PW on relevant topics. The one instance of a pull from the CCG, for evidence on social prescribing, was on financial grounds: seeking justification for greater spending in the area (P16). This CCG was the only case in which a participant raised transfer from the CCG towards the research organisation: as a vanguard, it was able to inform the evidence briefing service contact about its needs so that the service could better deliver to other vanguards (P12).
Evidence briefing service
The evidence briefing service offer appears to have come at a critical time for the CCG, as it was experiencing significant financial constraints. It was therefore perceived as a means of meeting strategic objectives while remaining within financial balance (P02).
Positive feedback
Interview participants gave limited but positive feedback on the evidence briefing service. The service was perceived to have had a positive impact on the CCG’s approach to using evidence where it previously lacked consistency, ‘[the evidence briefing service] helped us at least have a level of discipline about some sources that might then prompt, a lot of them prompt further questions, there’s no doubt about it, but at least we’ve got that level of discipline across us . . .’ (P12), and was perceived to have affected their way of thinking (P12). The service was also seen in part as a useful ‘critical eye’ (e.g. in the low-value interventions work) that was valued because it justified their current decision-making. Time constraints were cited as a factor in the type of evidence seeking undertaken, and one benefit of the service was the time saved for CCG members.
Knowing a face (PW) was perceived as beneficial (P16). The service was perceived as a trusted and credible source of evidence (P12) that is ‘robust’ (P09).
Although the briefings were useful, given the absence of available evidence for some topics (P12) it would also be helpful to have a summary of ongoing studies.
Summary
Research is used instrumentally to inform specific decisions when it is available (e.g. the low-value interventions policy is clinically focused and NICE guidance is available to inform this) and when the topic aligns with CCG priorities (the CCG is operating under financial constraints).
Although the CCG aims to use research-based evidence, it draws heavily on the adaptation of policies and practices from other CCGs. Challenges to this process, such as replicability and a lack of empirical evidence of effectiveness, are also recognised.
The CCG’s engagement with the evidence briefing service was strong, and a larger number of individuals within the organisation engaged with it than in the A1 CCG.
Exploring uptake and use of evidence in decision-making in the B1 Clinical Commissioning Group
Decision-making processes
Evidence-informed decision-making is valued in this CCG and there is an expectation that evidence will be included in business cases. However, there appears to be no formal process for doing this and some participants suggested that evidence ‘gets a bit forgotten’ within the organisation (P10). There is a perceived need to get research worked into decision-making throughout the process:
I think that if I’m honest it would be finding a way to get a bit closer to that or working on how we were thinking about evidence at the beginning and throughout the work we do, ‘cos I think it does get, it does get conveniently, you know, just one more factor to play in, you know, so I don’t know why that happens but, you know, it’s probably not given enough prominence, so from an organisation point of view it’s probably to give some more prominence and thought to that.
P10
However, there is currently little clarity about how the organisation wants to use evidence, in particular about how to make evidence part of a whole way of thinking rather than simply one factor among many.
There is no one model of decision-making captured by the study data. Some participants recognise the need for a rationale for commissioning decision-making:
We’ve matched our perceptions of what we need to do against the clinical evidence and make sure what patients are asking for was clinically sound as well and then they will be able to form an opinion around how the services might look going forward with a good sound rationale and be able to go back to the people who have been involved in that listening exercise and consultation to say why decisions were made.
P19
However, a process of formal options appraisal, in which the evidence for alternative services is considered, is not applied to all commissioning decisions. Because there is no formal appraisal and comparison of research-based evidence, decision-makers cannot assume that the selected option will maximise benefits or minimise costs, although there is some checking of patient safety. Instead, there is a process of negotiation to find common ground in terms of local preferences and to develop services to fit this.
Drivers
One challenge to a formal process of options appraisal is the multiple pressures on commissioners. Participants emphasised the many sources of knowledge that impinge on policy-making, highlighting the range of pressures on decision-making.
- Enthusiasm: prioritisation is driven partly by individual enthusiasm (e.g. one participant described being inspired to address certain commissioning areas by her mentor), although this enthusiasm is not always borne out in implementation.
- National direction of travel: there was a clear national lead on priority-setting, the commissioning of social prescribing and the Year of Care programme, possibly because of the nature of the funding in these areas.
- Individual perspectives and experiences: ‘fears and concerns’ (P05). These can be prejudices or experiential views, and are not necessarily based on evidence.
- Common sense: some decisions are made because they have face validity [‘barn-door obvious’ (P05)], even if they are not supported by evidence.
- Structural factors: the separation of public health and CCGs was considered problematic, and because this CCG was formed from multiple organisations, decision-making is still done in separate organisations, particularly for implementation and pathway design.
- Organisational values: organisational core values are important to decision-making (P06). CCG members share the same concepts and approaches around holistic care, social justice and inequality (shared organisational values that reflect the notion of safeguarding the ‘common good’). However, ‘imposing values’ (P05) may result in some bias.
Pressures on commissioners
Compared with Wye et al. ,48 evidence purveyors, national and regional performance managers and the press were not identified in this CCG; however, health-care providers, the public, service users, clinicians and internal colleagues were identified.
Use of evidence in decision-making
The catalyst for acquiring and applying evidence in decision-making in this CCG is unclear, although, as in cases A and B, the evidence-seeking process is led by the need to develop a service that meets performance targets rather than resulting from the emergence of research findings. However, different types of evidence are used differently in decision-making.
Evidence is, at times, used instrumentally in terms of ‘a little bit of evidence’ being used to raise interest in an area, but then there is recognition that this should be tested in a systematic way (P08).
National ‘direction of travel’
Alongside evidence from patients and the public, guidance from NICE and other bodies influences commissioning intentions (P06).
Public and patient preferences: stakeholder involvement is particularly important during the early stages of commissioning to shape the service (P06). The influence of patient and public preferences on commissioning processes is formalised by a consultation process [e.g. the reprocurement of community services (P06)]. This engagement includes a patient questionnaire to inform the development of urgent care, a patient forum for mental health services and interviews with patients and carers to inform the new service specification for community services (Governing Body meeting minutes, September 2014). Documentary analysis illustrated a formal emphasis on data generated from public consultation: documentation associated with governing body meetings contains few references to research evidence and a significant focus on patient engagement and consultation. The service specification for MSK services was partially informed by engagement with 50 MSK patients (June 2014). Patient and public evidence is also used as a testing ground for commissioning plans.
Providers
Some service design takes place during the procurement phase and is influenced by providers and potential new providers. There is a market involvement aspect to commissioning as providers are asked to give their perspective on the design of services (P06). In a ‘market engagement event’, ‘coproduction’ is used to develop a high-level framework of potential services. This framework and a set of options are then presented at a stakeholder event to identify potential interest from providers. Because providers often collect patient and public feedback data, including patient satisfaction surveys, and have an influence over service specifications, patient and staff satisfaction is also built into the services at this stage: ‘so there was direct patient involvement, and so the strategy’s been, you know, it’s final draft basically and the views of patients are in that, so you know, we think we’ve represented patient views in that’ (P10).
However, participants maintain that national evidence has greater weight than surgeon or provider preferences: ‘I would say the main influence is national, or the national evidence says, I think quite low down would be local surgical preference . . .’ (P10). The description of decision-making processes provided by P19 suggests a process of integrating stakeholder perceptions via a safety-checking mechanism that prioritises clinical evidence over patient perspectives. Part of the rationale for this is to ensure that decisions can be justified:
. . . We’ve had a significant period of time listening to service users and carers, we’ve matched comments from them against NICE guidelines and policies and things that come from NHS England, Department of Health around mental health service provision targets and all those . . . and I’ve been very clear I think with public and service users around the process we’re going through that yes, we listen to what you say, it may be that’s what you think you need or you’re asking for, it’s not deemed to be clinically sound if there’s no evidence base to say it’s what we should be doing. So it’s balancing the views, the evidence, the impact of cost, quality . . .
P19
However, there is no indication of how this information is used in decision-making. Feedback reports from stakeholders were produced, but it was unclear whether or not these were used to inform service design (in contrast with the A1 CCG, where this was a clear intention). The process of integrating public and patient information may therefore be limited to a representation of perspectives:
. . . so they have [a] regular sort of committee group meeting, which is a steering group with, you know, key people from the patient and voluntary sector playing into that. And then any bits of work that are going on, or emerging, play into that, so it’s a way of sort of bringing it together in some kind of co-ordinated way and that, and that allowing to be reported into the senior management team, the executive, so that we can understand the messages that are coming out.
P10
Working together
Ways to Wellness (www.waystowellness.org.uk) is an example of the CSU, data analysts and the local council working together to identify what offers the best value for money in service delivery. Evidence (from the CSU and a regional Quality Observatory document) is used as a tool to convince members of a direction of travel. The regional Quality Observatory, commissioned to look at the evidence around the Ways to Wellness pathway, amalgamated more robust evidence from RCTs with less research-informed local information to work out what impact the pathway might have.
‘What works elsewhere’
All participants discussed drawing on other CCGs’ policies and exploring how these could be adapted to fit local need. However, commissioners aimed to identify the evidence of outcomes that other CCGs have considered: ‘so I think when it got to pathway level we’d be looking for some sort of local piloting with some sort of national or local evidence to back it up’ (P06). Commissioners would combine local evidence from pilots with asking other CCGs what national evidence supports it. Descriptions of collecting data from other CCGs were not as extensive as those in case study B. This was perhaps due to the nature of the example topics in each case study: social prescribing (case study B) is a relatively new intervention and it was recognised that this means there is little robust evidence. In contrast, the reprocurement of community services (case study C) may have encouraged the use of public consultation.
Face validity
The face validity of evidence was important to decision-makers. In instances when gut instinct suggests that a service option is wrong, hard research evidence may be sought to support this feeling (P06). In contrast, if the option has face validity, no evidence is sought. Clinical leads would look at clinical evidence if there was a move towards changing specific drugs: ‘you would expect some sort of evidence base for that . . . and cost analysis’ (P06). A distinction lies between micro-level decisions about changing a drug and macro-level service design, because people struggle to think about the evidence base around service redesign, whereas micro-level interventions have population-defined intervention evaluations available (P06).
Innovation
Although there is less emphasis on innovation than in the A1 CCG, an innovative approach to decision-making is adopted to fill the void where there is no research evidence available to inform commissioning. The perceived advantage of taking an innovative approach is that it enables greater flexibility: ‘I think [we] are going into unchartered waters. We do have to be innovative as organisations and we do have to therefore probably think about creating strategies that probably have very little evidence because it’s not yet been created’ (P08).
Challenges to the use of evidence in decision-making
Although there is a general principle of evidence-based decision-making, the reality of policy-making takes a more pragmatic approach to the use of evidence.
Difficulties with evidence: there are a number of perceived problems with evidence. These include a lack of evidence available to inform decision-making (P05); conflicts between local data and national evidence, possibly owing to national evidence not being generalisable to the local context; and queries around the robustness of local policy. Other problems include evidence conflicting with other pressures, such as cost limitations and patient numbers [‘evidential problems get, to me, a level of friction’ (P05)], and the ambiguity of evidence: ‘there is very often not a clear-cut yes or no that comes out of the evidence’ (P08). If commissioners are unsure about the evidence, they seek assistance from public health.
Attitudes towards evidence
Disagreement with evidence affects its use and enthusiasm for it, because individuals have preferences for certain evidence depending on whether or not they agree with it (P05). When research evidence is provided (e.g. a regional Quality Observatory document supporting Ways to Wellness), ideally it will confirm the CCG’s existing values.
Structural challenges
These include the recognition that decision-making and evidence are complex and that the human dimension has to be taken into account. Decisions need to be implemented, and real people and patients deviate from the evidence in practice, so the evidence may not always stack up locally (P08). The newness and nature of the CCG mean that it is structured and operates differently, and there is little evidence available to support this.
Absorptive capacity
Capacity to acquire research evidence
There is a key gap in in-house acquisition and review of evidence and in ways to ‘translate this into practice’. As in the A1 CCG, there is no formal process for acquiring research evidence to inform commissioning. This may be partly because the question about research evidence is not always asked by the executive team and because commissioning can be done without evidence (P10). The response to the lack of available evidence, for example around social prescribing, is to seek to generate evidence in order to see what works (‘what’s the evidence from our point of view’) rather than to seek ‘evidence’ formally.
There are three avenues through which ‘evidence’ is sought: from national bodies by commissioners and managers themselves; via the CSU, public health teams and other bodies (including the evidence briefing service); and through the coproduction of evidence through consultation.
- National bodies: although information is sought from think tanks and national bodies [‘I think there is a reasonable stab at that’ (P10)], this is not necessarily a formal process (P10). Literature searches are conducted by commissioning managers [e.g. on general community service provision (P06)] and may draw on evidence from The King’s Fund, the national 5-year forward view and other national papers, such as the ‘Future Hospital Commission Paper’, that are influential as the ‘national direction of travel’.
- External bodies: there is some provision via the CSU to review and assess evidence (participant 06 recently sent questions to this service around frailty). However, the CSU is more used to looking at data from a provider-monitoring point of view. As in the A2 CCG, there is therefore some dissatisfaction with the service provided by the CSU, but it is the CCG’s formal means of acquiring evidence. In one example, the CCG sought evidence via the evidence briefing service (the low-value interventions policy).
- Coproduction of ‘evidence’: for example, stakeholder engagement events were held to collect information on public preferences to inform new service developments – public preferences are for health and social care to be integrated again.
One of the challenges to acquiring evidence is a lack of skills and knowledge or resources to seek information, as well as a lack of understanding about the processes of doing so:
. . . sometimes I say, I need a bit of information on this and someone can go there and do that but at the beginning of a project sometimes you don’t know where to start and where to start defining them, the questions for this. So I think it’s expertise and having possibly the right people in-house where, when the (ideas) come you can start asking the questions.
P06
Acquiring evidence is also challenging because commissioners do not ask questions that are specific enough: ‘we are not very good at trying to define the questions that we might want to do our searching on’ (P06). This is also because the population may be wide and complex and the interventions multiple and complex: ‘It is like putting the pieces of a jigsaw together with evidence and that is difficult’ (P06).
Capacity to assess research evidence
The CCG’s capacity to assess and appraise research evidence internally is limited, but it does draw on the skills in local organisations. One participant (P06) demonstrates an understanding of what constitutes high-quality evidence, but this is led by his clinical perspective. Others mention ‘Level 1 A evidence’ and RCTs, so there is recognition of different types of evidence (P05). There is a degree of appraisal of external CCG policies via judgements about what is good practice: ‘people who’ve won awards’ and ‘understanding the person’s credibility on the subject area, look at the methodology they’ve used, look at the sample size . . .’ (P19). Decision-makers have responsibility for appraising evidence, but the capacity of individual members to appraise it is diverse, they may lack the skills to do so and the process is informal (P06). Clinical members have experience of critical appraisal from their training, but non-clinical members do not necessarily have this experience (P06).
There is some discord between individual participants’ recognition of what constitutes high-quality evidence and their practice of employing information to inform commissioning; P06, for example, suggests that evidence is unlikely to ‘fundamentally change the pathway’. One clinical member counts as ‘evidence’ both ‘professional articles in journals’ about other CCGs’ policies and activity and conversations with other CCGs about what they have done (P19).
The ‘value’ placed on different sources of evidence is mixed: although participant 06 places sources such as the Nuffield Trust at the top of the hierarchy, he also states that local piloting and data analysis done on the ground with feedback from patients would be ‘hugely influential’ on decision-making despite not being high-level evidence. What works elsewhere has a key role, despite the recognition that this is ‘not pure research . . . [and] not necessarily had been through a rigorous research process’ (P19).
Adapting research evidence
Although the organisation has some capacity to acquire evidence, including research evidence, through its own searching and by employing external organisations, the way in which patient and public preference is incorporated into decision-making is unclear, as is the extent to which it is only a consultation process.
Replication of other policies and practice in the local context is recognised as a challenge. One participant discusses the need to adapt lessons learned elsewhere (from other CCGs): ‘never think that you can just lift and shift something that works in one city to another . . .’ (P19). In bringing together what works elsewhere with stakeholder information and demographics, there is a process of amalgamating different sources of ‘evidence’ (local information, stakeholder preferences and what works) in order to design a service. The important aspect is making sure that what works elsewhere also meets the needs of local people (preferences as well as demographic data).
Capacity to apply research evidence
Evidence is currently only ever one factor that plays into decision-making; participants feel this needs to change and that the organisation needs to put thought into how it might do so (P10): ‘you know I think there’s a reasonable chance we’ve got a blind spot on how we, you know, fully bring evidence into our commissioning’ (P10). When multiple evidence sources are amalgamated by delivery groups, an attempt is made to integrate evidence of varying quality: ‘getting a balance right between the academic evidence and what I call the softer evidence’ (P19). At pathway level, evidence is more likely to be needed and applied (P06), owing to the need for evidence around clinical intervention options rather than the design of a whole service. There is also strategic deployment of evidence to gain influence (Nutley et al. 11): it is used as a persuasive tool to support behaviour change among providers (P05) and as a defence for non-payment to providers for evidence-free interventions:
. . . now NICE guidance is to move away from the sort of invasive ligation and stripping and go towards, to go minimally invasive sclerotherapy and oblation, but we know that there’s massive disparity in trusts in, just in the [this region] where some are doing lots of the minimally invasive and some are doing the sort of more, you know, the old-fashioned stuff, so it’s about taking that information, having a clinical discussion with the trust, if necessary backed up with a contracting, . . . putting a target in if that’s what’s needed to say ‘you know, we’re looking for you in the next year to reduce that to 50% and the year after that to 25%’, whatever that might be.
P19
Linkage and exchange
There is evidence of a model of linkage and exchange between this CCG and local research bodies. In some cases, there are links with research institutions such as universities, but, for the most part, the research community is represented by the public health team, the CSU, the regional Quality Observatory and, during the last stages of the project, the evidence briefing service. Some indications of linkage and exchange are more evident:
Synthesis of local data with general knowledge: the synthesis of local data with wider evidence was done by the CCG itself rather than via knowledge brokers, although public health specialists appear to have provided this service for the community services reprocurement process.
Trust: trust between the CCG and ‘researchers’, in this case the organisations providing access to evidence, is important to maintain the relationship. There is existing support from public health embedded in the CCG, the regional Quality Observatory and the CSU, which are perceived as suppliers of information in the form of business intelligence. Some elements of this are positive and the long-term relationships with individuals in public health have resulted in trust in their work. Evidence transfer and information-seeking behaviour (pull of evidence into the CCG) is assisted by the close relationship between members of the CCG and the public health team. The latter is considered a robust source of evidence that provides a reliable quality of work.
The CSU contractually provides support to the CCG for decision-making, including providing evidence; however, the adequacy of the current service is questioned by some commissioners. Although one participant described a good linkage between the CSU and the CCG in terms of questions asked and evidence provided (P19), apparent difficulties in this relationship are based on a perceived disconnect between the question asked and the answer provided:
It’s a bit like going to a garage and saying, you know, I’ve got this real dreadful noise in my car from that wheel and I think there’s a wheel bearing gone, and they look at it and say, possibly, but we think the paint needs to be changed.
P05
The result is a lack of fit between the data provided and the question asked, and a lack of clarity about what information (interpretation of data or non-analytical production of data) is being provided by the CSU. There was disagreement about where this service would be best located; one participant believed that such a service would be better located within the CCG (P05).
The region’s Quality Observatory appears to compensate for a skills deficit in the organisation by providing a service that supports interpretation of routine data (P05). The success of this relationship is attributed to its transactional nature (‘because we are paying them directly’). A second element may be the personal relationships between individuals in each organisation and a memory of work previously delivered. Strong links with researchers (or, in this case, professional bodies) are important in changing practice: ‘I feel very strongly that you go back to the people that come up with the goods’ (P05). A similar picture is built around public health, where the history of individuals from public health working with the CCG is strong because of the quality of previous work.
Question generation: one of the challenges in seeking evidence relates to defining questions. This is especially problematic for services that address the needs of complex populations, for example community services. This complexity was referred to as a ‘jigsaw’ with multiple pieces to put together (P06). The interaction with the evidence briefing service (PW) in this arm of the study did not fully address this problem during the intervention period:
. . . we came up with a list of areas that we thought might be useful to have a bit of evidence base to, and had that conversation with Paul and then sent an e-mail. He, I think prior to that he sent me information about areas that he’d, other areas he’d been doing work for and what might be useful to reinventing the wheel and things.
P06
However, because the service was also working with local CCGs, some of the topics were similar, meaning that the same questions could be asked: ‘the list that he came back with what the work he’d been doing on wasn’t a million miles away from what people asked internally to what they’d like some information on’ (P06). The availability of the service did enable individual clinical leads to present questions relevant to current commissioning topics. The process of comparing the CCG’s list of questions with those being prepared by the evidence briefing service seems to have been useful, in part because PW had an understanding of the work going on across the region. Furthermore, early negotiation of questions with PW was useful:
I think it was a little bit puzzled about how high level the search was going to be, i.e. urgent care versus 75-year-old antibiotics at home, that; so I was, so I was a little bit unsure about the scope of the sort of evidence coming back and things. I mean ‘cos [sic] if we went off and went as a search on urgent care or that sort of stuff versus give me a search on telephone triage in general practice, I think that; so I didn’t really know where to start on that point, I knew; and when I was chatting with Paul he was like, ‘Well really there’s some areas that are quite broad, some areas that can be quite specific, and if you’ve got any key questions just get people to ask the key questions, or if there’s big areas you want to focus on like frailty, things like that, put them down as well’. So that was a bit of a mixture I think that we put down.
P06
Other aspects of linkage and exchange presented greater challenges for this Clinical Commissioning Group
Dissemination of briefings
This appears to have been a relatively weak area in this CCG compared with the A1 CCG. Dissemination of briefings took place in meetings and via e-mail, but there is no evidence of whether or how this information was integrated into decision-making. This was attributed in part to busy workloads (P06). The CCG did send the briefings to interested external parties, such as providers (P06).
Forging new connections
Apart from the evidence briefing service and named individuals in public health, there is a lack of knowledge about whom commissioners should contact for research evidence to answer their questions: ‘I struggle a bit with knowing who to contact’ (P06).
One-to-one encounter
One participant would have preferred more one-to-one or teleconference contact with the service in order to improve the transfer of knowledge:
I think when you start with the high-level list, and sometimes you want to drill down a little bit into more specific questions, I think that probably came out of one of these; you might say, well that’s an area but actually I’d like to do a bit of a wider search on that specifically. So that would have been useful to do, and I could have done that through e-mail and things, but sometimes if you have more set meetings knowing that you’re probably more that your ad hoc asking a question to someone and you, but you know that you could ask a question and this is the way we do it and, on a monthly basis/6-weekly basis, and I think we probably would have getting more out of it . . . I always knew I could probably ask further questions and things but very, but very loosely; I think it was just, just there wasn’t any sort of real structure around it so time just passed.
P06
Development of positive relations
Further work is needed to develop the relationship between the CCG and the CSU as a source of research information.
Regional network
An ideal ‘business intelligence’ service would be collaborative, cutting across multiple organisations in the region, such as the local authority.
There is slightly more evidence of a ‘user pull’ drawing research into the organisation than of a response to a ‘push’ from outside it. However, the degree to which research-derived evidence is drawn into the CCG is limited.
Pull
The executive team seeks information from public health colleagues when they feel evidence is lacking. This suggests a pull of research knowledge by decision-makers; however, the process appears to be aided by close working relationships. In some cases the catalyst for using the evidence briefing service is the need to answer a specific research question (as with the evidence on orthopaedic interventions during the review of the low-value interventions policy). In other cases, when the policy question is more general (such as reconfiguration of GP services), information is sought differently.
Push
There is a push of evidence from national bodies, for example the Commissioning for Value packs; however, there is a perceived need for these to be more relevant to commissioners, as they are felt to lack meaningful data.
Evidence briefing service
Given the ambiguity of evidence described above, some participants sought a service that could categorise interventions in a simple format:
. . . what I’d love to be able to do, ‘cos there’s no point in anybody’s time being wasted reinventing the wheel, I’d like people to come up and say, you know, this is a list of things; there’s three categories in this list, those for which there’s absolutely no input, no benefit, and probably some harm, those where it’s dubious and those where it’s even more, it’s less dubious but if you’re going to have to put; it’s informed data.
P05
This reflects the process applied to the low-value interventions briefings work that applied a red–amber–green system to the selected clinical interventions (see Uptake and use of evidence in meso-level commissioning).
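As an illustration only, the kind of three-way categorisation requested above might be encoded as follows. This is a hypothetical Python sketch: the category descriptions paraphrase the three groupings described by P05, and the intervention names are placeholders rather than anything drawn from the study’s policy documents.

    from enum import Enum

    class RAG(Enum):
        """Red-amber-green triage of the kind applied in the low-value interventions work."""
        RED = 'no benefit, probable harm: do not commission'
        AMBER = 'benefit dubious: commission only against agreed criteria'
        GREEN = 'evidence of benefit: commission'

    # Hypothetical worked example mirroring the three categories described by P05
    ratings = {
        'intervention_a': RAG.RED,
        'intervention_b': RAG.AMBER,
        'intervention_c': RAG.GREEN,
    }
    for name, rating in ratings.items():
        print(f'{name}: {rating.name} ({rating.value})')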
Participants also described a need for time in which to engage with the service in order to generate appropriate questions and to understand what is needed in order for them to be answered. However, there remains a transactional dimension to the relationship:
Subcontracting is better, ‘cos [sic] (a) it makes me smarter about the questions I want to ask, ‘cos [sic] it’s going to cost me as soon as I walk into a room with somebody; it’s like being, it’s like going and having a discussion with a lawyer, you don’t go in and have a nice chat, you go in with a list of things that you’ve thought out in advance. So yes, it makes me smarter (b) it’s a better use of public purse, and (c) because of my background, or, or whoever’s going to this conversation isn’t necessarily going to be the same as the other person, the sparking between, in, in the discussion is actually going to generate so that the sum is more than the, sorry, the product is more than the sum . . .
P05
Feedback
Awareness of the service and the briefings was mixed, reflecting the degree of dialogue between individuals and PW, although the Medical Director did recognise the briefings when prompted.
Participants provided positive feedback on the format and brevity of the briefings. The format was considered helpful even to those with research skills (P06); as snapshots of the evidence in an area, the briefings were considered good and reader-friendly, and there was an assumption that the underlying work had been thorough. The briefings were considered useful even when they indicated a lack of evidence, as they enabled decision-makers to identify other criteria on which to base their decisions:
We commissioned and received some briefings and I think they were, you know, they were high quality and the GP in A&E [(accident and emergency)] one because it’s so complicated I don’t think it unlocked the decision for us but it informed our thinking so I would struggle to say that I saw a briefing that disappointed me.
P08
Even where briefings identified an absence of available evidence, they informed decision-making because the CCG was prompted to seek other types of information:
Well it helped us know that whatever decision we made was unlikely to be universally supported by the evidence and therefore we had to use other criteria really in making decisions whether or not we would invest in that as a model. Although there is an absence of evidence – in cases like social prescribing they talk about ‘generating evidence’ and seeing what works.
P08
Evidence from the briefing service was used in the commissioning of low-value interventions to confirm the CCG’s intentions, but there was no discussion of how the CCG would have proceeded had the evidence contradicted its preferences.
The evidence briefing service may fill a gap that the CSU and other regional sources cannot meet (P10). A positive effect on individuals’ use of evidence was also perceived: working with the service helped one participant (P10) to see the value in presenting evidence to the board. The evidence briefing service has helped to make evidence part of the culture, although it can be difficult to keep evidence in mind during decision-making. In part, it was the relationship between individuals that supported this, as the two-way dialogue between PW and CCG contacts helped to identify what information would be of most value. Briefings appear to have generated some dialogue between key contacts, but it was unclear whether they had been more widely read, and there was little discussion about content (P06). Low-value intervention commissioning was clearly cited as the case in which the briefings had fed into decision-making:
The use of evidence is definitely higher profile . . . much more inclined to test out our assumptions and our things that we want to do based on evidence that might be out there.
P19
Participants also expressed some negative feedback. There was a perceived lack of visibility of the service:
I thought it would be more visible. So I’ll not, you know, I’ll tell you that my guess is that we’re not the high intervention is my guess because it was not something that came across my radar very much.
P08
At the highest level of the organisation there was an expectation of more from the service:
Well I imagine there might be some very direct work with us as a CCG in terms of perhaps, you know, a governing body development session on the nature of evidence or how to use evidence or, you know, some kind of developmental type seminar which maybe it was never ever going to be set up to be.
P08
However, other participants had a clearer understanding of the objectives of the service: ‘I think it’s been very, it was clear what you were able to give us’ (P10).
The ambiguity of research evidence was one challenge to its use, and one participant described a desired service that would categorise interventions according to harms and benefits (P06). This suggests a desire for a logical analysis of risks and benefits that is not currently present in the CCG’s decision-making models.
Summary
There is an intention to use evidence in decision-making and recognition that all decision-making should be supported by a clear rationale.
However, in this CCG the many and varied pressures that influence decision-making were especially evident.
Stakeholder involvement is viewed as key during the early stages of commissioning, and consultation to gather preferences from patients and the public was a formal part of service development. Providers also played a role in the development of new service models.
Compared with other case sites, engagement with the evidence briefing service was low but there was an increase in contact following the delivery of the post-intervention questionnaire.
Uptake and use of evidence in meso-level commissioning
This case study is of the development of a collaborative process involving all CCGs in the region to review and consider the inclusion of a wider range of procedures on a regional VBCP list of low-value interventions. The review cuts across all the study CCGs so the case study captures processes of joint policy-making and the unique challenges that arise from this. 70
Decision-making processes
The VBCP Implementation Group developed the policy via monthly meetings. The implementation working group that designed and updated the policy includes representatives from all CCGs in the region but is explicitly not a decision-making group. The policy is reviewed annually but is also viewed as a working document that may require more regular sign-off from individual CCG boards. The distinction concerns the nature of the changes proposed by the PHCA and the Implementation Group: significant changes to criteria, or additional interventions, require annual ratification by CCG boards, whereas minor changes can be approved by the Implementation Group. Board-level ratification is driven partly by the potentially controversial nature of some decisions in the public eye.
This policy differs from others in our case studies because it is driven by current practice informing policy wording and content: ‘moving policy in line with clinical practice’ (policy document) and ‘bringing wording in line with decision precedents’ (policy document). In practice, there are a number of other influences.
Drivers
Drivers for low-value interventions policy development appeared more numerous and interlinked than for other topics. Perhaps unsurprisingly, financial constraints were an important driver. This was apparent both in discussions in Implementation Group meetings (e.g. a focus on ‘big-ticket’ items with the potential to maximise cost savings) and in the policy document. The joint decision-making context highlighted the diversity of financial contexts across CCGs; for example, reducing spend was a particular driver for Case Study B, as demonstrated by the internal document produced to outline impending changes to the policy in that CCG, with cost as the context for the changes. This pressure also meant that the Implementation Group was keen that providers should not be able to strongly influence decision-making, as this tends to increase CCG costs (9 July 2014 observations).
Second, meeting observations indicated a need for legal defensibility owing to the potential for judicial review. This was openly stated by the PHCA but was also indicated by members’ stressing the need for an audit trail of their decisions, especially for those not signed off by individual CCG boards. In one board meeting (at the A1 CCG) there was concern about the public response to the policy and anxiety that decisions be ‘defensible’ and future-proofed: for an intervention such as in vitro fertilisation, for example, economic impact was considered but discussion focused on the legal and public implications of the decision. This may explain why commissioners draw heavily on policies from other CCGs: these provide the strength in numbers that may make decisions robust (P03) and validate local activity. Internet searches were used to identify IFR policies in other CCGs, to see what interventions had been included by other organisations, before looking at the evidence for each intervention.
There is also a concern about implications for patient safety, which drives a focus on evidence to support policy-making. Although this has the potential to conflict with the drive to reduce costs, the two are also used in tandem; for example, in one board meeting the PHCA reiterated both the focus on patient safety and the release of money from areas of limited clinical benefit for use elsewhere.
In contrast with local commissioning on other topics, research evidence and NICE guidance were key drivers in low-value commissioning, both in the language used and in the discussions in implementation and board meetings (e.g. in the A1 CCG the rationale presented by the PHCA was evidence based, and questions presented by board members in the A2 CCG emphasised the evidence base). This is seemingly facilitated by the clinical nature of the decisions being made and the greater availability of evidence in these areas. Evidence is also likely to be more important because of the potential legal implications highlighted elsewhere, and it helps to protect against challenges from providers. Sources of evidence for legal justification are primarily existing guidance: NICE was commonly referenced by board members. Although seemingly straightforward, this driver is complex. For CCGs in which the financial motivation for inclusion of an intervention in the policy was especially strong, there was relief when evidence, sought retrospectively, was found to support their decisions. Observations of meetings suggested that the PHCA did recognise conflict between NHS England (national evidence) and the CCG (local evidence). Evidence is welcomed if it matches the objectives of the policy; for example, there was relief that the evidence briefing service review supported the decisions already made by the working group (P02).
At board level, evidence sometimes comes into conflict with values; in the A1 CCG there was much discussion around the rationale for offering surgery (quality of life vs. health reasons). Despite, or perhaps because of, the local financial constraints, organisational values do play a part in decision-making: the new interventions in the low-value interventions policy are seen as ‘the right thing to do’ (P02). Some policy decisions, such as surrogacy, appear to be made on ‘moral’ grounds (PHCA). The board in the A1 CCG was interested in the role of the CCG’s values; for example, patient quality of life was raised several times. The PHCA made it clear that the policy is an expression of the CCG’s values and should reflect them.
Public health role in decision-making
The PHCA draws on information from clinical experts locally (including providers) and from clinical networks nationally. The local treatment advisory group also provided some support but primarily around low-number/high-cost treatments. The CSU provides administrative support but does not have the capacity to provide analytical support.
Challenges of multiorganisational policy-making
Financial drivers, although influential in policy-making, also demonstrate one of the challenges of cross-organisational policy-making: they highlight the importance of local and organisational context in decision-making. CCGs with more acute financial constraints were more enthusiastic about the inclusion of additional interventions in the policy, whereas those with fewer financial constraints demonstrated less enthusiasm. Although this gave rise to some heated discussion, there was an understanding that there needed to be a joint policy. As a result, the ‘better-off’ CCG members were less enthusiastic about implementing the policy.
Regional commissioning involves navigating multiple agendas, including those of different CCGs (P02), which creates potential for conflict between organisational values. Joint decision-making also presents the challenge of ensuring that the policy does not include a pathway that contradicts another in the region, all the more so when there are multiple organisations to consider. The consensus in the board meeting in the A1 CCG appeared to be that the public should be involved in decision-making on this topic in the future and that decisions needed to be made jointly with other CCGs in order to cover themselves.
The policy is also perceived as creating a ‘technical solution to a cultural problem’ (P18), which is problematic because this may be neglecting the real issues. That is, GPs already know the evidence about the decisions; it is just not the culture to refer in this way.
The benefits of shared policy-making are the shared governance and safety nets that this provides (9 September 2014 observations). National and regional commonality is a protection against legal proceedings, as individual CCGs were reluctant to progress in isolation from other CCGs in the region. On a practical level, multiorganisational working provides opportunities to test parts of policies within one CCG before they are rolled out elsewhere.
Summary
That the low-value interventions policy focuses on interventions rather than wider services led to a greater drive to seek evidence than for other policies, potentially strengthened by the need to publicly justify reductions in referrals. PW’s offer to critique interventions was push rather than pull, made ‘without being asked’ (P02). However, in the absence of the evidence briefing service, some commissioners suggested that they would have drawn on public health services (P02). Existing relations between the PHCA and commissioners meant that there was already an intention to seek evidence to support low-value interventions decision-making, and a public health presence on the working group; the evidence briefing service nevertheless provided an additional push of evidence.
Chapter 6 Discussion and conclusions
The Health and Social Care Act 20123 has mandated research use as a core consideration in health service commissioning arrangements. NHS commissioners are expected to use research to inform commissioning and decommissioning of services, and there is a substantive evidence base on which they can draw. Building on development work undertaken as part of the NIHR CLAHRC for Leeds, York and Bradford and under the auspices of the CRD core contract with the NIHR, we sought to establish whether or not having access to a responsive evidence briefing service would improve the uptake and use of research evidence by NHS commissioners, compared with less intensive and less targeted alternatives. We did this by undertaking a controlled comparative evaluation with CCGs in one defined geographical area of North England.
Statement of principal findings
Over the course of the study the evidence briefing service addressed 24 topics raised by participating CCGs (see Chapter 3). Requests for evidence briefings served different purposes. The majority focused on options for the delivery and organisation of a range of services and on possible interventions to support the self-management of long-term conditions. Most requests could be categorised as conceptual: not directly linked to discrete decisions or actions but often intended to build knowledge and awareness of possible options for future actions. Symbolic uses of research (i.e. to justify or support pre-existing intentions or actions) were less frequent and included supporting a pre-existing decision to close a walk-in centre and lending weight to a major initiative to promote self-care that was already under way. Instrumental use was linked to explicit disinvestment processes. There were no instances in which requests for evidence could be viewed as representing an imposed use of research.
Our primary research question asked whether or not access to a demand-led evidence briefing service would improve the uptake and use of research evidence by NHS commissioners, compared with less intensive and less targeted alternatives. In terms of the primary outcome measure, the evidence briefing service was not associated with increases in CCG capacity to acquire, assess, adapt and apply research evidence to support decision-making.
At baseline, regardless of the intervention received, participating CCGs indicated that they lacked a consistent approach to their research-seeking behaviours, and their capacity to acquire research remained the same at follow-up. At baseline, CCGs were non-committal (neither agreeing nor disagreeing) about whether or not they had the capacity to assess the quality, reliability and applicability of research for use in decision-making; this perception remained unchanged at follow-up. There was also no change between baseline and follow-up in perceptions of CCGs’ capacity to adapt and summarise research results for use in decision-making; there was neither agreement nor disagreement that CCGs had the capacity to do so. Finally, individuals’ perceptions that their CCG did not have systems and processes in place to apply research routinely also remained unchanged.
A secondary research question sought to establish whether or not contact between researchers and NHS commissioners would increase use of research evidence. Exposure to the evidence briefing service did not increase perceptions of the quality or quantity of contact between CCGs and researchers, nor did it lead to perceived improvements in institutional (CCG) support for contact between commissioners and researchers. Exposure did not increase perceptions that communication between CCGs and researchers helped commissioners to achieve professional goals, nor did it increase what were already positive perceptions of researchers in general.
Exposure to the evidence briefing service did not appear to have any impact on individuals’ intentions to use research evidence in decision-making or their perceptions of a shift in collective CCG norms towards the use of research for decision-making. Regardless of intervention received, these measures were positively orientated at baseline and were sustained at follow-up.
Our final secondary research question asked whether or not evidence briefings tailored to specific local contexts could inform decision-making in other CCGs. Our ability to answer this question was undermined by a lack of recorded documentary evidence of research use (a finding in itself) across participating sites. With a few exceptions, most discussions between commissioners and the evidence briefing team were informal and rarely involved minuted meetings or formal gatherings of CCG staff. This lack of a visible audit trail, and of any record of the onward distribution and cascade of generated outputs, makes us dependent on self-reported and/or observed use when assessing impact. It is therefore difficult to determine the extent to which the evidence briefings produced had wider value across participating CCGs and in those outside the study.
Strengths and limitations
The quantitative component of the evaluation was in many ways the most challenging aspect of the study. We were reliant on the quality of the sampling frames provided by (1) the CCG case sites themselves and (2) national sources of contact data for each CCG. We found that the information provided by CCGs, and especially that sourced for the national benchmarking component of the study, was sometimes inaccurate (spelling mistakes in e-mail addresses and surnames), incomplete (absent e-mail addresses or contact numbers) or included individuals who no longer worked at a CCG. As such, each CCG had to be contacted to obtain, check and recheck the contact details of staff. In a related limitation, we asked individuals to complete the national survey on behalf of their CCG and in consultation with colleagues but, in a rapidly changing landscape, we cannot rule out the possibility that different individuals completed the survey at baseline and follow-up.
The respective baseline and follow-up response rates of 68% and 44% are not unreasonable given the number of competing requests for information that CCGs routinely face. For example, our response rates compare favourably with annual surveys conducted by the Health Foundation and The King’s Fund over the same time period,71,72 and with a contemporaneous Canadian randomised evaluation of the effects of an evidence service on policy-makers’ use of research evidence that failed to recruit. 37,73 However, we acknowledge that we experienced considerable attrition in the percentage of participants who completed both baseline and follow-up surveys. In the study case sites, the percentage of individuals completing both surveys ranged from ≈60% for those receiving intervention A to ≈30% in the CCGs allocated to receive intervention C, the non-responsive version of the service. As the turnover of staff employed at participating CCGs was relatively stable over the course of the study, this attrition cannot readily be explained by staff movement, and there may therefore be a degree of selection bias in our study.
We utilised an 87-item questionnaire to collect data relevant to the primary outcome and, although all responses were on short scales (none required any written responses), piloting estimated that it would take participants up to 45 minutes to complete. We employed a range of strategies to increase the odds of response, including pre-notification, follow-up contact, online and postal formats, reminder copies, mention of an obligation to respond and university sponsorship. 74 However, we are aware that shorter questionnaires and financial incentives are also associated with increased response rates. 74 In this instance, the perceived return for time invested (access to a funded evidence briefing service either immediately or, for participants in the ‘control’ standard service intervention C, after the intervention phase was complete) may have been deemed inadequate compensation by some participants. The CCGs allocated to intervention C had expressed initial enthusiasm for participation. However, the lack of any immediate return from, or a sufficient relationship with, the evidence briefing service over the course of the study may go some way towards explaining why CCGs allocated to the ‘control’ intervention C had the lowest response rate.
Survey length may also have contributed to incompleteness in the data collected, which necessitated the use of multiple imputation to strengthen the analysis. 45,46 In line with best practice in multiple imputation, comparison with the non-imputed data revealed similar means and distributions.
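By way of illustration, the following is a minimal sketch of this kind of sensitivity check, assuming survey responses held in a pandas data frame with missing values. The use of scikit-learn’s IterativeImputer, the number of imputations and all variable names are our assumptions for the sketch, not the study’s actual analysis code.

    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401, activates IterativeImputer
    from sklearn.impute import IterativeImputer

    def impute_and_compare(responses: pd.DataFrame, m: int = 5) -> pd.DataFrame:
        """Create m imputed data sets and compare pooled means with the observed means."""
        imputed_means = []
        for seed in range(m):
            # sample_posterior=True draws imputations from a predictive distribution,
            # so each of the m completed data sets differs, as multiple imputation requires
            imputer = IterativeImputer(sample_posterior=True, random_state=seed)
            completed = imputer.fit_transform(responses)
            imputed_means.append(completed.mean(axis=0))
        pooled = np.mean(imputed_means, axis=0)  # point estimate pooled across imputations
        return pd.DataFrame({
            'observed_mean': responses.mean().to_numpy(),  # per-item means ignoring missing values
            'pooled_imputed_mean': pooled,
        }, index=responses.columns)

Similar values in the two columns would support the kind of conclusion reported above, namely that the imputation did not materially alter the means or distributions.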
Taken together, these limitations mean that we have been cautious in our interpretation of any apparent impact of the evidence briefing service on the primary outcome measures. Indeed, we have been careful to avoid the pitfalls of p-values in assessing whether this study provides evidence ‘for’ or ‘against’ rejection of the null hypothesis. 62 Although the statistical tests applied generated some apparent statistical differences, beyond those that we would have expected to see by chance, our approach to interpretation has, we think, injected appropriate caution into judging the real-world significance of what was observed. Although not explicitly stated in the original protocol, it would be reasonable to consider a shift of at least one point on any Likert scale as indicative of impact. So although, for example, we observed a statistically significant decline in attitudes towards research use at follow-up, the magnitude of this shift (amounting to less than one point on the scale) is unlikely to be behaviourally significant. The benchmark of a national sample of non-intervention CCGs also helps to assess the theoretical significance of what was observed. The fact that the CCGs receiving the ‘control’ standard service intervention C and the national benchmarking sample all ‘improved’ in capacity suggests a degree of maturation and perhaps something of a ‘rising tide’75 phenomenon at play. In other words, a CCG may make incremental gains in capacity simply as it becomes more established over time.
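The one-point heuristic described above can be expressed as a simple check. The following is a hypothetical illustration of that interpretive rule only, not code used in the study; the function name, inputs and worked figures are ours.

    import numpy as np

    def behaviourally_significant(baseline, follow_up, threshold=1.0):
        """Return the mean Likert shift and whether it reaches the one-point
        threshold treated here as indicative of real-world impact."""
        shift = float(np.mean(follow_up) - np.mean(baseline))
        return shift, abs(shift) >= threshold

    # A statistically detectable decline can still fall well short of one scale point
    shift, meaningful = behaviourally_significant([4.1] * 200, [3.9] * 200)
    print(f'shift = {shift:.2f}, behaviourally significant: {meaningful}')  # -0.20, False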
An original aim was to employ documentary analysis to identify and understand the ways in which briefings generated by the service were taken up and considered in the decision-making processes of each participating CCG. Our development work undertaken as part of the NIHR CLAHRC for Leeds, York and Bradford (admittedly with PCTs) had suggested that this would be a feasible approach to take. However, early analysis undertaken to trace evidence briefings generated in the intervention phase revealed, with few exceptions, a lack of recorded evidence of use. Most discussions between contacts in CCGs and the evidence briefing team were informal and rarely involved minuted meetings or formal gatherings of CCG staff. Indeed, we were often responding to requests from one, two or three named individuals who would be leading a piece of work or clinical area on behalf of the CCG as a whole. As such, analysis of records supporting the more formal executive and governing body meetings provided little information about sources used or about the decision-making process itself. The ‘unseen and informal spaces’76 of decision-making processes, the small numbers of staff involved and the reality that no audit trail existed for sources used during these processes meant that there was little or no ‘traceability’77 of use of evidence briefings at an organisational level. A similar lack of traceability exists for the dissemination of evidence briefings to other participating CCGs. We know when and to whom content was distributed, but are reliant on self-report and so we know little of what happened or how content was used (if at all) thereafter. Our experience aligns well with others who have faced similar challenges in identifying whether or not systematic reviews are used and the extent to which they add value to decision-making processes in public health. 77
Delivery of the evidence briefing service
In this study, we sought to make best use of outputs from the NIHR Systematic Reviews Programme, specifically those relating to the CRD’s core work programme. The CRD core funding supported the provision of the DARE, NHS EED, HTA and PROSPERO databases. As mentioned in Chapter 3, NIHR funding for the CRD’s core work programme ceased during the course of the study, and with it the availability of a continuously updated single source for systematic reviews and economic evaluations. Acquiring and assessing research-based knowledge of this type can be a significant undertaking and, although systematic reviews continue to be indexed on a variety of database platforms, no such resource now exists for economic evaluations. Although the evidence briefing team were able to utilise existing CRD search and retrieval capacity to ensure the delivery of study commitments, the lack of a continuously updated single resource to draw on does have funding implications for future service provision of this type. It is worth noting that not all questions could be addressed through existing systematic reviews. A feature of many of the outputs produced was an absence of synthesised evidence; this was particularly the case for those that focused on summarising evidence for proposed new models of care. As such, search and retrieval activity was actually greater for topics where we sought to establish ‘known unknowns’ than for those with a larger and already synthesised evidence base.
When we conceived the evidence briefing service, the evidence-informed rationale was that addressing real decisions or problems in collaboration with those directly affected should mean that research evidence was more likely to be used and to inform decision-making. At that time, in the NIHR CLAHRC for Leeds, York and Bradford, the service was an adjunct to a larger implementation programme of research. In this study, the evidence briefing service as constituted represented a resource-intensive intervention. From the outset, we sought to add insight as to how much added value the service would offer over alternative or more basic approaches. Although no costs associated with searching, information support and document retrieval, or with publication and dissemination of the evidence briefings, were included in the study application, 1.5 full-time equivalent experienced researchers and a significant proportion of the principal investigator’s time were committed to its delivery. There was sustained engagement with the service by individuals in the CCGs receiving intervention A and, because we employed a degree of flexibility in the service delivered (a combination of full evidence briefings and shorter, more exploratory evidence notes in response to questions raised), we were able to deliver a number of outputs beyond the estimate made in our original application. However, the nature of the requests we received was largely conceptual, and the impact of evidence briefings on more explicit instrumental decision-making processes was limited. Although we recognise that conceptual use of research is an entirely appropriate and necessary goal in itself, we question whether or not supporting conceptual use represents a sufficient level of impact to justify a resource-intensive intervention of this type.
Reflections on delivery
It has long been understood that real-world decision-making reflects a complex interaction between economic, political and social factors, different sources of knowledge (of which research evidence is one) and the beliefs and ideologies of those making the decisions. 10,11,78 This study has provided further insight as to how and where services packaging evidence derived from systematic reviews may most efficiently be deployed to impact on decision-making processes in a commissioning context.
Work undertaken to support decisions around the inclusion of 14 MSK procedures designated as low value in a regional list of interventions that CCGs will not normally fund had the most traceable impact on decision-making. Participating CCGs appeared to value the transparency that the evidence briefing service brought to the process. The existing regional value-based policy list predated the creation of CCGs, but the process for assessing the evidence for new policies did not appear to have been transferred into the new system. Indeed, the proposed MSK policies had been compiled by one CCG using a ‘copy, paste and adapt’48 approach from existing policies identified at other CCGs across the country, leading other group members to question their provenance. The offer to undertake an independent and systematic appraisal of the evidence assisted the collective deliberation process, not least by providing reassurance to the representatives of the other CCGs.
Although our intention was that the evidence briefing activity be demand led, there is a consistent message from CCG informants that they would have valued more of the systematic and transparent push approach employed in the low-value work to identify interventions and ways of working that should or should not be funded. However, we also need to recognise that the nature of decision-making and the processes employed in the context of these low-value policies were very different from those experienced elsewhere in the study. The low-value policy work represented a meso (regional)-level process, with CCGs coming together to make decisions collectively. This process had a clear objective, namely to establish clear region-wide policies across CCGs relating to interventions of no or low clinical benefit; a process that needed to be both transparent and defensible. Further clarification of how best to identify and support this type of meso-level commissioning activity may be warranted.
Most other requests from CCGs could be categorised as conceptual, that is, not directly linked to discrete decisions or actions but intended to provide knowledge and awareness of possible options. The issues raised were iterative and evolving in nature, without obvious end points or decisions,79 and our role was to provide knowledge and awareness of possible options for future actions. This is perhaps best exemplified by the work undertaken around interventions to support the implementation of self-management. Our experience mirrored earlier accounts describing commissioning services for people with long-term conditions as a long drawn-out process. 80 Producing the series of related briefings involved a range of discussions and activities with a range of individuals and stakeholders both within and outside the CCG. The time and effort involved appeared disproportionate to the likely impact on the local commissioning decisions we sought to support. Even after the intervention phase was complete, deliberations on how best to act were still ongoing and needed additional input from a trusted local source (a senior member of the local public health team) to summarise and contextualise the already summarised information. It is likely that many of the self-management issues we were asked to address would have been salient and relevant to CCGs in other settings across the country, and it could be argued that this would apply to most of the briefings produced as part of this study. Passive dissemination of the social prescribing briefing has generated considerable interest from CCGs and Health and Wellbeing Boards outside the study and, given the absence of evidence of effect, further advice on how to evaluate has been sought by those either currently providing or considering introducing social prescribing programmes.
Given the large resource requirement, the particularity of process and the unpredictable timing of decision-making in individual commissioning organisations, it may be better to invest far more in identifying commissioning priorities and uncertainties from key informants with local credibility. These could then be serviced by a centralised evidence synthesis service, with less costly targeted dissemination strategies used to raise awareness of options or actions among what appear to be receptive commissioning audiences. The cases examined here suggest that these audiences would include members of local public health teams supporting CCGs. Targeted dissemination (similar to the approaches the CRD previously employed with the Effective Health Care and Effectiveness Matters series of bulletins) could deliver similar impacts. 15,81 Indeed, as noted above, the passive dissemination of the social prescribing briefing generated considerable interest from outside the study, and the evidence briefing team have been asked for further advice on a number of these decision-making processes. Taken together, this may suggest that resource-intensive approaches to providing evidence are best employed to support instrumental decisions occurring at a meso level, where impact is likely to be proportionately greater. This would also be consistent with informants’ requests for more ‘supply-side’ push (researcher-led distribution of research) alongside the demand-led (pull) access they received. The potential for impact from targeting tailored messages and topics at specific audiences may be of interest to the NIHR Dissemination Centre and merits further investigation.
Implications for research use
If meso-level activity may represent the best focus for resource-intensive services, we still need to consider how to systematise research use among individual CCGs. The Supporting Policy In health with Research: an Intervention Trial (SPIRIT) Action Framework (published after the intervention phase of this study was completed) hypothesises that a catalyst is required for the use of research, the response to which is determined by the capacity of the organisation to engage with available research. 82 Where there is sufficient capacity (the value placed on research, the tools and systems the organisation has to support research engagement, and the skills and knowledge of staff), a series of research engagement actions might occur that facilitate research use. The SPIRIT Action Framework82 predicts that the greater the organisational capacity, the more research engagement actions (accessing and appraising research, generating new research and interacting with researchers) will occur, which will in turn result in greater use of research evidence.
Using the SPIRIT Action Framework to reflect on this study, we had catalysts and engagement opportunities (around the questions raised and the briefings produced), but the service as constituted did little to enhance the capacity of the organisations to use research routinely. Both baseline and follow-up data suggest that commissioners are well-intentioned ad hoc users of research evidence working in a setting that lacks the systems and processes to use it routinely. CCG informants also indicated the potential for confirmation bias in their evidence-seeking behaviours and the challenges of being confronted with an absence of reliable evidence for policies or options they were pursuing. This suggests a knowledge and skills gap that this study has not addressed. The evidence briefing team offered training on how to acquire, assess, adapt and apply evidence to CCGs receiving intervention A or intervention B (which could have addressed these gaps), but this offer was not taken up. Rather than making training a demand-led ‘offer’, it may have been better to identify the capacity for research use of each CCG at the outset and to develop a corresponding offer to each organisation, including training relevant to its current state. At the very least, this study has highlighted the importance of building organisational capacity as a component of evidence use, an area that appears to be under-researched. 83 SPIRIT is informing an ongoing evaluation of a multifaceted programme to build organisational capacity for the use of research evidence in policy and programme development in Australia. 84 Findings from this will help to shed light on the value of the Framework for developing and testing other interventions to build organisational capacity to use research.
Public health specialists have traditionally supported and facilitated the use of research evidence in a commissioning context. 19,20,48 Throughout this study we observed that, despite their relocation and their removal from the centre of decision-making processes, public health specialists remained accessed and engaged with by CCGs. Some senior staff in participating CCGs had considerable prior experience of support from public health teams under previous commissioning arrangements. As the interventions followed soon after the preceding arrangements had ceased, it is perhaps no surprise that CCG commissioning staff made use of the service offered by the CRD. Nevertheless, all the CCGs continued to place value on the knowledge and expertise of trusted ‘critical friends’85 in the shape of public health consultants. These provided a bridge between the old and new commissioning arrangements and brought valuable insights and networks from beyond the boundaries of the CCG. Although we often observed commissioners looking outwards and undertaking fact-finding trips to see what other CCGs around the country were doing, the same individuals were often unaware that colleagues in adjacent areas were undertaking similar work or grappling with similar questions. Public health specialists were viewed as the individuals most likely to fill this local knowledge gap and to mitigate a general dissatisfaction with the knowledge-sharing capabilities of the formal commissioning support arrangements. Whether fair or otherwise, there was a general perception among CCG informants that the CSU lacked the necessary infrastructure and/or expertise to efficiently acquire, assess and adapt research for use in decision-making. The one-to-one transactional arrangements that the CSU had with CCGs were themselves viewed as a barrier to wider knowledge sharing across the region. This danger of ‘network closure’ undermining local knowledge sharing and historically trusted relationships has been anticipated previously. 86
Wye et al. 48 have argued that researchers need to build relationships and engage with commissioners locally, using commissioners’ preferred methods of conversations and stories, to find out what is wanted and how best to deliver it. In this study we had fewer face-to-face engagement opportunities than originally anticipated, despite case informants indicating that they would have liked more. We consistently offered to discuss priority areas and the key messages and implications arising from evidence briefings face to face, but in many instances participants found it easier to have a quick telephone or e-mail discussion with the CRD team. Our geographical distance from the intervention sites may have influenced the mode of interaction and communication and, in turn, reduced the type of contact perhaps necessary to facilitate increased use of research evidence by commissioners. Although we do not discourage the cultivation of face-to-face relationships, the reality of the decision-making process is that any engagement is resource intensive, and so researchers need to consider carefully how best to target the interactions that will deliver the best return. Even with proximity, somebody needs to be around, or ‘in the room’, when ideas first germinate, to spot the potential catalysts to research use and to ask: what is the evidence for this? Why do we want to pursue this course of action? Given this, Wye et al.’s48 suggestion that researchers cultivate relationships with local public health teams could represent the intermediary channel through which use of research by individual CCGs can be influenced. Public health staff are more likely to be ‘in the room’ and have the necessary skills and local networks to facilitate knowledge sharing within and across the commissioning landscape. The current emphasis on innovation and the development of new models of health and social care favours coproduction approaches to the design, commissioning and delivery of services. This shift may strengthen the intermediary role of public health but, if that role is to be sustained, public health specialists will need to be supported and resourced to return to a more central role in commissioning.
Alongside capacity building and engagement, macro-level intervention is also needed to enhance research use at the level of the individual CCG. The Health and Social Care Act3 mandates CCGs, in the exercise of their functions, to promote innovation in the provision of health services, to promote integration and to make use in the health service of evidence obtained from research. Infrastructure to support the statutory duty to drive innovation at scale is under way: fifty ‘vanguard’ sites are supported by a £200M transformation fund from NHS England, and a similar commitment of significant resources has been made via the Better Care Fund and the Prime Minister’s Challenge Fund, providing further impetus to innovation and integration between health and social care. However, whereas the current policy climate explicitly incentivises innovation and integration, there is no equivalent incentive for finding and applying research to support the many decisions required to turn this vision into a reality. The CCG Assurance Framework87 focuses on leadership, financial and performance management, planning and delegated functions, but contains no specific metrics on whether or not CCGs are fulfilling their statutory duties in respect of the use of evidence obtained from research.
During the course of this study we were given the opportunity to suggest wording to sharpen the existing text in the CCG Assurance Framework87 on the use of evidence derived from research. We suggested the wording ‘each CCG must, in the exercise of its functions, demonstrate the ability to acquire, assess, adapt and apply evidence obtained from research in health-service decision-making’. This has now been incorporated into appendix 2 of the Assurance Framework operating manual. 87 However, whereas it is stipulated that CCGs must have a plan in place to address their duties in relation to promoting and supporting the conduct of research, there are no similarly explicit requirements relating to the use of evidence obtained from research. If we are serious about shifting CCGs from being well-intentioned but inconsistent users of research evidence, a more explicit set of requirements may be necessary. Ideally, the incentive structure that exists for health-service innovation and integration would be replicated to support CCGs’ fulfilment of their statutory duties in respect of the use of research under the Health and Social Care Act 2012. 3 Without this, the current ad hoc engagement with research is likely to persist.
In the current financial climate, disinvestment decisions relating to interventions of no or low clinical value are likely to remain high on the commissioning agenda. In this study, we witnessed the development of collaborative processes for considering disinvestment at the local level. A lack of organisational memory about the processes previously in place under earlier commissioning arrangements was also apparent. Beyond this, practical challenges in identifying and contextualising research evidence to inform these processes remain. 55 Unlike the rigorous processes in place to inform NICE guidance on the use of new and existing medicines, no similarly resourced infrastructure exists to support disinvestment decisions. 7 Although NICE makes its ‘do not do’ recommendations publicly available, we found low awareness of these among commissioners and a notable lack of skills to systematically and transparently identify other relevant evidence that could inform disinvestment decisions. The NIHR already funds infrastructure with the skills necessary to support disinvestment activity at a local level, including the NIHR CLAHRCs, rapid evidence centres and HTA groups. More proactive and targeted dissemination of low-value recommendations, combined where necessary with synthesis using standardised methods, could enhance the ability of local commissioners to identify and then generate local policies on interventions of no or low clinical benefit.
Recommendations for research
We are conscious that our findings relate to a specific decision-making context and setting and have been generated at a time when the commissioning arrangements are rapidly evolving. Given this, further comparative evaluation and clarification of the role and value of similar demand-led evidence briefing services in other contexts and settings may be warranted. The SPIRIT Action Framework may provide a guide on which the evaluation of any future services seeking to increase the use of research in policy can be based.
Our study has revealed commissioners to be well intentioned but lacking the necessary skills and infrastructure to make use of research evidence routinely. Further research is required on the effects of interventions and strategies to build individual and organisational capacity to use research. Exploration and clarification of the potential for macro-level intervention to incentivise research use is also warranted.
Disinvestment decisions relating to interventions of no or low clinical value remain high on the commissioning agenda. No established process appears to be in place for assessing research evidence to inform the generation of local policies. Rather than local settings developing their own distinct approaches, it would seem sensible for a country-wide approach to be taken to identifying and then summarising the evidence for interventions of no or low clinical value. Methodological research is therefore required to establish an optimal, transparent and standardised approach that identifies and contextualises research evidence that can then be used to inform local decision-making processes.
Our study suggests that resource-intensive approaches to providing evidence may be best employed to support instrumental decision-making at a meso level. Otherwise, less resource-intensive approaches to delivering optimally packaged, systematic review-derived findings should be pursued. We know that passive dissemination can represent better value in some contexts and settings, particularly when there is a single clear message and/or the topic aligns with known commissioning priorities and/or uncertainties. Many research agencies fund or undertake engagement activities and have invested in a range of communication channels. How best to harness this ‘supply-side’ infrastructure to deliver effective targeted communications remains unclear. As such, there is considerable scope for comparative evaluation of the impact of different active and targeted dissemination strategies on the uptake and use of research by commissioners and other key stakeholders.
Acknowledgements
This project was funded as part of a programme of research funded by the NIHR HSDR programme (12/5002/18). We would like to thank all the staff at the participating CCGs who, despite considerable time pressures, took the time to provide quantitative and qualitative data.
Contributions of authors
Paul M Wilson conceived the study and was responsible for its overall direction, contributed to research design and led the evidence briefing team and the production of evidence briefings and the final report.
Kate Farley contributed to the evaluation component of the study and to research design, administered pre- and post-intervention questionnaires and managed data, conducted qualitative interviews, led the qualitative analysis and contributed to the production of the final report.
Liz Bickerdike was a member of the evidence briefing team and contributed to research design, the production of evidence briefings and the final report.
Alison Booth was a member of the evidence briefing team (replaced Duncan Chambers) and contributed to the production of evidence briefings and the final report.
Duncan Chambers was a member of the evidence briefing team and contributed to research design, the production of evidence briefings and the final report; he left the study to take up a full-time post in June 2014.
Mark Lambert provided advice to both the evaluation and intervention component throughout the study, and contributed to research design and the production of the final report.
Carl Thompson led the evaluation component of the study and contributed to research design, conducted statistical analysis of pre- and post-intervention questionnaires, conducted qualitative interviews and contributed to qualitative analysis interpretation and the production of the final report.
Rhiannon Turner contributed to the evaluation component of the study and to research design, statistical analysis of pre- and post-intervention questionnaires and the production of the final report.
Ian S Watt provided advice to both the evaluation and intervention components throughout the study and contributed to research design and the production of the final report.
Publication
Wilson PM, Farley K, Thompson C, Chambers D, Bickerdike L, Watt IS, et al. Effects of a demand-led evidence briefing service on the uptake and use of research evidence by commissioners of health services: protocol for a controlled before and after study. Implement Sci 2015;10:7.
Data sharing statement
All available data can be obtained from the corresponding author. All data will be shared in a way that safeguards the confidentiality and anonymity of respondents.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health.
References
- Five Year Forward View. Redditch: NHS England; 2014.
- Innovation Health and Wealth: Accelerating Adoption and Diffusion in the NHS. London: Department of Health; 2011.
- Health and Social Care Act 2012. London: The Stationery Office; 2012.
- Sheldon TA, Cullum N, Dawson D, Lankshear A, Lowson K, Watt I, et al. What’s the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients’ notes, and interviews. BMJ 2004;329. http://dx.doi.org/10.1136/bmj.329.7473.999.
- Owen-Smith A, Kipping R, Donovan J, Hine C, Maslen C, Coast J. A NICE example? Variation in provision of bariatric surgery in England. BMJ 2013;346. http://dx.doi.org/10.1136/bmj.f2453.
- Car J, Huckvale K, Hermens H. Telehealth for long term conditions. BMJ 2012;344. http://dx.doi.org/10.1136/bmj.e4201.
- Hollingworth W, Rooshenas L, Busby J, Hine CE, Badrinath P, Whiting PF, et al. Using clinical practice variations as a method for commissioners and clinicians to identify and prioritise opportunities for disinvestment in health care: a cross-sectional study, systematic reviews and qualitative study. Health Serv Deliv Res 2015;3.
- Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J. Knowledge Transfer Study Group . How can research organizations more effectively transfer research knowledge to decision makers?. Milbank Q 2003;81:221-48.
- Sheldon TA. Making evidence synthesis more useful for management and policy-making. J Health Serv Res Policy 2005;10:1-5. https://doi.org/10.1258/1355819054308521.
- Lomas J, Culyer A, McCutcheon C, McAuley L, Law S. Conceptualizing and Combining Evidence for Health System Guidance. Ottawa, ON: Canadian Health Services Research Foundation; 2005.
- Nutley SM, Walter IC, Davies HTO. Using Evidence: How Research can Inform Public Services. Bristol: The Policy Press, University of Bristol; 2007.
- Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy 2002;7:239-44. http://dx.doi.org/10.1258/135581902320432778.
- Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q 2007;85:729-68. http://dx.doi.org/10.1111/j.1468-0009.2007.00506.x.
- Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLOS ONE 2011;6. http://dx.doi.org/10.1371/journal.pone.0021704.
- Murthy L, Shepperd S, Clarke MJ, Garner SE, Lavis JN, Perrier L, et al. Interventions to improve the use of systematic reviews in decision-making by health system managers, policy makers and clinicians. Cochrane Database Syst Rev 2012;9. http://dx.doi.org/10.1002/14651858.CD009401.pub2.
- Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res 2014;14. http://dx.doi.org/10.1186/1472-6963-14-2.
- Tricco AC, Cardoso R, Thomas SM, Motiwala S, Sullivan S, Kealey MR, et al. Barriers and facilitators to uptake of systematic reviews by policy makers and health care managers: a scoping review. Implement Sci 2016;11. http://dx.doi.org/10.1186/s13012-016-0370-1.
- Lavis JN. How can we support the use of systematic reviews in policymaking?. PLOS Med 2009;6. http://dx.doi.org/10.1371/journal.pmed.1000141.
- Weatherly H, Drummond M, Smith D. Using evidence in the development of local health policies. Some evidence from the United Kingdom. Int J Technol Assess Health Care 2002;18:771-81. https://doi.org/10.1017/S0266462302000582.
- Clarke A, Taylor-Phillips S, Swan J, Gkeredakis E, Mills P, Powell J, et al. Evidence-based commissioning in the English NHS: who uses which sources of evidence? A survey 2010/2011. BMJ Open 2013;3. http://dx.doi.org/10.1136/bmjopen-2013-002714.
- Cohen W, Levinthal D. Absorptive capacity: a new perspective on learning and innovation. Admin Sci Quart 1990;35:128-52. https://doi.org/10.2307/2393553.
- Lane P, Koka B, Pathak S. The reification of absorptive capacity: a critical review and rejuvenation of the construct. Acad Manage Rev 2006;31:833-63. https://doi.org/10.5465/AMR.2006.22527456.
- CCG Assurance Framework 2014/15: Operational Guidance. Leeds: NHS England; 2014.
- Hanbury A, Thompson C, Wilson PM, Farley K, Chambers D, Warren E, et al. Translating research into practice in Leeds and Bradford (TRiPLaB): a protocol for a programme of research. Implement Sci 2010;5. http://dx.doi.org/10.1186/1748-5908-5-37.
- Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q 2011;89:131-56. http://dx.doi.org/10.1111/j.1468-0009.2011.00622.x.
- Lomas J. Using ‘linkage and exchange’ to move research into policy at a Canadian foundation. Health Aff 2000;19:236-40. https://doi.org/10.1377/hlthaff.19.3.236.
- Turner RN, Hewstone M, Voci A. Reducing explicit and implicit outgroup prejudice via direct and extended contact: The mediating role of self-disclosure and intergroup anxiety. J Pers Soc Psychol 2007;93:369-88. http://dx.doi.org/10.1037/0022-3514.93.3.369.
- Pettigrew TF, Tropp LR. A meta-analytic test of intergroup contact theory. J Pers Soc Psychol 2006;90:751-83. http://dx.doi.org/10.1037/0022-3514.90.5.751.
- Gaertner S, Dovidio J, Anastasio P, Bachman B, Rust M. The common ingroup identity model: recategorization and the reduction of intergroup bias. Eur Rev Soc Psychol 1993;4:1-26. https://doi.org/10.1080/14792779343000004.
- Chambers D, Wilson P. A framework for production of systematic review based briefings to support evidence-informed decision-making. Syst Rev 2012;1. http://dx.doi.org/10.1186/2046-4053-1-32.
- Chambers D, Grant R, Warren E, Pearson SA, Wilson P. Use of evidence from systematic reviews to inform commissioning decisions: a case study. Evid Policy 2012;8:141-8. https://doi.org/10.1332/174426412X640054.
- Wilson PM, Farley K, Thompson C, Chambers D, Bickerdike L, Watt IS, et al. Effects of a demand-led evidence briefing service on the uptake and use of research evidence by commissioners of health services: protocol for a controlled before and after study. Implement Sci 2015;10. http://dx.doi.org/10.1186/s13012-014-0199-4.
- Is Research Working for You? A Self-Assessment Tool and Discussion Guide for Health Services Management and Policy Organizations. Ottawa, ON: Canadian Health Services Research Foundation; 2008.
- Kothari A, Edwards N, Hamel N, Judd M. Is research working for you? Validating a tool to examine the capacity of health organizations to use research. Implement Sci 2009;4. http://dx.doi.org/10.1186/1748-5908-4-46.
- Oxman AD, Vandvik PO, Lavis JN, Fretheim A, Lewin S. SUPPORT Tools for evidence-informed health Policymaking (STP) 2: Improving how your organisation supports the use of research evidence to inform policymaking. Health Res Policy Syst 2009;7. http://dx.doi.org/10.1186/1478-4505-7-S1-S2.
- Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ 2006;84:620-8. https://doi.org/10.2471/BLT.06.030312.
- Lavis JN, Wilson MG, Grimshaw JM, Haynes RB, Hanna S, Raina P, et al. Effects of an evidence service on health-system policy makers’ use of research evidence: a protocol for a randomised controlled trial. Implement Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-51.
- Wilson MG, Lavis JN, Grimshaw JM, Haynes RB, Bekele T, Rourke SB. Effects of an evidence service on community-based AIDS service organizations’ use of research evidence: a protocol for a randomized controlled trial. Implement Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-52.
- Ajzen I. The theory of planned behaviour. Organ Behav Hum Decis Process 1991;50:179-211. https://doi.org/10.1016/0749-5978(91)90020-T.
- Armitage CJ, Conner M. Efficacy of the Theory of Planned Behaviour: a meta-analytic review. Br J Soc Psychol 2001;40:471-99. https://doi.org/10.1348/014466601164939.
- Francis J, Eccles M, Johnston M, Walker AE, Grimshaw JM, Foy R, et al. Constructing Questionnaires Based on the Theory of Planned Behaviour: A Manual for Health Services Researchers. Newcastle upon Tyne: Centre for Health Services Research, University of Newcastle; 2004.
- Sheeran P. Intention–behavior relations: a conceptual and empirical review. Eur Rev Soc Psychol n.d.:1-36.
- Hewstone M, Judd C, Sharp M. Do observer ratings validate self-reports of intergroup contact? A round-robin analysis. J Exp Soc Psychol 2011;47:599-60. https://doi.org/10.1016/j.jesp.2010.12.014.
- Wright S, Aron A, McLaughlin-Volpe T, Ropp A. The extended contact effect: knowledge of cross-group friendships and prejudice. J Pers Soc Psychol 1997;73:73-90. https://doi.org/10.1037/0022-3514.73.1.73.
- Sterne JA, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ 2009;338. http://dx.doi.org/10.1136/bmj.b2393.
- Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York, NY: Wiley; 1987.
- Kazis LE, Anderson JJ, Meenan RF. Effect sizes for interpreting changes in health status. Med Care 1989;27:178-89. https://doi.org/10.1097/00005650-198903001-00015.
- Wye L, Brangan E, Cameron A, Gabbay J, Klein J, Pope C. Knowledge exchange in health-care commissioning: case studies of the use of commercial, not-for-profit and public sector agencies, 2011–14. Health Serv Deliv Res 2015;3.
- Systematic Reviews. CRD’s Guidance for Undertaking Reviews in Health Care. York: Centre for Reviews and Dissemination, University of York; 2009.
- Better Value Healthcare . Critical Appraisal Skills Programme (CASP). Making Sense of Evidence 2015. www.casp-uk.net/ (accessed 11 January 2015).
- Effective Health Care Series. York: University of York; 2004.
- Effectiveness Matters Series. York: University of York; n.d.
- Weiss CH. The many meanings of research utilization. Publ Admin Rev 1979;39:426-31. https://doi.org/10.2307/3109916.
- Weiss C, Murphy-Graham E, Birkeland S. An alternate route to policy influence: how evaluations affect D.A.R.E. Am J Eval 2005;26:12-30. https://doi.org/10.1177/1098214004273337.
- Garner S, Docherty M, Somner J, Sharma T, Choudhury M, Clarke M, et al. Reducing ineffective practice: challenges in identifying low-value health care using Cochrane systematic reviews. J Health Serv Res Policy 2013;18:6-12. http://dx.doi.org/10.1258/jhsrp.2012.012044.
- National Collaboration for Integrated Care and Support . Integrated Care and Support: Our Shared Commitment 2013. www.gov.uk/government/uploads/system/uploads/attachment_data/file/198748/DEFINITIVE_FINAL_VERSION_Integrated_Care_and_Support_-_Our_Shared_Commitment_2013-05-13.pdf (accessed 10 October 2016).
- Coulter A, Roberts S, Dixon A. Delivering Better Services for People with Long-Term Conditions. Building the House of Care. London: The King’s Fund; 2013.
- The Business Case for People Powered Health. London: Nesta; 2013.
- Panagioti M, Richardson G, Murray E, Rogers A, Kennedy A, Newman S, et al. Reducing Care Utilisation through Self-management Interventions (RECURSIVE): a systematic review and meta-analysis. Health Serv Deliv Res 2014;2.
- Taylor SJC, Pinnock H, Epiphaniou E, Pearce G, Parke HL, Schwappach A, et al. A rapid synthesis of the evidence on interventions supporting self-management for people with long-term conditions: PRISMS – Practical systematic Review of Self-Management Support for long-term conditions. Health Serv Deliv Res 2014;2.
- Public Health England . Change4Life n.d. www.nhs.uk/change4life/ (accessed 28 September 2016).
- Colquhoun D. An investigation of the false discovery rate and the misinterpretation of p-values. R Soc Open Sci 2014;1. http://dx.doi.org/10.1098/rsos.140216.
- Hammond KR. Human Judgement and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. New York, NY: Oxford University Press; 1996.
- Schulz-Hardt S, Frey D, Lüthgens C, Moscovici S. Biased information search in group decision making. J Pers Soc Psychol 2000;78:655-69. https://doi.org/10.1037/0022-3514.78.4.655.
- May C. Towards a general theory of implementation. Implement Sci 2013;8. https://doi.org/10.1186/1748-5908-8-18.
- May C, Finch T. Implementing, embedding and integrating practices: and outline of normalisation process theory. Sociol 2013;43.
- May CR, Finch T, Ballini L, MacFarlane A, Mair F, Murray E, et al. Evaluating complex interventions and health technologies using normalization process theory: development of a simplified approach and web-enabled toolkit. BMC Health Serv Res 2011;11. http://dx.doi.org/10.1186/1472-6963-11-245.
- Hoffman TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348. https://doi.org/10.1136/bmj.g1687.
- Greenhalgh T. Narrative based medicine: narrative based medicine in an evidence based world. BMJ 1999;318:323-5. https://doi.org/10.1136/bmj.318.7179.323.
- Yin RK. Applications of Case Study Research. London: Sage; 2003.
- Robertson R, Holder H, Bennett L, Ross S, Gosling J. Clinical Commissioning Groups – One Year On: Member Engagement and Primary Care Development. London: Nuffield Trust and The King’s Fund; 2014.
- Robertson R, Holder H, Bennett L, Ross S, Gosling J, Curry N. Risk or Reward? The Changing Role of CCGs in General Practice. London: Nuffield Trust and The King’s Fund; 2015.
- Wilson MG, Grimshaw JM, Haynes RB, Hanna SE, Raina P, Gruen R, et al. A process evaluation accompanying an attempted randomized controlled trial of an evidence service for health system policymakers. Health Res Policy Syst 2015;13. http://dx.doi.org/10.1186/s12961-015-0066-z.
- Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev 2009;3. https://doi.org/10.1002/14651858.mr000008.pub4.
- Chen YF, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf 2016;25:303-10. http://dx.doi.org/10.1136/bmjqs-2015-004372.
- Rushmer RK, Cheetham M, Cox L, Crosland A, Gray J, Hughes L, et al. Research utilisation and knowledge mobilisation in the commissioning and joint planning of public health interventions to reduce alcohol-related harms: a qualitative case design using a cocreation approach. Health Serv Deliv Res 2015;3.
- Armstrong R, Pettman T, Burford B, Doyle J, Waters E. Tracking and understanding the utility of Cochrane reviews for public health decision-making. J Public Health 2012;34:309-13. http://dx.doi.org/10.1093/pubmed/fds038.
- Klein R. Evidence and policy: interpreting the Delphic oracle. J R Soc Med 2003;96:429-31.
- Davies H, Nutley S, Smith PC. What Works? Evidence-Based Policy and Practice in Public Services. Bristol: Policy Press; 2000.
- Shaw SE, Smith JA, Porter A, Rosen R, Mays N. The work of commissioning: a multisite case study of healthcare commissioning in England’s NHS. BMJ Open 2013;3. http://dx.doi.org/10.1136/bmjopen-2013-003341.
- Giguere A, Legare F, Grimshaw J, Turcotte S, Fiander M, Grudniewicz A, et al. Printed educational materials: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012;10. https://doi.org/10.1002/14651858.cd004398.pub3.
- Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT Action Framework: a structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med 2015;136–137:147-55. http://dx.doi.org/10.1016/j.socscimed.2015.05.009.
- Moore G, Redman S, Haines M, Todd A. What works to increase the use of research in population health policy and programmes: a review. Evid Policy 2011;7:277-305. https://doi.org/10.1332/174426411X579199.
- The CIPHER Investigators . Supporting Policy In health with Research: an Intervention Trial (SPIRIT)-protocol for a stepped wedge trial. BMJ Open 2014;4. https://doi.org/10.1136/bmjopen-2014-005293.
- Warwick-Giles L, Coleman A, Checkland K. Co-owner, service provider, critical friend? The role of public health in clinical commissioning groups. J Public Health 2015;pii. https://doi.org/10.1093/pubmed/fdv137.
- Petsoulas C, Allen P, Checkland K, Coleman A, Segar J, Peckham S, et al. Views of NHS commissioners on commissioning support provision. Evidence from a qualitative study examining the early development of clinical commissioning groups in England. BMJ Open 2014;4. http://dx.doi.org/10.1136/bmjopen-2014-005970.
- CCG Assurance Framework Operating Manual 2015/16. Leeds: NHS England; 2015.
- Cowling TE, Harris MJ, Watt HC, Gibbons DC, Majeed A. Access to general practice and visits to accident and emergency departments in England: cross-sectional analysis of a national patient survey. Br J Gen Pract 2014;64:e434-9. http://dx.doi.org/10.3399/bjgp14X680533.
- Cowling TE, Harris M, Watt H, Soljak M, Richards E, Gunning E, et al. Access to primary care and the route of emergency admission to hospital: retrospective analysis of national hospital administrative data. BMJ Qual Saf 2016;25:432-40. http://dx.doi.org/10.1136/bmjqs-2015-004338.
Appendix 1 Survey instrument for Clinical Commissioning Group case sites
Appendix 2 Example data extraction form for systematic reviews
Appendix 3 Vignettes of evidence briefings
Enhancing access in primary care settings flow chart
Topic | Enhancing access in primary care settings |
CCG | A2 |
Role | Commissioning director and chief officer |
Date of contact | 15 June 2015 |
Type of contact | |
Reason for contact | Emerged from previous evidence note on accountable care organisations and other integrated models of care: a scope, circulated 26 May 2015 |
Question to be addressed | Focus on access: an individual service component, part of the development of the Accountable Care organisation |
Sources searched | CRD Databases; NHS Evidence (systematic review filter); The King’s Fund; Health Foundation; Nuffield Trust; Nesta; RCGP; NIHR Journals Library and NIHR ongoing projects |
Search terms used | Access, GP, primary care, out of hours, waiting times (in various combinations) |
Our response | GP surgeries across the country are implementing new strategies such as extended hours, telephone consultation and role substitution to meet rising demands. Evaluation of extended hours shows uptake varies depending on locality and that uptake on Sundays is lower than on extended weekdays and Saturdays. Overall there is limited impact on Emergency Department activity. Telephone consultation shifts the workload from face-to-face to telephone contact and increases the number of primary care contacts within 28 days of the initial consultation. Role substitution is being widely promoted but the extent to which this will reduce GP workload is unclear. The whole-system implications of extended hours, telephone consultation and role substitution need to be considered. Each strategy has the potential to reveal unmet need and displace activity rather than reduce workload. The lack of good-quality evidence around these approaches highlights the need for evaluation alongside implementation |
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs. Available at: www.york.ac.uk/media/crd/Ev%20briefing_Enhancing%20access%20in%20primary%20care.pdf |
Date sent | 14 July 2015 |
Additional work |
|
Social prescribing flow chart
Topic | Evidence to inform the commissioning of social prescribing |
CCG | A2 |
Role | Commissioning managers |
Date of initial contact | 20 October 2014 |
Type of contact | |
Reason for contact | Evidence-based steer on what sorts of self-management programmes or structures could be commissioned as part of the Pioneer programme |
Question to be addressed |
|
Sources searched | DARE, CDSR and NHS EED As few relevant reviews were identified, we conducted quick searches of MEDLINE, ASSIA, Social Policy and Practice, NICE, SCIE and NHS Evidence to locate details of any relevant guidance or service evaluations We also searched the websites of The King’s Fund, Health Foundation, Nuffield Trust and Nesta to locate any reports of relevant evaluations in UK settings |
Search terms used | Social AND prescribing; Community AND referral; Exercise AND prescription OR referral; Art AND therapy OR prescription; Behaviour change interventions; Social AND interventions |
Our response |
|
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs. Available at: www.york.ac.uk/media/crd/Ev%20briefing_social_prescribing.pdf |
Date sent | 12 April 2015 |
Additional work |
|
Care-planning flow chart
Topic | Promoting patient-centred care planning consultations |
CCG | A1 |
Role | Clinical director and Director of Public Health |
Date of initial contact | 13 November 2014 |
Type of contact | |
Reason for contact | Evidence-based steer on what sorts of self-management programmes or structures could be commissioned as part of the Pioneer programme |
Question to be addressed |
|
Sources searched | DARE, NHS EED, CDSR, NHS Evidence and NHS England |
Search terms used | Care, planning, consultation, primary care, general practice (in various combinations) |
Our response | Personalised care planning can improve some measures of physical health in people with long-term conditions such as diabetes and asthma; lack of time in consultations is perceived as a barrier to care planning by professionals and patients; interventions aimed at improving consultation skills for both professionals and patients could improve outcomes; encouraging professionals to initiate care-planning discussions and reassuring patients that social and emotional issues are legitimate discussion topics could be helpful |
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs. Available at: www.york.ac.uk/media/crd/Ev%20briefing_care%20planning.pdf |
Date sent | 2 March 2015 |
Supporting self-management flow chart
Topic | Supporting self-management: helping people manage long-term conditions |
CCG | A1 |
Role | Clinical director |
Date of initial contact | 13 November 2014 |
Type of contact | E-mail Face-to-face (Project Oversight Group development session 30 January 2015 where PW presented self-management overview slides) |
Reason for contact | Evidence-based steer on what sorts of self-management programmes or structures could be commissioned as part of the Pioneer programme |
Question to be addressed | The topic of self-management support was proposed as part of a series of briefings on self-management themes identified in an initial scoping of the evidence. Other topics were education, social prescribing, care planning, mobile telephone apps (evidence note) and shared decision-making (evidence note) What is the evidence of effectiveness for self-management support? |
Sources searched | The series of self-management related briefings and notes shared a common large search with updating searches using specific terms as necessary plus interrogation of reference lists and citation tracking Sources included DARE, NHS EED, CDSR, NHS Evidence, Health Systems Evidence, The King’s Fund, Nesta, Health Foundation and the Nuffield Trust |
Search terms | Common terms: self management, self care, long term condition, chronic condition, patient centred (in various combinations) Specific terms: support |
Our response | Successful self-management interventions are multicomponent and tailored to individuals’ needs. Key components include education, action planning and practical, psychological and social support. Condition-specific self-management reduces overall hospital use and improves quality of life in the short term – effects on costs are mixed. Key considerations for implementation include strong clinical leadership, training and resources, and regular evaluation |
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs Available at: www.york.ac.uk/media/crd/Ev%20briefing_supporting%20self-management.pdf |
Date sent | 16 February 2015 |
Unplanned admissions from care homes flow chart
Topic | Interventions to reduce unplanned admissions from care homes |
CCG | A2 |
Role | Commissioning director and commissioning manager |
Date of initial contact | 27 October 2014 |
Type of contact | |
Reason for contact | Under the Better Care Fund, CCG have a key project related to reducing inappropriate admissions and deaths in hospital of patients from care homes |
Question to be addressed | What is the evidence, if any, around this? For example, is there evidence that a single GP covering a whole care home reduces admissions to hospital (rather than a few seeing only their own patients)? What improves clinical care in care homes? |
Sources searched | DARE, NHS EED, CDSR, The King’s Fund, Age UK and NHS Evidence |
Search terms | Unplanned admissions, care home, elderly, geriatric services |
Our response | Much of the evidence for integration and community geriatric services comes from case studies which are not always well reported. Closer working between health-care and care home staff (through dedicated GP or community geriatric services), protected training for care home staff, and implementing processes for stated end-of-life care preferences all appear promising. NICE recommends implementation of multifaceted interventions to prevent delirium in long-term care settings. The lack of good-quality evidence highlights the need to monitor the impact of changes made to services particularly in relation to resource use and patient experience |
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs Available at: www.york.ac.uk/media/crd/Ev%20Briefing_unplanned%20admissions%20from%20care%20homes.pdf |
Date sent | 3 December 2014 |
Lay-led self-care education flow chart
Topic | Effects of lay-led self-care education programmes |
CCG | A1 |
Role | Clinical director and clinical lead |
Date of initial contact | 13 November 2014 |
Type of contact | |
Reason for contact | Evidence-based steer on what sorts of self-management programmes or structures could be commissioned as part of the Pioneer programme |
Question to be addressed | The topic of self-care education was proposed as part of a series of briefings on self-management themes identified in an initial scoping of the evidence. Other topics were self-management support, social prescribing, care planning, mobile telephone apps (evidence note) and shared decision-making (evidence note) What is the evidence for the effects of lay-led self-care education programmes or interventions that it might commission to help people manage their own care |
Sources searched | The series of self-management related briefings and notes shared a common broad search with updating searches using specific terms as necessary plus interrogation of reference lists and citation tracking Sources included DARE, NHS EED, CDSR, NHS Evidence, Health Systems Evidence, The King’s Fund, Nesta, Health Foundation, Nuffield Trust |
Search terms | Common terms: Self management, self care, long term condition, chronic condition, patient centred (in various combinations) Specific terms: Lay, patient, peer, education, knowledge |
Our response | Evidence suggests programmes produce small, short-term improvements in self-efficacy, self-rated health and levels of exercise. The Expert Patient Programme resulted in small improvements in self-efficacy and quality of life and was likely to be cost-effective. There was no evidence for the outcome of unplanned health-service use |
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs Available at: www.york.ac.uk/media/crd/Ev%20briefing_Lay-led%20self-care%20education.pdf |
Date sent | 11 December 2014 |
Topic | Value-based commissioning of MSK procedures: an appraisal of evidence for the proposed policies |
CCG | All |
Role | N/A |
Organisation | Regional group but work originally instigated by A2 CCG. Invitation to support group originally came from B3 CCG |
Date of initial contact | June 2014 |
Type of contact | Regular monthly meetings and some e-mail contact |
Reason for contact | A2 CCG presented the MSK resource pack to the VBCP Implementation Group. A coterminous CCG that also shared the policy indicated that, if agreed by other CCGs, these procedures would be incorporated into the regional Value Based Commissioning policy. We were asked to undertake an independent appraisal of the evidence underpinning the proposed policies for MSK procedures |
Question to be addressed | Evidence for the following procedures was reviewed: autologous cartilage transplantation, autologous blood injection for tendinopathy, bunions, carpal tunnel syndrome, discectomy for lumbar disc prolapse, Dupuytren’s contracture, epidural injections for lumbar back pain, exogen ultrasound bone healing, facet joint injections for back pain, ganglia, hip resurfacing, knee arthroscopy and irrigation, non-specific low back pain and trigger finger |
Sources searched | Staged searches for each topic*: 1. NICE guidance 2. websites of relevant Royal Colleges for guidance 3. CDSR 4. DARE and NHS EED *The production of this report involved a modified version of the process used for evidence briefings (see Chapter 3); we have not produced a flow diagram documenting the number of records identified due to the stepped approach to searching for each individual procedure |
Search terms | Condition-specific terms |
Our response | Summaries for each procedure outlined whether or not proposed policy was in line with current evidence |
Final output | 32-page report, including a summary table and flow chart describing the approach to using evidence in commissioning decisions Available at: www.york.ac.uk/media/crd/Evidence%20review%20MSK%20VBC%20Interactive.pdf |
Date sent | Summary findings presented at October 2014 meeting. Full report circulated January 2015 |
Additional work | 17 June 2015: contacted by manufacturer of one of the technologies included in the briefing – confirmed our conclusion was in line with NICE guidance on the topic (insufficient evidence to support routine use in clinical practice) |
Self-care for chronic obstructive pulmonary disease flow chart
Topic | Self-care for COPD |
CCG | A1 |
Role | GP vice chairperson, Planned Care Lead |
Date of initial contact | 28 April 2014 |
Type of contact | Face to face |
Reason for contact | Emerging from general discussions and following on from the earlier Evidence Note (Self Care, circulated 1 July 2014), a more specified briefing focused on COPD was requested |
Question to be addressed |
|
Sources searched | The series of self-management related briefings and notes shared a common broad search with updating searches using specific terms as necessary, plus interrogation of reference lists and citation tracking Sources included DARE, NHS EED, CDSR, NHS Evidence, Health Systems Evidence, The King’s Fund, Nesta, Health Foundation, Nuffield Trust |
Search terms | Common terms: Self management, self care, long term condition, chronic condition, patient centred (in various combinations) Specific terms: chronic obstructive pulmonary disease, COPD |
Our response | There is consistent evidence that multicomponent interventions reduce respiratory-related hospital admissions and improve quality of life for people with COPD. Multicomponent interventions that include action plans, exercise, education and smoking cessation are likely to be beneficial Hospital- and community-based pulmonary rehabilitation has some short-term impact on health-related quality of life and hospital admissions, but the effects of home-based rehabilitation are unclear |
Final output | Evidence briefing format was altered to include a one-page evidence summary table following feedback from the CCG. The briefing was sent via e-mail to named contacts in all participating CCGs Available at: www.york.ac.uk/media/crd/COPD%20self%20care.pdf |
Date sent | 1 July 2014 |
Loneliness and social isolation flow chart
Topic | Interventions for loneliness and social isolation |
CCG | A1 |
Role | GP vice chairperson, Planned Care Lead |
Date of initial contact | 28 April 2014 |
Type of contact | Face to face |
Reason for contact | Emerging from general discussions about priorities |
Question to be addressed | Evidence for interventions aimed at reducing loneliness and social isolation, particularly in elderly people |
Sources searched | DARE, NHS EED, and CDSR for relevant systematic reviews and economic evaluations. SCIE, Age UK, Health Foundation, The King’s Fund and Nesta were also searched for relevant reviews and policy reports |
Search terms | Social isolation, loneliness, contact, support, befriending |
Our response | GPs may be well placed to identify people who suffer from, or who are at risk of, loneliness and social isolation Overall, evidence of effective interventions is limited, but group-based activities and support that provide opportunities for social interaction appear to show some promise in addressing isolation and loneliness |
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs Available at: www.york.ac.uk/media/crd/Loneliness%20and%20social%20isolation.pdf |
Date sent | 1 July 2014 |
Evidence to inform urgent and emergency care systems flow chart
Topic | Evidence to inform urgent and emergency care systems |
CCG | Emerging from general discussions with CCGs about initial priorities |
Role | N/A |
Organisation | N/A |
Date of initial contact | N/A |
Type of contact | Initial face-to-face discussions with CCGs about priorities |
Reason for contact | Emerging from general discussions with CCGs about initial priorities – opportunity to consolidate previous work for Vale of York and Bristol CCGs |
Question to be addressed | Review evidence on a number of topics relating to urgent and emergency care services |
Sources searched | DARE, HTA, Health Systems Evidence, NHS EED, and CDSR for relevant systematic reviews and economic evaluations |
Search terms used | Accident AND emergency AND admissions Out of hours Service AND delivery and urgent Triage AND emergency OR accident Urgent AND triage |
Our response | A primary care front end to the emergency department involving GPs could be used to assess and treat patients presenting with less urgent problems. Other workforce models with promise include ECPs and nurse practitioners. ECPs can reduce patient transport to emergency departments, although this appears to be dependent on the setting Overall, the evidence for many interventions is limited and a lack of cost-effectiveness data reinforces the need for rigorous evaluation of service change |
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs Available at: www.york.ac.uk/media/crd/Evidence%20to%20inform%20urgent%20and%20emergency%20care%20systems.pdf |
Date sent | 24 March 2014 |
Consolidating urgent care services flow chart
Topic | Consolidating urgent care services |
CCG | A1 |
Role | Clinical lead |
Date of initial contact | 15 October 2013 |
Type of contact | |
Reason for contact | CCG were considering implementing an ‘urgent care hub’, locating out-of-hours provision on a single site adjacent to an accident and emergency department |
Question to be addressed | What evidence is there for such a model of delivery, impact on A&E volume, who should triage? |
Sources searched | DARE, NHS EED, HTA Database, CDSR, Health Systems Evidence. Also The King’s Fund, Nuffield Trust, RCGP and the BMA |
Search terms | Accident AND emergency AND admissions Out of hours Service AND delivery and urgent Triage AND emergency OR accident Urgent AND triage |
Our response |
|
Final output | Evidence briefing was sent via e-mail to named contacts in all participating CCGs Available at: www.york.ac.uk/media/crd/Consolidating%20urgent%20care.pdf |
Date sent | 20 November 2013 |
Appendix 4 Vignettes of evidence notes
Topic | Self-care overview |
CCG | A1 |
Role | GP vice chairperson, Planned Care Lead |
Date of initial contact | 17 January 2014 |
Type of contact | E-mail with follow-up discussion by telephone |
Reason for contact | As a result of a successful Pioneer bid for integrated care and its aim to build capability for self-care, asked for a ‘quick and dirty’ appraisal of the evidence relating to this? |
Question to be addressed | CCG requested a rapid summary of the evidence relating to increasing self-efficacy with patients and in the general public to build capability for self-management |
Sources searched | DARE, NHS EED, and CDSR for relevant systematic reviews and economic evaluations. Health Foundation, The King’s Fund and Nesta for relevant reviews and policy reports |
Our response |
|
Final output | Evidence note sent via e-mail |
Date sent | 30 January 2014 |
Additional work | Following this initial overview of the evidence base, we developed a series of full evidence briefings and notes on self-management themes: education, support, social prescribing, care planning, mobile telephone apps (evidence note), shared decision-making (evidence note) |
Topic | Models of psychiatric liaison implemented in general hospital settings |
CCG | A2 |
Role | Commissioning director |
Date of initial contact | 30 July 2014 |
Type of contact | Face-to-face meeting |
Reason for contact | Arising from general discussion about priorities. Team had just been approached by Vale of York CCG about the same topic |
Question to be addressed | Summary of the evidence about the components, benefits and associated costs of different psychiatric liaison models that have been implemented in general hospital settings |
Sources searched | DARE, NHS EED, and CDSR for relevant systematic reviews and economic evaluations. Health Foundation, The King’s Fund and NHS Evidence for relevant reviews |
Our response | Due to differences in liaison psychiatry services and outcomes reported and the methodological quality of studies identified, it is not clear which model of service or, service components, are most effective. Questions also remain around cost-effectiveness; the cost ‘savings’ attributed to the RAID model are overstated. This underlines the importance of evaluating any implementation of a liaison psychiatry service and to give careful consideration to outcome measurement |
Final output | Evidence note sent via e-mail |
Date sent | 3 September 2014 |
Topic | Evidence to inform a review of a pharmacy minor ailments scheme |
CCG | A1 |
Role | Senior officer, Planning and Service Reform – Commissioning Support on behalf of A1 CCG |
Date of initial contact | 22 July 2014 |
Type of contact | E-mail with follow-up face-to-face meeting (on 30 July 2014) |
Reason for contact | CSU conducting a review of the minor ailments service and CCG suggested that they seek assistance from us to identify evidence |
Question to be addressed | General summary of the evidence about the effects of pharmacy-based minor ailments schemes to support a review of current and future provision of such schemes in their locality |
Sources searched | DARE, NHS EED, and CDSR for relevant systematic reviews and economic evaluations. NHS Evidence, Health Foundation, The King’s Fund, Nuffield Trust and Royal Pharmaceutical Society for relevant policy reports and service evaluations |
Our response | We were able to identify a highly relevant systematic review not included in the draft review by Commissioning Support. The limited evidence suggested schemes do appear to offer an alternative to GP consultation. Two unanswered questions remain: we do not know how much demand would be shifted away from GPs if a scheme was introduced; we do not have a complete picture on the cost of providing such a scheme |
Final output | Evidence note sent via e-mail |
Date sent | 9 September 2014 |
Topic | ‘One-stop shop’ screening model for diabetes |
CCG | A1 |
Role | Commissioning manager |
Date of initial contact | 3 September 2014 |
Type of contact | E-mail with follow-up telephone conversation |
Reason for contact | Manager mentioned to KF face to face that they were looking for assistance on this topic. Team followed up |
Question to be addressed | Would implementing a comprehensive one-stop shop annual review and screening model for diabetes have an adverse impact on either the quality or uptake of screening (feet and eyes)? |
Sources searched | DARE, NHS EED, and CDSR for relevant systematic reviews and economic evaluations. NHS Evidence, Diabetes UK, Health Foundation, The King’s Fund, NETSCC, NICE and Nuffield Trust for relevant policy reports and service evaluations |
Our response | We were unable to identify any evaluations of models similar to that being proposed or indeed any evaluation that showed a negative link between a comprehensive annual review and screening uptake |
Final output | Evidence note sent via e-mail to the project group |
Date sent | 19 September 2014 |
Topic | Evidence to inform the development of integrated community teams |
CCG | A1 |
Role | Manager, Service Planning and Reform – Commissioning Support on behalf of A1 CCG |
Date of initial contact | 7 August 2014 |
Type of contact | E-mail with follow-up telephone conversation |
Reason for contact | Initial stages of developing integrated community teams in A1 CCG. In particular keen to hear about any other areas, nationally and internationally who have implemented a similar integrated team, what the key outputs were (reduction in secondary care attendances/admissions, etc.) and if there is any commonality in terms of best practice from areas where the service has worked particularly well. Provided details of a model in the Netherlands which they were planning to visit |
Question to be addressed | Summary of the evidence for effects of integrated community teams including any examples of best practice |
Sources searched | DARE, NHS EED, and CDSR for relevant systematic reviews and economic evaluations. The King’s Fund, Health Foundation, Nuffield Trust, NETSCC, NHS Evidence and RAND Europe for relevant reviews, case studies |
Our response |
|
Final output | Evidence note sent via e-mail |
Date sent | 29 September 2014 |
Topic | What validated tools are there for frailty risk profiling in an A&E context? |
CCG | A2 |
Role | Commissioning director and chief officer |
Date of initial contact | 20 October 2014 |
Type of contact | Telephone and e-mail |
Reason for contact | Chief officer conversations with A&E consultants who, along with all the dramatic stuff they do, feel they are increasingly filters/triage for frail older people with complex needs – if there was a risk profile of either low or high risk that they could use, it would have a lot of traction. Suspect part medical history, part medication and part based on investigation results. This is different from anticipatory care planning as they have crossed the hospital threshold |
Question to be addressed | Initial confusion over the question. We thought that we were being asked to assess risk stratification tools, but the chief officer clarified that they meant predictors in the A&E department which may be more biomedical than the predictors of frailty. So rather than predictive modelling, the CCG are more interested in predicting risk of adverse outcomes in frail individuals presenting in the acute setting (planned or unplanned) |
Sources searched | DARE, NHS EED, and CDSR for relevant systematic reviews and economic evaluations. Also, consulted with National Clinical Director and consultant Andrew Clegg and asked them what they would suggest for risk profiling in an A&E context |
Our response |
|
Final output | E-mail followed by Effectiveness Matters bulletin |
Date sent | 3 November 2014 |
Topic | Mobile telephone apps |
CCG | A1 |
Role | Clinical director and Director of Public Health |
Date of initial contact | 1 November 2014 |
Type of contact | |
Reason for contact | Evidence-based steer on what sorts of self-management programmes or structures could be commissioned as part of the Pioneer programme |
Question to be addressed |
|
Sources searched | DARE, NHS EED and CDSR |
Our response |
|
Final output | Evidence note sent via e-mail |
Date sent | 24 November 14 |
Topic | Interventions to promote shared decision-making |
CCG | A1 |
Role | Clinical director and clinical lead and Director of Public Health |
Date of initial contact | 13 November 2014 |
Type of contact | |
Reason for contact | Evidence-based steer on what sorts of self-management programmes or structures could be commissioned as part of the Pioneer programme |
Question to be addressed |
|
Sources searched | DARE, NHS EED and CDSR |
Our response |
|
Final output | Evidence note sent via e-mail |
Date sent | 5 January 2015 |
Topic | Accountable and other integrated models of care: a scope |
CCG | A2 |
Role | Commissioning director |
Date of initial contact | 8 May 2015 |
Type of contact | Telephone with e-mail follow-up |
Reason for contact |
|
Question to be addressed | The CCG staff are interested in a scope of different models of accountable care – they are very early in the development process and will be looking for interventions and ways of working that they can pilot test before implementing more fully. They are reasonably familiar with US ACOs but may still be interested in a ‘lessons-learned’ overview. More interested in European models and mentioned they were interested in the Alzira model in Spain and some Dutch care models that they had heard about but are lacking information on |
Sources searched | DARE, NHS EED and CDSR for relevant systematic reviews and economic evaluations. Given the nature of the topic we carried out a general search for reports on acute care models including searching the websites of The King’s Fund, NHS Confederation and Monitor |
Our response |
|
Final output | Evidence note sent via e-mail |
Date sent | 26 November 2015 |
Feedback | Developed a related briefing ‘Enhancing access in primary care settings’ to focus on individual service components as part of the developing ACO |
Topic | Telehealth for COPD |
CCG | A2 |
Role | Commissioning manager |
Date of initial contact | 20 July 2015 |
Type of contact | |
Reason for contact | Locality are implementing a COPD telehealth pilot and are interested in learning lessons from evaluations of other implementation projects |
Question to be addressed | Update of Telehealth for patients with long-term conditions (June 2013) produced for Vale of York CCG, with a focus on COPD and implementation issues |
Sources searched | DARE, NHS EED, CDSR, NHS Evidence and PubMed for relevant systematic reviews and economic evaluations published since June 2013 |
Our response | The focus of the evidence note was on telehealth interventions for people with COPD. Although a number of systematic reviews and economic evaluations have been identified much of the evidence is weak and reported effects are mixed. Small-scale incremental introduction that enables adaptation, refinement and greater system integration should remain the preferred approach. Evaluation at this scale should involve a focus on initial experience, acceptability and system fit |
Final output | Evidence note sent via e-mail |
Date sent | 6 August 2015 |
Topic | Public engagement in decision-making |
CCG | A1 |
Role | Chief officer and clinical director |
Date of initial contact | 12 August 2015 |
Type of contact | |
Reason for contact |
|
Questions to be addressed |
|
Sources searched | DARE, NHS EED, CDSR, NHS Evidence, PubMed and Google |
Our response | The evidence about public participation in health-care policy-making is mainly descriptive and largely focuses on discrete deliberations or specific service redesigns. We found no evidence evaluating systematic use across a whole commissioning cycle. There is also a lack of detail about the overall impact public involvement has on decision-making processes generally. Nevertheless, early engagement, genuine and open interaction and processes led and supported by health professionals appear associated with success. The methods used to recruit and engage public participation should be tailored to the question and the setting |
Final output | Evidence note sent via e-mail |
Date sent | 4 September 2015 |
Topic | Independent review of evidence for existing value-based hernia and hysterectomy policies |
CCG | All |
Role | N/A |
Organisation | Regional group |
Date of initial contact | August 2015 |
Type of contact | Regular monthly meetings and some e-mail contact |
Reason for contact | Requested by consultant in public health as part of ongoing review of regional policies |
Question to be addressed | Review those topics that have the greatest absolute value opportunity. Hysterectomy and inguinal hernias are both on the proposed list of policies so it would be useful to have a review |
Sources searched |
|
Search terms | Condition-specific terms |
Our response |
|
Final output | Evidence note |
Date sent | October 2015 |
Appendix 5 Guide for commissioners on using evidence to support decision-making
List of abbreviations
- ANOVA
- analysis of variance
- app
- application
- ASSIA
- Applied Social Sciences Index and Abstracts
- CCG
- Clinical Commissioning Group
- CDSR
- Cochrane Database of Systematic Reviews
- CI
- confidence interval
- CLAHRC
- Collaboration for Leadership in Applied Health Research and Care
- COPD
- chronic obstructive pulmonary disease
- CRD
- Centre for Reviews and Dissemination
- CSU
- Commissioning Support Unit
- DARE
- Database of Abstracts of Reviews of Effects
- EBS
- evidence briefing service
- GP
- general practitioner
- HSDR
- Health Services and Delivery Research
- HTA
- Health Technology Assessment
- IFR
- individual funding request
- JSNA
- Joint Strategic Needs Assessment
- MSK
- musculoskeletal
- NHS EED
- NHS Economic Evaluation Database
- NICE
- National Institute for Health and Care Excellence
- NIHR
- National Institute for Health Research
- PCT
- primary care trust
- PHCA
- public health clinical advisor
- PROSPERO
- International Prospective Register of Systematic Reviews
- RCT
- randomised controlled trial
- SCIE
- Social Care Institute for Excellence
- SPIRIT
- Supporting Policy In health with Research: an Intervention Trial
- VBCP
- value-based commissioning policy