Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 14/156/06. The contractual start date was in November 2015. The final report began editorial review in May 2018 and was accepted for publication in November 2018. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
Louise Locock declares personal fees from The Point of Care Foundation (London, UK) outside the submitted work, and membership of the National Institute for Health Research (NIHR) Health Services and Delivery Research (HSDR) Funding Board. Jennifer Bostock declares membership of the NIHR HSDR Funding Board. Chris Graham declares a conflict of interest in financial activities outside the submitted work (he is employed by the Picker Institute). Neil Churchill declares a conflict of interest in financial activities outside the submitted work (he is employed by NHS England). John Powell is chairperson of the NIHR Health Technology Assessment (HTA) and Efficacy and Mechanism Evaluation (EME) Editorial Board and Editor-in-Chief of HTA and EME journals. He is the principal investigator on another NIHR HSDR programme-funded project that was funded under the same call [HSDR 14/04/48: Improving NHS Quality Using Internet Ratings and Experiences (INQUIRE)]. Sue Ziebland declares a conflict of interest in financial activities outside the submitted work (Programme Director of NIHR Research for Patient Benefit). She is the co-investigator on another NIHR HSDR programme-funded project that was funded under the same call (HSDR 14/04/48: INQUIRE).
Disclaimer
This report contains transcripts of interviews conducted in the course of the research and contains language that may offend some readers.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2020. This work was produced by Locock et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Background and rationale
The importance of patient experience
Patient experience, alongside patient safety and clinical effectiveness, is widely acknowledged as a key component of quality of care. 1 Improving patient experience is a priority for the NHS, which has led the way in developing measures of patient experience, such as the NHS Adult Inpatient Survey. 2 Patients have a right to expect care that is compassionate, respectful and convenient, as well as safe and effective. Collecting data about patient experience, although important, is not enough: the data need to be used for improvement, and it is arguably unethical to ask patients to comment on their experience if these comments are not responded to and acted on. 3
Recent evidence suggests positive associations between patient experience, patient safety and clinical effectiveness for a wide range of disease areas, and between patient experience and self-reported and objectively measured health outcomes. 4–6 At a time of global recession, there is a risk that better-quality patient experience may be seen as a luxury rather than a top priority. However, the apparent conflict between maintaining tight financial control and providing good patient experience may not be as clear as is sometimes supposed.
First, we know that many of the things that matter most to patients are relational, for example paying attention to dignity, courtesy and kindness. These may not be resource-intensive in themselves (although they may involve emotional labour for staff, and may be easier to attend to if there is less pressure on staffing levels). Second, there is growing evidence linking person-centred care with decreased mortality and lower hospital-acquired infection rates, as well as a range of other organisational goals, such as reduced malpractice claims, lower operating costs, increased market share and better staff retention and morale. Hospital organisations in which care is person-centred are reported to have shorter lengths of stay and fewer medication errors and adverse events. 7–14 There is increasing evidence that good staff experience is a predictor of good patient experience. 13,15,16
Yet despite these compelling reasons for paying close attention to experience data, the quality of patient experience remains a concern, and continued media reports and inquiries have suggested that there is a long way to go in ensuring the provision of genuinely and consistently person-centred care at all levels of the organisation.
Current patient experience in the NHS
The NHS Adult Inpatient Survey in England is one of the most well-established patient-reported experience measures (PREMs); it has been providing national and trust-level data since 2002. However, the pace of change has remained slow on some of the most important questions for person-centred care. 17 The results of the 2016 NHS Adult Inpatient Survey indicate that there have been small, but statistically significant, improvements in a number of areas of care, compared with results dating back to the 2006, 2011 and 2015 surveys. 18 This includes patients’ perceptions of:
- the quality of communication between medical professionals (doctors and nurses) and patients
- the standards of hospital cleanliness
- the quality of food.
However, in some areas a decline in reported experience has been noted, which goes against a trend since 2006 of largely stable or improving scores. This includes patients’ perceptions of:
- their involvement in decisions about their care and treatment
- the information shared with them when they leave hospital
- waiting times
- the support they receive after leaving hospital.
Only 56% of patients reported feeling as involved as they wanted to be in decisions about their treatment, which is down from 59% in 2015, and the proportion feeling that doctors and nurses ‘definitely’ gave their families all the information they needed when leaving hospital was down from 66% in 2015 to 64%. Only 55% of patients felt ‘definitely’ as involved as they wanted to be in decisions about discharge, down one percentage point on 2015; and 17% of patients reported waiting longer than 5 minutes for a call bell to be answered, up from 14% in 2006.
Since 2006, the proportion of patients who say that they have been asked to give their views on the quality of their care has increased substantially, rising from 6% in 2006 to 20% in 2015, but with a decrease of one percentage point to 19% in 2016. In other words, 81% of respondents said that they were not asked to give their views on the quality of their care, which indicates a substantial missed opportunity to learn from patients.
Existing evidence on using patient feedback and patient experience data for improvement
There is already good evidence of what matters most to many patients and how they experience services;19 the gap is in how trusts can respond to the available local and national data and use them to improve care. 3,17,20,21 It should, of course, be noted that the process of gathering and analysing data on patient experiences at local level can itself be part of building momentum for the use of these data in improvement. In narrative-based approaches such as experience-based co-design (EBCD), collecting experiences (narratives) and using them for improvement are closely linked as part of the same process of cultural change. 22 Nonetheless, there has been a lack of evidence about how organisations can move beyond collecting patient experience data to using them for improvement, and how best to use different types of quantitative and qualitative data from different national and local sources.
As Gleeson et al. 23 note, this is a relatively new field that does not have a large body of published evidence, a key factor behind the decision of the National Institute for Health Research (NIHR) to commission several new studies in this field, of which this report is one.
Dr Foster Intelligent Board research20 from 2010 into how trust boards use patient experience data found substantial variation in what data were collected and also, more importantly, in the degree to which the data were effectively analysed and used to improve services. Strikingly, the research found that, over 95% of the time, hospital boards’ minuted response to patient experience reports was to note the report but take no further action. Examples of when patient experience data were used to spark debate and action were rare, as were examples of non-executive directors challenging performance based on patient experience measures.
Since the Dr Foster research, studies have continued to show that interest in using experience data at senior level is not always matched by action or the skills to work with these data. Lee et al. 24 studied how two hospital boards of directors used patient feedback. They found that although the boards used both quantitative survey feedback data and in-depth qualitative data to help them develop quality strategies and design quality improvement (QI) work, they made less use of patient feedback to help monitor their strategies and assure quality. Even though the data contributed to the development of strategy and QI, in one of the sites this ‘did not result in formal board commitment to these initiatives’. The authors suggest that there is a need for further research to investigate how boards of directors can make better use of patient feedback, from a range of sources, and combine it with wider pressures and priorities to effect change.
Martin et al. 25 explored senior leaders’ use of ‘soft intelligence’ for QI, that is, informal, localised, often narrative or observational data of a kind not easily captured in standard quantitative metrics. They noted that senior leaders were highly aware of the value of such intelligence, gleaned perhaps from conversations with staff or just from walking on to the ward, and of ‘the idiosyncratic, uncalibrated views of patients or their carers’, but that they were unsure how to turn this potentially valuable material into a useful guide to action. In leaders’ efforts to make this material useful, they often sought to apply quantitative approaches, such as aggregating multiple comments or observations, or looking for triangulation with other sources. Seeking triangulation may be misguided, given the observed mismatch between what patients say about their experience in response to survey questionnaires and what they say in interviews,26 or between survey scores and free text. 27 Formal systemisation, Martin et al. 25 argue, risks losing the spontaneous ‘untamed richness’ of insights from soft intelligence. The alternative approach, they observed, was to downgrade soft intelligence from the status of evidence to that of illustration and motivation, for example using quotations to bring dry numerical data to life, but not focusing directly on its content as a guide to improvement.
Understanding how to use qualitative evidence, including narrative and observational data, alongside numerical evidence can indeed be challenging,20,28,29 and this is undoubtedly one of the key areas in which trusts need support and training. The status of complaints in this landscape of patient experience data is contested; as unsolicited and challenging examples of patient experience, they may be actively resisted. In their analysis of how health-care professionals make sense of complaints, Adams et al. 30 note that participants tended to characterise complainants as ‘inexpert, distressed or advantage-seeking’; they rationalised their motives for complaining ‘in ways that marginalised the content of their concerns’, and rarely used the content for improving care.
Meanwhile, the NHS also faces an explosion of other unsolicited sources of patient experience data online and through social media [e.g. NHS Choices, Care Connect, Care Opinion, blogs and Twitter (Twitter, Inc., San Francisco, CA, USA)], with little guidance on how to use them. As Dudwhala et al. 31 argue, unstructured data that have not been actively sought or sanctioned by health-care organisations are unlikely to have the impact of other, more formal, data sources. Similarly, in their study of how staff raise safety concerns, Martin et al. 32 note that organisational leaders tended to prefer such concerns to be routed through formal systems in order to provide evidence for action. However, as a result, some concerns were never voiced, because of staff anxiety about using these channels and becoming drawn into bureaucratic procedures. Thus, both staff and patient experiences that come through informal routes may gain little traction.
Although there is a growing number of studies about how patient experience data are (or are not) used at board or whole-organisation level, we still know remarkably little about how front-line staff make sense of, or contest, patient experience data, what supports or hinders them in making person-centred improvements and what motivates staff – and patients and families – to get involved in improvement work.
Quality improvement programmes and techniques abound, but few are strongly evidence based and few take seriously the need to involve patients and families throughout the process. A few studies have provided some evidence of promising front-line experience-based approaches. For example, facilitated feedback of survey findings at ward team level has been shown to have the potential to improve patient experience scores. 33 Studies of EBCD have shown how locally collected narrative and observational data can lead to both specific improvements in care and changes in attitude and behaviour. 34–38 Locock et al. 39,40 reported positive findings on the use of nationally collected narrative data alongside local observational data in EBCD to generate improvements, and a randomised controlled trial of EBCD is under way in Australia. 41 A UK evaluation of the patient- and family-centred care (PFCC) approach is also in progress. 42,43 However, there remains insufficient evidence to say which types of data or QI approaches are more or less likely to be useful to front-line teams in making care more person-centred in different contexts and settings, and how well these are received.
Gleeson et al.’s23 systematic review of approaches to using patient experience data for QI has helped consolidate the evidence base. The authors state that their focus is on PREMs, but their review includes three studies reportedly using EBCD (interviews and observations) and one using PFCC (observations), alongside seven using questionnaire survey data. This reflects a broad interpretation of the word ‘measure’ to incorporate ethnographic approaches to understanding patient experience.
The systematic review results have several points of significance for this report:
- Patient experience data were most commonly collected through quantitative PREM surveys (despite many authors acknowledging that clinical and ward staff generally find qualitative comments more insightful).
- Qualitative data were acknowledged to be more difficult to use in terms of time and expertise.
- Data were used to identify small incremental service changes that did not require a change in clinician behaviour.
- Recording of the changes that were made or the impact that they had was poor.
- None of the studies reported using ‘formal QI methods of data collection’. 23 [Arguably, EBCD and PFCC are themselves formal QI methods, but the review authors do not include EBCD and PFCC in their list of examples (total quality management, continuous quality improvement, business process re-engineering, Lean and Six Sigma). As noted earlier, the process of data collection and the process of improvement are not always easily demarcated.]
- Most studies used qualitative rather than quantitative methods to measure the impact of PREMs on QI.
- Two studies that measured post-intervention PREM questionnaire results found no statistically significant improvement in experience.
- EBCD appeared to generate more improvement efforts than questionnaire-based data, but ‘effects of the QI interventions were not measured or reported on’. 23
- In many cases, staff reported using data not only to identify areas for improvement but also to support or provide a rationale for existing improvement projects.
Some studies have identified a mismatch between managerial expectations and how engaged and supported clinicians and other front-line team members feel to make improvements. For example, in a survey of hospital clinicians, 90.4% of respondents believed that improving patient satisfaction with their experience of hospitalisation was achievable, but only 9.2% thought that their department had a structured plan to do this. 44 Friedberg et al. 45 found that physicians’ use of patient experience reports was variable and, importantly, that little training in communication skills was provided, even though improving communication with patients was thought to be fundamental to the provision of person-centred care. These findings suggest that not enough is being done to make patient experience data available in a useful, accessible and credible way to clinical staff, and to empower them – with positive organisational support – to see improving patient experience as a priority that they can lead on.
Flott et al. 46 outline a familiar set of challenges in using feedback (in particular national survey data) at clinical level. These include:
- scepticism about the quality of the data
- lack of training in social research methods
- statistical complexity and lack of technical guidance
- aggregation of data at trust level that does not inspire local clinical ownership
- isolated data that are not linked to other relevant data sources
- contradictory results from different sources depending on how data are collected
- disparities in the external support available to providers to help them analyse feedback and drive improvement.
Much of the research into using patient feedback for service improvement has focused on surveys of inpatient hospital care, but Burt et al.’s47 study of improving patient experience in primary care confirms many of the same themes. They note that general practitioners (GPs) are positive about the concept of patient feedback, but struggle to engage with it and make changes under current approaches to measurement. Within practices, and in out-of-hours settings:
staff neither believed nor trusted patient surveys. Concerns were expressed about their validity and reliability and of the likely representativeness of respondents. Staff expressed a preference for free-text comments as they provided more tangible, actionable data.
Burt et al. 47
The authors conclude that supporting primary care staff to enable them to act on patient feedback remains a key challenge, and that surveys are necessary but not sufficient to generate meaningful data for improvement.
A recent study by Sheard et al. 48 has proposed a conceptual ‘patient feedback response framework’ to understand why front-line staff may find it difficult to respond to feedback. This has three components:
- normative legitimacy (i.e. staff members express a personal belief in the importance of responding to patient feedback and a desire to act)
- structural legitimacy (i.e. staff perceive that they have sufficient ownership, autonomy and resource available to establish a coherent plan of action in response to patient feedback)
- organisational readiness (i.e. the capacity for interdepartmental working and collaboration at meso level, and senior hospital management/organisational support for staff to work on improvement).
The authors found that, although normative legitimacy was present in most (but not all) of their 17 case study wards, structural legitimacy and organisational readiness were more problematic. Even where staff expressed strong belief in the importance of listening to patient feedback, they did not always have confidence in their ability or freedom to enact change. A lack of organisational readiness and support could block change even when staff expressed high levels of structural legitimacy.
The original ‘theory of change’ underlying this study [Understanding how Front-line Staff use Patient Experience Data for Service Improvement (US-PEx)] was that high-level organisational support is necessary but not sufficient for person-centred service improvement; that many experiences that matter most to patients happen in front-line encounters; and that bottom-up engagement in person-centred improvement (as opposed to top-down, managerially driven initiatives) can be motivating for front-line staff,13 consistent with evidence that patient experience seems to be better in wards with motivated staff. 39 Sheard et al.’s48 patient feedback response framework has provided us with an additional theoretical lens through which to view our findings.
Our study was focused on patient experience within a ward setting. However, shortfalls in patient experience often happen at the boundaries between services, for example between mental and physical health, child and adult services, adult and older people’s services, and primary and acute care. A major focus of integrated care systems is to improve care at these boundaries and even eliminate them altogether. In addition to a bottom-up/top-down perspective, therefore, we may need to add a third, lateral perspective of local but external clinical challenge. Multidisciplinary clinical teams working to improve integration of care, for example in areas such as frailty, may mutually challenge each other’s practice, culture and organisation.
Wider literature on quality improvement theory, methods and skills
There is, of course, an extensive broader literature on change in health care and what enables or frustrates change at both individual clinician and organisational levels, which in turn has fed into a more specific but still substantial literature on QI and ‘improvement science’. 49
Approaches such as normalisation process theory50 and the theoretical domains framework51 have drawn on sociological and psychological theory to develop explanations of how individuals within organisations change behaviour and adopt innovations, what motivates and deters them, what skills they require, who they listen to, and how they act collaboratively to make sense of52 and embed new practices. From the organisational change literature in health care, we know that certain common themes recur, particularly around the importance of a receptive context and an understanding of organisational history, of having both senior management and distributed leadership, of the engagement of opinion leaders and clinicians, of ‘fit’ with existing organisational goals and strategies, and of organisational support and resources. 53–56
From a human factors perspective, a daunting list of 760 challenges to the delivery of effective, high-quality and safe health care has been identified by Hignett et al. 57 The authors group these under eight headings, which, again, are remarkably consistent with other organisational studies:
- organisational culture (26.4%)
- staff numbers and competency (20.5%)
- pressure at work (19.4%)
- risk management culture (10.8%)
- communication (10.5%)
- resources (6.4%)
- finance/budget (3.6%)
- patient complexity (2.4%).
It is beyond the scope of this report to rehearse all of this wider organisational change literature, but it is clear that QI generally and patient experience improvement projects specifically are in most respects no different from other change initiatives, and that they need to pay attention to senior leadership; clinical and front-line engagement and motivation; dedicated project management support and skills; the role and attitudes of local opinion leaders/advocates for change; and the receptiveness of the context and organisational readiness to change. In a useful recent reflection on the state of the organisational change literature in health care, Fitzgerald and McDermott58 conclude that there is remarkably little evidence to support top-down, large-scale, transformational ‘big bang’ change of the kind exemplified by business process re-engineering, and that it remains ‘unproven rhetoric’. Strategies of incremental, accumulative change are, they argue, more likely to have long-term impact.
Jones et al. 59 have recently constructed a measure of ‘QI governance maturity’ from working with 15 hospital boards. Consistent with wider organisational literature, they conclude that boards with higher levels of QI maturity had the following characteristics, which were particularly enabled by board-level clinical leaders:
- explicitly prioritising QI
- balancing short-term external priorities with long-term internal investment in QI
- using data for QI, not just for quality assurance
- engaging staff and patients in QI
- encouraging a culture of continuous improvement.
It is commonly argued that QI is often ineffective when isolated in organisations and that it needs to be part of wider systemic efforts. Personal and organisational membership of formal and informal networks has been cited as a factor in effective QI in a Health Foundation60 study. The mechanism(s) through which this is achieved are difficult to identify precisely, although non-hierarchical and multidisciplinary exchange of ideas may be part of the answer.
‘Improvement science’ is a contested term. Walshe61 points to the dangers of ‘pseudoinnovation’:
. . . the repeated presentation of an essentially similar set of QI ideas and methods under different names and terminologies.
Walshe61
Improving health-care services can all too easily become equated with the use of certain ‘in vogue’ tools for improving quality. Advocates for different approaches will argue strongly that their way is the best (or even the only) way. Staff may feel overwhelmed by the array of methods promising a sure recipe for success, and concerned they cannot live up to the examples set by leading organisations in the field. As Walshe61 argues, QI needs sustained investment and support, but the rapid switching from one fashionable method to another has probably damaged the chances that any of these methods will be effective.
A recent King’s Fund report62 suggests that it is important for hospitals to adopt an established method for QI, one that is ‘modern and scientifically grounded’, and to ensure that all leaders and staff are trained in it. While using one consistent method may have advantages, the evidence behind most QI approaches remains fairly weak. Cribb63 argues that it is easier to identify a gap or shortfall in current practice than to identify, with any certainty, what specific approaches are likely to improve the situation.
In a systematic narrative review for Healthcare Improvement Scotland, Powell et al. 64 argue that:
Importantly, there is no one right method or approach that emerges above the others as the most effective.
Powell et al.,64 p. 7
Consistent with the ‘contingency theory’ of management, which suggests that there is no one best or universal way to manage businesses or hospitals,65 the authors conclude that:
The specific approach (or combination of approaches) may be less important than the thoughtful consideration of the match and ‘best fit’ (however imperfect) for the particular circumstances in the local organisations using it.
Powell et al.,64 p. 63
Most approaches have something to offer and will work some of the time in some settings, but are also likely to fail if good conditions are not in place. Powell et al. 64 identify the following as ‘necessary but not sufficient conditions’, regardless of the approach adopted:
- provision of the practical and human resources to enable QI
- the active engagement of health professionals, especially doctors
- sustained managerial focus and attention
- the use of multifaceted interventions
- co-ordinated action at all levels of the health-care system
- substantial investment in training and development
- the availability of robust and timely data through supported information technology (IT) systems.
Lucas and Nacer66 also caution against becoming too focused on tools and techniques, arguing that:
Education for improvement practices can all too easily be reduced to, for example, ‘how to use a driver diagram’ or ‘how to lead an improvement project’.
Lucas and Nacer,66 p. 12
While Lucas and Nacer66 agree that learning such techniques is helpful, they continue that:
Without a clearer picture of what improvement really looks and feels like, the ‘packaging’ of improvement can end up becoming its lived reality.
Lucas and Nacer,66 p. 12
By contrast, ‘improvers are constantly curious, wondering if there is a better way of doing something’ (p. 12). They use a kind of ‘smart common sense’66 to keep reflecting (quotations reproduced with permission from The Health Foundation66).
Finally, we note Cribb’s63 argument for greater dialogue between what might be traditionally regarded as QI research, with its instrumental and applied focus on problem-solving, and wider social science in health care, which might not be intentionally focused on improvement but might, nonetheless, bring useful insights.
Conclusion
In summary, the literature on using patient experience data for improvement is incomplete. The focus to date has been primarily on quantitative survey data and their use at board level; much less is known about the use of other types of formal and informal data, and about their use at the front line. Surveys are important as measures, but their ability to provide a nuanced guide to understanding why patient experience is as it is, and how it could be improved, is limited. Yet other forms of data and ‘soft’ intelligence may be difficult for staff to analyse and use effectively. There is little reliable evidence from the wider QI and ‘improvement science’ literature to suggest that one approach is definitively better than the others. A thoughtful, eclectic approach that is sensitive to context and engages front-line and clinical staff at the same time as offering strong senior leadership may be more important than a particular technique.
This study set out to add to this evidence by exploring how front-line hospital ward teams engage with patient experience data when encouraged to do so, what challenges they face and how they can be better supported to work on patient-centred QI.
Chapter 2 Study methods
The study comprised three phases (Figure 1):
- secondary analysis of existing survey data and new survey of trust patient experience leads
- case studies in six medical wards
- preparation of a toolkit or guide for NHS staff.
The study was overseen by a study steering committee (for details of membership, see the project web page: www.journalslibrary.nihr.ac.uk/programmes/hsdr/1415606/#/; accessed 17 April 2019), which met approximately every 6 months. The full research team (see www.journalslibrary.nihr.ac.uk/programmes/hsdr/1415606/#/; accessed 17 April 2019) also met every 6 months, with a smaller core team (including the lay co-investigator) meeting quarterly. The lay panel met every 6 months, and stayed in contact by e-mail between meetings (see Chapter 3, Patient and public panel methods and reflections, for a detailed account of patient and public involvement).
The research team comprised a diverse range of people, including academics, a lay co-investigator, colleagues from the NHS at both front-line practice and senior policy levels, and third-sector co-investigators from the Picker Institute. Research team members brought both quantitative and qualitative expertise in patient experience research, and included people with disciplinary backgrounds in sociology, psychology, organisational behaviour, health services research, ethics, clinical primary care, mental health and public health, nursing, digital health and statistics. This range of perspectives has been a particular strength in analysing the findings through several different lenses.
Phase 1: secondary analysis of existing survey data and new survey of trust patient experience leads
In this section, we describe a secondary analysis of existing national patient and staff experience data, and a new survey of patient experience leads. The aim of this phase of the study was to select the six case study sites (acute trusts) to take part in this research.
Secondary analysis of existing national patient and staff experience data
Secondary analysis sources were the National Results from the 2014 NHS Adult Inpatient Survey,2 the National Results from the 2014 NHS Staff Survey,67 NHS Friends and Family Test (FFT) 2015 response rates,68 and Care Opinion web metrics (James Munro, Care Opinion, 2016, personal communication).
NHS Adult Inpatient Survey 2014
The NHS Adult Inpatient Survey 20142 contains 70 questions about patients’ most recent experiences of being an inpatient in hospital, as well as eight demographic questions. Thirty-one questions relating to four domains were identified as most relevant to this analysis. The four domains were:
- referral
- inpatient care
- discharge
- self-management.
The responses to these questions were analysed to create an average score for each trust in order to identify the top, middle and bottom thirds of the distribution with regard to patient experience. An overall average score was calculated as the mean of the four domain scores, and trusts were then ranked on this overall score.
The list of questions and details of the analytic approach are available in Report Supplementary Material 1.
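As an illustration of this scoring and ranking step, a minimal sketch is given below. The trust names, domain columns and values are hypothetical, and the sketch assumes question responses have already been converted to trust-level domain scores; the actual question selection and scoring rules are those described in Report Supplementary Material 1.

```python
# Illustrative sketch only: hypothetical data and column names, not the study's actual analysis.
import pandas as pd

# Trust-level mean scores for each of the four domains, one row per trust (hypothetical values).
scores = pd.DataFrame(
    {
        "trust": ["A", "B", "C", "D", "E", "F"],
        "referral": [78, 85, 70, 90, 66, 81],
        "inpatient_care": [74, 88, 69, 86, 71, 79],
        "discharge": [70, 82, 65, 84, 60, 77],
        "self_management": [68, 80, 62, 83, 58, 75],
    }
).set_index("trust")

# Overall score = mean of the four domain scores; trusts are then ranked on this overall score.
scores["overall"] = scores[["referral", "inpatient_care", "discharge", "self_management"]].mean(axis=1)
scores["rank"] = scores["overall"].rank(ascending=False)

# Split the ranked distribution into top, middle and bottom thirds.
scores["tertile"] = pd.qcut(scores["overall"], q=3, labels=["bottom", "middle", "top"])
print(scores.sort_values("overall", ascending=False))
```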
NHS Staff Survey 2014
Nine questions were analysed from the NHS Staff Survey 201467 relating to the following three domains of care, which were the most relevant to our research questions:
- training and development
- standards of care
- patient/service user feedback.
Analysis of the staff survey results differed slightly from the approach used with the inpatient survey. The list of questions and details of the analytic approach are available in Report Supplementary Material 1.
An inspection of the results led to the decision to remove one question that did not contribute to internal consistency.
Trust-level scores on the remaining questions were standardised, and the mean of the standardised items was calculated as the overall score for the trust. Trusts were then ranked on this overall score.
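A minimal sketch of this standardisation step is shown below. The question columns and values are hypothetical, and the sketch assumes a simple z-score standardisation across trusts, which is one plausible reading of ‘standardised’; the exact approach used is described in Report Supplementary Material 1.

```python
# Illustrative sketch only: hypothetical question columns and values, not the study's actual analysis.
import pandas as pd

staff = pd.DataFrame(
    {
        "trust": ["A", "B", "C", "D", "E", "F"],
        "q1_training": [3.4, 3.9, 3.1, 4.0, 3.3, 3.7],
        "q2_standards": [3.6, 4.1, 3.2, 4.2, 3.5, 3.8],
        "q3_feedback": [3.0, 3.8, 2.9, 3.9, 3.1, 3.5],
    }
).set_index("trust")

# Standardise each question across trusts (z-scores), take the mean of the
# standardised items as the overall trust score, then rank trusts on it.
z = (staff - staff.mean()) / staff.std()
overall = z.mean(axis=1).rename("overall_z")
ranking = overall.rank(ascending=False).rename("rank")
print(pd.concat([overall, ranking], axis=1).sort_values("overall_z", ascending=False))
```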
NHS Friends and Family Test response rates, August–October 2015
Trust-level response rates to the NHS FFT68 carried out on inpatient wards were reviewed. This provided useful contextual data for selecting case study sites by giving an indication of the importance that organisations place on collecting patient feedback when viewed alongside other measures. There are limitations to the use of these data for this purpose. In particular, the FFT can be administered via a range of methods, and the choice of methods can reasonably be expected to influence response rates. Nevertheless, we considered that the overall response rates would provide contextual evidence of the relative priority given to obtaining people’s feedback.
Data from August–October 2015 were downloaded from the NHS England FFT website. 68 For each trust, the average response rate to the FFT across the 3 months was calculated.
To inform case study site selection, trusts were sorted by average FFT response rate to identify the top, middle and bottom thirds of the distribution.
Care Opinion web metrics, January 2014–November 2015
Data from Care Opinion (James Munro, personal communication) were reviewed to provide contextual information on trusts’ use of and engagement with patient experience. Care Opinion extracted the following web metrics for each acute NHS trust from January 2014 to November 2015:
- number of stories posted
- number/percentage of stories read by the trust
- number/percentage of stories responded to by the trust
- number/percentage of stories that led to a change being planned
- number/percentage of stories that led to a change reported as made if the trust had subscribed to Care Opinion
- number of staff registered to use Care Opinion.
This information was incorporated into a spreadsheet alongside inpatient and staff survey scores and FFT average response rate.
Survey of patient experience leads
The aim of this survey was to provide national contextual information on the collection, use and impact of patient experience data on service provision. It also provided additional information for selecting case study sites.
A questionnaire, to be completed online by patient experience leads at acute NHS trusts, was developed. Eleven questions were included from an online staff survey in another NIHR-funded study examining real-time patient reported experiences of relational care at six NHS trusts in England (reference 13/07/39). 69 Additional questions were adapted from those included in the Beryl Institute patient experience benchmarking study. 70
The draft questionnaire was reviewed by the project team. At the request of the Health Services and Delivery Research (HSDR) board, we collaborated with researchers from King’s College London who were conducting a separate NIHR-funded study (reference 14/156/08) that was to include a national patient experience leads survey. 71 These researchers also reviewed the draft questionnaire. A few further adaptations were made to ensure that the researchers gathered the information they needed.
The final questionnaire (see Report Supplementary Material 2) had 33 questions, covering:
- types, methods and frequency of patient experience data collections
- reporting and use of patient experience data
- facilitators of and barriers to using patient experience data.
Sites were given the opportunity to leave their contact details if they were interested in taking part in the wider study.
All acute trusts, including specialist children’s trusts, were invited to take part in the survey (n = 153). The survey was administered using the Snap WebHost survey platform (www.snapsurveys.com).
The lead contact for the most recent (2015) national adult inpatient survey at each trust was sent a pre-approach e-mail in November 2015 with information about the survey and a link to the participant information sheet. This e-mail gave trusts the option to opt out from receiving the survey invitation and to notify the research team if they wanted the invitation e-mail to be sent to a colleague. Two days later, trust contacts were sent an e-mail containing the link to the online survey, with a reminder e-mail 1 week later. The fieldwork was kept open for 6 weeks.
Ethics approval
Approval for the online staff survey was granted from the Central University Research Ethics Committee in October 2015 (reference MS-IDREC-C1-2015-203). The Health Research Authority approved the process for all acute trusts centrally in October 2015, thereby removing the need for local research and development (R&D) approval, unless a site opted out within 35 days (Integrated Research Application System project ID 192500). One site opted out of receiving an e-mail invitation to take part in the survey.
Case study site selection
The core research team reviewed the secondary analysis and patient experience leads survey results and then met to discuss potential case study sites.
The team shortlisted 14 trusts based on four main factors:
- identifying a spread of trusts in the top, middle and bottom third based on the secondary analysis matrix
- having a diverse geographical spread
- having some trusts that scored more highly on the inpatient survey than the staff survey and vice versa (i.e. mixed)
- willingness of trusts to take part (through either the patient experience leads survey or previous correspondence).
Trusts currently in special measures were excluded.
From this shortlist, six trusts were identified as the first-choice sites. The six trusts varied in their inpatient and staff survey ranks, as well as in the extent to which they appeared to approach and respond to issues relating to patient experience, and fell into the following cells of the matrix (Table 1).
| Inpatient survey rank | Staff survey rank: Top | Staff survey rank: Middle | Staff survey rank: Bottom |
|---|---|---|---|
| Top | 2 | | |
| Middle | 1 | 1 | 1 |
| Bottom | 1 | | |
In February 2016, an e-mail was sent to the six trusts inviting them to participate in the study. Five of the six sites contacted agreed to take part. One trust did not have the capacity to become involved. The team therefore approached another trust with a similar profile from the secondary analysis, and this trust agreed to participate. Table 2 shows how the final six trusts fell into the inpatient/staff survey rank matrix.
| Inpatient survey rank | Staff survey rank: Top | Staff survey rank: Middle | Staff survey rank: Bottom |
|---|---|---|---|
| Top | 2 | | |
| Middle | 1 | 1 | |
| Bottom | 1 | 1 | |
Trusts were invited to nominate a ward. They were asked to propose a general medical ward whose staff were willing to take part. We did not specify criteria beyond this, leaving it to local contextualised knowledge. Although our initial aim was to carry out the research on general medical wards, through discussions with sites it became clear that there was an appetite for specialist and medical assessment units to be given the opportunity to participate. The research team considered the benefits of and potential issues with each ward selection in turn, in consultation with the lay co-investigator (see Chapter 3), and in all cases decided that these wards should not be excluded. Indeed, the research team agreed that the different settings and populations might reveal different challenges to the use of patient experience data. The mix of reasons why specific wards were put forward forms part of our findings in Chapter 5.
An anonymised summary description of each site and its improvement work is appended (see Appendix 1).
Phase 2: case studies in six medical wards
Once six front-line medical wards had been selected and R&D approval had been obtained, the study moved to phase 2, a mixed-methods case study approach. This had several steps:
- a baseline patient experience survey and interviews in each site
- preparation of resources and formation of a learning community to support front-line teams to plan patient-centred QI work
- qualitative ethnographic observation of the front-line teams’ QI work
- follow-up patient experience survey and interviews.
We describe each of these steps below and, where appropriate, reflect on how our methods evolved and adapted in real-world NHS settings.
Funding of £6000 was given to each NHS trust to allow staff to take part in research-related activities (such as attending the learning community and being interviewed). The cost of time spent on QI work was expected to be borne by the trust as part of routine commitment to improving patient experience.
Ethics and research and development approval
Ethics and R&D permissions were obtained through the new Health Research Authority combined approvals process (REC reference 16/NE/0071). This is intended to eliminate the need for site-specific approval processes by individual trusts. However, as the system was new, some trusts continued to require separate, additional information after central approval had been granted. This delayed the administration of the baseline survey, and in one case trust agreement came through only a matter of days before the front-line teams were due to take part in the first learning community (see Resource book and learning community).
Research Passports were obtained by the three study ethnographers. The ethnographers then applied individually for letters of access from each trust where they would be working on site. Trust requirements were varied; some required re-presentation in person of all the original documentation already verified by the trust issuing the Research Passport, whereas others did not. Again, this slowed the process of gaining site access for fieldwork. All necessary site access documents were obtained by August 2016, a period of approximately 5 months from first applying for Research Passports.
Patient experience survey and interviews
Case studies began with a baseline postal survey of patients discharged from the participating wards over a 4-month period (January–April 2016). This was supplemented by a small number of telephone interviews with patients.
A post-intervention survey and interviews were carried out with patients discharged during March–May 2017. The main aim of the pre- and post-intervention surveys was to see if any measurable changes in patient experience were found following the teams’ QI work, and to provide quantification of people’s experiences on the wards to accompany the mainly qualitative findings from other elements of the research. As well as using the results as part of our assessment of change, we chose to present the findings from both the pre- and post-intervention surveys to staff at each of the case study sites. This was originally intended to help them understand their current position and the impact of their work. It was not directly intended to provide them with information that might contribute to directing improvement, although in fact several teams found it helpful in this regard.
Development of the questionnaire
The questionnaire used for the baseline and post-intervention surveys focused on the experience of four areas of care:
- referral to service
- inpatient care
- discharge
- support for self-management.
A database of questions was compiled and mapped against the four areas. Candidate questions were selected from extensively tested, reputable sources such as the NHS Adult Inpatient Survey, the GP Patient Survey, the National Cancer Patient Experience Survey, and previously developed questions about self-management and demographic indicators. A longlist of 130 questions was reduced to 44 by three researchers independently assessing and then discussing their suitability for inclusion. Face validity of the draft instrument was assessed, with input from lay and staff co-investigators (JB and MG), which led to a small number of content changes.
Further detail on this process can be found in Appendix 2. See Appendix 3 for the survey and Appendix 4 for further methodological reflection.
Covering letters to accompany the questionnaire (for each of the three mailings) were also designed (see www.journalslibrary.nihr.ac.uk/programmes/hsdr/1415606/#/; accessed 17 April 2019).
Survey sampling
The intention was for ward teams to draw a census of all patients discharged in a given period, to whom the questionnaire would be sent. However, activity data from two of the sites showed that the actual number of discharges was much greater than anticipated, so a sample of patients was drawn at these sites rather than a census. The sampling months for the baseline survey were January–March 2016 (one site also included patients discharged during April 2016 due to a smaller sample size). For the post-intervention survey, the sampling months were March–May 2017.
Survey fieldwork
The survey was sent to each site by post (Table 3). Fieldwork lasted for 12 weeks, during which two reminder mailings were sent to non-responders. Before each mailing, sites were asked to carry out a DBS (Demographic Batch Service) check for any patient deaths. An opt-out approach was used, stating that participation in the survey was completely voluntary. The low response numbers and rates in site 3 in particular are attributable partly to the fact that a high proportion of patients on the ward were living with dementia.
| Site | Pre-intervention: surveys mailed | Pre-intervention: respondents | Pre-intervention: adjusted response rate (%) | Post-intervention: surveys mailed | Post-intervention: respondents | Post-intervention: adjusted response rate (%) |
|---|---|---|---|---|---|---|
| 1 | 250 | 101 | 41 | 250 | 112 | 47 |
| 2 | 223 | 88 | 42 | 227 | 86 | 38 |
| 3 | 110 | 19 | 27 | 250 | 46 | 19 |
| 4 | 250 | 75 | 31 | 250 | 82 | 34 |
| 5 | 120 | 36 | 30 | 171 | 55 | 34 |
| 6 | 181 | 63 | 37 | 170 | 49 | 29 |
| Total | 1134 | 382 | 35 | 1318 | 430 | 34 |
Patient interviews
The survey was accompanied by in-depth interviews with patients and/or family or carers to gather more detailed information on their experiences. This was not intended to be representative of the overall patient population, but rather to add richer descriptive information alongside the quantitative data set (Table 4).
| Site | Pre-intervention interviews | Post-intervention interviews |
|---|---|---|
| 1 | 7 | 6 |
| 2 | 6 | 8 |
| 3 | 3 | 1 |
| 4 | 8 | 8 |
| 5 | 5 | 6 |
| 6 | 8 | 8 |
| Total | 37 | 37 |
The aim was to carry out eight telephone/face-to-face interviews per ward. A question was included in the survey allowing participants to indicate if they would be happy to be contacted to take part in a follow-up interview. Where possible, the researcher aimed to get a mix of interview participants in terms of their demographic characteristics. In one site, where cognitive impairment and ability to consent were an issue, numbers were still smaller than anticipated.
The topic guide used for the interviews followed the domains presented in the patient experience questionnaire, with probes focusing on referral, inpatient care, discharge and self-management (see Appendix 5).
The interviews were audio-recorded, transcribed, and analysed using qualitative coding software (NVivo 10, QSR International, Warrington, UK). Framework analysis was used to identify, analyse and report themes and patterns within the responses from each ward. The findings from these interviews are summarised in the case descriptions (see Appendix 1).
Analysis and reporting to sites
Raw data from the survey were entered into Microsoft Excel® (Microsoft Corporation, Redmond, WA, USA) and then transferred to SPSS version 23 (IBM Corporation, Armonk, NY, USA) for cleaning (including the correction of any wrong response codes and the application of any question routing rules) and analysis. The results for each ward team were analysed separately, with each team receiving a report of findings from the baseline survey and interviews, which included the following:
- Response rate. An ‘adjusted base’ was used to calculate the response rate. The base excluded questionnaires that were returned undelivered, deceased patients and patients who were ineligible to complete the survey.
- Demographic profile of respondents. This included the age and sex distribution of respondents and their ethnicity.
- Infographic showing key results in a simplified pictorial format.
- Frequency tables. Tables of frequency counts and percentages were created for each question.
- Report of interview findings.
A similar report was produced following the post-intervention survey and interviews. This report, in addition to the content noted above, included a comparison between the baseline and post-intervention survey results. Z-tests were used to determine whether or not there had been any statistically significant changes between the two surveys at a confidence level of 0.95. Each z-test is equivalent to a chi-squared test on a subset of the whole data; using this test allowed the comparison of individual pairs of proportions.
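As an illustration of the comparison described above, the sketch below runs a standard two-proportion z-test on hypothetical counts. The report does not specify the exact implementation used; proportions_ztest from statsmodels is one common way to compute it.

```python
# Illustrative sketch only: hypothetical counts, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

# e.g. respondents answering 'yes, definitely' to one question in each survey wave
counts = [55, 70]    # positive responses: baseline, post-intervention
totals = [101, 112]  # respondents answering the question: baseline, post-intervention

z_stat, p_value = proportions_ztest(count=counts, nobs=totals)
significant = p_value < 0.05  # 0.95 confidence level, two-sided
print(f"z = {z_stat:.2f}, p = {p_value:.3f}, statistically significant change: {significant}")
```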
A change in the profile of respondents in the post-intervention survey compared with the baseline survey may have affected the results, as we know that people tend to answer questions in different ways depending on certain characteristics. Our analysis of the survey data (across the six sites) showed that the sex of a respondent affected the results, with women reporting less positive experiences than men. If the post-intervention respondent sample had more male inpatients than the baseline survey did, then this could potentially lead to a trust’s results appearing better than if they had a higher proportion of female patients. To account for this, results were standardised by the sex of respondents to enable a more accurate comparison of results between the baseline and post-intervention surveys.
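The sketch below illustrates one simple form of direct standardisation by sex, applied to hypothetical percentages. The study does not spell out its exact weighting scheme, so the fixed reference mix here is an assumption about the general approach rather than the method actually used.

```python
# Illustrative sketch only: hypothetical figures and an assumed reference mix, not the study's data.
# Direct standardisation: weight each sex-specific result by a fixed reference sex distribution,
# so baseline and post-intervention results stay comparable even if the sex mix of respondents changed.

reference_mix = {"male": 0.5, "female": 0.5}  # assumed common reference distribution

def standardise(pct_positive_by_sex: dict) -> float:
    """Sex-standardised percentage positive, using the reference mix."""
    return sum(reference_mix[sex] * pct for sex, pct in pct_positive_by_sex.items())

baseline = {"male": 82.0, "female": 74.0}           # % positive by sex, baseline survey
post_intervention = {"male": 84.0, "female": 78.0}  # % positive by sex, post-intervention survey

print("baseline (standardised):", standardise(baseline))
print("post-intervention (standardised):", standardise(post_intervention))
```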
Alongside the mainly numeric data from the surveys, we produced accessible infographics to communicate key findings to staff with less experience of quantitative data. To select results to include in the infographic, the questions were scored to identify areas about which patients reported their most positive and least positive experiences.
The infographic showed the percentage figures for those questions with the highest scores (green), indicating where the ward was doing well, and for those with lower scores (red), where the ward had the most room for improvement (see Chapter 6, Making sense of patient experience ‘data’: where do ideas for change come from?, for an illustration of the infographic template).
Resource book and learning community
A ‘resource book’ was prepared for distribution to each team. This brought together short, accessible descriptions of different types of patient experience data and how they might be used, as well as brief summaries of key QI approaches. The resource book was developed by the principal investigator, initially using desk research to assemble information about different types of data, QI approaches and available evidence to support these for use in improving patient experience. Drafts were shared with members of the co-investigator team, as well as with the director of Care Opinion, and improvement advisers from The Point of Care Foundation and the NHS England Patient Experience Team, who commented on the accessibility of content and provided additional material. The content of the resource book has now evolved into an online guide hosted by The Point of Care Foundation (www.pointofcarefoundation.org.uk/resource/using-patient-experience-for-improvement/; accessed 8 May 2019) (see Phase 3: producing guidance for the NHS).
This material also formed the basis for the first ‘learning community’ event in July 2016. This was a 2-day residential workshop, facilitated by Professors Andrée Le May (chairperson of the NIHR Journals Library Editorial Group) and John Gabbay, to help prepare and support the front-line teams in planning their QI ideas. This was also attended by the lay panel, staff from NHS England, and speakers from Care Opinion and The Point of Care Foundation.
The original plan was to give all teams the results from the baseline survey from their ward by the time of the first learning community. Unfortunately, delays in obtaining site R&D approval meant that this was not possible for every site; in one case, site approval came through only a few days before the learning community took place.
The exact size and composition of teams was determined by each site, but attendance at learning community meetings was limited to up to five people from both clinical and non-clinical backgrounds. Sites were encouraged to focus primarily on front-line members of ward staff, and to bring a patient team member if possible. How teams were actually constituted forms part of our findings (see Chapter 5, Overview of phase 2 findings, and Chapter 8, The effect of team-based capital on quality improvement projects in NHS settings).
The first of the two days introduced the importance of understanding patient experience and concentrated on types of data and how to use them. Lay panel members gave a presentation on themes identified from their collective experiences, which had also been used to prepare a trigger film available for use in the sites if they wished. Members of the research team organised a ‘marketplace’, with stalls on survey data, narratives and interviews, observation, and online feedback/complaints. Lay panel members were involved in each stall. Front-line staff moved around stalls, as they chose, to learn about different options available to them and the strengths and limitations of both quantitative and qualitative data. The second day focused on QI techniques, particularly patient-centred approaches such as EBCD and PFCC. An overview of what is known about achieving organisational change and the importance of stakeholder mapping was also given. Before departing, teams had time in their own groups to plan their next steps.
Front-line quality improvement work and ongoing support
Teams then went back to their wards to decide what patient experience data to use, to design and implement their own QI projects, and to decide who else they needed to involve.
Two further learning community events were held: one at the mid-point of the fieldwork period in December 2016 and one at the end in July 2017. The purpose of the mid-point event was to enable teams to share progress and problems and to discuss their next steps. The intention was to offer supportive, formative input, with feedback from other teams and members of the research team. Emerging findings from the ethnographic study were also shared (see Focused team ethnography).
The third and final learning community meeting gave teams a chance to report on final outcomes, reflect on learning and help the research team shape the form and content of the guidance to be disseminated across the NHS. Senior managers from each trust were invited to hear presentations from the ward teams in the afternoon as a means of both celebration and dissemination; at least one senior manager from each trust attended, although uptake was not as high as had been hoped.
Teams were offered ongoing improvement advice and support during the fieldwork period by a senior adviser from the NHS England Patient Experience Team. It had also been planned to offer a monthly webinar providing improvement advice and enabling the exchange of ideas between teams. However, a combination of difficulty in finding the time and the technical resources for team members to join, together with the growing pressures of winter 2016–17, meant that the webinar was abandoned after two sessions. Other means of keeping in touch, such as a Facebook page (Facebook, Inc., Menlo Park, CA, USA), were tried but did not prove useful to the front-line teams.
Focused team ethnography
In this section we describe in detail the qualitative component of the case study fieldwork. The QI work of the front-line teams was studied by a team of three researchers using ethnographic methods. Data collection included observational field notes, notes of informal conversation, semistructured interviews and continued documentary analysis. Primary responsibility for fieldwork in each site was allocated to one of the three ethnographers: one full-time (responsible for three sites) and two part-time (covering one and two sites, respectively).
‘Team’ and ‘focused’ ethnography
Team ethnography has its origins in late nineteenth-century expeditions to non-western locations. 72 Embedded within multidisciplinary academic teams (including psychologists, physiologists, mathematicians and geographers) were social anthropologists whose role was to obtain an ‘ethnographic’ perspective of the cultures encountered. Ethnography has been increasingly used for the applied study of contemporary organisations; it is not uncommon for one or several ethnographers to be embedded within the field of inquiry as part of wider interdisciplinary research. Jarzabkowski et al. ,73 for example, describe a team of five ethnographers conducting simultaneous fieldwork in 25 global reinsurance organisations across 15 countries. Thus, ‘team ethnography’ can refer both to the presence of an ethnographer in an interdisciplinary team, and to a team of ethnographers researching different case studies within one project.
In the US-PEx study, the full co-investigator team included a range of academic researchers, clinicians and non-academic partners (see www.journalslibrary.nihr.ac.uk/programmes/hsdr/1415606/#/; accessed 17 April 2019). Their role did not include fieldwork at any of the sites, but they provided discussion and reflection whenever ‘tales of the field’74 were summarised and discussed at research team meetings. Thus, the study consisted of a team ethnography that contained within it a core team of field-focused ethnographers. The ethnographers held regular meetings together and with the principal investigator to discuss fieldwork practicalities, review emerging findings, plan analysis and debate interpretations.
Team ethnography, in this second sense of a team of ethnographers working simultaneously, undoubtedly has challenges, given that no single ethnographer has detailed familiarity with all the sites. In this case, the original plan to recruit two full-time ethnographers was adapted to one full-time and two part-time team members, so the number of allocated sites differed. Thus, while one ethnographer drew on comparisons across three cases, another worked on one site in detail, but also conducted some fieldwork and analysis for another site. Such disparity is not unknown; for example, in the Atherton et al. 75 study, two ethnographers studied three sites each, while a third studied two sites. In the study by Jarzabkowski et al. ,73 small amounts of ethnographic fieldwork were undertaken by the principal investigator and co-investigator, alongside the main ethnographer. However, it does require deliberate strategies to overcome the limitations and ensure that, as far as possible, the analysis is shared and synthesised, rather than being the production of six disconnected ethnographies. This involves a shift in how individual ethnographers may have worked in previous settings. Strategies adopted in US-PEx included frequent, repeated meetings to tell each other the emerging story in different sites; standardising data collection tools (such as interview guides and observation pro formas); and agreeing a shared coding framework for analysis. We also worked regularly with our lay panel to sense-check our emerging interpretations, and used the second and third learning community events similarly with the ward teams to check that we were reflecting their experiences. Despite this, observation in the field remains a uniquely individual act. Data from ethnographic field notes and coding notes, included in Chapters 6–8, inevitably represent one person’s interpretation of a given observation.
At the same time, team ethnography brings advantages. It can bring multiple lenses and professional experience of different settings to bear; in this case, the three ethnographers came from different disciplinary traditions (sociology, anthropology and psychology). It also provides a forum for academic collaboration, in which observations may be shared and interpretations challenged and/or confronted. 73 Such team-based reflexivity establishes a collective sense-making process that differs from those normally associated with so-called ‘lone-ranger’ ethnographers. Schlesinger et al. 76 describe this as a positive model ‘because it sets up a deliberative process that involves testing the work as it is being done’. They add that:
. . . a collaborative approach to analysis leads to a deeper shared knowledge of the field and a more fluid and less temporally segmented process of knowledge production.
Schlesinger et al. 76
The wider research team, which included experts in different types of data and methods, also regularly reflected on emerging interpretations with the core ethnography team, and directed the gaze of future ethnographic fieldwork. For example, at a full team meeting 3 months into the fieldwork period (September 2016), the ethnographers reported that, at two sites, the teams appeared to be using the project as a way to bring about desired changes that they had identified before becoming involved in the project, based both on experience accumulated through their professional lives and on knowledge of the workings of their particular wards. Discussion among the full team identified the importance of keeping an open mind about what was meant by ‘using patient experience data’.
At another full team meeting, in November 2016, observations on the extent to which the ward teams sought input from patient experience officers, and on the level of seniority of these staff, were discussed. It was agreed that, although the emphasis remained on empowering front-line staff, teams should be actively encouraged to seek help from their organisation. The ethnographers subsequently communicated this to the teams as part of formative feedback and were attuned to this in their work going forwards. At the same meeting, there was further evolving conversation about what constitutes ‘patient experience data’ and where they come from, including both direct and indirect sources (such as comments from the US-PEx lay panel and ideas from other case study sites). This became an important focus for the ethnographers.
Schlesinger et al. 76 also note that a team-based approach can enable a greater range, volume and complexity of work to be undertaken, which may be crucial within the limited timespan of available funding. This relates also to the nature of ‘focused ethnography’. Focused ethnography provides a rapid and condensed alternative to the time-consuming long-term engagements in conventional ethnography. 77–79 It aims to offer outputs within relatively short time frames to inform the immediate applied needs of organisations. 75 Table 5 summarises the key differences between conventional and focused ethnography.
| Conventional ethnography | Focused ethnography |
| --- | --- |
| Long-term field visits | Short-term field visits |
| Experientially intensive | Data/analysis intensity |
| Time extensity | Time intensity |
| Writing | Recording |
| Solitary data collection and analysis | Data session groups |
| Open | Focused |
| Participant role | Field-observer role |
| Insider knowledge | Background knowledge |
In addition to within-project discussion, the principal investigator and ethnographers took part in a learning set with investigators from other studies, funded under the same call, that used ethnographic and qualitative methods. The research team also overlaps with that of the INQUIRE (Improving NHS Quality Using Internet Ratings and Experiences) project (14/04/48), of which John Powell is the principal investigator and Louise Locock and Sue Ziebland are co-investigators. This cross-fertilisation of ideas has further strengthened the analysis.
Whereas in some medical studies ‘observational research’ may refer to a study design that is not experimental (such as cohort studies), here we use observation to mean a set of techniques used by anthropologists and sociologists to study the everyday life of a group of people. Ethnographic observation entails the researcher being present with those under study to observe, record and understand the social structure and local culture that they inhabit. It is distinguished by being a holistic approach to research that also involves interviews and the interpretation of material culture.
Pre-fieldwork case descriptions
Before commencing fieldwork, the ethnographers conducted desk-based research on the respective cases allocated to them. This aimed to provide preliminary ‘case descriptions’ of the six case study sites, and included reviews of relevant grey literature [such as Care Quality Commission (CQC) reports], relevant online feedback from sites such as Care Opinion and NHS Choices, and any articles/news items relevant to each trust and ward involved in the study. The process of gathering this information was maintained throughout fieldwork and beyond its completion.
Data collection
During the fieldwork period from July 2016 to September 2017 the team of ethnographers interviewed core front-line team members, wider team members from the ward (including any patients involved) and senior managers (including directors of patient experience and directors of nursing and/or quality) in each site. Front-line team members were interviewed at several time points to capture their reflections and experiences: at the beginning of the project, during their QI work, and at the end. Senior managers were interviewed towards the end of the study to capture their reflections on the progress of the front-line teams and whether or not, and how, this had had an impact on the wider trust (Table 6). Interview guides were developed for different groups (see Appendix 5 for sample guides).
| Site | Core team 1 | Core team 2 | Core team 3 | Senior level | Other | Total interviews |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 4 | 0 | 4 | 8 | 2 | 18 |
| 2 | 6 | 2 | 5 | 5 | 0 | 18 |
| 3 | 7 | 5 | 3 | 3 | 0 | 18 |
| 4 | 5 | 2 | 2 | 2 | 0 | 11 |
| 5 | 5 | 0 | 3 | 5 | 0 | 13 |
| 6 | 5 | 2 | 4 | 3 | 3 | 17 |
Non-participant observations were carried out of QI meetings, conversations and workshops wherever possible. The exact nature and amount of observation varied by site, depending on its programme of work (see Table 7). Observations were guided by an agreed pro forma (see Appendix 6). As well as written field notes and individual notes, data collected included photographs (e.g. of comments boards or information displays prepared by front-line teams as part of their work).
| Site | Number of visits | Total hours of observation |
| --- | --- | --- |
| 1 | 7 | 45 |
| 2 | 8 | 48 |
| 3 | 8 | 54 |
| 4 | 12 | 58 |
| 5 | 8 | 48 |
| 6 | 8 | 46 |
| Total | 51 | 299 |
Front-line teams also provided information on changes made in each site and (when possible) data on staffing levels, staff sickness and vacancy rates, as part of describing the context in the winter of 2016–17. However, as noted in the findings, there were significant challenges in obtaining this information.
All participants were given an information sheet and gave written consent (see Appendix 7 for examples).
Analysis
Knoblauch80 describes the use of ‘data sessions’ conducted within groups as a further defining feature of focused ethnography, in which data are viewed, discussed and interpreted by multiple actors rather than by an individual working alone. Knoblauch describes the benefit of such collective sessions as a procedure that:
. . . opens data socially to other perspectives. In order to support this opening, data session groups are helpful, the more they are socially and culturally mixed.
Knoblauch. 80 This work is licensed under a Creative Commons Attribution 4.0 International License
As noted above, the ethnographers met regularly as a group and with the principal investigator to interrogate each other’s data and to identify the similarities and challenges raised in each other’s observations. Emerging findings were also shared iteratively with the wider research team, the lay panel and the Study Steering Committee, and at the second and third learning communities. Each of these forums became an integral part of the ongoing analysis, as ethnographic ‘tales from the field’ were shared collaboratively and opened to intersubjective interpretation by several socially and culturally mixed audiences.
In the final phase of all fieldwork, a series of ethnographer data sessions focused on the production of a coding framework (using NVivo 10) to co-ordinate the way in which data were analysed within the team. These data included all semistructured interview transcripts, field-based observations and individual field notes. Thirteen domains were identified by the three ethnographers as key areas, including, for example, ‘context’, ‘organisational culture and practices’ and ‘quality improvement’. Thematic analysis then established a wider framework that consisted of 58 separate codes across all 13 domains. Data were then coded and analysed using this framework (see Appendix 8).
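As a minimal, hypothetical sketch of how a hierarchical coding framework of this kind can be represented and tallied outside a dedicated package such as NVivo (the code names and coded segments below are invented; only the domain names echo the examples given in the text), one might write:

```python
# Minimal sketch of a hierarchical coding framework, for illustration only.
# Domain names echo examples given in the text; the codes and segments are invented.
from collections import Counter

coding_framework = {
    "context": ["winter pressures", "staffing"],
    "organisational culture and practices": ["leadership support", "ward routines"],
    "quality improvement": ["use of survey data", "small wins"],
}

# Hypothetical coded segments: (document id, domain, code)
coded_segments = [
    ("site1_interview_03", "quality improvement", "use of survey data"),
    ("site1_fieldnotes_07", "context", "winter pressures"),
    ("site4_interview_01", "quality improvement", "small wins"),
]

# Tally how often each domain/code pair is applied across documents
tally = Counter((domain, code) for _, domain, code in coded_segments)
for (domain, code), count in sorted(tally.items()):
    assert code in coding_framework[domain]  # every applied code must belong to the framework
    print(f"{domain} / {code}: {count}")
```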
Alongside this thematic analysis, one ethnographer (SP) devised a visual mapping method to help record and make sense of events in the three sites in which he was undertaking fieldwork. These ethnographic process maps used colour coding and symbols along a timeline to capture both the various workstreams undertaken by front-line teams and the key influences and events (both positive and negative). This provided an at-a-glance way of visualising the findings in each site, complementing the case descriptions and coded data and contributing to data analysis sessions. The approach was developed and refined by the team in the data sessions and a dedicated ‘mapping’ meeting. All three ethnographers produced versions of the maps for their sites (see Appendix 8 for a more detailed account of the evolution of this visual mapping technique and Appendix 9 for examples).
Realist informed evaluation
This study was not designed as a pure realist evaluation, but rather aimed to be realist-informed. Alongside the holistic case study ethnography and ‘thick’ case descriptions, therefore, the team considered realist explanations and candidate context–mechanism–outcome configurations emerging from the findings. Workshops on case study analysis and realist evaluation were held with input from the Saïd Business School, University of Oxford, and the RAMESES II (Realist and Meta-narrative Evidence Syntheses – Evolving Standards) project (an HSDR programme-funded study to develop methodological standards for realist evaluation). 81
The chief output of this is a consideration of ‘team capital’ as an explanatory mechanism, discussed in Chapter 8, The effect of team-based capital on quality improvement projects in NHS settings. Chapter 7, on staff experience, also explores possible mechanisms through which staff experience may affect patient experience.
Phase 3: producing guidance for the NHS
Phase 3 was the preparation of an online toolkit for NHS staff on understanding and using patient experience data for QI. During the lifetime of the study, research into stakeholder perspectives on toolkits from health-care research was being undertaken as part of a Doctor of Philosophy (PhD) project at the University of Manchester,82 and its early findings informed the methods adopted for phase 3. In particular, the findings suggested that toolkits were more likely to have an impact if they were produced and branded by a recognised source. As a result, the decision was taken not to produce the toolkit in-house, but to commission The Point of Care Foundation to produce and host it, alongside their existing and widely used toolkits on EBCD and PFCC. The core of the toolkit was the content of the resource book already produced for sites in phase 2. This was revised and edited to include findings from the US-PEx study, and illustrative films from staff at some of the participating sites. Those interviewed for the toolkit gave separate consent.
Chapter 3 Patient and public panel methods and reflections
I believe that the best patient experience comes from combining the skills and knowledge of clinical professionals with the lived experience of patients and carers. The US-PEx study was looking at that approach.
Lay panel member
The study was advised throughout by a lay panel, chaired by the lay co-investigator. This chapter describes the composition and activities of the panel, reflects on its role and reports some of the panel’s reactions to key findings.
Who were the lay panel and how were they recruited?
We advertised for lay members through a range of avenues to include as diverse a mix as possible in terms of age, ethnicity, geography, health condition and type of experience of inpatient care. We particularly encouraged people who had been involved in previous service improvement work to apply, as well as those with no improvement experience. As with many patient and public involvement (PPI) groups, the panel lacked members aged under 30 years; however, given the subject of the study, we felt that this was not of particular concern, as older people tend to have more experience of hospital care.
The lay panel comprised 10 lay advisors (see www.journalslibrary.nihr.ac.uk/programmes/hsdr/1415606/#/; accessed 17 April 2019), all of whom had experience of NHS inpatient care (including surgical) either as a patient or as a carer. It was expected that the motivation to become part of the panel would be from having had negative experiences; however, this was not universally the case. One panel member said of his time in hospital:
My experience was a very positive one, but I was motivated to get more involved in measures aimed at improving health care and, in particular, the patient experience.
This positive attitude was reflected by others in the group. However, there were, as expected, many stories of poor experiences while in hospital. Nonetheless, the consensus was a positive one, motivated by a wish to support service improvement, rather than to complain, irrespective of whether the experience was positive or negative.
I didn’t know anything about research at the start, but I knew a lot about being a patient and this is what I tried to put forward.
Lay panel member
Who was the chairperson and what was her role?
Lay co-investigator Jennifer Bostock chaired the panel, feeding back PPI matters directly to the principal investigator and/or to the wider research team at project meetings. Jennifer Bostock was involved from the outset, contributing to the study design, drafting the funding application and reviewing the study protocol. At the project start, she reviewed all study documents and consent procedures and advised on case study data collection methods to ensure that the study was ethical and acceptable. She led the appointment of the panel and strove to have a mix of those experienced in research and those new to it in order to ensure a balanced perspective.
What was our role in the study?
The initial recruitment of lay panel members was on the basis of the following role description.
- Advise and guide the research team and ensure the study addresses the right issues.
- Contribute our perspective to the first learning community.
- Attend future 1-day learning community meetings.
- Provide ideas about what we might include in a trigger film and comment on a draft version.
- Help provide a sense-check on emerging findings.
- Other roles to be developed in discussion.
Relationship with the study team: lay panel view
I was surprised that so many of us were recruited and how well our input was listened to. I felt this demonstrated a real motivation of the team to listen to and enact on the patient voice.
Lay panel member
The above sentiment was generally shared by all panel members, with another remarking:
The relationship between the lay panel and the project staff was strong. Personally I felt listened to and any comments taken seriously. I was impressed by the thoughtful feedback we received from the project leader and a willingness to really engage with the lay panel.
To sum up, one panellist simply said:
It was all beautifully run . . . and it was a privilege to be involved.
Over the course of the study, some tensions emerged, particularly around distinguishing between the panel’s role in advising the research and a wish to advise the front-line teams directly (see The temptation to steer the ward teams). These tensions were addressed through discussion with the principal investigator.
Practicalities of running the lay panel
The panel met on a regular basis, together with members of the study team and the principal investigator. They also contacted each other and the principal investigator by e-mail between meetings, and were supported by lay chairperson Jennifer Bostock and the project co-ordinator. Honoraria, travel and subsistence were offered. Jennifer Bostock provided ongoing support to less experienced and/or less confident advisers to ensure that they were all able to contribute effectively.
The size of the panel
The panel may have seemed dauntingly large to the clinical teams and an agreed approach to our work and smaller panel may have been useful.
Lay panel member
The lay panel was larger than the original plan of eight members, and although this demonstrates enthusiasm from members of the public for studies of this kind, it did present some challenges. As one member said:
On reflection, perhaps the lay panel was too large . . .? A panel of perhaps 4–5 would have been adequate.
The breadth and diversity of experience and the passion for the subject meant that, at times, the size of the panel constrained its ability to reach consensus, and during the learning communities the panel size may have felt ‘daunting’ to the front-line teams. However, although the panel was a group working towards a common goal, it would be unrealistic in a study of this nature to expect total agreement, and that reflects the individual differences, experiences and attitudes within the membership. It must be recognised that a group of individuals brought together because of significant and often extreme experiences will, at some level, always remain individuals.
Advising the US-PEx study team
The panel advised the study team throughout the research; Box 1 lists some of the key impacts the principal investigator felt that the panel had had on the study. The list was discussed and added to at the last panel meeting.
- Identification of key themes and ‘touchpoints’ for the trigger film, and validation of the choice of clips we made on the basis of your direction (see Appendix 10).
- The work on the trigger film continues to have a wider ‘afterlife’:
  - The set of themes identified is now being used to inform the development of a short play for The Point of Care Foundation annual patient experience conference.
  - The trigger film now features permanently on the Healthtalk website and is available for any NHS team to use freely.
  - It is signposted through the online resource we are developing with The Point of Care Foundation, and will be linked into their new training course for patient experience officers.
  - It is also being used, alongside our existing intensive care trigger film, by another of the projects funded under the same call as US-PEx (PEARL), as part of a co-design process to develop a reflective learning framework for hospital staff working in acute medicine and intensive care [URL: https://njl-admin.nihr.ac.uk/document/download/2007846 (accessed 26 June 2019)].
- Contributing to and commenting on the resource book and poster sent to ward teams.
- Developing the ‘quick wins’ and ‘tips for involving patients’ fliers distributed at the second learning community.
- Raising concerns about the apparent lack of senior management engagement, sometimes poor relationships with their in-house patient experience officers, and, in some cases, the small-scale ambitions of the front-line teams. As a panel, you have helped maintain a focus on context that the ethnographers have explored in depth.
- Related to low-level ambitions for their improvement activities, an important insight about the extent to which teams have focused on ‘things, not behaviours’ – when you know your own experiences have been coloured more by relationships and communication.
- At the same time, recognition of how small ambitions may actually be quite a major step for some wards that have been particularly defensive and stressed, and valuing the apparently small wins that have led to greater enthusiasm in the teams. Some of you have expressed vehement support for the teams; a concern to emphasise support rather than criticism as the most effective way to motivate them; and profound gratitude for excellent patient-centred care in many cases.
- Insights from your own experience of how feedback is received, notably the ‘do you want to complain?’ response when you just want to make a helpful suggestion. When mentioned at various presentations over the last 6 months, this has resonated particularly with audiences.
- Willingness to reflect with me on whether we are too focused on formal ‘data’, or whether we can find a more comprehensive term such as ‘intelligence’, and whether/how the NHS can make use of informal interactions, passing comments, incidental observations . . . but also maintaining a healthy scepticism about the extent to which staff wisdom/observation can be a reliable guide to action. The frustrations (as well as good experiences) some of you have expressed about your care are an important reminder, including around not being listened to and staff not communicating with each other or with you.
- Making me think carefully about the value and impact of quantitative data, particularly given limited resources for everyone in the NHS to spend time interviewing or observing.
The following examples are taken from the notes one panellist made:
We discussed the powerfulness of a trigger film, composed of interviews with patients discussing how they felt about their care as in-patients, and commended its use to be shown to clinical staff. We discussed the pros and cons for different timings for doing such interviews, bearing in mind the fear some patients may have of criticizing when they are still in-patients.
We discussed aspects of care that were important to us, including admission and discharge procedures and the helpfulness of information being written down as well as verbal.
We discussed ways in which clinical staff might be able to encourage ex-patients to volunteer to give feedback, and what was the best timing for this from the patients’ point of view.
How did our input improve the study?
In addition to the more tangible ways in which the lay panel contributed to the study, the principal investigator, when asked this question, responded:
Helping me reflect on/make sense of the post-project survey findings and helping plan feedback to sites.
Endorsing our evolving ideas about the role of staff experience as a mediator for patient experience.
Helping us sense-check the idea of ‘team capital’ as an important factor in whether staff can or feel they can act on patient experience data – do they have the time, resources, networks and authority to take action?
The impact of the lay chairperson
An example of the impact of Jennifer Bostock’s contribution came early in discussions with the research team about the types of wards that might be recruited. Jennifer spoke of her personal experience on an emergency assessment ward, particularly the physical and emotional challenges patients face on such wards. She highlighted her own experience of wanting to discharge herself because of the emotional impact of life on the ward. The research team were at first hesitant about including this kind of ward, until Jennifer asked them to consider that patient experience data were just as crucial on these wards as on any other, and that staff could have just as much impact on care and patients there as anywhere else. Jennifer was also committed to having an urban hospital as a case study, where patients and staff were from diverse and often transient backgrounds. Although the transient nature of both patients and staff was seen as potentially problematic for the research, the result of Jennifer’s argument was that the study recruited two emergency assessment wards, one of which was in an inner-city hospital.
Jennifer also shaped the format of the first learning community by endorsing the marketplace design (also endorsed by a lay member of the Study Steering Committee). The principal investigator also took on board a criticism that there was ‘too much talk and chalk’, and the resulting changes made for a much better event.
The learning communities
Even just us patients being there, hearing us speak, contribute . . . sort of showed to the hospital teams that we are more than just patients. In a way we were seen more like people, not in hospital beds, but at work the same as them.
Lay panel member
Key events in this study were the learning communities, which were an opportunity for ward teams to come together and learn from the study team, from one another and from the lay panel and, crucially, to share ideas and showcase their QI projects. The lay panel played a central role in these events, both in terms of practical support and in terms of perception: its presence sent a clear message of strong and meaningful PPI that, it was hoped, would inspire the ward teams.
Patient and public involvement does not come naturally to everyone. One way in which we might have improved this is illustrated in this quotation from one of the lay panel following the first learning community:
We did not at first feel able to sit with clinical teams when they were discussing ideas, nor mix with them much in breaks. We were afraid they would think we were judging them and we wanted to be seen as supportive. Maybe we got that wrong and should simply have mixed and shown ourselves to be supportive.
At the first learning community, two members of the panel gave a presentation to and took questions from the audience, which comprised representatives of the ward teams. The presentation introduced the trigger film, which the lay panel had helped to design, and gave the patient/lay perspective, including personal stories from the presenters, ideas on data gathering and tips on public involvement. The presenters invited the audience to speak to the lay panel; during the event, the panellists were available to help teams think through what kind of improvements matter to patients, to develop realistic and acceptable improvement plans, and to advise on involving patients in the local improvement interventions.
Following the first learning community, feedback from some of the ward teams suggested that they felt the lay panel’s input had been overly focused on poor care experiences. Looking back, some panel members agreed that this had been the case (see Front-line hospital staff are human). To address this perception, the study team and chairperson asked panel members for examples of positive hospital experiences. The call was met with enthusiasm, and many panellists contributed positive examples. One of these showed how sensitive communication can work wonders:
Little things, like an explanatory remark, can transform a scary experience. A nurse at my hospital said to me, ‘I’m going to remove your epidural now. I warn you I do it really really quickly because I think that’s the best way, so don’t be scared’. By the time the remark had registered with me she’d done it! No fuss, and not an ordeal at all.
Examples such as these were collated and sent to the ward teams.
At subsequent learning communities, the panel’s primary remit was to reflect on the interim findings with the research team and the front-line ward teams, and to discuss how to adapt the QI projects to overcome any difficulties encountered.
Although there were some debates, both within the lay panel and with the study team, about how best to employ the lay panel during these events, it was generally felt that the panel’s role and presence were useful and appreciated by the ward teams.
Patient and public involvement at ward team level: aims and realities
Our aims
One of our aims was to encourage ward teams to involve patients or family members as part of their teams. We appreciated from the outset that some teams may involve lay people more than others, that different approaches may be taken and that the teams may have different experiences of PPI. As part of the encouragement, when inviting the teams to the learning community events we reminded them to bring their lay advisors and involve them in the feedback to the rest of the ward community.
Lack of patient involvement
In reality, as noted in Chapter 5, it appeared that PPI at ward team level was very limited. This became a source of disappointment and frustration for the lay panel. We asked ourselves and the study team why ward teams found this challenging. We hoped that our presence at the learning communities would inspire the teams to replicate our approach. Hence we were not concerned about this lack of involvement at the first learning community, hoping that, by the second, lay members would be better represented in the teams.
The absence of patients at the learning communities was seen by some of the lay panel as ‘evidence’ that PPI was being preached but not practised. However, this was not the whole story, as the lay panel did not necessarily see what was going on behind the scenes in the wards, where patients, carers and public representatives were being involved in specific ways, including in co-design projects and focus groups with people with dementia. By the second learning community, the ward teams felt comfortable and confident enough to talk about this involvement and how they had found it.
Because the nature of this study was to observe and encourage rather than dictate and supply, it was felt important that we did not intervene with regard to PPI at the ward level, although at times this was frustrating. However, we produced two quick guides that were circulated at the second learning community, ‘Quick Wins – Tips for Ward Improvement’ (see Appendix 11) and ‘Tips for Involving Patients in Ward Improvement’ (see Appendix 12), to encourage the further and meaningful involvement of local patients. These were based on consensus discussion within the group, drawing on personal experiences of ward care and patient involvement.
It is important to highlight here that this lack of involvement will be a challenge for future work of this kind. By the end of the study, the lay panel had accepted that the ward teams found PPI difficult, that they were unfamiliar with doing it and that there is much work to be done to encourage and support PPI at the ward level. Some of the lay panel members were inspired to use this experience to help facilitate positive PPI in their own local hospitals. It will be interesting to see whether PPI in QI work becomes more ingrained in future.
The temptation to steer the ward teams
The premise of this study was to enable front-line hospital ward staff to create, design and deliver their own QI interventions with the assistance of patients and public at a local level. Given the experience and expertise of the lay panel, there was inevitably a temptation to ‘hold the hand’ of the front-line ward teams and direct them towards QI ideas that the panel considered important, in effect becoming PPI advisers to the ward teams. Panellists reflected on this blurring of roles:
There may have been a lack of clarity in our role with the teams. I think the extent of the panel’s involvement could have been clearer – we may have been carried away with our enthusiasm and misunderstood our role.
Lay panel member
As one lay panel member commented:
Apart from their personal experiences, members of the lay panels possessed skills and experience from their professional lives that could be relevant. However, they were not really ‘tapped into’ as much as I’d expected. Perhaps this was because the wards leading the projects did not recognise that this resource was available, or that the experience of the lay panel was not directly relevant to their improvement initiatives.
The temptation to dictate rather than facilitate was a learning curve for the panel; as one member put it:
The staff need to hear from us, they need to listen and one way of doing that is to sit on a panel like this and hope that some of the stuff we say gets through.
The tension between what the panel members wanted to do and their role on the project was thrown into sharp relief when faced with real clinical teams battling to find real interventions that could help patients. During the learning communities, the lay panel sat and listened to the stories from the teams, stories of the kinds of wards the teams work on, the patients they care for and the numerous hurdles – bureaucratic, financial, professional and emotional – that meant that the care they want to provide is not the care that they can provide. The temptation to leap out of chairs and into the wards and do what the panel thought the teams should be doing was often so strong that the line between helping the research and helping the teams became blurred to the point of confusion. This was a learning curve for members of the panel, for the chairperson and, indeed, for the principal investigator and the rest of the study team, and in a sense it is one of the things that we had learned collectively by the end of the study. The lesson here is that there is never too much clarity around roles, especially when there is so much scope for ‘mission creep’.
An acceptance and understanding by the lay panel of both the nature of the study and the panel’s role in it led to a more hands-off approach as the study progressed. Through discussion with the principal investigator, the panel realised that they had to stand back and observe what were, in fact, the research findings.
Extent of managerial support
On the subject of the role of senior managerial support for the ward teams’ projects, one panel member remarked:
It was evident that some initiatives had come to the attention of senior managers and they were being encouraged to expand improvement projects onto other wards. Overall, I was surprised at the lack of involvement of senior managers in the project and these initiatives. Perhaps it was a deliberate attempt to stand back and let staff take the lead, but empowerment needs support.
Reflecting at the learning communities on managerial support, or lack of it, was instructive and enlightening for the lay panel and made us consider what we as patients and carers can do to help encourage senior managers to support such initiatives.
Things, not behaviours
In a project like this it must be very difficult to get people to think about the way they interact with patients (the personal interaction) so, naturally, the focus is on improving ‘things’ and processes. That’s not a criticism, but I wonder whether important opportunities for improvement might be overlooked. Maybe the ‘non-numerical’ patient response information has a part to play here so staff better understand the impact of what they say and do has on patients and carers.
This comment from a lay panel member highlights a theme that ran through the lay panel when the preliminary results were aired at a panel meeting: ‘it’s all about things, not about people’. From the panel’s perspective, it was surprising that the ward teams so often concentrated on tangible ‘things’ as interventions, rather than trying to use the data to improve or change the communication and/or behaviour of ward staff. An insightful comment by a lay panel member perhaps illuminates this issue:
My concern is that the projects could have a lasting impact, but on relatively minor aspects of the patient experience. It’s understandable why wards focus on things like a welcome pack (they are achievable and don’t require much involvement of people outside the ward), but the overall level of ambition can be rather low.
Another panellist remarked:
Some of the interventions that were put in place appeared to be so simple and straightforward that it was frustrating that a research study was necessary to provide the time for improvements to take place, for example the encouragement of soft shoes and dim lights at night. This emphasised the hurdle that hospitals have in finding time to develop best practice.
Embedding of hospital initiatives in the longer term
On the question of the long-term effect of the study on QI, a lay panel member stated that he or she ‘remained sceptical about the embedding of hospital initiatives in the longer term . . . the project was about one ward with a current team’. This prompted concerns about whether or not improvement would be sustained if there were staff changes. The answer to that scepticism lies not in the study findings but in time, and it will be interesting to see what changes, if any, the individual ward teams make and sustain in the long term.
Front-line hospital staff are human
I spent the first meeting really just moaning about my care, I had no consideration at all for the other side, what the staff were going through.
Lay panel member
The culture of ‘them’ and ‘us’ felt by some members of the lay panel at the start of the study meant that it was difficult initially for them to understand and accept the perspective of ward staff. As time went on during the study, the attitudes of those on the panel evolved. The importance of learning to accept that staff are human and that care is often imperfect while at the same time recognising that patients/carers have a role to play in changing it was summed up succinctly by one panel member:
Most staff want to do a good job and they put up with a lot trying to do it and how I treat them affects how they treat me.
A final major reflection from the lay panel on the results of the study is the recognition of the challenges faced by front-line ward staff. As two panellists put it:
There is no doubting the strong commitment of the teams to patients, even when their own hospital was going through major challenges and change.
I was impressed by the commitment of many of the ward staff, especially in the face of NHS pressures and the challenging circumstances we heard about in several wards. The temptation to give up and focus on the ‘day job’ must have been significant, and the wards are to be congratulated for sticking with it.
Why is patient and public involvement important in a study like this and who is affected?
One of the ways in which PPI impacts on a study like US-PEx is that it affects the researchers as well as the research. 83 Research is not all about data, protocols, results and literature; it is also about the people who conduct it. What had not been anticipated at the beginning of the study was the way in which the lay panel would affect the study team, and also how involvement would affect the panellists themselves:
The involvement of patients, carers and the public is important as a resource for the project and as a sounding board within the project to focus on the things that matter to patients. There is also a motivating effect on lay panel participants to go on to join other initiatives, especially when the experience of participation is as good as it’s been on this project.
Lay panel member
This recognition, that PPI is beneficial to lay advisors as well as to research and researchers, struck a chord.
Another panel member spoke of how being involved in the study had changed her:
I can’t tell you how much I’ve changed thanks to being involved in the project. My confidence is through the roof . . . got a new motorised wheelchair which has given me my independence back. I’m doing a course in research and I’ve just got a job interview.
The same panellist spoke of how, on a personal level, the study has had an impact on how she treats clinical staff when she goes to hospital:
Before I was rather quiet, or a bit hostile towards them, after what happened to me I thought all doctors and nurses were like that, but having met and worked with them on this project I see that they are not. I’ve heard the problems they have to put up with, the pressures they deal with every day, I wouldn’t want their jobs . . . so now I am nice to the clinical staff and they are actually a lot nicer to me now. I’ve even nominated my incontinence team for an award which they think is brilliant.
Lay panel member
What now for the lay panel?
Being a part of this study has taught the lay panel many things and it is hoped that the lay panel has in turn imparted insights to the study team and the case study participants. The panel members have learned how to work in a team, how to encourage rather than dictate, how to see things from the perspective of staff as well as patients, and the benefit of exercising diplomacy when trying to effect change. Perhaps most importantly, they have learned that no matter how disappointing health-care practice may seem, it is important to understand the reasons behind this; there is always a reason and there is always hope for improvement. They have a significant role to play in achieving this through being both part of the data and part of the solution. These lessons and many others will be taken forward in work with other researchers and, most importantly, in roles as patients, carers and health-care improvement advocates.
Chapter 4 Phase 1 results
Secondary analysis of existing national patient and staff experience data
Results from the inpatient and staff survey data analysis were used to construct a matrix for the case study selection, according to whether the trust’s average score fell in the top, middle or bottom third of the distribution for inpatient and staff survey results (Table 8).
| Inpatient survey rank | Staff survey rank: Top | Staff survey rank: Middle | Staff survey rank: Bottom |
| --- | --- | --- | --- |
| Top | 25 | 17 | 8 |
| Middle | 9 | 18 | 25 |
| Bottom | 19 | 14 | 18 |
In addition, a spreadsheet was created that showed the combined results of the analysis of existing data sources for each trust. To help with the selection of case study sites, colour coding was used to easily identify those trusts in the top, middle and bottom thirds of the distribution when sorted by overall inpatient/staff survey ranks.
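As a hedged illustration of how such a matrix can be reproduced programmatically (the trust names and scores below are invented, and the study team in fact worked from a colour-coded spreadsheet rather than code), trusts might be binned into thirds on each survey and cross-tabulated:

```python
# Illustrative sketch only: invented trust scores, not the study data.
# Bins trusts into top/middle/bottom thirds on each survey and cross-tabulates them,
# reproducing the structure of the selection matrix shown above.
import pandas as pd

trusts = pd.DataFrame(
    {
        "trust": [f"Trust {i}" for i in range(1, 10)],
        "inpatient_score": [8.1, 7.4, 6.9, 8.5, 7.0, 6.2, 7.8, 6.5, 8.0],
        "staff_score": [3.9, 3.5, 3.2, 3.8, 3.6, 3.1, 3.4, 3.0, 3.7],
    }
)

labels = ["Bottom", "Middle", "Top"]
trusts["inpatient_rank"] = pd.qcut(trusts["inpatient_score"], 3, labels=labels)
trusts["staff_rank"] = pd.qcut(trusts["staff_score"], 3, labels=labels)

# Cross-tabulation analogous to the matrix above
matrix = pd.crosstab(trusts["inpatient_rank"], trusts["staff_rank"])
print(matrix)
```

Sorting the resulting frame by the two rank columns would give a table analogous to the colour-coded spreadsheet described above.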
NHS Friends and Family Test
The average response rate to the FFT across August, September and October 2015 was calculated for each trust and included in the spreadsheet. The average response rate ranged from 6.5% to 59.9% across trusts.
Care Opinion
The number of staff registered to use Care Opinion per trust ranged from 0 to 66. Seventeen trusts had no staff registered to use Care Opinion and only four trusts had over 17 users registered.
The proportion of Care Opinion stories per trust that were read by staff ranged from 21.6% to 100%, with only 25 trusts out of 154 having read 100% of stories. The proportion of stories responded to also ranged from 21.6% to 100%; only six trusts had responded to 100% of stories.
The percentage of stories resulting in a change (planned or made) by the trust ranged from 0% to 7% across trusts; the majority of trusts (n = 111) had not reported that they had planned or made any changes as a result of the stories.
Patient experience leads survey
The survey of patient experience leads (n = 153) was completed (or partially completed) by 60 trusts, a response rate of 39%.
Respondents were given the opportunity to opt out of the survey at three points during completion. Respondents wanting to opt out of the survey were given the choice of keeping or discarding their previous responses. Two respondents opted out of the survey: one wanted their responses to be submitted and the other wanted their responses to be discarded.
The results are based on responses from 57 trusts as three cases had to be removed (two were partial, i.e. no response data were submitted, and the third was the respondent who opted out and requested that their responses be discarded). Given that only 39% of hospital NHS trusts responded to the survey, some caution is needed when interpreting the results as it is possible that those who responded hold stronger views about patient experience data.
Of the remaining trusts, 37 (65%) expressed a willingness to take part in the second phase of the study by leaving their contact details.
Headline findings
Types, methods and frequency of data collections
The majority of trusts collected patient experience data using a combination of approaches, including national surveys, local surveys, patient stories and informal feedback mechanisms such as conversations with patients.
These approaches focused on different aspects of patient experience, most commonly interactions with staff (95%, n = 54), relationships with staff (95%, n = 54) and recommendations to friends and family (91%, n = 52).
The two most common reasons for determining the frequency of local data collections were ‘staff availability and capacity for collecting data’ and ‘staff time for using patient experience data results’.
Sixty-three per cent of respondents (n = 36) said that the amount of patient experience data collected was ‘about right’, with 16% (n = 9) responding that there was ‘too much’ and 7% (n = 4) responding that there was ‘not enough’.
Staff leading on patient experience
Only one trust responded that there was no dedicated person in the trust responsible for co-ordinating the collection and use of patient experience data. Two-thirds (66%, n = 37) stated that the person responsible for co-ordinating the collection and use of patient experience data was the head of patient experience/patient experience lead or manager. Only 14% (n = 8) reported that the person responsible was from the Patient Advice and Liaison Service (PALS) and 11% (n = 6) reported that the relevant person was the Director of Nursing.
Two-thirds (67%) of respondents (n = 38) stated that there was a dedicated QI team in their trust. However, nearly half of all respondents (48%, n = 27) reported that they spent ≤ 40% of their overall time on supporting the collection and use of patient experience data. Only three respondents stated that all of their time was spent on supporting the collection and use of patient experience data.
Many of the trusts surveyed used external organisations to help collect data for CQC national patient surveys (70%, n = 40) and around half (54%, n = 31) used external organisations to help with the FFT. Almost one-third of trusts (32%, n = 18) also used external organisations for local data collections.
Only half of trusts (51%, n = 29) had a specific plan or strategy for the collection and use of patient experience data, and 60% of trusts (n = 34) said that their QI strategy included how they would use patient experience data. Of those trusts responding, only four (7%) said that they did not have a QI strategy.
The majority of respondents had board-/senior-level support for the collection and use of patient experience data; 77% felt that this was ‘definitely’ the case (n = 44) and 14% (n = 8) said that it was the case ‘to some extent’.
Reporting patient experience data
The most common ways of communicating patient experience data results to staff were in meetings, in electronic written reports, on staff noticeboards and via an online portal/trust intranet. The most common ways of communicating patient experience data results to patients and the general public were on ward noticeboards and on the trust’s website.
Respondents were asked who communicated the patient experience results to staff. This was most commonly done by patient experience leads (86%, n = 49) and ward managers (79%, n = 45). Only 54% (n = 31) said that the results were communicated by the chief executive.
When asked if patient experience data were shared with other local groups or organisations, 68% (n = 39) reported that they shared the results with their trust’s patient groups/forums, and 65% (n = 37) indicated that they shared the results with HealthWatch. Only 32% (n = 18) said that they shared the results with local patient charities or support groups.
Using patient experience data
Most trusts (86%, n = 49) were aware of changes implemented in their trust as a result of patient experience data, with many of these being changes made to the way staff interact with patients (84%, n = 47), to the transactional aspects of care (77%, n = 43) and to the way in which care is provided (75%, n = 42).
Trusts were asked for the top two factors that they expected their trust to invest in to advance improvements in patient experience; 44% (n = 25) gave ‘patient and family engagement’ as one of their top two. Investment in broader culture change efforts (40%, n = 23) and staff training and development (39%, n = 22) were also highlighted.
The most common barrier to using patient experience data was a lack of staff time to examine the data (75%, n = 43), followed by cost (35%, n = 20), a lack of staff interest/support (21%, n = 12) and too many data (21%, n = 12).
Respondents were asked what they felt the solutions were for resolving the barriers to using patient experience data. Those most commonly cited were more staff to allow time for analysis, the provision of specific analytical support, and more resources, mainly financial.
Two-fifths (40%, n = 23) of respondents reported that they were ‘very positive’ about the progress that their trust was making towards improving patient experience, and 40% (n = 23) were ‘positive’. Only two trusts responded that they were ‘not very positive’ about the progress. Reasons why respondents were ‘very positive’ or ‘positive’ about the progress their trust was making towards improving patient experience included having patient experience improvement high on the agenda (including having board-level support), and having a dedicated resource and staff engagement across the trust.
Reasons why respondents were ‘not very positive’ about the progress their trust was making towards improving patient experience included a lack of resources, such as time, money and staff.
Case study site selection
The core research team reviewed the results of the secondary analysis and key results from the patient experience leads survey and then met to discuss potential case study sites. Initial discussion established which trusts might be included or excluded based on factors such as the following:
- Trusts would be considered regardless of whether or not they were deemed ‘improvement ready’.
- Trusts would be considered regardless of whether or not they had completed the patient experience leads survey.
- Trusts in special measures would be excluded.
- Trusts without a QI team would be considered.
- The demographics and geographical location of the trust would be considered (to achieve diversity).
The team shortlisted 14 trusts based on four main factors:
- The list comprised approximately equal numbers of trusts that were in the top, middle and bottom third based on the secondary analysis matrix.
- The list factored in a diverse geographical spread.
- The list included some trusts that scored more highly on the inpatient survey than on the staff survey and vice versa (i.e. mixed).
- The list comprised trusts that had expressed willingness to take part.
From this list of 14 trusts, six were identified as the preferred choices. The six trusts varied in their inpatient and staff survey rankings, and in the extent to which they appeared to approach and respond to issues relating to patient experience (see Table 1).
In February 2016, an e-mail was sent to the six trusts inviting them to participate in the study. The invitation described what the trust would be required to do as a participating site, including giving active senior support to the project, recruiting a front-line medical ward team keen to take part, and engaging with an ethnographic researcher who would observe the team’s progress. In addition to this, the benefits from taking part were highlighted, such as receiving expert input from patient experience researchers on the research team, having ongoing improvement science and learning community support, and supporting the development of a toolkit for the rest of the NHS based on learning from the project.
Five of the original six sites contacted expressed an interest in being involved. One trust reported that it did not have the capacity to become involved in the research project. Therefore, the team selected a replacement trust with a similar profile from the secondary analysis, and this trust was very keen to participate in the research (see Table 2).
Our initial aim was to carry out the research on general medical wards in each of the chosen sites. The rationale for this was that, typically, patients on these wards have a longer length of stay than those on surgical wards, where they may be in for very short periods undergoing routine elective procedures. In addition, medical wards tend to have a higher throughput of patients than specialist wards, an important factor to consider when thinking about achieving adequate sample sizes for the baseline and post-intervention patient experience surveys.
However, from discussions with sites it became clear that there was an appetite for specialist medical units to be given the opportunity to participate. For example, one of the selected sites operated only specialist wards and another was keen to focus on its emergency admissions and discharge unit. The research team (including lay co-investigator JB) considered the benefits and potential issues of each ward selection in turn and in all cases decided that sites proposing a different type of medical ward for the case study should not be excluded. Indeed, it was felt that the different settings and populations would reveal different challenges to the use of patient experience data (see Chapter 3, Patient and public panel methods and reflections).
As an example, the trust operating only specialist wards demonstrated great enthusiasm for being involved and felt that it could easily find wards that would be keen to take part. To understand how the selected specialist ward compared with other wards in the study, and to be reassured that a large enough sample could be drawn for the patient experience survey, we asked the trust to provide information such as length of stay data and number of discharges per month for the ward. While the research team understood that this ward might have a very different case mix from others in the study, we felt that this could be a benefit to the research.
Chapter 5 Overview of phase 2 findings
This chapter presents summarised thematic findings from phase 2. Subsequent chapters explore three key themes in greater detail.
Case descriptions and anonymity
As noted in the results of phase 1, the six sites selected included a range of types of ward, including two general medical wards, a gastric medicine ward, two emergency medical assessment units, and a longer-term rehabilitation medical ward. The sites were geographically diverse, including both urban and more mixed ‘town and country’ catchments, and covering north, south, east, west and midlands locations. All were in England.
Appendix 1 contains an anonymised description of each case study site, including the nature of the trust and the ward concerned, what their previous experience had been of using patient feedback, who was involved and why they took part, other relevant contextual factors, what they chose to work on and how, what was achieved and what challenges were encountered (Figures 2–7).
‘What was achieved’ is described both in terms of findings from the baseline and follow-up survey and interviews, and from the ethnographic material gathered throughout the study. These sources of data provide different and equally important lenses through which to view each case.
The primary aims of the case studies were to understand:
- how front-line teams use different types of patient experience data
- what could make such data more credible and useful for front-line staff.
Our research therefore focused mainly on what staff understood by patient experience data for QI, how and why they used these data (or not), what they found helpful or challenging, how the wider context affected what they did, and what support they needed. Measuring the number of changes made and any impact on reported patient experience was not the main goal of the study, although it forms part of the picture.
It is undoubtedly the case that some sites went further with their QI work than others, but our intention is not to rank them in terms of ‘success’. The sites were selected to reflect a range of previous levels of experience and confidence in using patient experience data for improvement, and so it was never expected that they would all move at the same rate. Any suggestions made about what might have helped and what teams or organisations could have done differently should be seen against this background. It should also be noted that learning and cultural change may occur independently of any specific service changes.
To give staff confidence that they would not be judged publicly and could share with us difficulties as well as successes, our findings are anonymised throughout. The summarised case descriptions in Appendix 1 draw on the detailed site-by-site descriptions developed by ethnographic team members as part of the analytic process. These contained a considerable amount of sensitive and identifiable information that we do not present in the report. Our findings on major themes in the data are organised thematically rather than case by case. In the more in-depth thematic chapters (see Chapters 6–8), the roles of people interviewed or observed are identified, but they are not linked back to specific sites in case this inadvertently identifies them.
Reflections on case descriptions and summarised findings
Changes made and follow-up survey results
In every site, projects aimed at improving patient experience took place. The improvement projects ranged from a single intervention in some sites to a series of interlinked projects on multiple topics in others. The scale of these varied, and, as one of our lay panel members commented, many of them focused on physical ‘things’ (such as information packs or sheets) rather than behaviours. The extent to which they drew directly on patient experience data also varied, as did the degree of involvement from patients and family members. The way staff did or did not use different types of patient experience data, what counts as ‘data’, and what staff found engaging, useful or difficult are pivotal questions, which are explored in depth in Chapter 6 (see Making sense of patient experience ‘data’: where do ideas for change come from?).
As noted in Chapter 1, the absence of large-scale changes in local QI initiatives is common. However, apparently ‘small’ changes may be momentous for the individuals affected, and may also require substantial co-ordination, effort and resources to achieve and sustain, which we explore in Chapter 6.
In addition to specific improvement projects, it was anticipated that involvement in patient-experience focused QI work might lead to broader changes in culture, attitude and behaviour on the wards, and potentially within the wider trust. Again, the extent to which this happened in different sites varied. Some sites chose to focus more on cultural change and improving staff experience as an indirect route to improving patient experience (see Chapter 7, Improving staff experience).
A few statistically significant improvements in patient experience scores were noted, as were many non-statistically significant improvements. In some cases (e.g. sites 3 and 5) these appeared to be linked to specific targeted QI, but many other score changes appeared to be unrelated to the project.
The relative lack of significant positive score changes may reflect a number of factors. First, the winter pressures of 2016–17 (see Wider contextual factors: NHS pressures and ‘organisational distress’) affected teams’ ability to focus on the project and delayed QI work. We sought to allow for this by delaying the follow-up survey by a few months, but it is still likely that we missed the impact of QI efforts made after momentum picked up again in the spring of 2017. In practice, teams continued working into the summer and beyond, and at the time fieldwork ended activities were still going on in most wards.
Second, the effect of some QI work may take time to filter through to patients. The impact of a project targeting call-bell use and response times, for example, may be detectable fairly quickly, whereas a more diffuse project targeting staff morale as an indirect route to improving the care experience may not bear fruit immediately.
Third, the nature of front-line-led QI based on local patient experience makes it impossible to anticipate what projects sites will select. The survey was designed to measure a broad range of areas already known to be important to patients, drawing on existing national survey question sets, but it may not capture the detail of specific projects. It was fortuitous that a question about call-bell response times matched one site’s work on call bells, a known local issue. However, there were no specific questions on welcome or discharge packs, and questions about communication or information may be too broad. This underlines the importance of tailored local data collection.
Wider contextual factors: NHS pressures and ‘organisational distress’
The winter of 2016–17 was undoubtedly a time of intense pressure for the NHS in England. One member of staff coined the term ‘organisational distress’ to distinguish what was happening from the usually anticipated ‘winter pressures’ (although at the time of writing these pressures now seem set to continue year-round). This affected the teams’ ability to continue engaging with service improvement, and in some cases it is notable that teams managed to keep projects going at all. It also affected the researchers’ ability to maintain close contact with the sites to collect data and offer support. There were examples of researchers turning up for planned ward visits, to observe meetings or to conduct interviews, only to find that all staff had been diverted to deal with a crisis in emergency admissions. Planned webinars to share learning during the course of the project were abandoned because front-line staff could not find the time to take part, and offers of online peer-to-peer learning and one-to-one calls or visits from improvement experts on the team were not taken up. The number of front-line team members attending the second and third learning community events fell noticeably compared with the first event.
In some cases we were able to gather local data, such as staffing, vacancy and sickness levels, that we hoped could shed some light on whether organisational distress affected some sites more than others. However, such was the pressure on wards during the study that even collating this information was difficult, and the figures are too incomplete to permit meaningful comparison. The international cyber-attack that affected the NHS in England in May 2017 also disrupted trusts’ local data collection.
Ethnographic data would suggest that these pressures in the system were an issue for all sites. We did not set out to compare the improvement activities of the ward selected in each site with other wards in the same trust. We are therefore unable to conclude whether work on an explicitly patient experience-focused improvement project at a time of intense pressure was more or less likely to be maintained than other kinds of QI work in other wards.
Continuing pressure on the NHS has implications for both QI work and future research access.
Why did sites agree to take part? Local context and relationships
Sites approached had varying levels of performance on past patient and staff experience surveys, and varying experience in using patient feedback for improvement. For some, taking part in research was explicitly a way to acquire new knowledge and skills, whereas for others it was seen as a good fit with existing QI strategies and practice, and an opportunity to build capacity further at ward level. One ward had recently moved to a new location with a change of leadership but with the same staff team and client group; the project offered a focus for the ward’s new identity.
Despite some delays and false starts, sites with ‘positive’ reasons for taking part generally embraced the project and made progress. The wider organisation in these cases both provided support and encouragement, and recognised the results.
In two cases, participation was offered to wards by trusts as a way to move on from previous negative experiences at ward level, including in one case a critical CQC inspection and in another the presence of a long-term patient whose needs had adversely affected the ward environment for both other patients and staff. In a third case, the whole trust had been through a period of poor performance followed by a merger. In all three, improving staff experience and rebuilding morale was a particular priority compared with other sites; in some instances, staff used the project as a vehicle to gain greater recognition for their regular work, and to challenge perceived negative management assumptions about their wards.
It is perhaps no surprise that in the two of these three sites where past performance had been an issue, relationships between front-line staff and the wider organisation remained tense. This affected the front-line teams’ ability to marshal support and resources, and to involve stakeholders who might have had a positive impact. However, in the site where low staff morale was connected more to the ward environment than to performance issues, the project seemed to act as a catalyst for a wide range of QI activities. Staff felt that this had led to improvements in staff morale and ward identity, useful collaboration with wider stakeholders in the organisation, and trust-level celebration of their work.
Who was selected to be part of the core team?
The composition of front-line core teams working on the project varied from entirely nursing-focused to a broad mix of nurses, health-care assistants, ward clerks, allied health professionals and doctors. Progress was generally greater when the ward team was able to draw on a range of professionals from different backgrounds who were able to access help and resources from across the organisation. This did not necessarily mean that staff had to be senior or from a particular profession to enable change. For example, in one site a ward clerk played a central role in making practical changes, in another a health-care assistant with creative skills made a substantial contribution, and in another the involvement of a pharmacist and junior doctors in training part-way through changed the dynamic and focus of the project. However, nursing and medical staff who felt senior or empowered enough to garner wider organisational support and interest also made a difference. This is discussed further in Chapter 8 (see The effect of team-based capital on quality improvement projects in NHS settings).
During the course of the study, staff turnover and change of roles occurred, but the impact of this was not necessarily predictable. One site faced multiple changes, losing some key staff members, as well as changing the function of the ward. However, because this was a site with both strong quality leadership at trust level and a close relationship with the patient experience office, the project was able to continue. In another site the involvement of new staff had a galvanising effect. Elsewhere, however, continuity of core team membership did not guarantee progress.
Patient involvement in core teams was lower than expected. Teams were encouraged to include a patient as part of the core team attending learning community events, but only two teams brought a lay partner with them to the first learning community event, and none attended the third event. Practical difficulties in recruiting a patient team member in time were a problem for some sites, given the delays in R&D approvals; travelling distance and the time patient team members would need to commit were also cited as issues. This is not to say that patients and families were not involved in specific local QI projects, but we had anticipated that patient involvement in core teams would be an important driver; its absence may have contributed to limited change in some cases. We suggest that direct patient involvement is something about which front-line staff lack confidence, despite being generally committed to the value of being patient-centred. Greater training, support and practical sharing of good examples may be helpful.
Involvement of central patient experience/quality improvement teams
A key point of divergence between sites was whether or not patient experience office staff were involved as core team members.
Against the background of organisational distress, a strong supportive relationship between ward teams and a central in-house patient experience/QI function generally seemed to exercise a protective effect, enabling wards to be more ambitious in the number and range of QI activities attempted.
In recruiting sites, we emphasised the importance of genuine front-line leadership, and encouraged teams to send a core team of front-line staff (and patients) to the first learning community. Some teams chose to interpret ‘front-line’ to include a key individual from their local patient experience team who had been assigned to work closely with them and who attended the learning community. Others brought a ward-only team to the event but liaised closely with the patient experience office during the project. Occasionally, a more distant relationship was observed, where there was little involvement of or contact with the in-house patient experience team, despite an emphasis at the first and second learning community events on the value of identifying key organisational stakeholders and sources of support that could be harnessed. These sites tended to focus on a smaller number of improvements and faced greater implementation challenges. Whether this was a causal link or an association is debatable. It may also be that the research study’s emphasis on front-line leadership misleadingly implied that only front-line staff should be involved.
The involvement of patient experience teams is discussed in more detail in the thematic chapters (see Chapters 6–8), which are also introduced briefly below.
What patient experience data did staff use and how did they use these?
It was not self-evident to front-line staff in most sites how to use patient experience data for improvement. Patient experience data were commonly understood to mean quantitative survey results, a tendency that might be characterised as the NHS being mesmerised by measuring. The purpose of the initial learning community was to raise awareness of other forms of experience data, including narrative and observation, and to help teams think through how such data might be useful for improvement rather than measurement.
The ward-specific baseline surveys conducted in each site were well received as timely and relevant, even though they were not originally intended to form part of the data for improvement. Ward teams found the infographics produced to accompany the baseline survey particularly engaging as a guide to action, and appreciated having survey results specifically about their ward.
Narratives were generally recognised as a useful and powerful form of data, albeit one that could be complex to analyse and use. It was apparent at the first learning community that observation was a less familiar technique, but that it generated considerable interest and some experimentation. Online feedback on Care Opinion also attracted initial interest, but a lack of ward-level subscription to Care Opinion in some trusts was an obstacle for teams that tried to engage with it.
One team arrived at the first learning community having already decided what to work on, although their ideas evolved during the study. Some took up ideas from other teams that they learnt about at the learning community. One site used EBCD, and several used some form of comment board to gather ideas. Staff making improvements to make care more patient-centred could not always point to a specific source of patient experience ‘data’ that led to that project. Sometimes they reported acting on what they felt they already knew needed changing as a result of caring for and observing people on a daily basis and talking to colleagues. These ‘soft’ forms of intelligence, often unrecognised as data, proved to be a rich resource for QI ideas.
These issues are discussed in more depth in Chapter 6.
Staff experience as a route to patient experience
Some sites more explicitly chose to focus on staff motivation and experience as a precursor to good patient experience, through indirect cultural and attitudinal change, and by making staff feel empowered and supported.
Staff participants identified several potential interlinked mechanisms through which this may occur:
- Motivated staff provide better care (which leads to better patient experience).
- Staff who feel that their experience is taken seriously are more likely to be motivated and receptive to feedback.
- Involvement in patient-centred QI is itself motivating.
- Improving patient experience can directly improve staff experience.
This is discussed in Chapter 7.
Team ‘capital’ in NHS settings
Drawing on the above findings, we propose a key mechanism mediating between the various contexts in our case studies and the outcomes observed, namely ‘team-based capital’ in NHS settings. By this we mean the extent to which staff command a range of practical, organisational and social resources or power that enable them to set agendas, drive process and implement change. These resources include not just material or economic resources but also, for example, status, time, space, relational networks and influence. Teams involving clinical and non-clinical staff from a range of disciplines and levels of seniority were able to assemble a greater range of capital than those that adopted a unidisciplinary, ward-focused approach; the contrast was particularly marked for teams that drew directly on the support of individuals from the patient experience office.
This is explored in our final thematic chapter, Chapter 8.
Chapter 6 Making sense of patient experience ‘data’: where do ideas for change come from?
The original HSDR programme call for proposals through which this study was commissioned reflected concern that the NHS continues to collect data on patient experience that it does not use effectively to improve services. The meaning of ‘data’ was, to some extent, taken for granted, but was queried by our lay co-investigator when designing information sheets for study participants. Therefore, we agreed this broad definition:
By ‘data’ we mean any information from patients about what has happened to them, such as answers to survey questions; patient stories and interviews; complaints and comments; and online feedback.
This was reiterated in the ‘resource book’ prepared for participating teams, and at the first learning community meeting (see Chapter 2).
Therefore, although we set out to include a wider range of data, what counts as ‘data’ emerged as a central theme in our findings. Even our original, broader definition privileged data that are formally collected and recorded, and overlooked ‘soft intelligence’.
What ‘data’ did staff use and how did they use these?
Consistent with previous research (see Chapter 1), the study provides further evidence that it is not self-evident to front-line staff how to use patient experience data for improvement, or even what ‘patient experience data’ means.
Despite an emphasis on making use of existing data, teams were not always aware of sources of information they could draw on within the trust. Even if data are available and known about, they often do not provide sufficient locally relevant information. Several sites set out to collect (or use) new data, seeing these as both more meaningful and more motivating.
Survey data
An expression of this was the extent to which ward teams received the baseline experience survey results as timely, relevant and actionable data. This ward-level survey was not intended for this purpose, but simply to provide a baseline measure against which to assess any measurable change in patient experience after the teams’ QI work. The findings were shared with teams as part of a formative approach. However, in some sites they were welcomed as more locally meaningful and inspiring than many existing data, such as FFT results and national survey findings. The baseline results were accompanied by an infographic (Figure 8 provides a fictionalised example) highlighting the areas of most and least positive experience on each ward. Some patient experience officers reported that staff found this data visualisation particularly engaging and clear as a guide to action.
Quotations from interviews are verbatim but edited for readability by removing repeated words, ums and ers. Ellipses presented in square brackets indicate that a more substantial portion of text has been removed.
I loved the way that you presented the ‘this is what your patients are saying about you’. You did it in a very pictorial format.
Was that the Picker Infographic?
Yeah, yeah. That was really good and it was eye-catching and it made people look at it because it was bright, it was easy reading and it was eye-catching, and I think that’s what we need more and more . . . So anything like that, keep it simple and because as I say, we need to get away from this, ‘this is going to take me a lot of time away from my day job.’ . . . I just loved it. It was really, really interesting, and interesting way of showing it rather than my graphs and things. People get bored and they get number-blind as well.
I think one of the best things that we’ve had to give them is the infographic from that survey, because it’s quick to look at. It’s easy to look at. It’s easy to digest. And straight away they could see the problem areas with the red areas. I think that’s how they need to have the data presented. Just giving them graphs and data doesn’t mean anything to them. They don’t want to know all that. They just want to know, ‘where do I need to concentrate and how do I do it?’ That is what they need. The infographic does that.
One ward manager regarded the baseline survey and infographic as an authoritative guide to what was most important to patients. She felt that it cut through the confusing volume of data available from the patient experience office, and was a more legitimate pointer to action than staff’s own ideas about what needed improvement:
There was so much material. It was difficult to see the wood for the trees. The two that I found the most interesting were where they were categorised into about six or seven different aspects, and then marked in red. If we’d have had that information before the 2 days away [the first learning community] that would have been key ‘cause we had three clear issues that the patients felt that we could concentrate on, and no matter how much we wanted to do other things, we had to just say, ‘Actually this is what’s important to the patients, so let’s focus on these’. Because you could drown otherwise . . . ‘Let’s not go off-piste, let’s concentrate on these three and do them really well.’
Ward manager
In another site, the patient experience officer involved in the team reported finding it hard to discern areas for improvement from existing data sources, and they returned to the baseline survey and accompanying patient experience interviews to identify new ideas:
So initially I did an analysis of all the data I could find, through our real-time feedback. Which didn’t end up being very useful, because we have FFT scores and we have comments, but I think too many of them are very positive. Which is nice, but didn’t offer any room for improvement, really. And I think some of that’s due to the fact that we know when you ask patients while they’re still in the building they give you a different response to once they’ve left and had time to reflect. So that didn’t really yield very much. Then we tried to do patient diaries. That didn’t really work very well. And then we I suppose had a closer look at the Picker survey and the interviews you’d done [baseline interviews with patients]. And that’s when we pulled out actually maybe there is something we can do about danger signals to look out for. And maybe there is something we can do to look at the discharge process. So I suppose the Picker survey and then some of the sort of comments from your interviews really ended up being the data that we actually acted on.
Patient experience officer
In other cases, the data from the survey chimed with, and confirmed, what staff had already identified as issues from other data or from their own experience. As one head of patient experience said:
We had the issue coming up about patients not always feeling informed enough about their discharge. And some of that had come through the Picker survey as well.
Comments boards
The boundary between collecting data and using them for improvement is not always clear. For example, some sites collected patient and family comments using a ‘bubble board’ or a ‘what matters to me tree’, where people could write feedback on a thought bubble or a leaf (Figures 9 and 10). Although at one level this was data collection, it was also experienced as a cultural shift towards more patient-centred care, a visible expression of commitment to the goal of improving experience. Taking action on specific suggestions was only part of the story:
We had one lady that, she’d had some really sad news, and just sitting with her for 5 minutes, chatting to her; she didn’t want to chat about what she was upset about. So, just chatting to her and saying, ‘There’s a bubble there for you if you want to write anything down,’ and she just wrote a little something. I think it was just a face with a little sad mouth, and that was enough for her. So, I think it just takes something off of someone’s shoulders . . . One lady wrote a whole bubble just on the porridge. She really enjoyed the porridge, and it was the best porridge that she had and it made her feel happy, so we put it up on the wall; we showed her it was up there and she was happy that her comment was up there . . . Since we’ve been doing this, people are recognising things I think a little bit more, and I think that’s what’s changing; not the bubbles on the board and things like that. It’s recognition that somebody needs to have a good experience.
Health-care assistant
In discussing their comments board, one ward manager explained how it seemed more tangible and meaningful to staff at all levels than, for example, traditional audit data. The physical data collection artefact became itself a motivator, an emotional and even enjoyable representation of care:
We have an awful lot of audits, and they’re important, you know. We know we have a lot of things that challenge the quality of the care we give, or that, you know, we have to write reports on, that the other staff find . . . I think it goes over their head a lot of the time, you know, they can’t relate to it. Whereas this, from whatever band you are, this you can relate to.
So, what’s different about this; what is it about this that grabbed people in the way it has?
Well, it’s more to do with how patients experience, how emotionally they feel about the ward, rather than ‘Were their intravenous antibiotics given on time? How many days length of stay? Was that extended?’ You know, it’s about how they feel about it. We get thank you cards, but to see it on a tree when you come in the ward, I just thoroughly enjoyed it.
This sense that comments boards yielded more meaningful information was echoed among staff at lower band levels. One health-care assistant commented ‘the data that we get from that is quite real’. Others noted that being encouraged to think about patient experience data had changed their thinking:
Do you think some of the improvements that have been made on the ward could have been made without using the patient experience data? How much do you think it’s depended on patient experience?
I think a few of them obviously we could have done, but would we have thought about them without the patient experience? I mean with things like the hearing aids going missing. We just became used to it. It’s just become part of our normal daily routine; complaints about this. It’s just part of the normal daily thing. So we’ve never thought about it before as to how can we change this. This Oxford project has opened us up far more, I think, to suggestions and to improvements. There’s always been the capability of doing it. But unless we actually have the thought of doing it in the first place, it’s not going to happen, is it?
So what do you think is the biggest of success so far with this project?
I think it’s bringing everybody together as a team and getting them to see that patient experience matters . . . We know that, but actually doing something about it is different . . . I think people are recognising that it’s not just about caring for somebody; it’s about making their experience good.
When you do your training it’s all very regimental – this is what you do, this is how you do it. And then when you come to work it’s the same. So, you come in, you do your washes, you do your meds, do your pills, you do beds, you do this, you do that, and then you do your notes and then you do it all over again. It almost seems like there’s no time for anything else . . . But actually there is, and it’s just finding the time . . . ‘We’ve got half an hour here, let me sit down, let me have a chat with the patients, see how they’re getting on, do they want to give us any feedback, is there anything that we could change, have they done their menu, do they like the food, like do they need to speak to somebody?’ It is just sitting down and having that extra 5 minutes that they appreciate . . . It does make you more conscious. It’s less task-focused and more patient-focused, which is what the project is about.
In two sites, staff were also invited to record ideas for change on a staff suggestions board. Some team members felt that this played an important part in generating culture change and motivating and empowering staff by giving them a sense that their experience mattered too:
The board that we had up at the beginning was ‘What ideas have you got?’ They’ve actually seen them happen. ‘What would you like to do?’ And they’re thinking, ‘Actually, well, you know, we have got some power here’. Because sometimes, particularly for the lower grades it feels sometimes that you’re a hamster on a wheel and your views aren’t always taken into account.
Ward manager
I think that came from the need for [ward manager] to try and involve all the staff in that ward. And I think she was right. They had been through a very bad time on that ward. And I think the staff really appreciated that they felt some sort of ownership. They could see that the senior staff were serious when they said, ‘We are going to make improvements on this ward’. They could see it in action. And they could see that they were being invited to take part. So I think that was a very good move on . . . And to be able to do those quick wins with that. She had done things straight away.
Patient experience officer
The staff suggestions board generated a few comments that were challenged or removed, either because they were insufficiently specific to inform change, or because the board was considered an inappropriate place to air grievances (e.g. about pay).
Experience-based co-design and narratives
One site used an adapted form of EBCD, a QI method with which the trust was very familiar. The head of patient experience (who was directly involved in leading the EBCD process) chose not to use the generic trigger film provided, instead drawing on existing data from ‘real time’ and ‘right time’ surveys and conversations. These were used to ‘trigger’ initial discussions with patients and identify priorities that were then taken forward in further co-design activities. At the time of the initial co-design discussions, the ward had a relatively long-stay patient group, so staff could establish more relational rapport than in a high-turnover acute ward:
I’m a great fan of trigger films actually. But I like them to be owned by the organisation. I like them to be our films about our people. I don’t actually want to inherit trigger films from elsewhere. And I know that that work has been done to speed up the process and make it easier and facilitate for other people. And I can get why that is. But I think if we’re speaking to 700 people a month every month in ‘real time’ as well as surveying thousands of people every month at the ‘right time’ we don’t need to go elsewhere for a trigger film. We’ve got our own story. And so although we didn’t have one trigger film we did go in to the first meeting with a good understanding of what people that month had said about the ward in that moment and at that month and for the whole of the year that preceded that. And I think that’s a good enough, that’s a good enough trigger. I mean, what are trigger films for? They’re to remind you, set the tone, make the information more resonant, and illustrate the emotion behind this commitment to improve. And I think all of our data, bringing our data to life, looking at the free-text comments, that is our trigger film.
Director of patient experience
The ward manager, initially sceptical of EBCD, explained how she came to view it as something different and collaborative – with patients, but also with colleagues:
It was a lot different to how I thought it was going to be. And it was a lot nicer than I thought it was going to be . . . just to be able to sit down together and to discuss things that perhaps weren’t going so right with your ward and things that were. And it’s nice to have that good feedback. It’s nice to be able to give that good feedback. And also if there are problems, it’s nice to sit down and work it out together . . . When you’re the ward manager you think like the weight of the ward’s on your shoulders . . . Doing the experience-based co-design, it was about sharing and owning something together and working together to achieve something. And I think the session what we did where it was the staff and the service users . . . it was just really nice to sit down with your patients and do something different other than what you would normally do as a nurse.
Ward manager
Experience-based co-design draws particularly on narrative to spark improvement ideas. Other ward teams also reported using narratives, conversation and focus group discussions with patients and carers. The power of stories to generate empathy, understand patients’ experiences and challenge taken-for-granted practices was a common theme, particularly among nursing staff:
I said, ‘Would you be happy to come and talk to us about them so the staff can hear?’ Because it’s very true that they say when a patient puts their pyjamas on and they get into bed, they lose their sense of identity . . . The nearest thing we could do to get the staff transported to that experience was to be talking to somebody that can describe how they felt. You need to put yourself in the place of the person who’s having the treatment, and any way that that can be done, either by sitting and listening to somebody, or being a patient, or just having time to think how you might feel if you were being bed-bathed with a curtain where somebody’s pulling the curtain open and saying, ‘Gladys, do you want another cup of tea?’ when you’re there with half-naked, you know. It’s so part of the environment to us, you’ve really got to rethink and step back, and anything you can do to make people feel they are in that place, and to be looking from the inside out instead of looking from a nurses’ uniform at this, a patient. I love it because I think that’s the most powerful thing.
Ward manager
Observation and diaries
Staff generally did not seem to have previously considered observation as a source of patient experience data, but they found this appealing when introduced to it at the first learning community. Several experimented with observational exercises, including the pro forma in Figure 11.
Another site experimented with the ‘15 steps challenge’ approach:
[XXXX] explained that he had experimented with the observation model since returning from the first learning community. He explained that he had asked a volunteer (who visits the ward regularly) to do a 15 steps observation ‘from entering the hospital to sitting by a patient’s bed’. He described it as a ‘mystery shopper’ exercise in which ‘your brief is to be a patient for the day’. The volunteer was further advised (by XXXX) to ‘be as stupid as possible and ask as many questions as necessary’ . . . The volunteer provided verbal feedback that essentially comprised of the following detail/observations:
Ethnographic field note
The same site also tried patient diaries but found that these did not yield actionable improvement ideas because they were ‘too positive’. (Positive feedback, although welcomed, seemed rarely to be considered explicitly as a useful source for improvement, as we discuss further in The patient feedback silo and the role of the patient experience team.)
[T]hey trialled the diary on the ward with several patients. However, the information that they got back from these diaries was, once again, regarded as unsuitable and inappropriate in terms of quality improvement. In short, data they received here was considered ‘too positive’ and there was nothing critical on which the team could build.
Ethnographic field note
However, brief end-of-shift staff diaries were used successfully at another site (Figure 12), alongside the observations in Figure 11.
Online feedback
Ward teams were introduced to online feedback, particularly Care Opinion (then known as Patient Opinion), at the first learning community. Although trust-level managers might be well aware of Care Opinion postings, the majority of front-line staff seemed unfamiliar with these, but, having seen them, were enthusiastic about their potential. Three ward teams attempted to use Care Opinion but found few relevant postings at ward level, and discovered that their trust’s subscription was only at whole-trust level; this meant that ward staff were unable to respond directly to comments made, and, as a result, the teams took it no further. In a fourth trust, one member of staff reported that the trust was making greater use of Care Opinion following the first learning community meeting. However, there too the subscription did not extend to ward staff and the team did not make direct use of it. Lack of internet access on the ward was also mentioned as an issue.
This weak engagement with Care Opinion echoes the findings reported from the phase 1 national survey of trust patient experience leads. Thus, despite early enthusiasm, efforts to use it quickly faltered without organisational endorsement.
The patient feedback silo and the role of the patient experience team
Patient experience feedback (particularly unsolicited feedback, in the form of NHS Choices or Care Opinion postings, but also solicited survey data) is often treated in a silo, managed at board level and handled separately from data about safety and other aspects of quality. Ward teams rarely seemed to access unsolicited data themselves. Most wards were routinely sent FFT data, but staff often reported having neither the time to look at them nor the skills to interpret them. Again, this echoes the findings from the phase 1 survey. In some sites, analysis was managed on wards’ behalf by the patient experience office.
Negative feedback was recognised as having valuable formative potential; staff reported that it was welcomed, and even sought in some cases, but that it was not always readily available. This may reflect a lack of connection with (for example) local teams responsible for complaints or Care Opinion data, and whether or not complaints are seen in the organisation as a resource for QI:
But our problem is we don’t really get that many bad comments. We get a lot more feedback saying about the wonderful staff and like the fantastic caring and the support that we’ve given to the families.
Ward clerk
At the same time, the potential for negative feedback to generate a defensive and resistant response was noted. One patient experience officer described how her team had had to work hard to demonstrate the value of listening to negative comments, particularly in the case of one critical narrative, which staff had felt to be unfair:
Speaking to some of the people in my team who were involved right from the start of doing the ‘real-time’ work on the wards, they would say there was a lot of resistance, a lot of suspicion. And then as we kind of demonstrated that it was kind of an open process, that it was actually helping to improve things on the wards, there was more acceptance and more wards wanted to join in the process. I found there was one ward in particular I used to hit a bit of a wall. I never felt that welcome at the start and there was always, ‘Oh, the doctors are doing their round’. There seemed to be excuses as to why I couldn’t be there and I used to get quite frustrated. But I have a really good relationship on that ward now. I think it’s about just bringing people round. I had an issue on a ward with a patient which was a really tricky one. A patient who’d had quite, not a very nice experience and I did a patient story around that which is where we go into more detail, with that patient’s consent, about what’s happened. And that story didn’t go down well at all on the ward and the staff got upset and I think they felt it wasn’t fair, that this was the patient’s perception of what had happened, and that they didn’t have the right to reply, and, ‘you’re only seeing it from the patient’s point of view’. But the point we try and make is actually, you know, your perception as a member of staff, you don’t understand the patient’s perception. That’s all we’re trying to do is to provide a mirror to that. And I was quite upset because I felt really kind of caught in the middle. But actually after a few days, the ward manager e-mailed me and said, ‘Actually, I know you’re just doing your job and you’ve given me food for thought and I’m going to try and change something as a result’.
Patient experience officer
An ethnographer working at this site observed how tensions between the patient experience officer and the ward manager played out and were resolved, partly through the intervention of a health-care assistant, which moved the debate on to a more positive footing:
[Health-care assistant] looked quite concerned and walked up and down the line reading each card and post it notes with real care and attention. He seemed genuinely concerned by the range of negative points on the wall. I listened in to what [ward manager] was saying in response to the waiting times. In short, she was not surprised by any of the comments on the wall and stated she was familiar with all the points raised. . . I could hear the frustration and possible anger in her voice [as she spoke to the patient experience team] . . . ‘We’ve tried this before, we had a dedicated person whose role was to respond only to call bells, but it didn’t work.’ Adding, ‘nothing in this project is going to combat that.’ [. . .] At this point [health-care assistant] chipped in and stated that he recognized these difficulties and that although they were negatives each point could be returned as a positive at the subsequent staff version of this event . . . [He] suggested that the comments about dignity and waiting could be fed back to staff to raise awareness of how staff talk to and treat patients. He believed that reflection on these issues will help recognize the problem.
Ethnographic field note
Positive comments were valued at ward level for boosting staff morale:
I mean some of our cards are lovely, you know, they put some real detail in there, you know, and really personalise them, sometimes with names, certain staff’s names. Yeah it does give you a bit of a boost, it makes you feel like you’re doing something right . . . it’s a really positive experience, isn’t it, rather than a kick in the teeth [laughter] when you’re trying your best . . . It’s lovely, it’s just a small thing but it means a lot, yeah definitely.
Health-care assistant
However, their organisational potential for learning and improvement seemed under-recognised. At trust level, positive feedback on Care Opinion tends to be met with a ‘thank you’ that closes the conversation, rather than being explored with wards as a way of identifying what patients value and improving care, or even being used as a performance indicator.
In the same way that patient experience data are siloed, we found that patient experience offices or teams also tended to be functionally separate from teams dealing with patient safety, QI, and/or complaints or patient advice and liaison services:
So you don’t have a huge amount of communication with them [QI team]?
No, no, not at all in fact, which is a bit of a shame, really. I know. I should probably do more around that really. It will be really interesting for me to know what are they actually even working on; I have no idea, no idea. Because then, we also have a service development team . . . And they have QI within their job titles as well. And they have been working with patient experience data because they’re doing a whole project on waiting times . . . purely off the back of patient experience data. So that’s been really positive. But they sit under HR [human resources]. It’s so hard to navigate.
With some notable exceptions (including one site in this study where the patient experience team were driving QI), patient experience as a function within many trusts has largely focused on data collection, ensuring that surveys are completed and FFT response rates are maximised, with a certain amount of passive feedback of findings to ward teams:
At a very kind of hands-on level, talking sort of nurses, HCAs [health-care assistants] kind of thing. How much engagement is there with patient experience do you think?
Oh that’s difficult. I don’t honestly know the answer to that. I mean I guess the sisters and charge nurses run that independently on their own ward. You get some that are far more engaged than others. But to be honest with you, because I’m one person on my own, I don’t know. I have to feed the information and hope that it’s reaching the right people basically . . . I go to things like the band 5, the band 6 development days, I go and talk at trust induction, loads of different places, but actually how much it’s filtering down – I would say that it is more, because I can see it in the FFT, as in I get so much more feedback now; the postcards are literally . . . can barely even keep up with them, we get so many in now. So I know they’re being handed out; I know that staff are engaged in that. But yeah, I don’t know honestly how much they’re involved in improvement work on the wards.
Thus, patient experience staff may not regard (or feel empowered to regard) QI on the basis of patient experience as within their remit. Meanwhile, QI teams often do not see patient experience as a particularly useful or valid source of information for QI. In some sites, this was beginning to change, and, in at least one site, this was partly because of participation in this study:
[Patient experience data] is new to the [QI] team. It was very much safety-focused before. And so everything was mainly quantitative. So, you know, it was very much about, you know, doing time trials of, you know, ‘we took Mr Potato Head apart and pulled him back together and we tried to do it quicker and quicker’. And my joke was, ‘but we didn’t ask Mr Potato Head how he felt’. So we might have got quicker, but it might not be better for the patient.
Head of patient experience
A member of the QI team was included in an initial set-up telephone call with this site but did not get closely involved, seeing it as a project primarily for the patient experience team. By the end of the study, however, the two teams were jointly reworking QI training materials to be more patient-centred:
[We’ve] looked at all the slides and we’ve rewritten their delivery and put in all the different patient experience methodologies, focus groups . . . We’ve adapted what they’re doing and everybody who’s taking a project [on QI] must have a patient experience component of it, otherwise why are they doing it? They can’t just improve safety without doing something about experience.
Head of patient experience
Even in cases where the patient experience team saw QI as definitely within their remit, front-line staff were not always aware of what support the patient experience team could offer. Realising this during the study prompted one team to change its name:
I was surprised to start off with that [ward manager] didn’t know what patient experience did and that we had those resources to actually help the wards do this improvement work, which makes me wonder, ‘wonder what everybody else thinks patient experience do’. So [head of patient experience] has changed our name to patient experience improvement team, which actually is a bit better. Perhaps it says a bit more on the tin of what we do. So that was surprising that she had no idea what we did.
Patient experience officer
Some teams found the practical support of their patient experience team invaluable in helping them make sense of data. Close involvement of the patient experience team – getting alongside ward teams and supporting them in using patient experience data – had a positive effect. Staff felt more confident and more motivated, and were more likely to take on a wider range of projects. Their work was more likely to be recognised and spread beyond the confines of the ward, and to be treated as an integral part of trust quality work.
In other sites we observed a degree of mutual misunderstanding between patient experience officers and front-line teams. One ward sister expressed an ambivalent view that greater input might have been practically helpful, but that, essentially, the front-line team could manage alone:
Yeah, I don’t think we’ve really needed it. I mean perhaps, just breaking down the feedback would be helpful maybe if [patient experience team member] could perhaps do that. But other than that, I don’t think we’ve really needed any assistance.
Ward sister
The relationship between front-line staff and the patient experience office is about more than practical data handling or project management; it reflects wider issues of power, trust or mistrust, and credibility. This relationship, and its impact on the work undertaken, is explored in more depth in Chapter 8.
Using ‘soft’ intelligence and staff experience as data
Stories, informal comments and the daily ward experiences of staff, patients and family are all important but often underutilised resources for QI.
In coding an interview with a core ward team member, one ethnographer noted:
There’s a really sad thing that [XXXX] says, which for me sums up the problem with the industry of patient experience data and quality improvement. She says ‘I hadn’t actually heard of patient experience until I went to the Learning Community’. What she means is that she hadn’t heard of patient experience as A THING; of course she knew about patients’ experiences – plural and embodied on the ward. This focus on data is so unhelpful! Instead, we should be talking about ‘listening to patients’ and ‘learning from patients’, which is probably what nurses and HCAs do anyway on a day-to-day basis.
Ethnographic coding note
Tangible but informal feedback from patients – thank-you cards, gifts of chocolates or biscuits, or a hug for staff before going home – is virtually never treated or recorded as the expression of patient experience that it in fact represents.
There is a continued sense that data have to be countable (or formally recorded in some way) to be perceived as actionable. This may lead to a culture in which there is pressure to turn a minor comment into a complaint before it can be regarded as useful ‘data’ or to demonstrate that it is being treated seriously – and yet, as soon as a helpful, if mildly critical, comment becomes a complaint, it may raise defensive barriers.
We found that ward staff made use of multiple sources of soft intelligence but were not always aware that they were doing so. Staff could not always point to a specific source of patient experience ‘data’ that led to a particular improvement project. Sometimes they reported acting on what they felt they already knew needed changing as a result of caring for and observing people and talking to colleagues:
It [QI intervention] fits right, but I don’t think they’ve done a survey.
Sums up what I think is happening with a lot of the teams . . . the issues are known to staff and they are responding, without being able to identify any specific piece of patient feedback.
In one case, an ethnographer observed how a staff member was imaginatively channelling patient concerns through her use of language:
What strikes me about [XXXX’s] interview is how she speaks in the voice of patients and relatives (using the first person) when she’s narrating what’s needed or what’s happening on the ward. This happens at various points throughout the interview. She puts herself in their shoes. She uses direct speech, often as a dialogue between herself and the patient or relative . . . What this says to me is that her actions are based on empathising with patients and relatives rather than on data that has been extracted from them . . . Empathy and imagination should be positive resources in QI work! Suddenly data come to seem rather dry.
Ethnographic coding note
Staff gleaned insight into patient experience through their close interactions with patients and an experiential understanding of how the ward works. During the course of the study, and partly through discussion at learning community events, staff consciously expanded their interpretation of data (as did the research team and lay panel):
It depends how we look at data. I think in the ward staff before, if you said to them, ‘What’s patient experience data?’ they will say ‘Surveys’. I’m saying to them now data is any feedback at all, wherever that’s coming from and in whatever form, whether that’s coming from focus groups from the patients or anecdotal feedback from staff and patients, it’s all patient experience data . . . 9 out of 10 wards here, if you asked them what data was, they would say surveys: the National Survey, the FFT. Which I must admit when we started this project, the first thing we took along to the ward was all their survey results.
Patient experience officer
In this site, the ward manager introduced a suggestions board to allow all staff to contribute improvement ideas, and to show them that their views and insights were welcomed and sought:
I think that’s reinforced the feeling that, you know you need to remember that the staff are doing a job all the time. They have ideas that might improve the service.
. . .
Things like the hearing aid boxes I think came from the staff suggestion board, didn’t it, rather than from the patient experience data?
Yeah, all of that just came up because we could see the distress. Because like [ward clerk] is the person, the go-to person that they all go to when their hearing aids don’t work. She deals with a lot of the lost property, the distress of the patients that have lost an expensive hearing aid and they can’t communicate at a time when they most need to communicate clearly; the safety . . . That came from, you know, our own experience of what causes them distress.
In some cases, areas for improvement that staff identified could be ‘backed up’ by patient experience data, meeting a perceived need to justify the use of staff wisdom:
We could back those things up by other things that were being shown on the data. We could show from data that patients do better if they’ve got the same staff all the time and that the FFT positives all goes down if there is bank staff. So one of the things [ward manager] brought in was being able to offer ward staff the opportunity to fill vacant slots, rather than bringing in bank staff. So, we could, as I say, we could reinforce it with other data.
Patient experience officer
Conversely, ‘soft’ patient experience data were sometimes seen to require validation in the form of staff experience, as in the following case described by a matron:
I think the caveat with patients’ stories is, it really depends on how true you think that is for the rest of your department and that’s always difficult. But, actually, nurses on wards usually know if something has a ring of truth to it. They will hear that more, won’t they?
Matron
Where patient concerns are identified by staff rather than from patient experience data, there is a risk that staff perspectives are, or continue to be, privileged over patients’ perspectives when the two diverge. A matron acknowledged that staff perceptions may not be a reliable guide:
The challenge is there are several, many staff who work within the trust who will be completely unable to see the impact of their behaviour or their – let’s say – their brusqueness. No, if we have to tell people to say, ‘Hello, my name is . . .’, which we do. So that’s a real fine balance, isn’t it? ‘Cause there are people who won’t get that at all. They won’t see that that’s bad patient experience.
Matron
A head of patient experience raised concerns about not being able to demonstrate that improvement projects really did reflect patient preferences:
See, for me I’d want to say, ‘Well where’s the evidence? Where’s the evidence to substantiate that?’ and, ‘Yes, you might be right and I might agree with you’, or, ‘Yes, I agree with you and we can back this up because of this, or this, or this, or this’. But without any evidence, or something tangible to substantiate that, I think we can – not make mistakes – but we can go wrong in thinking we know what patients want.
Head of patient experience
However, another head of patient experience argued that there were limitations to both ‘hard’ and ‘soft’ data. Given that the views of some patients (e.g. people with communication difficulties or people living with dementia) can be systematically neglected by surveys, she felt that soft intelligence, held by the staff who work closely with them, can play a valuable role:
I think, you know, staff who are close to their patients can see the best. You know, this survey is very much reliant on people who can fill it in. So, actually, you’ve got lots of other patients that you’re perhaps not considering when you just take that data. And it’s also very lengthy. When you look at all the questions, you know, I am bored after 10 questions, let alone, I think there’s 87 or 89. Whereas some of the dementia patients might not be able to tell you, but you can see just by doing certain activities or things with them, that made a difference to them.
Head of patient experience
Discussion: collective sense-making and ‘mindlines’
In analysing our findings, we suggest that staff were engaged in a process of sense-making over time from a range of formal and informal sources of intelligence about patient experience. As described in the field of organisational studies,52,84 sense-making (drawing on symbolic interactionism)85 is a collective interpretive process of social interaction, whereby people arrive at a shared meaning that enables them to move to a course of action.
Ancona86 argues that sense-making:
. . . involves coming up with a plausible understanding – a map – of a shifting world; testing this map with others through data collection, action, and conversation; and then refining, or abandoning, the map depending on how credible it is.
Ancona86
For Weick,52 this perceived credibility or ‘reasonableness’ in planning a course of action is more important than seeking an objective justification. He argues that there is no single factual truth waiting to be discovered, but rather a range of perspectives that may be more or less useful in determining a way forward.
The concept of ‘clinical mindlines’ is a form of sense-making articulated by Gabbay and Le May. 87 Originally developed in the context of understanding how clinicians in general practice use research evidence to make care decisions, it can also be applied to the combined use of formal patient experience ‘data’ and soft intelligence. Gabbay and Le May87 summarise this as follows:
We found that clinicians rarely accessed, appraised, and used explicit evidence directly from research or other formal sources; rare exceptions were where they might consult such sources after dealing with a case that had particularly challenged them. Instead, they relied on what we have called ‘mindlines,’ collectively reinforced, internalised tacit guidelines, which were informed by brief reading, but mainly by their interactions with each other and with opinion leaders, patients . . . and by other sources of largely tacit knowledge that built on their early training and their own and their colleagues’ experience . . . Mindlines were therefore iteratively negotiated with a variety of key actors, often through a range of informal interactions in fluid communities of practice, interactions with and experience of patients, and practice meetings. The result was day to day practice based on socially constituted knowledge.
Gabbay and Le May87
This process resonates with the way in which ward teams in our study could not necessarily identify a linear process of specific pieces of patient experience ‘data’ prompting specific QI projects. Occasionally they could, but more often than not staff described a more informal and iterative process of drawing on clinical experience, observation, interaction and conversation – with each other and with patients and family members – and coalescing around priorities for improvement. Commonly, there is talk of how patient experience data can lead to ‘lightbulb’ moments in changing staff attitudes. Taking a ‘mindlines’ approach might suggest that there can also be a slower process of illumination, a gradual dawning rather than the flicking of a switch.
In some cases, taking part in this study provided a channel for staff to legitimise and act on these existing concerns, to make visible previously neglected problems and to create alliances for improvement. In effect, staff were seeking to create for themselves what Sheard et al. 48 describe as ‘structural legitimacy’: creating the authority and resources to take action, and making a case to higher levels of the organisation. This is an idea we develop further in Chapter 8.
At the same time, pressure at trust and national level to privilege formal, often quantitative, data remains strong. In another HSDR programme study funded under this call (HSDR 14/04/48: INQUIRE),31 an ethnographic analysis of how NHS trusts use online feedback for improvement suggests that data are more likely to be regarded as actionable for QI if they are structured, sought, solicited and/or sanctioned by the organisation. (‘Solicited’ feedback is where the trust invites feedback through channels such as Care Opinion or real-time feedback devices. This contrasts with ‘sought’ feedback, which comes from sources that the trust does not actively direct people towards, such as Facebook and Twitter, but which the trust nonetheless seeks out and uses.)
Equally, data that are unstructured, unsought, unsolicited and/or unsanctioned may be marginalised or disregarded, or they may simply never come to anyone’s attention. The fact that in our study Care Opinion stimulated so much early interest but failed to gain traction once staff were back in their trusts illustrates the impact of a lack of positive organisational sanction. Unintentionally, we may have promoted the use of survey data by providing at-a-glance summaries of baseline survey results.
These findings have led the research team to pose ourselves a number of questions:
- Does this mean that we define patient experience data too narrowly, and do we need to broaden the definition to encompass these informal sources?
- Do data have to come directly from patients to count as patient experience data, or can they come from the accretion of staff intelligence, multiple informal observations, discussion and embodied experience?
- Does it matter if staff harness patient experience data to legitimise action on what they already know (or believe they know)?
- How do we liberate staff creativity, empathy and action, while avoiding reliance on potentially mistaken assumptions that have not been checked against real patient concerns?
Gabbay and Le May87 acknowledge that clinical mindlines ‘may seem to be a dangerous shortcut’, in that they may fail to engage with known research evidence. The same may be true of patient-centred improvement if it fails to engage with patients’ actual needs and preferences, and reinforces continued staff assumptions that they ‘know what their patients think’. The publication of numerous personal accounts by health-care professionals of revelatory experiences once they themselves become patients suggests that it may remain difficult for staff to fully appreciate the priorities and vulnerability of patients. 88
At the same time, we should not rule out the possibility that staff may indeed sometimes ‘know what their patients think’. Accumulated staff wisdom, informal observation and experience may sometimes be a more reliable guide to action than high-level survey data, or comments from a subset of patients who feel particularly strongly about something and who have the capacity and inclination to respond via sanctioned and solicited patient feedback systems.
The rhetoric of ‘putting the patient first’ is, of course, important, but it may carry with it an unfortunate implication that staff should always come second. Staff who feel valued and supported may feel more empowered and confident to engage with patient experience ‘data’. There is certainly a growing body of evidence of, at the very least, a close association between staff motivation/retention and positive patient experience, and most likely a direct causal relationship. 13–16,89 Chapter 7 reports further on improving staff experience as a strategy.
Gabbay and Le May’s87 recommendations for avoiding the ‘dangerous shortcut’ in use of research evidence can be adapted for the use of patient experience data. One key point is that individual practitioners ‘do not have the time (nor usually the skills) to rigorously review and combine all the key sources of tacit and explicit knowledge themselves’ but that key opinion leaders can be trained and supported to undertake this synthesising task – and to understand how to support professional networks and communities of practice, working with rather than against clinical mindlines.
This chimes with our findings that sites with close patient experience team support generally made more progress and were more ambitious in scope than those without. Patient experience officers are an emerging professional group whose role could develop into one of this supportive leadership, provided that they are trained not just to collect data and maximise survey response rates, but also to value and interpret a much broader range of insights. Done well, and perhaps in partnership with a more traditional QI function, this role can help liberate the curiosity of front-line staff, give them permission to use their practical wisdom, guide them and develop their skills to use patient experience insights, and strengthen their confidence to welcome feedback and work directly with patients.
We leave the final word in this chapter to one of the patient experience officers, who sent an e-mail reflecting on how her thinking about her own role had changed and what the trust will be doing differently:
I also learnt a lot from the project and I now do some things differently as I upscale to other wards – one of the main things that became clear was that it was no good just giving the data to the wards – they need support and guidance to understand what the data is telling them and how to implement changes.
What also became clear was that very valuable data could be obtained from just general conversations with patients and once the staff felt empowered to make changes, they did so quickly for these patients.
For us, the success of the project was that the staff became a team and like all new teams they need time to learn to trust each other and to work in a supportive, cohesive way which will certainly lead to improvements in the patient’s experience.
Chapter 7 Improving staff experience
In this chapter, we review findings on improving staff experience and the link between staff and patient experience. Some sites explicitly chose to focus on staff motivation and experience as a precursor to good patient experience. Here the focus was on using staff experience not simply as a conduit for gathering or understanding patient experience, but as a legitimate target for action in its own right, which would in turn influence patient experience through indirect cultural and attitudinal change.
In some instances, the deliberate focus on staff appeared to be consciously informed by an awareness of the evidence linking staff and patient experience:
I think there’s an increased awareness, as there quite rightly should be – if you keep your staff happy and your staff motivated and your staff well looked after, and look after their mental well-being and general health as well, then you know there’s evidence out there that that impacts directly on patient outcomes, doesn’t it? So it’s getting that link, and we’ve certainly introduced a lot of well-being initiatives for staff to try and promote that.
Head of patient experience
The exact nature of causality in the observed relationship between staff and patient experience is complex. Analysis of interviews suggests that staff themselves identify several potential interlinked mechanisms. These can be grouped as follows:
- Motivated staff provide better care (which leads to better patient experience).
- Staff who feel that their experience is taken seriously are more likely to be motivated and receptive to feedback.
- Involvement in patient-centred QI is itself motivating.
- Improving patient experience can directly improve staff experience.
Motivated staff provide better care
Staff in several sites suggested that staff who felt well supported and valued would be better able, emotionally and psychologically, to provide good support to patients. The patient experience officer and ward manager in one site explained this perspective, arguing that good staff experience might be a necessary, but not sufficient, condition for good patient experience:
Happy staff don’t necessarily make happy patients. But unhappy staff will never make good patient experience. So I think if you’re just going to work on patient experience, you are going to make your job twice as hard. I think you’ll get there, but it’s going to be harder. If you can get the staff engaged and come in with you then you’re half way there. So I think on any ward that we take this forward, our first steps, learning from this project, is to look at the staff first. And then the patient experience. So doing it in two steps, because otherwise I think we are almost setting them up to fail just straight away by bringing patient experience without looking at the ward set-up.
Patient experience officer
The patients are very vulnerable. They’re very scared. They’re in a time in their lives where they may never have been in this environment before, and they’re extremely vulnerable. So, making them feel safe and cared for is, you know, hugely important . . . If the staff are smiling and they feel that they can give the care they want, that will come across to the patients, and the patients, in their vulnerable state, will feel more reassured.
Ward manager
The head of patient experience in another site used similar language to describe this relationship. This comment also suggests that getting good FFT scores was in itself motivating:
We’ve actually done a lot of work recently around staff experience. Because in our opinion if you look after the staff they’ll look after the patients. If you don’t have happy staff you’re not going to have happy patients. [Director of nursing] is amazing for that and she’s done so much work . . . We had our highest [FFT] recommendation rate we’ve ever had last month. It’s improved by about 3% over the past year since the director of nursing came into post. And I genuinely believe it’s because we have done so much work with staff.
Head of patient experience
This sentiment was echoed by a health-care assistant in the same site describing the positive effect of staff ideas being discussed and acted on by a local staff council:
[The staff council] is like a group of people that the staff – like we’ve got a light-bulb moment we put in the, by the staffroom. If you have a light-bulb moment you put it up. Somebody put up ‘team building’. Somebody put up ‘clock in staffroom’. Somebody put up ‘a bladder scanner’. So, they take the light-bulb moments; they take it to the council; they then discuss them, try and work on them to try and improve how the employees feel about where they’re working . . . Because I think what we found from doing this project is if the staff are happy and they want to work on the ward, that’s going to reflect onto the patients. The patients are then going to be happy and then that will reflect on the data we get. So that’s how we’re sort of interlinking it.
Health-care assistant
Staff who feel that their experience is taken seriously are more likely to be motivated and receptive to feedback
A number of sites used some form of staff suggestions board to gather views on how both staff and patient experiences could be improved. The rationale was twofold: first, that staff had a valid perspective on patient experiences and ways to improve them (‘soft’ intelligence, as described in Chapter 6); and, second, that seeking staff views and demonstrating that these were acted on would be affirming and empowering to them. This in turn could positively affect their morale and motivation:
But I think that came from the need for [ward manager] to try and involve all the staff in that ward. And I think she was right. And I think the staff really appreciated that they felt some sort of ownership. They could see that the, the senior staff were serious when they said, we are going to make improvements on this ward. They could see it in action. And they could see that they were being invited to take part. So I think that was a very good move on [ward manager]’s part, actually to get them in. And to be able to do those quick wins with that. She had done things straight away.
Patient experience officer
It was also suggested that staff who felt that their views were listened to and their expertise was respected would be more receptive to patient feedback and more likely to want to get involved in QI. The importance of creating a sense of ownership, confidence and ability to act was a common theme:
I don’t think you can do any patient experience until you’ve sorted out your staff experience. They’ve got to come first, and then look at the data. And then – or get them to look, to help them look at the data . . . And empowering them to make that decision. ‘Cause I think if they listened to their gut feeling, that’s often right. And I think what we can do with the data is to prove that what they thought was right or we can back that up. ‘Cause I think they know, don’t they, they’re on the shop floor. When I look at their Post-it notes that they did at the very beginning, some of the ideas that came up there were very much ideas that then the patients brought forward at the focus group or the data has shown to be right. But the staff knew that.
Patient experience officer
Managers argued that respecting staff’s practice-based experience and expertise meant that interventions were more likely to be implemented effectively:
The advantages are, as we know, that actually if the change comes from within, the team are more likely to do it, and also they really understand the workings and the systems on their wards, and so they know what makes it work. So, the advantages of using front-line staff definitely outweigh not using them. I’m just saying because the nursing director says do it, doesn’t really instil any confidence in anybody, let alone me. But if the staff come up with the good ideas – and they’ll also know where the waste is. They’ll also know why they do something in a certain way, and know how to change it.
Director of nursing
You need to remember that the staff are doing a job all the time. They have ideas that might improve the service . . . I’ve always said, ‘This is our team’. If we get told that we have to do something, the team will decide how they do it. Because it’s their ward, and I think that sort of strength and that feeling that the ideas board, their ideas are real. They’re not up in airy-fairy team-building days where you’re all climbing a mountain. They say, ‘We want this because this will make our job easier’.
Ward manager
This suggestion that staff-led improvement may make staff’s job easier, as well as improve care for patients, is explored further in Involvement in patient-centred quality improvement is itself motivating.
Pressures on time and the demands of a challenging work environment can impede a readiness to engage with patients and their concerns. One ward manager described the relaxed state of mind needed to engage with patient concerns. A ward clerk in the same site described the way that stressed staff tended to put up a barrier between themselves and patients and families:
It’s like being on the M25 . . . It’s very difficult to find time to sit and, you know, in a relaxed way listen . . . You’re constantly co-ordinating a ward. You really have to actually say, ‘Look, shut the door, I have the time, you tell me’. It’s a state of mind. I think you really have to be quite relaxed to be able to deal with complaints.
Ward manager
We’re just living on a knife’s edge, you know. It’s quite hard, and you sort of put yourself a bit of a barrier up – but that barrier comes through to everyone, do you know? . . . Some patients do make our lives an awful lot harder and of course it’s going to come across isn’t it, in the staff? It’s going to come across with the way the relatives view us.
Ward clerk
Involvement in patient-centred quality improvement is itself motivating
Some participants described how staff derived fulfilment directly from their involvement in patient-centred QI work. One ward manager explained how it aligned with the intrinsic motivations that drew many staff to work in the NHS:
They’re all really keen. Because I think at the heart of it, they all want to make a difference to – it sounds terribly cheesy – but they all want to make a difference to the patient. That’s what a lot of us are here for. That’s what we do extra hours for. That’s what we work for, and that’s what we work in the NHS for, you know. It’s not an easy place to work but if we have something that can make patients feel more positive about the place, and staff. I mean, go for it, you know. I think they’re all showing a real interest in it.
Ward manager
A matron explained how the trust felt that offering staff the chance to do something different by participating in the project could help re-motivate them after a difficult few months:
I think from a ward level they’ve really embraced it because they’re a ward who were struggling with vacancies, difficult kind of patients, and well-established members of staff of whom a large proportion had left in the last year . . . So the reason partly that I’d chosen them was it gave them an opportunity to be involved with something that was really beneficial, not just for patients but also for staff experience and staff engagement.
Matron
As one health-care assistant remarked, going on study leave to the learning community events as part of the project in itself felt rewarding:
Our ward very quickly, you know, we called it the Oxford project and so that’s very much still what it is, ‘Oh, the Oxford project’. And in our staff meetings we have updates on the Oxford project . . . I can’t say that there’s been any negatives. And, you know, another one of the bonuses is the fact that we’ve got to go to lovely places. So we got to go to Oxford. You know, never in my life would I have, I would never have thought I would have gone on a study session at Oxford University.
Health-care assistant
The head of patient experience in another site commented on the morale-boosting effect not just of taking part in QI through the project, but also of being recognised for it through an award within the trust. Again, the equation of ‘happy staff’ with ‘happy patients’ is evident:
Yeah, we won a patient experience award – that was amazing. It was so good . . . That was about celebrating compliments and positivity, and how do we share the good that comes out, because yeah, like you say, constantly being compared to the national average and being under it and, you know, what else are we supposed to do? How do we get there? This was all about celebrating our staff basically.
And do you think that that helps them to make changes and improvements, getting positive feedback?
Yeah, it boosts morale, it makes people realise actually they’re doing a really good job, but there are a few things that you could improve on. That’s very different than ‘you’re doing rubbish’ . . . I think it makes a massive difference. Happy staff equals happy patients as well; I really believe that. If you support your staff, then they will naturally look after your patients, yeah.
Improving patient experience can directly improve staff experience
Staff identified a possible circular relationship between staff and patient experience. They suggested that poor patient experience could lead patients to behave in ways that made the working lives of employees more difficult. This could emerge from an increase in workloads (e.g. if staff are unable to respond in a timely way to call bells, the patient’s need could become more acute, requiring more staff input in the long run). It could also lead to poor patient–staff interactions, when patients were frustrated, bored or uncomfortable. This in turn could demotivate staff.
Some wards focused on introducing more activities to relieve patient boredom. One rehabilitation ward had already employed a health-care assistant as an activities co-ordinator, an unusual role in an acute hospital setting. He described the benefits of a new interactive computer system:
What’s come out of the project is kind of a computer system what helps patients engage . . . I’d like to think it’s probably gonna be the biggest thing what’s come out of the project . . . It’s amazing the difference [it] makes . . . [It] has reminiscences, has activities, we’ve got horse racing, interactive horse racing . . . We’ve had patients say, ‘I just love [it]. I’ve never had access to this in a hospital before’ . . . Patients are set for the afternoon watching an old movie or watching Dad’s Army. And it engages patients. We’d like to think that we want more interaction in terms of one-to-one stuff or group stuff. But actually sometimes, you know, patients sitting watching a movie, it’s engaging.
Health-care assistant
The ward manager in this site noted that one aim of promoting activities to avoid boredom and frustration was to reduce restlessness and avoid falls. Although they could not be certain that the interactive computer system was the cause, the number of falls on the ward had reduced since its introduction.
Another site had introduced activities boxes with cards, dominoes, colouring books and word searches. This site had also introduced textured gloves (or ‘fiddle mitts’) to occupy restless hands, particularly for people with dementia. Towards the end of the project, a health-care assistant showed these to the ethnographer and described their effects on patients and staff on the ward:
She felt the boxes and mitts were effective first as an indication that they cared about patient well-being and that, in itself, was a valuable message to send to patients. She explained that they were effective in keeping people occupied who otherwise get really bored . . . The introduction of the activity boxes and twiddle mitts has improved patients’ moods . . . That has, in turn, improved staff experience and staff morale.
Ethnographic field note
A ward manager in another site explained how interventions to improve people’s sleep and reduce pain could also directly improve staff experience:
Because even down to the eye patches, you’d say if the patient’s got more sleep their pain threshold’s going to be better, which means they’ll probably need less analgesia, which means that they’re going to be demanding less on the nursing staff . . . It sounds like one small step for mankind; do you know what I mean? But in a place like this where you’ve got a multidisciplinary team and so many different factors, every single improvement will touch someone in some way. [The doctors] think, ‘Eye patches, pah!’ you know. ‘What’s that compared to an endoscopy blah?’ you know. But in actual fact it’s simple but it’s hugely important.
Ward manager
Conclusions
A strong focus on patient experience is a priority for the NHS, but may risk implying that patient and staff experience are somehow in tension with or in opposition to each other, rather than being mutually reinforcing and compatible. Approaches such as EBCD seek to pay attention to both, seeing clinical staff as part of the solution and not part of the problem.
Our original ‘theory of change’, based on previous work,39 was that many experiences that matter most to patients happen in front-line encounters; and that bottom-up engagement in person-centred improvement (as opposed to top-down managerially driven initiatives) can be motivating for front-line staff. Conversely, QI that is perceived to be top-down and not to engage with staff values of patient care is unlikely to lead to sustainable impact and embedded cultural change. Furthermore, at a time when NHS staff have felt under extreme pressure, acknowledging the importance of their own knowledge and experience and paying attention to staff morale and motivation may in turn lead (directly or indirectly) to more patient-centred care.
Our data show that a relationship between staff and patient experience and engagement was understood by front-line staff. These findings align with Luxford et al.’s90 conclusions that a prerequisite for patient-centred care is a sustained focus on staff experience, and with Shaller’s91 argument that:
- to achieve the commitment of front-line staff to patient-centred care, they should be directly involved in the design and implementation of processes
- to deliver patient-centred care, it is important to nurture an environment in which the organisation’s most important asset – its workforce – is valued and treated with the same level of dignity and respect as they are expected to show to patients. As one of his participants argued, to deliver patient-centred care an organisation needs to be human-centred, not just patient-centred.
In Luxford et al.’s90 study, the focus on staff experience tended to be manifested in public demonstration, such as award ceremonies and newsletter articles. Although this was important to some staff interviewed in our study, small, tangible changes in staff’s working environment – even as small as a clock in the staffroom – were also experienced as a demonstration of being valued by the organisation.
Our findings suggest that a sense of control resulting from engagement in the process of change can act as a buffer to feelings of burnout and can re-motivate staff. This resonates with findings from Maben et al.,92 who note that staff seek to alleviate the burden of a high-demand/low-control work environment, low staffing and lack of ward leadership by finding personal satisfaction caring for what they describe as ‘the poppets’: patients they like and enjoy caring for. Other patients who have a poor relationship with staff may feel more like ‘parcels’ and seek to avoid being labelled as difficult or demanding. Staff in our study were clearly aware that they found it harder to be responsive when they were stressed.
Maben et al. 13 conclude that good staff well-being is an antecedent to good patient care, echoing the intuitive observation in this study that ‘happy staff means happy patients’. There is a growing body of evidence to support this. Early work on the NHS staff survey by West et al. 93 showed clear links between staff well-being at work and patient experience and outcomes. The strongest predictor of patient mortality, for example, was the percentage of staff who felt that they were working in well-functioning teams. Significantly for this study, lower mortality also occurred in trusts where staff felt that they had opportunities to influence and contribute to improvements at work.
Dawson’s16 recent quantitative analysis of survey results from 2014 and 2015 found an association between staff experience and patient satisfaction. The following staff experience variables were found to be key predictors of patient satisfaction across a number of domains:
- work pressure felt by staff
- percentage of staff believing that the trust provides equal opportunities for career progression or promotion
- staff satisfaction with resourcing and support
- percentage of staff feeling satisfied with the quality of work and patient care they are able to deliver
- percentage of staff experiencing physical violence from colleagues in past 12 months
- percentage of staff experiencing discrimination at work in past 12 months
- effective team working
- percentage of staff agreeing that patient feedback is used to make informed decisions
- percentage of staff witnessing potentially harmful errors, near-misses or incidents in last month
- fairness and effectiveness of procedures for reporting errors, near-misses or incidents.
The three most important factors specifically for black and minority ethnic staff experience were:
- percentage of staff agreeing that their role makes a difference to patients
- percentage of staff experiencing discrimination at work in past 12 months
- percentage of staff able to contribute towards improvements at work.
However, Dawson16 concludes that good patient care:
is about more than the absence of negative experiences: it is about being able to work effectively together in effective teams, with well-designed jobs.
Dawson,16 p. 7.
Similarly, an analysis by Sizmur and Raleigh15 recently reported that patient experience is poorer where a range of workforce pressures are present: higher spend on agency staff, fewer doctors and especially fewer nurses per bed, and higher bed occupancy. A high proportion of agency staff is likely to make team working more difficult to achieve. This study also found that staff-reported experience and patient feedback were correlated in several areas, particularly between staff perceptions of care quality and patient-reported experience.
In North America, it has been argued that ‘care of the provider’ should be added to the ‘triple aim’ first proposed by Don Berwick – enhancing patient experience, improving population health and reducing costs – and, indeed, that achieving the triple aim is impossible without paying attention to the work life of health-care clinicians. 94
An evaluation of Schwartz Rounds® as an intervention to improve staff experience through reflective peer discussion found a statistically significant improvement in staff psychological well-being. 95 Reported outcomes also included ‘increased empathy and compassion for patients and colleagues and positive changes in practice’. Taken together, these findings suggest that a binary top-down/bottom-up view of change is too limited, and that a whole-organisation focus on staff experience and well-being is important in creating a receptive context for patient-centred improvement.
As well as reinforcing this growing body of evidence on the importance of staff experience and of a positive working environment, we suggest that this study adds two new points:
- The act of undertaking QI activity could in itself improve staff morale, appealing to staff’s intrinsic motivation to provide good patient care. This could operate through the variables of ‘effective team working’ and an increased sense that ‘patient feedback is used to make informed decisions’.
- Improvements to patient experience may reduce patients’ boredom and frustration, and create more positive staff–patient relationships, thereby improving staff’s own experience and making work less stressful.
Further research is needed to test these propositions.
Chapter 8 The effect of team-based capital on quality improvement projects in NHS settings
Introduction
In this chapter, we explore how front-line ward teams were constituted, the mix of staff of different grades and professions that they included, and how this composition interacted with each local context to affect the outcome of their QI work.
Each team sought to assemble the individuals who they felt were best placed to contribute to the project and its underlying QI ethos. Up to five core team members (including a patient or a family member) were invited to attend the learning community events, although in practice only one family carer and one other lay person were involved across all of the core teams. Each team could involve additional staff, either as core members or to contribute to specific aspects of a relevant improvement programme. For example, in one site, five junior doctors contributed to the project as part of their introduction to clinical practice on the ward involved. The constitution of the teams, and whom they chose to involve locally, was intentionally left largely for the teams themselves to determine, although it was emphasised that the first learning community was aimed at front-line staff. Director-level staff were discouraged from attending in order to give front-line teams ownership of the project.
The disciplinary and professional composition of each team varied considerably. For example, two teams consisted almost entirely of front-line ward nursing staff, whereas others included a mix of clinical and non-clinical roles, including, for example, ward manager, medical registrar, senior sister, health-care assistant, quality nurse, ward clerk, activities co-ordinator, patient experience officer, head of patient experience, pharmacist and matron. The collective core membership of the teams also reflected noticeable variation in NHS band level. As an illustration, one team consisted almost entirely of nursing staff at bands 7 and 8 (likely to be in leadership roles on the ward), in contrast to another team that comprised staff from bands 3–9.
Conjoining staff resources
The assemblage of individuals from a range of NHS band levels is important in understanding the extent to which teams were able to make progress. The multidisciplinary and mixed-seniority composition of each team brought together a range of NHS resources in newly formed patterns, resulting in collective team benefit. These NHS resources may be described as distinctive physical, material, personal, experiential and relational characteristics associated with specific roles and positions in the participating NHS trusts. For example, the range of resources available to a ward-based senior sister differs significantly from those accessed and owned by a patient experience officer; and a patient experience officer with a clinical background will wield different resources from a colleague in the same role without clinical qualifications. The formation of the six teams resulted in the conjoining of assorted NHS-specific resources to be deployed as part of QI work. Although such pooling of employment-related skills and resources is by no means unique in interdisciplinary NHS teamwork, participation in the study created new assemblages that might not otherwise have occurred.
As will be discussed throughout this chapter, the overall success (or otherwise) of the various improvement projects was observed to be strongly influenced by the range of skills and resources available to each team. In analysing these findings, therefore, we draw on the sociological concept of capital.
Capital
Capital is a familiar concept in social science. Although capital has been defined in a number of ways,96–98 these definitions typically focus on the way in which resources and assets interact within human relationships. It is the construct of capital as advocated by Pierre Bourdieu97,99 that has perhaps become the most influential in contemporary sociology, and that informs this analysis.
Bourdieu’s97,99 typology proposes that capital exists in four distinct forms: economic, cultural, social and symbolic. Each of these is summarised below. For Bourdieu, capital emerges from interactions with two further key components of his theory of social action (practice): field and habitus. Habitus can be defined as the way one unconsciously acts, interacts and behaves within the social world, according to the norms, traditions and unwritten rules of one’s particular group. Field provides the context for habitus: it is the arena in which one acts, and it both frames conduct and provides opportunities for creating capital. In this study, the field is the NHS workplace. Habitus, field and capital do not exist in isolation from each other.
Capital: Bourdieu’s typology
Economic capital is perhaps the most familiar form of capital, which may exist at both individual and institutional level. Examples of economic capital include fiscal wealth, possessions, housing status and the range of physical resources and materials available to and owned by individuals and institutions in society.
Cultural capital emerges from social position (or social class) but may also be recognised within organisations and institutions (especially employment settings) as part of specific norms and values, and membership of particular institutions. Examples of cultural capital include knowledge, education, qualifications, credentials (such as work-related job titles), expertise and specific work-related skills.
Social capital relates to the range and depth of social networks available at an individual and societal level. It may be viewed as the product of purposely creating, establishing and maintaining relationships with others in order to acquire further social capital. Bourdieu97 considered social capital to be intrinsically linked with the perpetuation of class-based social inequality, reproducing cliques and conventions associated with elitism and power.
Symbolic capital relates to further embodied forms of socially recognised values, norms and achievements held at an individual or an institutional level. Examples of symbolic capital include an individual’s reputation, prestige and social status, which serve to reinforce and reproduce particular power relationships. Examples of symbolic capital may be found within any hierarchically organised institution such as the NHS.
The formation of teams of people from different disciplines and levels of seniority established a network of individuals with assorted levels of the four forms of capital described above. The conjoining of their respective individual capital constructed a new form of collective capital, specific to NHS teamwork, that we term ‘capital in the NHS’ (i.e. capital that is specific to NHS services and service providers). We suggest that the four forms of capital combine within each of the six teams to establish a mechanism that drives and facilitates their US-PEx QI work.
Here we consciously use the term ‘mechanism’ to allude to realist evaluation, to propose that, in each of our NHS contexts, this team capital is a mechanism that helps us explain the outcomes in each site: what teams were able to achieve, and where they came up against constraints or lack of power.
Findings: capital in NHS settings
In the sections that follow, selected examples demonstrate the range of physical, embodied and institutionalised forms of capital available to ward teams as they developed their QI projects. We look first at positive examples of capital. Although these are discussed under Bourdieu’s97 four categories, in practice the different forms often work in combination.
Economic capital in NHS settings
As noted above, Bourdieu’s97 construction of economic capital relates to physical objects and artefacts that demonstrate some form of ‘wealth’ within a particular field of practice. In the following illustrations, this ‘wealth’ includes assets and resources such as departmental infrastructure relating to office/ward equipment, staff capacity and time, and materials for designing and implementing improvement projects.
‘Space’ was a recurring theme across the case study sites. This was physical space in terms of places to meet, but also metaphorical space in terms of finding time within the ward habitus of busy days and shift patterns with limited handover times, and ‘head space’, as one person described it, to focus on QI against competing daily priorities.
In the interview extract below, a senior sister explains how resources provided by one individual from the trust’s patient experience office not only facilitated the work conducted by the team but also provided ‘relief’ in terms of capacity and demand at ward level.
Quotations from interviews are verbatim but edited for readability by removing repeated words, ums and ers. Ellipses presented in square brackets indicate that a more substantial portion of text has been removed.
I am happy with it just because they’ve got the resource to be able to do that. Because it’s not, although if you look at my job description, improving is obviously something that’s in there. But I’ve got a full-time job that doesn’t really give me any additional time to take on a big project. And this is a big project. So you do need a project manager that’s got the time and resource and that’s her job. So yeah, I’m more than happy for them to do it. It’s a relief that I’ve got someone else in the background. Because I know if it was left to me to organise meetings, it probably wouldn’t get very far. It’s quite good to have someone from the outside saying, ‘We’re gonna meet on this day’.
Senior sister
In the following example from another site, the degree of ‘organisational distress’ caused by the pressures of winter 2016–17 reduced the capacity and availability of some clinical staff to fully engage with the project. However, ward-based resources (office space and equipment) were made available to a staff member from the patient experience office, and his presence as an extra, non-clinical, person on the ward enabled QI work to continue on a more ad hoc basis while clinical staff managed increased demand on their time. In this case, the ward manager’s economic capital (in the form of office space) facilitated work by the patient experience officer (who brought economic capital in the form of time) for the benefit of the collective effort:
The problem is [ward manager] has been under such pressure, it’s been really difficult for us to even meet now, which we were doing quite well at. We’ve struggled with getting together with [ward manager] being able to free time up. Which is why we’ve offered [patient experience officer] from our team to come and help out. [Ward manager] has offered for him to sit in her office literally and try and start working on something, the information pack, with her throwing bits of information at him. Because that might be the only way we can get it done.
Head of patient experience
The senior sister reflected on the benefits of this combining of distinctive resources held by individuals from two different departments:
Who is in the best position to do patient experience improvement work on the ward? Is it patient experience officers coming onto the ward? Or nursing staff based on the ward?
I think it has to be collaborative. I think one or the other alone would find it very difficult. For the reason of the resources that we’ve got. And the patient experience team with the lack – not the lack of, but they’re not here all the time. And it’s much easier for you to evaluate anything about patient experience if – we’re here all the time. So we know what’s going on, on the ground. Whereas someone coming in, you have to establish what’s going on. So that’s why it works better having the two groups.
This convergence of economic capital from the patient experience office with cultural capital derived from the knowledge and expertise of the front-line ward staff was observed across other sites. For example, one head of patient experience recognised from the outset the pressure on ward staff and the consequent need to provide support with resources from the patient experience team:
You need to do that for the staff for projects. Because it’s around understanding what their demands on their time are, and however much their commitment is, at the end of the day patient care comes first. And if you’re on a really busy shift all day, every day, you don’t have the kind of space or the emotional energy or the time, so it’s around supporting them as much as possible to get a better output.
Head of patient experience
On an ethnographer’s first visit to this site, it was observed that, although they did not have clinical backgrounds, patient experience team members seemed attuned to life on the wards for patients and staff, and directly familiar with the pressures faced by clinical staff. In this sense they brought a form of cultural capital, including experience of past QI projects, as well as economic capital in the form of time:
It became clear from the discussion that [the two patient experience officers present] are, in a sense, quite ‘front line’. Although they are not clinicians, they are hands-on, operational staff involved in working directly with patients and staff . . . Examples were an EBCD project on the stroke ward and ‘time for tea’ on [ward name] where patients and staff would gather for tea and cake to facilitate social contact between patients and between patients and staff.
Ethnographic field note
The patient experience officer in the project team at this site recognised that a combination of resources was needed for the projects to be successful:
I do have a great understanding of the pressures on the ward staff. And I sort of learned a way to get them involved actually by me doing a lot of the work, but them coming up with the ideas and taking the ownership of it. I have the head space. I am not tied up, they are so under pressure. And I have, I can stand back. I do have the luxury of actually just sitting here and thinking about what’s the next thing to do . . . I couldn’t do it without them . . . I would come up with these ideas, but they would tell me whether this is likely to work on the ward or not. Because they could be more realistic about it. Sometimes I’ve got rose-tinted glasses on and they can bring me back down to, back down to earth, really and say, ‘We would love to be able to, but we can’t’.
Patient experience officer
This was confirmed, in hindsight, by another team member:
Without working with the patient experience department, we wouldn’t have done half the stuff. I mean it’s been the fact that we’ve all worked together that we’ve got stuff done. Because I haven’t got the time to do it.
Ward manager
The patient experience team member’s contribution might relate directly to their access to patient experience data, and their expertise in manipulating and interpreting it. However, their role was also valued as that of organiser and ‘project manager’. In relatively well-resourced patient experience departments, they were more likely than front-line ward staff to be able to spend time sourcing materials, generating resources, taking plans forward and generally keeping up momentum:
I think we could not have made the progress that we did without [patient experience officer]. So, I think it’s very easy, months slip by, don’t they, when you have great intentions. You have a meeting and everybody is enthusiastic. Everybody has their actions. Months go by and nothing happens, because when you are busy, time flies and if no-one holds you to account for that or nobody project manages it and then everybody gets demotivated, ‘cause nothing happened. So I think the good about having [patient experience officer] was she was able to come back and not exactly hold people to account, but say, ‘Do you remember we discussed this last time and what’s happened about that? What can I help you with?’
Matron
Conversely, teams with limited interaction with their patient experience office seemed to make less progress (see Negative capital in NHS settings).
Economic capital in the form of small amounts of funding to support QI interventions was obtained in some sites. The ward manager in one site explained how they had used their participation in the US-PEx study as leverage to obtain funding for interactive computer games and entertainment to relieve patient boredom:
We were able to go and seek some additional funding to be able to purchase those, but as part of the work that we were doing. So, yeah, and I think having those on our wards helps the experience.
So doing the US-PEx project helped to get that?
No, just helped us because it was part, doing the US-PEx project and our aim was all about well-being and activities and things. So that took our focus to obtaining these pieces of equipment, yeah, which are quite expensive. But we were able to go and put in a bid to [a] charity and get some, and obviously plead our case, and get some money for that.
Obtaining even small amounts of funding could be difficult. In another site, the team applied for funding from the League of Friends. They called on the support of a consultant, whose perceived greater symbolic and social capital made them feel more comfortable applying for funds:
So I’ve supported them in that they want to do 250 welcome packs on a trial basis and see how it impacts on patient experience, and so we’ve put in a bid to our League of Friends and they’ve got to present their case next Tuesday. So, all I’ve done is the paperwork to say, ‘This is what we want to do, and we want £1000 please’. And I’m pretty confident actually, this is just what the League of Friends like to do, so I’m sure they won’t turn us down . . .
What was the reason for you doing that [the paperwork] and not them [nurses/health-care assistants]?
Because I’ve done loads of bids before, usually for equipment and yeah, I know how to do it, and I think they looked at it and thought, ‘Oh god this is going to take us hours,’ but in fact it doesn’t. You know, it took me half an hour. It’s then just getting the signatures because it’s quite . . . so they have to get our signature and then the divisional manager’s signature. And then the finance manager’s signature. So, three signatures before it can actually go to the League of Friends. So, yeah I suppose the nurses would struggle with that because they don’t know these people and that’s something that I could just do at the click of a button.
A staff nurse in the same ward commented:
I’m a bit nervous about the presentation in front of the League of Friends, because obviously if they say no then we’ll have to reassess the situation again. But I’m hoping that with [consultant] on our side then she definitely speaks volume and she’s got a presence about her that I think is quite, you know she’s just quite important really in this trust, so that should work in our favour.
Staff nurse
Although the funding application was successful, the front-line team was uncertain about what would happen when all of the funds had been spent (see Negative economic capital).
The anxiety of the more junior staff, associated with feeling unable to command economic resources (and, therefore, unable to continue their QI project), contrasts with the easy confidence of the consultant, who assumes that she need only ask and, perhaps as a result, carries ‘a presence about her’.
Social capital in NHS settings
Social capital relates to the range of social networks and alliances available at an individual or organisational level and may include qualities such as leadership, empathy, reciprocity and trust. In the context of this study, social capital may be viewed as individuals working to create, establish and/or maintain relationships/alliances with others in order to access and accrue further social capital for collective team benefit. This is illustrated in the following examples, commencing with the view that leadership within the team is a shared and delegated activity in which joint leadership serves to secure access to particular networks or resources. Indeed, the following illustration also demonstrates how it can be a mechanism for accruing further individual capital (here in terms of individual promotion):
I do notice occasionally, when I’m observing [the team], that sometimes you appear to be the leader leading, but sometimes it’s [ward manager] too.
So [ward manager] hasn’t done any of the project documentation, she doesn’t do that when we’re meeting, she doesn’t know when we’re going to Oxford . . . That’s how I see the role of a project manager. I don’t necessarily see a project manager’s role as leading every meeting. So I’m running a project, I have about 15 other things that I do alongside this. As [ward manager] is the team manager for me, it’s really important that she is really involved. She knows the patients better than me. She knows what the ward needs better than me. And if she wasn’t on board, there’d be no point doing this project . . . But in terms of what’s going on, the contact for Oxford, making sure we’ve got all the dates, making sure the project’s actually moving, making sure the documentation’s done, that’s me. But [patient experience officer] is supporting me. She’s kind of my kind of project support, and she’s been fab. And she’s actually just been promoted within the [patient experience] team. This [the study project] is something that she’s picking up as additional.
In the following example, a patient experience team member in one site contrasts well-functioning relationships between her team and the clinical areas in her trust with what she had seen of other participating sites. Here, the embedded and routine structural practices (or ‘habitus’) of the relevant trust are characterised by networks and alliances across the ‘field’ of clinical and non-clinical departments, resulting in a form of naturally occurring social capital:
Does that mean there was resistance to your office? Is that what you mean?
No, I don’t think to our office. I think our team is fairly well (established) . . . We’ve got a good relationship with clinical areas ‘cause we help them out like with reporting and things. So listening to some of the other trusts at the last [learning community event] it felt quite, it almost felt like they, they weren’t, they’re not like scared of the patient experience office, but . . . it felt like there was no relationship. Whereas we regularly meet with, communicate with matrons and sisters. ‘Cause [name], our lead, is part of the corporate nursing team and she goes to all of the nurse and midwifery boards, she goes to nursing leadership meetings, between our team and front-line staff. So I don’t think there’s, it’s not a resistance to our office.
In the following example, a senior director reflects on the individual leadership qualities of the ward manager (a form of social capital), which in turn provide benefit (symbolic and cultural capital) to the trust as a whole. She is recognised as a leader who listens and acts, is able to engage with criticism, and is well liked and respected both in her team on the ward and by those in senior positions. Her personal social capital therefore benefits relationships with those above her, below her and across the ward:
In the [EBCD] meeting, [ward manager] was also there. And I was just wondering if there was any kind of conflict there between [ward manager] listening to her colleagues talking about the ward [in a way] that might have been critical.
But she is a member of staff and she’s a pivotal member of staff. So to have a staff meeting without one of your key leaders doesn’t make sense. But you would be mindful of power balances . . . but not in [ward manager’s] case. But if you were aware of a leader who you think would dominate or suppress people, then you would give them other means of recording their views. So people did have confidential diaries. It wasn’t at odds with what people said on the day. And actually, in the meeting people were able to say both. They were able to say ‘I feel sad about this. I, we could do more there’ and also, ‘I feel very proud’. And actually their strong reinforcement of the leadership on the ward is real. You know, that they want to say that to [ward manager]. Not because they were sucking up. But I think genuinely they feel very supported by her as a manager. Because they’ve had a change in manager as well. And they’re describing a shift of performance and culture on the ward that [ward manager] can take credit for. So, no, I think it was right to have her there.
In another site, a health-care assistant described the difficulty of persuading peers on the ward to become involved in the project and spreading her personal enthusiasm for a patient-centred approach:
I feel sometimes that not everybody is joining in . . . Not necessarily us four [core team]; us four are doing it. But other people, they don’t seem to be. See, like I’m the sort of person that is, if I’m working in a bay for example, and I’ve done all my jobs and I’ve got 5 or 10 minutes, I’ll sit down with a patient and have a chat or do some colouring or do something. I find a lot of the others don’t do that; they’d rather have a chat with each other . . . I wish everybody would have the same enthusiasm as me.
Health-care assistant
However, social capital in the form of the leadership of the junior sister at the same site sparked staff activity beyond the core team, reflecting the capital associated with a higher staff band:
So we’ve got people sort of not involved in the group but outside the group that have come in and helped us put together things, and have taken our views on board and actually developed something that the whole ward would enjoy and all the patients will enjoy . . . So we’ve got people involved that we know are very passionate about people as well and passionate about their roles, and engaged them; because if we engage them we can start engaging down the line. And then those individuals then will engage the new starters on the ward to engage as well. So in engaging the right people, so it gets engaged down the line.
Junior sister
Symbolic capital in NHS settings
As noted earlier, examples of symbolic capital according to Bourdieu’s97 classification include an individual’s reputation, prestige, honour and status within a particular social setting. We have noted above an instance where a ward consultant’s symbolic capital helped the team obtain funding. The overt courting of symbolic capital associated with ‘medical prestige’ and ‘clinical power’ was a deliberate strategy embraced by the team in another site. In the first illustration, it was not necessarily additional clinical expertise or knowledge (cultural capital) that assisted the team, but the prestige of the specialists’ clinical role, and their status as gatekeepers to particular organisational processes:
We didn’t think about involving the discharge team because we were told that it was [the ward’s] project only. So we didn’t even think that that was something that we could do. Otherwise we probably would have done from the beginning. So it definitely made a massive difference. For a lot of the discharge stuff you would be completely reliant on a doctor engaging with his colleagues and doing the legwork and gathering data. And also with the patient information leaflets – you have to have a medic to have a look at them. And unless you’ve got somebody actually already that knows about the project, you’re constantly chasing people round. I don’t think it was their expertise that made the difference. I think it was just having them and the role that they’re in was necessary for the project.
Senior sister
In the same ward, the inclusion of a medical registrar as an additional core member provided impetus as a direct result of the symbolic (and social) capital attached to his post, which meant that he could accelerate a particular area of improvement, recruit additional help and delegate activity as part of a ‘junior doctor project’. By contrast, the patient experience project manager recognised that her own symbolic capital was insufficient to provide influence or motivation within the nursing team. This, combined with a lack of economic capital in the form of time, slowed progress:
The biggest turning point has been getting the medical team on board. ‘Cause I feel like as soon as we had [medical registrar] involved, then suddenly things have –
Why was that, then? Why do you think he increased the pace of the project?
So before, I feel like all the energy was coming from me. But I’m not part of the clinical team. Well, it feels like it’s me, you know, trying to keep things moving. But actually, it didn’t feel like there was a lot of motivation in the clinical team. Or a combination of not enough time, they didn’t have enough time and there wasn’t enough drive to make things happen . . . He has a team of junior doctors, so he’s delegated things. They’ve had to do a project, so it’s kind of all tied in quite well. But, yeah. I mean we’ve got the information sheets done now.
Cultural capital in NHS settings
As with economic, symbolic and social capital, cultural capital may exist at institutional or individual level. Cultural capital typically becomes manifest in overt or covert demonstrations of expertise in, or knowledge of, particular forms of practice. In NHS settings, this may relate to the clinical and medical knowledge associated with a particular ward or patient group. Similarly, expertise in QI methods and handling patient experience data reflects another form of relevant cultural capital. The following illustrations explore the conjoining of cultural capital from these two distinct sectors within one trust.
In the following quotation, a clinical ward team member highlights the value of the patient experience and QI expertise that senior staff in the trust brought to the ward. She also implies an increase in individual-level capital across the team concerned:
I have more of an understanding now and an appreciation and respect for what [director] was trying to achieve and why [she] was pushing for us to do [QI] in this way . . . So I think that she had so much involvement at that time just because she had had so much experience in working in that way before – and trying to ensure that we did it to the best of our ability. But just to get the best out of it. And I think she was probably the right person and that was the right thing to do. And she’s then gone away and left [head of patient experience] and myself and [other team members] to run the project.
Ward manager
The ward manager reflected on the comparative success that the patient experience office had had in working collaboratively with wards across the trust as part of routine practice, not specific to this study. This had led to the production of meaningful knowledge (i.e. cultural capital), but also social capital in the form of a network of relationships:
So what’s your explanation?
For why it works well? So within the trust the patient experience team will be known by every ward within the organisation. But the likes of [patient experience officer] and other people who do the same job as her, every ward gets to know your patient experience team because they come on and they do your audits. And they always come on and they form really good relationships with you. [Patient experience officer] is fabulous in doing what she does.
The patient experience officer in question, although agreeing that relationships were generally very strong and collaborative across the trust, reported that, in fact, she had occasionally encountered resistance or defensiveness to critical patient feedback in other wards. This suggests that the cultural capital she had to offer was not always straightforwardly recognised or welcomed.
The team at this site had the widest range of staff in terms of level of seniority, and offered an example of how an individual at one of the lowest band levels within the NHS (a band 3 assistant from the patient experience office) could contribute. This non-clinical team member devised a qualitative and quantitative method for monitoring call-bell use (already identified as a particularly problematic issue) and produced a wealth of data for QI. This ward-level awareness of persistent call-bell use was combined with individual creativity and knowledge of QI to produce cultural capital for the trust:
So the standout thing for me from your [call bell] report was the quality of that work. So that leads me to ask, where’s your training come from that you were able to produce such a high standard [report]?
Just over time, I guess. Just working in the trust and working in patient experience as well. There’s a lot of data, sort of working [with data]. I’m quite good with Excel. I did a course in it back in high school, like a diploma in digital applications or something like that. And that was heavily based around Excel. So I’ve got quite a bit of knowledge round it.
OK. So was it something that you think you brought to the table? Or was it training provided by the office team?
No, it wasn’t. It was me that did it. You know what I mean? So it was me that brought it to the table.
The ward manager in another site emphasised the importance of acknowledging the cultural capital of all ward staff, engaging them in developing ideas for QI, and generating a sense of solidarity and cohesion, because ‘it’s their ward, and . . . their ideas are real’.
This was endorsed by the team’s patient experience officer:
My main observation of the whole project is how it’s reinforced the importance of having the staff engaged in doing this and starting from that level, working up. Whereas before we’ve always sort of got the data. We’ve sent it to them and just said, ‘Send us your action plans’ and left it to them. Now, I think involving the staff and getting the staff to come up with the initiatives has made such a difference. That’s been the key thing for me. In the presentations I’m doing now, I’m changing the presentations to put the emphasis not on the initiatives that we put forward, but on the importance of having the staff on that.
Patient experience officer
‘Negative’ capital in NHS settings
So far the emphasis has been on how teams of individual NHS staff were able to harness various forms of capital to have a positive impact on QI. We now turn to negative capital: the lack or absence of capital, the impact that this had on staff and how this constrained their ability to make progress.
The range of negative experiences articulated by front-line staff typically reflects forms of structural inequality in the NHS, and disparities connected to roles and responsibilities within the organisation. Experiences of negative capital were more typically described by team members on lower NHS band scales (bands 3–5), whereas examples of positive capital more commonly came from staff at bands 5–9. These illustrations highlight various barriers and difficulties encountered by front-line staff at lower band levels. As we have seen in the previous section, both clinical and non-clinical staff at lower band levels could and did wield capital. However, their ability to do so was greater when working in teams with access to a broad range of professional networks and forms of capital, complementing the cultural capital of front-line nursing and health-care assistant staff.
Negative economic capital
In this section we highlight how reduced access to economic capital of various forms may dis-able participation in particular forms of improvement work.
As noted earlier, access to dedicated space for meetings and discussions was a common limiting factor. In one site, an ethnographer described a search for a room by a member of staff:
[She] left her room and took us to the family room, but noting that it was now occupied by junior doctors in meeting she decided we would have the meeting in the reception area! En route to the reception area, she spotted an empty single bed room (approx. 8 ft by 4 ft room) and decided to hold the meeting there.
Ethnographic field note
In another site, staff held meetings off site and in their own time because there was neither figurative ‘space’ in the working day, given the demands of their work and differing shift patterns, nor an attractive or accessible physical space in which to meet.
The impact of limited access to other kinds of resources was also evident. Sites often had to improvise with whatever materials they could find, for example to make displays, comments boards or leaflets. In one case a lack of adhesive tack meant that a set of printed posters stayed in a pile until someone from the patient experience office brought some tack to the ward. Although sometimes quite small amounts of funding (perhaps from trust funds or the League of Friends) could enable teams to introduce improvements, such as welcome or discharge packs for patients, these were sometimes hard-fought battles:
It’s hurdles all the time; there’s nothing that’s straightforward. We have found hurdles all of the time . . . Trying to get money to start with. You know, we were phoning up in our own time, phoning people, companies – ‘can you do this for such a price? Can you do this for such a price?’ You know, ‘what is it for?’ ‘Mm, well I can do it for this price’. So, we were bartering all of the time, and that’s like ‘really, should we be doing this?’ And then we initially paid for it out of our own money and we got the money back.
Health-care assistant
This interviewee was also concerned about the precariousness of funding. These comments demonstrate the relationship between low symbolic and social capital and low economic capital, and how hierarchies may disempower staff and create inefficiencies:
We’ve done this all on our own. Yeah so, which is really upsetting really because, you know, we’re trying to do something for our patients you know. And management don’t seem to be there with us. Or they’re keeping very quiet.
Yeah. Is there anything that would have been helpful to have from management? Ideally what would you have appreciated them doing?
Well, it’s like we’re now on our last couple of boxes [of packs for patients], which we’re all worried about . . . What I can see happening is it will stop because we’ve run out and then it’ll just get pushed under the carpet and then that will be forgotten about . . . I’m here at quite a low level, I have to rely on [senior sister], I have to rely on [consultant], and they’re busy; they’ve got their own things going on.
Yeah. Do you think they would go to the higher levels?
I would hope they would. Because we’ve had such brilliant feedback, but I can see it dwindling and going and getting lost. It’s got to be charity funded because there’s no way the hospital is going to fund it. I mean, because there’s posters going around about saving every penny that you possibly can. They’re not going to be funding that; you know they’re not going to fund it. Because, although we can make them for £1.19, we have 30 or 40 admissions a day.
One site had been placed in financial special measures. Staff were keen to maintain the intervention, but were worried that it might be difficult to sustain funding even for cheap items:
So, do you think that improvements that have been made as part of this project will continue over time?
Yes, I don’t think we’ll even try to take away what we’ve done because it’s working. You know, why would you sort of go back to how it was before, which isn’t working, and everything that we’ve implemented I believe, well hope, I say I believe. As long as we can carry on getting the funding for things like the masks, you know the plugs, because we’re not even allowed to buy pens any more, you know, 5 pence for a pen.
In the following illustration from another site, a junior member of staff (at NHS band level 4) articulates difficulties with accessing IT. These are partly personal difficulties (relating to cultural capital), but also reflect a lack of economic capital in the form of restricted IT access associated with NHS band level:
Because, it’s like I said to you, and primarily because I’m a real technophobe, so that’s me, I don’t check my e-mails sufficiently, but . . .
Is that because of work demands or because of other things?
I just hate checking the computer.
At home or at work?
I can’t check, I haven’t got security level to do that [from home] . . .
Oh right, you’re locked out?
Yeah. It’s like on a ‘need to know’, I’m not high enough. [Senior sister] has got access. But she’s a band 8.
Negative symbolic capital
As suggested above, band level is a question of differences in not just economic but also symbolic capital. The following account of work routines on the ward demonstrates the relative lack of power of staff at lower NHS bands (in this case band 4) and the ‘gravitas’ that a more senior staff member brings:
So, yeah, there’s quite a lot of cascading in e-mails and training. So there’s now a registered nurse that’s on board with it. And I’ve got a band 2 health-care assistant, so there’s a band 2, a band 4 and a band 6, because she’s one of the senior staff nurses. Because sometimes you do need a bit of gravitas . . . In general people tend to listen more to a higher banding.
Health-care assistant
Similar problems of communication and interdepartmental co-operation were evident in another site. In this example, the member of the patient experience office assigned to the project felt that the ward team had incorrectly assumed that she lacked clinical experience and therefore did not involve her as much as she had hoped. Her perception was that she did not have sufficient prestige or clinical standing to be welcomed on to the ward. This meant that the ward team perhaps missed out on the cultural capital she could have brought to the project in terms of knowledge of patient experience feedback. Perceptions started to change when she wore a nurse’s uniform, a marker of symbolic capital within the ward:
So then going through to now . . . I’ve never seen you again [after 1 year of fieldwork].
No, you haven’t. I didn’t get an invite to the meeting after the one in October. I think I contacted you and had an e-mail literally the day after you’d been up. Never got an answer from [senior nurse] at all. I kept e-mailing and asking when the next meeting was, ‘When was the next meeting? Did you . . . ? You know, was I able to attend, de, de, de, dah?’ And I just, kind of, basically thought, ‘Well, do you know what? You’re obviously not interested in having me there so that’s fine’. I’ve got plenty to do, and my role had in fairness had completely taken off.
And later:
. . . I think it wasn’t until I went into uniform that people actually started to actually talk to me . . . It’s almost like they started to take me seriously. Because if you’re a person that’s not in uniform and you’re going on the ward, irrespective of who you say you are, you’re asking them to do something that they see as increasing their job load or whatever, ‘you can’t possibly understand because, you know, you don’t wear a uniform’. So ‘where do you think you’re coming from? You’re not a nurse’. And actually, I am a nurse. I have been where you are . . . and so, they do start to respond to you differently.
The next example comes from a site with good interdepartmental collaboration and a diverse project team. Despite this good internal working, an encounter between a creative and enthusiastic band 3 health-care assistant and a senior clinician from another trust (not part of the study) demonstrated how the symbolic capital of status and position interacts with the cultural capital of knowledge and experience to reproduce institutional inequality. The health-care assistant had developed a visual method for reporting and promoting the progress of the team’s project. Despite these attempts at building on personal cultural capital, the individual was subsequently humbled at an awards ceremony by a more senior and prestigious clinician:
I was gonna ask you, do you still keep those [reports] up to date?
Not, not really, no, to be honest with you. I had a bit of a moment in December when I went to an awards [ceremony]. And I had this woman have a good old go at me. And I think she kind of took advantage of lack of knowledge behind a certain area . . . Basically we went to this awards ceremony and my colleague was talking about her work. And then this other person said, ‘I’m up for an award because I’ve done x, y and z. I’ve got 30,000 Twitter followers’, all of that jazz. And I was interested, she seemed really passionate. And my colleague said, ‘Oh, well, [health-care assistant] does a lot of work behind dementia’. So we started talking and, yeah, she had a good go at me . . . I think her words were, ‘You’ve got a lot to learn’, basically. And I went, ‘I appreciate that and I know I have. And people like you, I love coming to these events and learning from you’. She actually won an award as well . . . And didn’t motivate me to say the least . . . I felt a bit flat . . . I call them mood hoovers. When they suck all the goodness and life out of you. They’re mood hoovers.
This criticism was so shaming and demotivating that the health-care assistant ceased all further work on the method concerned. He commented that band 3 ‘is quite low. And I have to face quite a lot of adversity. And I’ve been really tired’. The fact that team working on the project in this site was strong and diverse meant that good progress with QI was maintained despite this incident, although more might have been achieved if it had not happened. However, it does point to the possibility that the NHS may not be using all of the available talent.
A colleague in the same trust, also at band 3, appeared to recognise the limitations and constraints associated with his lower band level, within both the patient experience office and the wider trust. This was despite having personally led, designed and managed a significant QI project for the ward:
Thinking about the future. Would you be able to use this work as the basis for any future work or –?
I’m not sure. I’ll only know when the time comes. You know what I mean? I couldn’t tell you.
Is that because of your position? So if you had a more senior position in the office . . . ?
Oh, I’d have a lot more influence over certain things and stuff like that. You know, I’m only band 3. So . . . [shrugs shoulders].
Negative social capital
Social capital is a mechanism for increasing opportunities by the deliberate harnessing of alliances and relationships for personal or team advantage. In the following observation, the opposite seems to be the case. The patient experience officer felt that members of the ward team resisted forming alliances (which might have benefited the improvement project) in the belief that keeping the project to themselves would maximise the team’s prestige and credit within the trust:
What I have found about certain colleagues on that team is they’re very insular . . . It was very, ‘No, this is our project, you know, you don’t need to be here.’ . . . I just don’t think it occurred to them to invite me. I don’t think, I don’t know whether it was an intentional ‘cut out’. I just don’t think it occurred to them . . . I don’t think they thought I could bring anything to it because I’m not ward-based. So I genuinely don’t think they thought I could bring anything to it. I’m not suggesting it was a conscious thing.
Patient experience officer
Negative social capital may also be produced by the loss of a significant team member. In the following quotation, a head of patient experience considers the impact of a replacement ward manager on the core team’s collective efforts following the departure of a key individual, the outgoing ward manager (a band 8 NHS employee), noting the resulting loss of collective capital from the core team:
I think obviously her moving on is not ideal in terms of the project because she was so into it and [replacement member of staff] that’s come in she knew nothing about the project but she certainly seems interested in it . . . So I don’t think it could ever be positive that [ward manager] left, because she was involved in it, and she had this excellent leadership and everyone really liked her, and they’re devastated to see her go. So it couldn’t possibly have a positive impact her leaving but I’m hoping that she left at a good time in terms of some of the stuff being implemented.
Head of patient experience
In one trust, a ward team composed of relatively junior staff was not well networked with colleagues at higher levels or with the patient experience team. They felt that the QI work was unlikely to be sustainable:
Would you do it again?
Truthfully, probably no . . . It’s all about, you know, time and people being behind you, and if you haven’t got the time, and you haven’t got the people behind you, you’re already come unstuck, haven’t you, really? We want to do the best for our patients but at the end of the day you can only do so much.
The director of nursing at the trust had been initially unaware of the negative experiences and sense of feeling unsupported among front-line team members. She recognised that, with more senior support, the project could have been more ambitious and achieved more, and individuals could also have gained more from it:
I think the project itself for those that are involved has gone well. I must say, from my point of view, is I wasn’t aware of it early. I’m a little bit disappointed in terms of that some of those individuals weren’t perhaps supported as much as they could have been, thinking about potential opportunities . . . With some support and guidance and more involvement from senior people, they could have encouraged those individuals more. But I think the value to the individuals that participated, you can’t detract from that. I think that was a real positive . . . I just think they could have been perhaps encouraged to think about – you know, I’m not saying that’s a bad idea in any way, but, you know, were there any other opportunities, how could they have expanded on that . . . It was a golden opportunity to take part in that and just could they have got anything more from it . . . There was so much opportunity there but I’m not convinced from my understanding about how much buy-in there was from the more senior people.
Director of nursing
One team wanted to address issues surrounding patients’ experiences of being discharged from hospital, and identified the need to raise this at the regular consultants’ meeting. However, both the ward team and the patient experience officer working with them seemed unsure how to get permission to attend the meeting. As a result, the plans to improve experiences of being discharged remained unrealised.
Negative cultural capital
If cultural capital refers to the breadth and depth of culturally relevant knowledge held by individuals and organisations, then limitations of such knowledge may also be defined as negative cultural capital. As already noted, more than one of the project teams chose to restrict their core membership almost entirely to ward-based nurses and health-care assistants. To some extent this was a literal interpretation of the original guidelines from the research team to focus on the importance of a front-line-led approach. However, other teams chose from the outset to interpret ‘front-line’ to include a key individual from their local patient experience team who had been assigned to work closely with them and who attended the learning community.
The inadvertent impact of a unidisciplinary, ward-focused approach was the restriction of the type and range of cultural capital more widely available, particularly when compared with teams that drew directly on the support of individuals from the patient experience office. This constraint became most evident in two teams. Both were motivated and made progress with their chosen projects, but a lack of QI skills and experience of handling patient experience data perhaps reduced the scope and ambition of work they might have undertaken. Lack of dedicated time was also cited as an obstacle. In one site in particular this led to significant demotivation and a strategy of basing their work on similar projects conducted elsewhere in the country rather than using their own local data. As one team member (role anonymised) explained:
Why do you think the team pulled back and were less motivated?
OK. I think real or perceived pressures at work. Like the volume of activity. And I think some of that is perception rather than reality. And I’m not sure that the team ever really got to grips with what they were going to do. Unless I missed something. It always felt like they were doing a bit of pinching with pride from other people. And trying to base what they were going to do on what patients had said, but never really getting there.
And throughout the project, the type of data that was used. How would you summarise the type of data that the ward team decided to work with?
I think it was ill thought through, erratic and inconsistent.
Yeah? Can you explain a bit more about that? Why do you think that?
Well, you can’t set up a survey with a set of questions and then decide you’re going to lump the answers together to make it report what you want it to report. Rather than what it really does.
In one site where the core team was made up of relatively junior ward-based staff, their progress appeared to be impeded by a lack of understanding of the trust structures and processes. The patient experience office at this site called a meeting after the first learning community event to allow the core team to share what they had learned and discuss what support might be offered. Yet from a health-care assistant’s perspective, the purpose of the meeting and the roles of people present remained opaque, preventing staff from making the most of the opportunity to benefit from the cultural capital of the patient experience team:
And there was a lady at the top of the table; I couldn’t even tell you what her name is but apparently she’s quite high in the hospital. Don’t ask me her name because I don’t even know . . . I think the lady at the top, she called the meeting.
You don’t know what she’s in charge of?
Well no she is in . . . it’s something . . . well, no it’s gone . . . I don’t know; I haven’t got a clue. I should know but I can’t remember . . . I couldn’t understand why we went to that meeting, and I know it sounds stupid and I went, ‘Well what are we here for?’ But apparently we had to.
Perceived lack of knowledge (i.e. cultural capital) was sometimes used to justify the exclusion of certain staff members from the project, for example non-clinicians who were not perceived to have the same understanding of front-line practice:
So [consultant] has been very supportive, and [senior sister], but nobody else really. But then I don’t really expect them to be, to be perfectly honest. Sometimes I’d rather just be left to my own devices. They’re not on the front line; they don’t do what we do in a day; even [senior sister] doesn’t really. [Consultant] does but in a different kind of manner, so I think we are the front-line staff.
Staff nurse
It may also be the case that ward teams who had been through a difficult time saw the project as a vehicle to rebuild internal team confidence and pride, and to demonstrate their own ability to manage improvement. The ethnographer in this site noted a strong sense of front-line team camaraderie to the exclusion of others who did not share the privileged insights that they felt derived from their daily proximity to patients. Although this proximity to patients is itself a form of capital, including a wider range of staff from the organisation could have strengthened the capital available to them for QI.
Conclusion
The formation of teams of people from different disciplines and levels of seniority automatically establishes a network of individuals with assorted levels of the four forms of capital listed above. The conjoining of their individual capital constructed a new form of collective capital, specific to NHS teamwork, that we term ‘team capital in NHS settings’ (i.e. capital that is specific to NHS services and service providers). We suggest that the four forms of capital merge as one within each of the six teams to establish a mechanism that drives and facilitates their US-PEx QI work.
Here we consciously use the term ‘mechanism’ to allude to realist evaluation in order to propose that, in each of our NHS contexts, this team capital is a mechanism that helps us explain the outcomes in each site: what teams were able to achieve, and where they came up against constraints or lack of power.
The illustrations above highlight a wide range of strengths, weaknesses, opportunities and threats that may be encountered in the formation of teams consisting of mixed band levels from a range of clinical and non-clinical disciplines. Strengths and opportunities relate to the potential for teams to apply innovative and creative methods within improvement projects, the naturally occurring conjoining of assorted capital, the value of clinical and non-clinical staff working as one in genuinely collaborative approaches, and the possibility of maximising staff potential through involvement in such multidisciplinary teamwork. The assembling of such diverse teams has the potential to better negotiate institutional norms and practices and facilitate the QI efforts of the teams involved. In short, conjoined capital provides opportunities for access to material and embodied resources that enable good practice and facilitate improved performance.
Weaknesses and threats, however, relate to the teams’ potential to maintain the routines of the institution and to observe the hierarchical expectations of power (or lack of power) associated with team members’ band level. Teams that choose to restrict membership to a particular discipline (whether clinical or non-clinical), consisting of individuals from similar band levels, may severely limit the level of conjoined capital within their network and inadvertently establish a non-collaborative partnership. As demonstrated in the section on ‘negative capital’, such practice may serve to dis-able positive outcomes and constrain the goal of QI.
The relative lack of patient and family involvement in core team membership could be regarded as a missed opportunity that could have added to and strengthened the resources and skills available for teams to draw on.
In terms of the practical implications arising from this analysis, it is clear that the formation of the team responsible for such QI work is paramount. Purposive selection and formation of the improvement team should aim to recruit, where possible, both clinical and non-clinical expertise and, equally, to vary the levels of seniority within the team. In such selection procedures, the design process establishes a pool of naturally occurring (cultural, social and symbolic) capital. In practice, the conjoined capital that emerges from this process (with access to a wide variety of resources, networks and alliances) places the QI team at an immediate advantage in developing further team-based capital.
Second, leadership for QI needs to recognise and realise opportunities from the collective capital available within the assembled team. This approach challenges normal hierarchy, aiming instead to generate a more equal and magnanimous relationship within the team and to provide opportunities for individuals to promote and develop social and symbolic capital. It fosters the development of innovative working relationships and methods for QI that differ from established institutionalised norms and practices. For example, health-care assistants and consultants may work alongside non-clinical staff to design an action plan. Innovative work such as this is driven by the conjoined capital within the team, and may also significantly challenge the institutionalised norms and practices of the health-care setting concerned, but with the aim of producing a beneficial outcome for patients and staff.
These suggestions are by no means radical and are consistent with other research findings. For example, in their review of QI, Alderwick et al. 100 identify a series of lessons for NHS leaders, which include sharing responsibility for QI with leaders at all levels; developing skills and capabilities for improvement; focusing on relationships and culture; enabling front-line staff to engage in QI; involving patients, service users and carers; and not looking for ‘magic bullets or quick fixes’. The emphasis on relationships, capabilities and working as a coherent group within an established system aligns with the account of team capital given above.
Chapter 9 Dissemination and impact: developing guidance and training for the NHS
In addition to the normal academic outputs of this report, conference papers and peer-reviewed journal articles (in which our lay and staff co-investigators JB and MG will be involved), a key part of the dissemination strategy for this research was to develop an online toolkit for NHS staff on understanding and using patient experience data for QI.
During the lifetime of the study, research into stakeholder perspectives on toolkits from health-care research was undertaken as part of a PhD project at the University of Manchester,82 and early findings informed the methods adopted for phase 3. Several projects funded under the same HSDR call established a learning set at the outset of the study period, and a number of these projects had planned to develop a toolkit, either as an output of the study or as an intervention to be tested during the study. Therefore, time was set aside at one learning set meeting in March 2017 to share thinking about toolkits, with the involvement of the PhD student (Charlotte Sharp). Several investigators were also interviewed anonymously for the PhD project. The learning set meeting acted as a focus group to sense-check preliminary themes.
A further learning set meeting was organised in October 2017 as a dissemination event with policy-makers and senior managers from NHS England, the Department of Health and Social Care and NHS Improvement. Emerging findings from the funded studies were presented to representatives from a wide range of organisations, both from the HSDR programme-funded projects and from the PhD study of toolkits.
It was noted that toolkits are increasingly common as a dissemination route, but that both researchers and funders had reservations about their usefulness. The word ‘toolkit’ itself evokes some cynicism and resistance. Nonetheless, the potential value of some form of practical guidance for NHS staff arising from health-care research was recognised.
It was also suggested that toolkits were more likely to have an impact if they were designed iteratively rather than imposed, produced by a recognised source with skilful design and marketing, and disseminated and sustained by an external champion.
Influenced by these findings, and in discussion with our Study Steering Committee, a decision was made not to develop our proposed toolkit in-house, but to commission The Point of Care Foundation to produce and host it. The Point of Care Foundation already offers two well-known and well-used toolkits on EBCD and PFCC; adding to this ‘stable’ of existing resources is anticipated to maximise awareness and use of the US-PEx resources. Furthermore, The Point of Care Foundation already has strong branding and access to high-quality design and content preparation. Finally, its work has the support and involvement of NHS England.
The core of the toolkit was the content of the Resource Book already produced for sites in phase 2. This was revised and edited to include findings from the US-PEx study and illustrative films from staff in some of the participating sites. Draft ideas for the toolkit were discussed at the third learning community with front-line teams, and an invitation to take part in short interviews for the toolkit was issued. Those who took part from three of the six sites consented separately to being identified, but their comments are not linked to the anonymised data in this report. Lead responsibility for writing the toolkit and sourcing the illustrative case study material was taken by Eleanor Stanley, commissioned by The Point of Care Foundation. Eleanor spoke to members of the research team, front-line ward staff and patient experience officers to guide the writing process.
Figure 13 provides a screenshot of an example page.
One key decision has been to avoid the word ‘toolkit’ and instead to describe the resource as a guide. It is intended to be a resource that staff can dip in to rather than a step-by-step ‘how to’ guide. This reflects the findings that there is no one right way to improve patient experience, and that soft intelligence can be as useful as more traditional formal ‘data’ in prompting reflection.
The guide is now available online [see www.pointofcarefoundation.org.uk/resource/using-patient-experience-for-improvement/ (accessed 12 June 2019)].
Findings from INQUIRE (HSDR 14/04/48) will also be added to the guide as a module at a later date.
The Point of Care Foundation has also designed a new four-module training programme for NHS patient experience officers, the first of which was led by principal investigator Louise Locock. This has been a fortuitous development, in that we have been able to develop the training course and the online guide side by side. The guide provides an ongoing resource to staff as they move through the modules, and graduates of the training programme will be a natural route to further dissemination through their trusts. Early cohorts will also be invited to give further feedback on the content and to suggest revisions or additions. The second cohort started training in October 2018.
Finally, a further dissemination event was held in June 2018 between projects funded under the HSDR call and the NIHR Dissemination Centre. At the event in October 2017 with policy-makers and managers, it was clear from presentations by each HSDR programme-funded study that there was considerable consistency in findings and that a set of common themes and concerns was emerging. Those who attended the event expressed strong interest in a synthesised digest of findings across the different studies. This remains under discussion with the NIHR Dissemination Centre.
The Study Steering Committee has suggested preparing a short summary of findings aimed specifically at front-line nursing staff through RCNi (a subsidiary of the Royal College of Nursing), and our lay panel has proposed an infographic to summarise the findings. The lay panel has also discussed preparing a paper for publication based on their chapter.
Chapter 10 Conclusions and implications
This study set out to explore whether or not and how front-line staff engage with patient experience data; what they find credible, useful and motivating; and how they can best be supported to work on patient-centred QI. Our original theory of change suggested that high-level organisational support is necessary, but not sufficient, for person-centred service improvement; that many experiences that matter most to patients happen in front-line encounters; and that bottom-up engagement in person-centred improvement (as opposed to top-down, managerially driven initiatives) can be motivating for front-line staff, consistent with evidence that patient experience seems to be better in wards that have motivated staff.
Over the course of the study, several key themes emerged to refine and add to our original assumptions:
- All but one trust responding to our phase 1 survey reported having a dedicated person responsible for co-ordinating the collection and use of patient experience data.
- However, only half of trusts responding had a specific plan or strategy for the collection and use of patient experience data; 60% said that their QI strategy included how they would use patient experience data.
- Survey data remain the most commonly recognised and valued form of patient experience data.
- Staff strongly welcomed more locally relevant, ward-specific survey data.
- Ward teams found the infographics produced to accompany the baseline survey particularly engaging as a guide to action.
- Other sources of data, such as patient narratives and observation exercises, are intuitively appealing and motivating to staff, but staff may lack the confidence to use these for improvement. Cultural preferences for survey data remain.
- Positive comments from patients are welcome and motivating for staff, but they are sometimes seen as not useful for QI.
- Unstructured and unsought online feedback, such as Care Opinion, is rarely used proactively. Although staff may find this feedback interesting and potentially useful, they do not necessarily have the organisational support to work with it.
- Other forms of ‘soft’ and informal intelligence (e.g. daily interactions with patients and families, staff observations and experiences as they work, informal comments) may all be valuable routes to understanding and improving patient experience, but are often not formally recognised or authorised as such by either front-line staff or senior leaders.
- Staff enacting improvements to make care more patient-centred could not always point to a specific source of patient experience ‘data’ that led to that project. Sometimes they reported acting on what they felt they already knew needed changing.
- What counts as patient experience ‘data’ could be expanded to include these informal staff perspectives, although this is not without risk, if it leads to staff assuming that they know what patients want and do not need to ask them.
- There was more focus on tangible changes, such as leaflets and discharge or welcome packs, than on behaviours, given the difficulty of instigating wider cultural change at front-line level.
- Improving staff experience was a focus in its own right in some sites. This was on the basis that a good working environment enables and motivates staff to provide a better care experience for patients.
- Getting involved in QI may in itself be motivating for staff, appealing to intrinsic values of care and giving staff a greater sense of control.
- Teams that combine a range of staff from different backgrounds and levels of seniority command higher levels of ‘capital’ for QI, in terms of time, skills, resources and networks, and the ability to mobilise support from the wider organisation.
- The close involvement and support of patient experience officers was particularly important in adding to ‘team capital’ and helping to offset the effects of winter workload pressures.
- Teams involved patients to varying degrees in individual QI projects, but involving them in the core team managing the work was rare. This could represent a missed opportunity to further strengthen the resources and skills available to teams.
In analysing these findings, we have drawn partly on the idea that staff are sometimes engaged in unconscious ‘performance’ of particular norms and behaviours, and partly on symbolic interactionism. 85 Symbolic interactionism suggests that people make sense of the social world around them in an interpretive process of creating meaning; these meanings are arrived at through social interaction with others. In the unique setting of the NHS, this is through interaction with colleagues from their own and other disciplines, and with patients and families.
As staff seek to make sense of multiple sources of formal and informal intelligence about patient experience, we argue that they are engaging in a process of collective interpretation and agreeing a plausible course of action. This is not so much about establishing ‘facts’ as about building meaning and motivation.
We propose broadening the sense-making concept of ‘clinical mindlines’, as developed by Gabbay and Le May,87 to encompass both clinical and non-clinical stocks of knowledge, which we might describe as ‘team mindlines’. This includes the perspective and knowledge of the ward clerk, the health-care assistant, the patient experience officer, the staff nurse, the ward manager and the consultant – and, ideally, the patient and their family. Team mindlines may find a parallel in the patient safety literature with the concept of ‘exnovation’: understanding how good safety is often accomplished through staff’s existing, taken-for-granted practices or ‘hidden competence’, rather than through explicit new innovations. 101
The richness of these varied and complementary perspectives joined together in a single team creates ‘team-based capital’, which we suggest is a key mechanism through which patient-centred QI can be achieved in the NHS. Team-based capital operates through the differing knowledge and skills that individual team members bring, but also through their access to material resources, time and influence. This resonates with studies showing that there is lower patient mortality in trusts with a higher percentage of staff who feel that they work in a well-structured team, whose members work closely and effectively together, and where staff report feeling able to influence and contribute to improvement work. 93
Working together may also generate a sense of confidence and authority to act. Our findings have much in common with Sheard et al.’s48 analysis of the conditions for responding to patient feedback in the context of patient safety research, particularly:
- structural legitimacy (i.e. staff perceive that they have sufficient ownership, autonomy and resource available to establish a coherent plan of action in response to patient feedback)
- organisational readiness (i.e. the capacity for interdepartmental working and collaboration at meso level, and senior hospital management/organisational support for staff to work on improvement).
During the lifetime of this study, evidence that staff experience is a predictor of patient experience continued to grow. 15,16 Indeed, staff experience may be the single most important factor affecting patient experience. From the patient safety field, the study of ‘human factors’ has stressed the importance of going beyond a ‘checklists’ approach and addressing aspects of individual staff experience such as hunger, dehydration, tiredness, emotion and stress, as well as leadership and the extent to which the team collectively is looking out for each other. 102
This is not to say that we should abandon efforts to use patient experience data in QI and simply focus on improving the working lives of staff, although clearly the latter is important. But it perhaps means that we should focus more on the importance of making involvement in patient-centred improvement enjoyable, creative and inspiring as part of improving staff experience.
Participating in QI that appeals to staff’s existing values and ethic of care, and goes with the grain of what they already know through ‘soft’ intelligence, can itself be a rewarding experience, provided that it is not imposed top-down. Feeling part of a collaborative team with colleagues from different backgrounds and levels of seniority can not only unlock resources needed for QI, but also be an enjoyable working experience and improve the working environment.
Thus, staff and patient experience do not need to be seen as an either/or choice, but as mutually reinforcing. Patient experience data remain vital in triggering action and feeling, reminding staff why they value good patient experience and why poor patient experience is at odds with what motivates them to come to work. Numerical data can help build a sense of urgency and a case for change. The narrative persuasive power of patient stories (and, indeed, staff stories) and the immediacy of direct observation can reconnect staff at risk of burnout with the emotions of patients and the importance of relationships. 103
Although there is a risk that staff could simply trust their own judgement and fail to test it against patients’ own needs and preferences, this is not inevitable. Involving a range of staff from different backgrounds may help; the support and involvement of team members from the patient experience office may play a particularly important role here in guarding against easy assumptions. This new cadre of NHS staff have the potential to act as brokers between accumulated staff wisdom, formal patient experience data and ‘soft’ intelligence, helping staff to involve patients directly in what they do and supporting their sense-making to include patient perspectives. However, the development needs of this group of staff have yet to be fully recognised. 104 Improved formal training and recognition could strengthen their organisational legitimacy and credibility. This could complement other NHS leadership programmes aimed at middle managers, such as the Elizabeth Garrett Anderson programme (www.leadershipacademy.nhs.uk/programmes/elizabeth-garrett-anderson-programme/), that invite participants to use patient experience data to support service improvements for their work-based assessed assignments.
Our revised theory of change would still hold that high-level organisational support is necessary, but not sufficient, for person-centred service improvement; that many experiences that matter most to patients happen in front-line encounters; and that bottom-up engagement in person-centred improvement can be motivating for front-line staff. To this we would add the following key summary points:
- The organisational endorsement or rejection of specific forms of patient experience intelligence as ‘data’ affects whether or not staff feel that these are actionable.
- A focus on staff experience and staff ‘intelligence’ helps to create a context in which person-centred improvement can flourish.
- Strong team-based capital increases the likelihood of change.
- Partnership with, and close involvement of, patient experience office staff is a particularly important way to strengthen team-based capital.
Although our fieldwork was conducted in acute hospital ward settings, the themes that emerged in analysis have relevance for wider NHS settings, such as mental health, community services and primary care, and for those working to improve care across boundaries. Our findings in relation to team-based capital and the specific value of involving a varied range of people and disciplines could be particularly relevant to QI in integrated care settings. What is often characterised as a challenge – trying to bring together multiple perspectives – could instead become a factor making patient-centred improvement more likely than siloed working. This resonates with findings on the value of formal and informal networks to support QI:
[Networks] can focus on it directly and exclusively – unlike most organisations, which have other primary functions, whatever their commitment to quality. Networks can provide a neutral environment where individuals from different organisations, disciplines and constituencies can collaborate on an equal footing, freed from the constraints and competition created by more hierarchical structures.
The Health Foundation,60 p. 5
Limitations
With regard to the phase 1 survey of patient experience leads, we note that, given that only 39% of hospital NHS trusts responded, it is possible that those who responded held stronger views about patient experience data.
This study was primarily an exploratory ethnographic study of how and why NHS front-line staff do or do not use patient experience data for QI. It was not a ‘what works?’ study designed to demonstrate whether particular types of patient experience data or QI approaches are more effective than others. We therefore cannot draw firm conclusions on this point, and nor can we comment on the cost-effectiveness of different approaches.
The study used a focused, team ethnography approach, which brings both challenges and advantages. The nature of ethnographic work is discussed more fully in Chapter 2, ‘Team’ and ‘focused’ ethnography.
The research team adopted a formative and supportive approach to working with the participating sites, and did not wish to create a climate in which the sites were being ranked on ‘success’, particularly given their very different starting points and amount of organisational experience with patient-centred QI methods. There is always a difficult balance to be struck between helping sites achieve the most they can, and letting events unfold and observing what choices sites make and what helps or hinders them. It is arguable that, by providing learning communities and ongoing support, the study created an artificial environment that could result in overperformance compared with normal practice.
In fact, as noted in the report (see Wider contextual factors: NHS pressures and ‘organisational distress’), the pressure on staff was such that the uptake of webinars and ongoing improvement support was very limited, and several ward teams struggled to make headway. Furthermore, most trusts have QI and patient experience support teams in-house who could provide similar input (even if, as our results show, this is not always taken up in practice).
‘Winter pressures’ during the study period affected sites’ ability to pursue their improvement projects and, indeed, to take part in study fieldwork on some occasions. It was hoped that participating sites would be able to provide some routine data (e.g. on vacancies and sickness levels) to enable us to judge whether some sites had experienced this more than others as part of understanding the local context in each site. Unfortunately, this was not possible in all sites, and the figures are too incomplete to permit meaningful comparison. However, ethnographic data would suggest that these pressures in the system were an issue for all sites.
We did not set out to compare the improvement activities of the ward selected in each site with other wards in the same trust. We are therefore unable to conclude whether working on an explicitly patient experience-focused improvement project at a time of intense pressure was more or less likely to be maintained than other kinds of QI work in other wards.
The relatively low level of patient involvement in the core ward teams was a source of disappointment. With hindsight, perhaps more could have been done to address this; the research team strongly encouraged teams to bring a patient with them to the first learning community, but this was not compulsory. A few teams whose R&D confirmation came through only just before the event would have found it challenging to find a patient team member at such short notice. The lay panel advising the research team offered to get involved with sites, although this was not strictly part of their role (see Chapter 3), and the offer was not taken up. At the same time, the low level of involvement is an interesting finding in itself.
There is some discussion of the limitations of the before-and-after patient experience survey in Appendix 4. One issue was the timing of administration. The benchmark survey covered patients discharged in January–March 2016. The sampling period for the follow-up survey was originally planned to cover the same period in 2017. However, because the unusually severe winter pressures affected teams’ ability to pursue their QI projects, the research team was concerned that using the same sampling period would mean assessing sites while it was still too early for any changes to have had an impact. Although, ideally, the post-intervention survey would have been conducted over the same calendar months to eliminate any seasonality effect on people’s responses, the risk of potentially not capturing the impacts of the interventions was considered to outweigh this limitation.
In the event, the survey showed very few statistically significant changes in patient experience. This partly reflects the difficulty of creating a survey instrument that can be used across all contexts but is sufficiently sensitive to the great variety of local change initiatives and priorities. It may also reflect the fact that some sites chose a longer-term strategy of improving staff experience as a route to improving patient experience.
Despite these limitations, the study offers important theoretical and empirical insights into how staff engage with patient experience data for QI, and how team composition and involvement from the local patient experience team can affect what happens in practice.
Implications for practice
Building on the findings summarised above, we offer the following suggestions for NHS practice, in no particular order of priority:
- Provide specialised training and career development pathways for trust staff involved in patient experience teams. Recruitment could look to include a diverse range of staff from both clinical and non-clinical backgrounds.
- Strengthen existing moves towards greater integration and co-working between teams responsible for patient experience and QI in trusts. This includes seeing the role of patient experience teams as going beyond data collection to working on improvement. For QI teams, it includes focusing on patient experience and patient involvement, as well as targeting safety and efficiency.
- Improve the availability of more tailored, local, ward-specific surveys, and provide support and facilitation for staff to use these (e.g. infographics).
- Increase training in using less obvious forms of data, and support both front-line staff and boards to see these as legitimate, actionable sources of improvement ideas. The new training programme in foundations in patient experience from The Point of Care Foundation is a step in this direction.
- Promote the inclusion of a broad mix of staff in teams working on QI to maximise their collective ability to generate ideas, garner organisational support and effect change.
- Offer front-line staff more training and support in involving patients in QI, both in individual projects and in teams overseeing QI work.
- Consider and evaluate interventions to improve staff experience as a route to improving patients’ experience, as well as interventions focused more directly on patient experience.
Implications for research
We offer three key suggestions for future research:
- There is a growing body of evidence that staff experience is one of the most important predictors of patient experience, but much of this work is based on observation of the association between measures of staff and patient experience, rather than on interventional research. We do not yet know what kinds of interventions to improve staff experience are most likely to be effective and what the costs of these would be. We therefore suggest developing and testing interventions focused specifically on staff with patient experience as the outcome, with a health economics component.
- Studies focusing on the effect of team composition and diversity on the impact and scope of patient-centred QI.
- Building on both this study and INQUIRE, more research into how organisations, and especially front-line staff, use and respond to unstructured feedback and soft intelligence.
Acknowledgements
The authors would like to thank the following.
The ward teams at the six participating case study sites, all of whom exhibited a strong commitment to the project at a time when the NHS was under phenomenal pressure, and without whom we would not have had the privilege of conducting this work. Often teams worked on the study in their own time and persevered with their QI projects in very challenging circumstances.
The six participating trusts and their senior teams for agreeing to take part, and supporting the research and the ward teams.
The patients at the six participating case study sites who variously completed surveys, gave interviews and contributed in other ways to the improvement projects.
Professor John Gabbay and Professor Andrée Le May, University of Southampton, who expertly facilitated three successful learning communities, drawing on their significant joint experience of teaching practitioners to use knowledge innovatively and to evaluate the outcomes.
James Munro, Care Opinion, a study collaborator, who analysed usage data from the Care Opinion platform to contribute to the baseline picture of patient experience data collection and use; and Sarah Ashurst, who provided expertise at the first learning community.
Joanna Goodrich, Bev Fitzsimons (The Point of Care Foundation) and Eleanor Stanley (freelancer), who worked with the principal investigator to develop the online resource on using patient experience data for service improvement. They also worked with us to develop and deliver a 1-day module as part of a training course for heads of patient experience of NHS trusts. The module builds on and links to the toolkit, the findings of US-PEx, related studies and the wider literature.
Geoff Wong, who provided expertise in realist evaluation methodology.
Angela Aristidou (Saïd Business School), who attended the first learning community to provide further expertise on organisational change to the ward teams.
Vanessa Eade, who was instrumental in obtaining the necessary ethics and research governance approvals at a time of significant change, with the Health Research Authority recently assuming the role of unifying governance of health research in the UK. We are grateful to Caroline Jordan and Kristy Ravenhall, who provided unwavering support and practical help in organising three large learning communities, as well as frequent full-team, PPI and Study Steering Committee meetings. Thanks also to Jade Howard for editorial assistance and practical support during the production of this report.
Colleagues working on other studies funded under the same HSDR call with whom the principal investigator and some research team members met at intervals as a ‘learning set’ to share and reflect on each other’s findings and generate further ideas.
Charlotte Sharp, PhD student, University of Manchester, for sharing work on toolkits.
Sue Pargeter, Research Manager at NIHR, who was a pillar of support and source of sound advice.
Martin Dixon, Assistant Research Manager at NIHR, for steering us through the editorial process.
The anonymous reviewers for their many helpful ideas and supportive comments.
The University of Aberdeen for allowing the principal investigator time to complete the study following her appointment as Chairperson of Health Services Research.
Rachel White, Catherine Thompson (NHS England) and Julia Holding (Head of Patient Experience at NHS Improvement), who provided advice to the ward teams as ‘improvement advisors’.
The members of the lay panel (Barbara Bass, Tina Lonhgurst, Georgina McMasters, Carol Munt, Gillian Richards, Tracey Richards, Gordon Sturmey, Karen Swaffield, Ann Tomlime and Paul Whitehouse), who engaged with the research study and remained ‘critical friends’ throughout, providing a sounding board to the research team and encouragement to the ward teams.
The external members of the Study Steering Committee, chaired by lay member Joanna Foster (former chairperson of the Nuffield Orthopaedic Centre and of the NHS Chairs’ Forum): Tony Berendt (Medical Director, Oxford University Hospitals NHS Foundation Trust), Caroline Shuldham (Experienced Nurse Executive and former Director of Nursing and Clinical Governance), Joanna Goodrich [until stepping aside during online resource development phase (The Point of Care Foundation)], Leigh Kendall (Milton Keynes University Hospital NHS Foundation Trust), and Bernard Gudgin and Manoj Mistry (lay representatives).
Angela Coulter, Ray Fitzpatrick, Crispin Jenkinson and Sian Rees, who were co-investigators on the study, and contributed to the original design and conduct of the study.
Gavin Hubbard from our local CLAHRC (Collaboration for Leadership in Applied Health Research and Care), who designed infographic material for the resource book.
Stanford University for providing Louise Locock with library access and study space in January–February 2018 while she was working on this manuscript.
We also acknowledge support in kind from the NIHR Oxford Collaboration for Leadership in Applied Health Research and Care at Oxford Health NHS Foundation Trust for the development of the resource book.
Contributions of authors
Professor Louise Locock, principal investigator, led the overall design and provided academic leadership for the study, managed the ethnography team, led the development of the handbook and online resource, sat on the Study Steering Committee, led the writing of the final report and gave final approval of the manuscript.
Mr Chris Graham, co-investigator, contributed to the overall study design, led the phase 1 and the two phase 2 surveys (before and after the QI work), provided expertise to the ward teams at the learning community events, sat on the Study Steering Committee, co-authored chapters of the final report and gave final approval of the manuscript.
Ms Jenny King, co-investigator, contributed to the overall study design, co-led the phase 1 and the two phase 2 surveys, conducted and analysed patient interviews (before and after the QI work), provided expertise to the ward teams at the learning community events, was lead author of Chapter 2 on the study methods and of Chapter 4 on the phase 1 survey, co-authored other chapters of the final report and gave final approval of the manuscript.
Dr Stephen Parkin, researcher, contributed to the study design, conducted and analysed patient interviews (before and after the QI work), conducted the ethnography at three case study sites, was lead author of Chapter 8 on team-based capital in NHS settings, co-authored other chapters of the final report and gave final approval of the manuscript.
Dr Alison Chisholm, researcher, contributed to the study design, conducted and analysed patient interviews (before and after the QI work), conducted the ethnography at one case study site, was lead author of Chapter 7 on improving staff experience, co-authored other chapters of the final report and gave final approval of the manuscript.
Dr Catherine Montgomery, researcher, contributed to the study design, conducted and analysed patient interviews (before and after the QI work), conducted the ethnography at two case study sites, co-authored chapters of the final report and gave final approval of the manuscript.
Ms Elizabeth Gibbons, co-investigator, contributed to the study design, conducted and analysed patient interviews (before and after the QI work), provided expertise to the ward teams at the learning community events, co-authored chapters of the final report and gave final approval of the manuscript.
Dr Esther Ainley, researcher, contributed to the study design, conducted and analysed the phase 1 and the two phase 2 surveys, conducted and analysed patient interviews (before and after the QI work), co-authored chapters of the final report and gave final approval of the manuscript.
Ms Jennifer Bostock, co-investigator and lay representative, led the design of the PPI activities and contributed to the design of the overall study, chaired the lay panel, supported other PPI members, was lead author of Chapter 3 on patient and public involvement, co-authored another chapter and gave final approval of the manuscript.
Ms Melanie Gager, co-investigator, contributed to the overall study design, advised the research team on working with ward staff, provided expertise to the ward teams at the learning community events in her capacity as a senior intensive care unit sister and QI facilitator, co-authored chapters of the final report and gave final approval of the manuscript.
Dr Neil Churchill, co-investigator, contributed to the overall study design and was involved throughout by providing insight into the relevance of the study for NHS policy, contributed to the final report and gave final approval of the manuscript.
Professor Sue Dopson, co-investigator, contributed to the overall study design advising specifically on NHS organisational culture and change, provided expertise to the ward teams at the learning community events, contributed to the final report and gave final approval of the manuscript.
Professor Trish Greenhalgh, co-investigator, contributed to the overall study design, co-authored chapters of the final report and gave final approval of the manuscript.
Dr Angela Martin, co-ordinator, contributed to the study design and managed the conduct of the study, co-authored Chapter 3 on PPI, contributed to the final report and gave final approval of the manuscript.
Professor John Powell, co-investigator, contributed to the overall study design, provided expertise to the ward teams at the learning community events, contributed to the final report and gave final approval of the manuscript.
Dr Steve Sizmur, statistician, contributed to the overall study design, provided statistical analysis expertise, contributed to the final report and gave final approval of the manuscript.
Professor Sue Ziebland, co-investigator, contributed to the overall study design, provided expertise to the ward teams at the learning community events, contributed to the final report and gave final approval of the manuscript.
| Chapter number | Authors |
| --- | --- |
| Chapter 1 | Locock, King and Greenhalgh |
| Chapter 2 | King, Ainley, Gibbons, Parkin, Locock, Graham and Sizmur |
| Chapter 3 | Bostock, Martin and Locock |
| Chapter 4 | King, Ainley, Graham and Sizmur |
| Chapter 5 | Locock, Gibbons, Parkin, Chisholm, Montgomery, King, Gager and Sizmur |
| Chapter 6 | Locock, Parkin, Chisholm, Montgomery, Gibbons, Gager, King and Bostock |
| Chapter 7 | Chisholm, Locock, Parkin, Montgomery, Gibbons and Gager |
| Chapter 8 | Parkin, Locock, Chisholm and Montgomery |
| Chapter 9 | Locock |
| Chapter 10 | All authors |
Contributions of others
The members of the lay panel (Barbara Bass, Tina Lonhgurst, Georgina McMasters, Carol Munt, Gillian Richards, Tracey Richards, Gordon Sturmey, Karen Swaffield, Ann Tomlime and Paul Whitehouse) contributed to Chapter 3 in particular.
Data-sharing statement
All data requests should be submitted to the corresponding author for consideration. Access to available anonymised data may be granted following review.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care.
References
- Department of Health and Social Care. High Quality Care For All: NHS Next Stage Review Final Report 2008.
- Care Quality Commission. National Results from the 2014 Inpatient Survey 2015. www.cqc.org.uk/publications/surveys/surveys (accessed 29 April 2019).
- Coulter A, Locock L, Ziebland S, Calabrese J. Collecting data on patient experience is not enough: they must be used to improve care. BMJ 2014;348. https://doi.org/10.1136/bmj.g2225.
- Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open 2013;3. https://doi.org/10.1136/bmjopen-2012-001570.
- Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med 2013;368:201-3. https://doi.org/10.1056/NEJMp1211775.
- Black N, Varaganum M, Hutchings A. Relationship between patient reported experience (PREMs) and patient reported outcomes (PROMs) in elective surgery. BMJ Qual Saf 2014;23:534-42. https://doi.org/10.1136/bmjqs-2013-002707.
- Greaves F, Pape UJ, King D, Darzi A, Majeed A, Wachter RM, et al. Associations between internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Qual Saf 2012;21:600-5. https://doi.org/10.1136/bmjqs-2012-000906.
- Meterko M, Wright S, Lin H, Lowy E, Cleary PD. Mortality among patients with acute myocardial infarction: the influences of patient-centered care and evidence-based medicine. Health Serv Res 2010;45:1188-204. https://doi.org/10.1111/j.1475-6773.2010.01138.x.
- Jha AK, Orav EJ, Zheng J, Epstein AM. Patients’ perception of hospital care in the United States. N Engl J Med 2008;359:1921-31. https://doi.org/10.1056/NEJMsa0804116.
- Murff HJ, France DJ, Blackford J, Grogan EL, Yu C, Speroff T, et al. Relationship between patient complaints and surgical complications. Qual Saf Health Care 2006;15:13-6. https://doi.org/10.1136/qshc.2005.013847.
- Edgcumbe DP. Patients’ perceptions of hospital cleanliness are correlated with rates of meticillin-resistant Staphylococcus aureus bacteraemia. J Hosp Infect 2009;71:99-101. https://doi.org/10.1016/j.jhin.2008.09.009.
- Anhang Price R, Elliott MN, Zaslavsky AM, Hays RD, Lehrman WG, Rybowski L, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev 2014;71:522-54. https://doi.org/10.1177/1077558714541480.
- Maben J, Peccei R, Adams M, Robert G, Richardson A, Murrells T, et al. Exploring the Relationship Between Patients’ Experiences of Care and the Influence of Staff Motivation, Affect and Wellbeing. Southampton: NIHR Health Service and Delivery Organisation programme; 2012.
- Charmel PA, Frampton SB. Building the business case for patient-centered care. Healthc Financ Manage 2008;62:80-5.
- Sizmur S, Raleigh V. The Risks to Care Quality and Staff Wellbeing of an NHS System Under Pressure. Oxford: Picker Institute Europe and The King’s Fund; 2018.
- Dawson J. Links Between NHS Staff Experience and Patient Satisfaction: Analysis of Surveys from 2014 and 2015. NHS England: Workforce Race Equality Standard (WRES) team; 2018.
- DeCourcy A, West E, Barron D. The National Adult Inpatient Survey conducted in the English National Health Service from 2002 to 2009: how have the data been used and what do we know as a result? BMC Health Serv Res 2012;12. https://doi.org/10.1186/1472-6963-12-71.
- Care Quality Commission. NHS Inpatient Survey 2016.
- Robert G, Cornwell J. What Matters to Patients? Developing the Evidence Base for Measuring and Improving Patient Experience. Project Report for the Department of Health and NHS Institute for Innovation & Improvement. King’s College 2011.
- Dr Foster Intelligence. The Intelligent Board 2010: Patient Experience 2010.
- Ziebland S, Coulter A, Calabrese J, Locock L. Understanding and Using Health Experiences: Improving Patient Care. Oxford: Oxford University Press; 2013.
- Bate P, Robert G. Bringing User Experience to Healthcare Improvement: The Concepts, Methods and Practices of Experience-based Design. Oxford: Radcliffe Publishing; 2007.
- Gleeson H, Calderon A, Swami V, Deighton J, Wolpert M, Edbrooke-Childs J. Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open 2016;6. https://doi.org/10.1136/bmjopen-2016-011907.
- Lee R, Baeza JI, Fulop NJ. The use of patient feedback by hospital boards of directors: a qualitative study of two NHS hospitals in England. BMJ Qual Saf 2017. https://doi.org/10.1136/bmjqs-2016-006312.
- Martin GP, McKee L, Dixon-Woods M. Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety. Soc Sci Med 2015;142:19-26. https://doi.org/10.1016/j.socscimed.2015.07.027.
- Burt J, Newbould J, Abel G, Elliott MN, Beckwith J, Llanwarne N, et al. Investigating the meaning of ‘good’ or ‘very good’ patient evaluations of care in English general practice: a mixed methods study. BMJ Open 2017;7. https://doi.org/10.1136/bmjopen-2016-014718.
- Gallan AS, Girju M, Girju R. Perfect ratings with negative comments: learning from contradictory patient survey responses. Patient Experience J 2017;4:15-28.
- Meisel ZF, Karlawish J. Narrative vs evidence-based medicine – and, not or. JAMA 2011;306:2022-3. https://doi.org/10.1001/jama.2011.1648.
- Nutley S, Powell A, Davies H. What Counts as Good Evidence. London: Alliance for Useful Evidence; 2013.
- Adams M, Maben J, Robert G. ‘It’s sometimes hard to tell what patients are playing at’: how healthcare professionals make sense of why patients and families complain about care. Health 2018:603-2. https://doi.org/10.1177/1363459317724853.
- Dudhwala F, Boylan AM, Williams V, Powell J. VIEWPOINT: What counts as online patient feedback, and for whom?. Digit Health 2017;3. https://doi.org/10.1177/2055207617728186.
- Martin GP, Aveling EL, Campbell A, Tarrant C, Pronovost PJ, Mitchell I, et al. Making soft intelligence hard: a multi-site qualitative study of challenges relating to voice about safety concerns. BMJ Qual Saf 2018;27:710-17. https://doi.org/10.1136/bmjqs-2017-007579.
- Reeves R, West E, Barron D. Facilitated patient experience feedback can improve nursing care: a pilot study for a phase III cluster randomised controlled trial. BMC Health Serv Res 2013;13. https://doi.org/10.1186/1472-6963-13-259.
- Iedema R, Merrick E, Piper D, Walsh J. Emergency Department Co-Design Stage 1 Evaluation – Report to Health Services Performance Improvement Branch, NSW Health 2008.
- Piper D, Iedema R, Merrick E. Emergency Department Co-Design Evaluation Program 1 Stage 2 – Final Report to Health Services Performance Improvement Branch, NSW Health 2010.
- Robert G, Cornwell J, Locock L, Purushotham A, Sturmey G, Gager M. Patients and staff as codesigners of healthcare services. BMJ 2015;350. https://doi.org/10.1136/bmj.g7714.
- Tsianakas V, Robert G, Maben J, Richardson A, Dale C, Griffin M, et al. Implementing patient-centred cancer care: using experience-based co-design to improve patient experience in breast and lung cancer services. Support Care Cancer 2012;20:2639-47. https://doi.org/10.1007/s00520-012-1470-3.
- Tsianakas V, Robert G, Richardson A, Verity R, Oakley C, Murrells T, et al. Enhancing the experience of carers in the chemotherapy outpatient setting: an exploratory randomised controlled trial to test impact, acceptability and feasibility of a complex intervention co-designed by carers and staff. Support Care Cancer 2015;23:3069-80. https://doi.org/10.1007/s00520-015-2677-x.
- Locock L, Robert G, Boaz A, Vougioukalou S, Shuldham C, Fielden J, et al. Testing accelerated experience-based co-design: a qualitative study of using a national archive of patient experience narrative interviews to promote rapid patient-centred service improvement. Health Serv Deliv Res 2014;2. https://doi.org/10.3310/hsdr02040.
- Locock L, Robert G, Boaz A, Vougioukalou S, Shuldham C, Fielden J, et al. Using a national archive of patient experience narratives to promote local patient-centered quality improvement: an ethnographic process evaluation of ‘accelerated’ experience-based co-design. J Health Serv Res Policy 2014;19:200-7. https://doi.org/10.1177/1355819614531565.
- Palmer VJ, Chondros P, Piper D, Callander R, Weavell W, Godbee K, et al. The CORE study protocol: a stepped wedge cluster randomised controlled trial to test a co-design technique to optimise psychosocial recovery outcomes for people affected by mental illness in the community mental health setting. BMJ Open 2015;5. https://doi.org/10.1136/bmjopen-2014-006688.
- DiGioia A, Greenhouse PK. Patient and family shadowing: creating urgency for change. J Nurs Adm 2011;41:23-8. https://doi.org/10.1097/NNA.0b013e3182002844.
- NHS Health Research Authority. Patient Family Centred Care (PFCC): Living Well to the Very End 2017. www.hra.nhs.uk/planning-and-improving-research/application-summaries/research-summaries/patient-family-centred-care-pfcc-living-well-to-the-very-end/ (accessed 5 January 2018).
- Rozenblum R, Lisby M, Hockey PM, Levtzion-Korach O, Salzberg CA, Efrati N, et al. The patient satisfaction chasm: the gap between hospital management and frontline clinicians. BMJ Qual Saf 2013;22:242-50. https://doi.org/10.1136/bmjqs-2012-001045.
- Friedberg MW, SteelFisher GK, Karp M, Schneider EC. Physician groups’ use of data from patient experience surveys. J Gen Intern Med 2011;26:498-504. https://doi.org/10.1007/s11606-010-1597-1.
- Flott KM, Graham C, Darzi A, Mayer E. Can we use patient-reported feedback to drive change? The challenges of using patient-reported feedback and how they might be addressed. BMJ Qual Saf 2017;26:502-7. https://doi.org/10.1136/bmjqs-2016-005223.
- Burt J, Campbell J, Abel G, Aboulghate A, Ahmed F, Asprey A, et al. Improving patient experience in primary care: a multimethod programme of research on the measurement and improvement of patient experience. Programme Grants Appl Res 2017;5. https://doi.org/10.3310/pgfar05090.
- Sheard L, Marsh C, O’Hara J, Armitage G, Wright J, Lawton R. The Patient Feedback Response Framework – understanding why UK hospital staff find it difficult to make improvements based on patient feedback: a qualitative study. Soc Sci Med 2017;178:19-27. https://doi.org/10.1016/j.socscimed.2017.02.005.
- Marshall M, Mountford J. Developing a science of improvement. J R Soc Med 2013;106:45-50. https://doi.org/10.1177/0141076812472622.
- May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: Normalization Process Theory. Implement Sci 2009;4. https://doi.org/10.1186/1748-5908-4-29.
- Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care 2005;14:26-33. https://doi.org/10.1136/qshc.2004.011155.
- Weick KE. Making Sense of the Organization. Oxford: Blackwell Publishing; 2001.
- Pettigrew A, Ferlie E, McKee L. Shaping Strategic Change: Making Change in Large Organizations – Case of the National Health Service. London: Sage Publications Ltd; 1992.
- Iles V, Sutherland K. Organisational Change. A Review for Health Care Managers, Professionals & Researchers. London: National Coordinating Centre for NHS Service Delivery and Organisation R&D; 2001.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581-629. https://doi.org/10.1111/j.0887-378X.2004.00325.x.
- Best A, Greenhalgh T, Lewis S, Saul JE, Carroll S, Bitz J. Large-system transformation in health care: a realist review. Milbank Q 2012;90:421-56. https://doi.org/10.1111/j.1468-0009.2012.00670.x.
- Hignett S, Lang A, Pickup L, Ives C, Fray M, McKeown C, et al. More holes than cheese. What prevents the delivery of effective, high quality and safe health care in England?. Ergonomics 2018;61:5-14. https://doi.org/10.1080/00140139.2016.1245446.
- Fitzgerald L, McDermott A. Challenging Perspectives on Organizational Change in Health Care. London: Routledge; 2017.
- Jones L, Pomeroy L, Robert G, Burnett S, Anderson JE, Fulop NJ. How do hospital boards govern for quality improvement? A mixed methods study of 15 organisations in England. BMJ Qual Saf 2017;26:978-86. https://doi.org/10.1136/bmjqs-2016-006433.
- The Health Foundation . Effective Networks for Improvement. Developing and Managing Effective Networks to Support Quality Improvement in Healthcare 2014.
- Walshe K. Pseudoinnovation: the development and spread of healthcare quality improvement methodologies. Int J Qual Health Care 2009;21:153-9. https://doi.org/10.1093/intqhc/mzp012.
- Ham C, Berwick D, Dixon J. Improving Quality in the English NHS. A Strategy for Action. London: The King’s Fund; 2016.
- Cribb A. Improvement science meets improvement scholarship: reframing research for better healthcare. Health Care Anal 2018;26:109-23. https://doi.org/10.1007/s10728-017-0354-6.
- Powell A, Rushmer R, Davies H. A Systematic Narrative Review of Quality Improvement Models in Health Care. Edinburgh: Healthcare Improvement Scotland; 2009.
- Trisolini MG. Applying business management models in health care. Int J Health Plann Manage 2002;17:295-314. https://doi.org/10.1002/hpm.683.
- Lucas B, Nacer H. The Habits of an Improver. Thinking About Learning for Improvement in Health Care. London: The Health Foundation; 2015.
- NHS England. National Results from the 2014 NHS Staff Survey 2015. www.nhsstaffsurveys.com/Page/1021/Past-Results/Historical-Staff-Survey-Results/ (accessed 29 April 2019).
- NHS England. Organisational Level Tables (Historic) n.d. www.england.nhs.uk/fft/friends-and-family-test-data/fft-data-historic/ (accessed 29 April 2019).
- Picker Institute Europe. Picker Institute Europe in Collaboration With the University of Oxford to Begin Data Collection for New Research ‘Evaluating the Use of Real-Time Data for Improving Patients’ Experiences of Care’ 2015. www.picker.org/wp-content/uploads/2015/05/2015-05-05-AfterFrancisTrustParticipation.pdf (accessed 3 April 2018).
- The Beryl Institute. State of Patient Experience Benchmarking, The State of Patient Experience 2017: A Return to Purpose n.d. www.theberylinstitute.org/?page=PXBENCHMARKING (accessed 3 April 2018).
- Donetto S. Organisational strategies and practices to improve care using patient experience data in acute NHS hospital trusts: an ethnographic study. Health Serv Deliv Res 2019.
- Erickson KC, Stull DD. Doing Team Ethnography: Warnings and Advice. Thousand Oaks, CA: SAGE Publications Ltd; 1998.
- Jarzabkowski P, Bednarek R, Cabantous L. Conducting global team-based ethnography: methodological challenges and practical methods. Human Relations 2015;68:3-33. https://doi.org/10.1177/0018726714535449.
- Van Maanen J. Tales of the Field: On Writing Ethnography. Chicago, IL: University of Chicago Press; 1988.
- Atherton H, Brant H, Ziebland S, Bikker A, Campbell J, Gibson A, et al. The potential of alternatives to face-to-face consultation in general practice, and the impact on different patient groups: a mixed methods case study. Health Serv Deliv Res 2018;6. https://doi.org/10.3310/hsdr06200.
- Schlesinger P, Selfe M, Munro E. Inside a cultural agency: team ethnography and knowledge exchange. J Arts Manage Law Socy 2015;45:66-83. https://doi.org/10.1080/10632921.2015.1039741.
- Higginbottom GM, Pillay JJ, Boadu NY. Guidance on performing focused ethnographies with an emphasis on healthcare research. Qual Report 2013;18:1-16.
- Vindrola-Padros C, Vindrola-Padros B. Quick and dirty? A systematic review of the use of rapid ethnographies in healthcare organisation and delivery. BMJ Qual Saf 2018;27:321-30. https://doi.org/10.1136/bmjqs-2017-007226.
- Wall SS. Focused ethnography: a methodological adaptation for social research in emerging contexts. Forum Qual Sozialforschung 2015;6.
- Knoblauch H. Focused ethnography. Forum Qual Sozialforschung 2005;6.
- Wong G, Westhorp G, Greenhalgh J, Manzano A, Jagosh J, Greenhalgh T. Quality and reporting standards, resources, training materials and information for realist evaluation: the RAMESES II project. Health Serv Deliv Res 2017;5. https://doi.org/10.3310/hsdr05280.
- Sharp CA, Boaden R, Dixon WG, Sanders C. The Means Not the End: Stakeholder Views of Toolkits Developed from Healthcare Research n.d.
- Staley K. Changing what researchers ‘think and do’: is this how involvement impacts on research? Research for All 2017;1:158-67. https://doi.org/10.18546/RFA.01.1.13.
- Weick KE. Sensemaking in Organizations. London/Thousand Oaks, CA: Sage Publications Ltd; 1995.
- Blumer H. Symbolic Interactionism: Perspective and Method. Berkeley, CA: University of California Press; 1969.
- Ancona D, Snook S, Nohria N, Khurana R. The Handbook for Teaching Leadership. London/Thousand Oaks, CA: Sage Publications Ltd; 2012.
- Gabbay J, Le May A. Evidence based guidelines or collectively constructed ‘mindlines?’ Ethnographic study of knowledge management in primary care. BMJ 2004;329. https://doi.org/10.1136/bmj.329.7473.1013.
- Tomlinson J. Lessons from ‘the other side’: teaching and learning from doctors’ illness narratives. BMJ 2014;348. https://doi.org/10.1136/bmj.g3600.
- Raleigh VS, Hussey D, Seccombe I, Qi R. Do associations between staff and inpatient feedback have the potential for improving patient experience? An analysis of surveys in NHS acute trusts in England. Qual Saf Health Care 2009;18:347-54. https://doi.org/10.1136/qshc.2008.028910.
- Luxford K, Safran DG, Delbanco T. Promoting patient-centered care: a qualitative study of facilitators and barriers in healthcare organizations with a reputation for improving the patient experience. Int J Qual Health Care 2011;23:510-15. https://doi.org/10.1093/intqhc/mzr024.
- Shaller D. Patient-centered Care: What Does It Take?. New York, NY: Commonwealth Fund; 2007.
- Maben J, Adams M, Peccei R, Murrells T, Robert G. ‘Poppets and parcels’: the links between staff experience of work and acutely ill older peoples’ experience of hospital care. Int J Older People Nurs 2012;7:83-94. https://doi.org/10.1111/j.1748-3743.2012.00326.x.
- West M, Dawson J, Admasachew L, Topakas A. NHS Staff Management and Health Service Quality. London: Department of Health and Social Care; 2011.
- Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014;12:573-6. https://doi.org/10.1370/afm.1713.
- Maben J, Taylor C, Dawson J, Leamy M, McCarthy I, Reynolds E, et al. A realist informed mixed methods evaluation of Schwartz Center Rounds® in England. Health Serv Deliv Res 2018;6. https://doi.org/10.3310/hsdr06370.
- Putnam RD. Bowling Alone: The Collapse and Revival of American Community. New York, NY: Simon and Schuster; 2001.
- Bourdieu P. Distinction: A Social Critique of the Judgement of Taste. London: Routledge and Kegan Paul; 1984.
- Marx K. Capital. London: J.M. Dent; 1933.
- Bourdieu P. Outline of a Theory of Practice. Cambridge: Cambridge University Press; 1977.
- Alderwick H, Charles A, Jones B, Warburton W. Making the Case for Quality Improvement: Lessons for NHS Boards and Leaders. London: The King’s Fund and The Health Foundation; 2017.
- Iedema R, Mesman J, Carroll K. Visualising Health Care Improvement. London: Radcliffe Publishing Ltd; 2013.
- Brennan PA, Mitchell DA, Holmes S, Plint S, Parry D. Good people who try their best can have problems: recognition of human factors and how to minimise error. Br J Oral Maxillofac Surg 2016;54:3-7. https://doi.org/10.1016/j.bjoms.2015.09.023.
- Green MC, Brock TC. The role of transportation in the persuasiveness of public narratives. J Pers Soc Psychol 2000;79:701-21. https://doi.org/10.1037/0022-3514.79.5.701.
- Sibley M. Understanding patient experience is fundamental to a patient centred service vision. The BMJ Opinion 2018. https://blogs.bmj.com/bmj/2018/04/24/miles-sibley-understanding-patient-experience-is-fundamental-to-a-patient-centred-service-vision/.
- Gubrium JF, Holstein JA, Flick U. The Sage Handbook of Qualitative Data Analysis. Los Angeles, CA: Sage Publications; 2014.
Appendix 1 Case descriptions
Case study 1: description summary
This case study is set within a local general district hospital in an area of high deprivation. The hospital had previously received a highly critical CQC inspection, with subsequent public reporting and negative discourse, which inevitably affected staff morale and led to senior management resignations. Since 2012, measures have been lifted and a more positive assessment was obtained from the CQC in 2015. There are annual ‘remarkable people awards’ with categories including ward of the year, team of the year and employee of the year. In addition, feedback from online and social media is triangulated with other sources. Formal staff feedback, based on the annual staff survey and FFT, is now more positive, specifically around perceptions of the care provided by the trust; the trust’s response to patient concerns; and staff recommendations of the trust as a place to work or receive care for themselves or their relatives/friends. Despite this, there are significant staff shortages, with many posts vacant, especially in challenging areas and services.
Case study ward
This case study is a 30-bed emergency assessment and discharge unit, which reviews and treats patients who are expected to require investigations and therapies as inpatients. Following review, patients are transferred to an appropriate medical ward for further care or discharged home. Patients are referred via accident and emergency (A&E) or directly from their GP and have a diverse range of conditions and severity of illness. Staff have a wide knowledge and skill base, and work on the unit provides a challenging but broad range of experience for them. There is a high turnover of patients, with up to 35 patients passing through the ward per day. In theory, an emergency assessment and discharge unit is a short-stay area; patients are supposed to be assessed by a doctor, triaged for a general medical ward, and moved on to the appropriate ward within 14 hours. In reality, patients often stay several days, particularly those who are very sick and need to be cared for in a high-intensity environment.
Previous use of patient feedback at ward level
Despite the high level of trust engagement with patient experience and the wide range of data sources utilised, the only sources of ward-level patient experience data were the FFT, complaints and thank-you cards. Feedback was previously discussed at staff meetings, but these are no longer held on the unit because of staffing pressures. Feedback is summarised in a monthly newsletter, which is e-mailed to staff and displayed in the staff room. Complaints are dealt with separately owing to the sensitivities around the identification of specific staff members.
Staff involvement in the project
The head of patient experience was keen to take part in the project, to the extent that it became one of the trust’s ‘quality priorities’ for 2016. The project was led by front-line staff who were selected by the ward sister based on their motivation and ‘can-do’ personalities, and who were deemed to deserve participation as a form of reward. The core team consisted of three health-care assistants and two nurses, who received practical and motivational support from a consultant on the ward.
Quality improvement intervention
The team chose to introduce a welcome pack for patients as their primary intervention. This was based on two things. First, they were impressed by a welcome pack that one of the other participating trusts had successfully implemented. As a similar kind of ward, they identified with them and felt the idea was easily transferable to their patients. Second, it was obvious to them that the patients they saw often came to the unit with no personal effects, either as a result of deprivation or because they had come in as an emergency with no time to organise what they would need. In their view, the welcome pack would address this patient need. The staff applied for funding and negotiated with suppliers to support this.
Following implementation of the welcome pack, formal feedback was obtained from over 200 patients. The majority of patients were extremely positive and many provided comments suggesting that the initiative indeed met patient need. The team felt extremely proud of what they had achieved.
The QI approach was informal and ad hoc, and communication within the team was supported using a messaging service on social media.
Challenges experienced
The team felt the strain of trying to implement a QI project on what is an extremely busy and high-pressured unit. They found it difficult to schedule formal meetings, which resulted in a more informal approach.
The hospital started operating at full capacity in October 2016, with escalation areas in use. A&E was already experiencing high demand at the start of that month, with attendances significantly higher than in the previous year. Demand in A&E continued to increase over the year, which had an impact on activity on the wards.
Patient experience survey and interview results
Baseline survey
Salient results highlighted for the ward included:
- 29% (n = 20) reported that they were not able to find a member of staff to talk to about their worries or fears
- 43% (n = 29) reported that it took ‘3–5 minutes’ to get the help they needed after using the call button
- 48% (n = 46) reported that they ‘definitely’ felt involved in decisions about their discharge from hospital
- 87% (n = 76) reported that they were ‘definitely’ given enough privacy when being examined or treated in A&E
- 89% (n = 88) reported that they ‘always’ felt well looked after by hospital staff
- 76% (n = 73) reported that members of staff ‘always’ worked well together
- 93% (n = 76) ‘strongly agreed’ or ‘agreed’ with the statement ‘When I left the hospital, I clearly understood how to take each of my medications, including how much I should take and when’.
Changes between surveys
Significant changes
A comparison between the baseline and post-intervention surveys revealed one statistically significant change in the results for this case study. By ‘statistically significant’, we mean that the change was statistically reliable at a confidence level of 0.95 (i.e. the p-value was < 0.05). A statistically significant improvement was found for question 36, ‘When I left hospital, I clearly understood how to take each of my medications, including how much I should take and when’ (p = 0.036). The proportion of people ‘agreeing’ with this statement increased from 43% in the baseline survey (n = 35, 95% CI 32% to 53%) to 59% in the post-intervention survey (n = 55, 95% CI 48% to 68%).
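For readers who wish to see how such a comparison can be reproduced, the sketch below shows one conventional approach: a two-sided test of the difference between two independent proportions, with Wilson 95% confidence intervals for each survey wave, using the Python statsmodels library. The report does not state which test or interval method was used, and the denominators in the example are assumptions made for illustration (the numerators echo the reported counts of 35 and 55), so this should be read as a minimal sketch rather than the study’s actual analysis.

```python
# Illustrative only: a two-proportion comparison of the kind reported above.
# The exact test used in the study is not stated; the counts below are placeholders.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint


def compare_waves(agree_baseline, n_baseline, agree_post, n_post):
    """Test whether the proportion 'agreeing' differs between two survey waves."""
    counts = [agree_baseline, agree_post]
    nobs = [n_baseline, n_post]

    # Two-sided z-test for the difference between two independent proportions
    z_stat, p_value = proportions_ztest(counts, nobs)

    # Wilson-score 95% confidence interval for each wave's proportion
    cis = [proportion_confint(c, n, alpha=0.05, method="wilson")
           for c, n in zip(counts, nobs)]
    return z_stat, p_value, cis


if __name__ == "__main__":
    # Assumed denominators (81 and 93 respondents) chosen so that 35/81 is roughly 43%
    # and 55/93 is roughly 59%, matching the percentages quoted in the text.
    z, p, (ci_base, ci_post) = compare_waves(35, 81, 55, 93)
    print(f"p = {p:.3f}")
    print(f"baseline: 95% CI {ci_base[0]:.0%} to {ci_base[1]:.0%}")
    print(f"post-intervention: 95% CI {ci_post[0]:.0%} to {ci_post[1]:.0%}")
```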
Non-statistically significant improvements
Non-significant improvements were also seen on a number of other questions. These included the following.
Care and treatment
For question 10, ‘In your opinion, did the members of staff caring for you work well together?’, 83% (n = 90, 95% CI 75% to 89%) reported that staff ‘always’ worked well together, compared with 76% (n = 73, 95% CI 67% to 84%) in the baseline survey (p = 0.264).
For question 14, ‘Did you find someone on the hospital staff to talk to about your worries and fears?’, 40% (n = 29, 95% CI 30% to 52%) reported that they were able to find a member of staff to talk to about their worries or fears ‘to some extent’, up from 27% (n = 18, 95% CI 17% to 38%) in the baseline survey (p = 0.082).
Operations and procedures
Those who had had an operation or procedure were asked question 22: ‘Beforehand, were you told how you could expect to feel after you had the operation or procedure?’. Both the proportion of patients saying ‘yes, completely’ and the proportion saying ‘yes, to some extent’ increased from the baseline survey. The proportion of respondents who said ‘yes, completely’ increased from 56% (n = 19, 95% CI 39% to 72%) in the baseline survey to 58% (n = 15, 95% CI 39% to 75%) in the post-intervention survey (p = 0.886). The proportion of those who responded ‘yes, to some extent’ increased by 10 percentage points, from 24% (n = 8, 95% CI 12% to 40%) in the baseline survey to 34% (n = 9, 95% CI 19% to 54%) in the post-intervention survey (p = 0.357).
Interviews
Baseline interviews
Comments provided in the free-text section of the survey and in the interviews were generally positive. Patients reported examples of good care typically in terms of the relational aspects of care. Staff were kind, helpful and attentive to both medical and personal care needs. Frequent monitoring was appreciated by some patients; this made them feel safe and looked after. There were several comments about the level of activity and noise on the ward; staff were under pressure responding to patients’ requests for help, and the noise from the monitors disturbed some patients. However, most patients appreciated that such monitoring was necessary. There were also reports from patients that, despite the demands, staff were dedicated, calm and professional, and worked together well.
Patients reported mixed experiences concerning communication and information. Some reported that staff spent time with them explaining treatments and their condition, but others would have liked more information and explanation about their condition and test results.
Although patients reported that they were treated with privacy and dignity (e.g. curtains pulled around when attending to care needs), this level of privacy was felt to be insufficient when diagnoses and end-of-life decisions were being discussed. This concern related not only to patients’ own conversations but also to overhearing discussions between health-care professionals and other patients.
The noise and activity on the ward were troubling for some patients, as they had an impact on their ability to sleep; however, this disturbance came from other patients rather than from staff.
A few patients experienced poor care from specific members of staff, particularly a lack of compassion and help with toileting needs.
Some patients experienced delays during discharge, and, in other cases, they felt that discharge was rushed.
Post-intervention interviews
The responses to the free-text question on the post-intervention survey and comments provided by interview respondents were generally very positive, particularly highlighting the quality of care and the kindness of staff. Patients recognised that the case study unit was a very hectic place and yet, despite this, the staff were able to find the time to care for people as individuals. One patient reflected that it was specific acts of kindness and compassion that made them feel cared for; they were helped with showering in a way that afforded them privacy while knowing that a member of staff was available if needed.
Some patients felt that they were involved in decisions as much as they wanted to be.
Poor experiences were described for specific aspects of care. These included being disturbed at night by the noise on the ward and demands from other patients. In one case, a patient reported interrupted sleep because of tests and procedures. There were some reports of delays with administration of medication and loss of medicines when patients were transferred to another ward.
Some patients reported that they would have liked more information about their condition and treatments. Comments were also made about doctors not including them in discussions.
Some patients referred to delayed discharge, but it is not clear whether this related to the case study ward.
Case study 2: description summary
This case study site is a large hospital foundation trust comprising two district hospitals and one maternity hospital, which serves a population in an affluent county. The study covered a turbulent time in the trust: there were resignations and vacancies at senior management level, and the trust had recently been placed in ‘financial special measures’.
Recent CQC inspections reported that the trust required improvement in relation to safety, effectiveness and responsiveness. However, the report highlighted that the provision of care to patients was compassionate and caring.
The survey of patient experience leads in phase 1 reported that the trust had formal arrangements to promote and co-ordinate safety and QI education for staff at all levels. However, the trust did not have a specific written plan/strategy for the collection and use of patient experience data, and there was a lack of time to examine such data owing to workforce and general system issues.
In addition, the 2015 staff survey results indicated poor performance on ‘staff engagement’ as well as poor staff ratings on the effective use of patient/service user feedback, staff motivation, staff satisfaction with quality of work and ability to deliver patient care.
Case study ward
The study ward is a predominantly gastroenterology ward, with a small number of respiratory and dermatology beds. It has experienced persistent understaffing, with high long-term nurse vacancy levels and consequent large numbers of agency staff. The ward manager sometimes loses management/supervisory time to attend to clinical duties because of understaffing. The ward has a challenging mix of patients, including frail elderly patients, people with alcohol/drug use issues and people who display violent and aggressive behaviour. At the beginning of the project, the ward was emerging from a particularly difficult period during which a patient and relative with highly challenging behaviour had had a detrimental effect on staff retention and morale.
Previous use of patient feedback at ward level
No ward-level patient experience data were drawn on for service improvements prior to the study period.
Staff involvement in the project
An effective and committed core team included the ward manager, the ward clerk, a health-care assistant and a member of the patient experience team. The matron was supportive throughout and there was initial support from one of the consultants. During the project, all of the staff on the ward became involved. Reporting of achievements to strategic departments resulted in plans to roll out the approach more widely through the trust.
Quality improvement intervention
Improvement interventions were based on a wide range of data sources, including data from the study survey and interviews, a patient focus group, staff-generated ideas, and inspiration from the learning community. Existing patient experience data, such as those from the national inpatient survey and FFT, were drawn on largely to provide corroboration of ideas from other sources. Initiatives ranged from small patient-focused interventions to improve their experience on the ward to approaches to facilitate a culture of open communication with patients, encouraging expression of patient needs and priorities, and feedback on care.
Examples of initiatives included having colourful hearing-aid boxes to avoid these becoming lost; admission packs (including all of the necessary documentation plus eye masks and ear plugs, if required); the introduction of a rota that allowed staff to sign up for unfilled shifts before these were offered to agency staff; the provision of information boards and leaflets; and interventions to encourage communication and understanding between patients and staff, such as ‘It’s OK to ask’ badges and posters, bedside whiteboards and a ‘what matters to me’ board in the shape of a tree to which patients and staff could add comments.
A key philosophy was engaging staff in identifying priorities for improvement and fostering a culture of ensuring that staff felt valued by involving, listening and responding to them.
The ward manager, matron and patient experience team member described a better level of staff engagement, team work and morale than there had been on the ward prior to the project. As momentum and confidence built, the team set its sights on improvements that reached beyond the ward to wider service delivery systems and processes in the trust.
The ward received an award for its work: the category of ‘exceptional contribution to patient experience’ at the annual trust awards.
Challenges experienced
The trust was operating under severe financial constraints that contributed to difficulties in accessing resources and materials. The team’s improvement activities were achieved despite chronic staff vacancy levels on the ward that limited the time and energy available for non-essential work; despite this, there was a high level of staff engagement and enthusiasm for taking part in the project. The team were more interested in exploring a range of informal sources of insight into patient experiences than in formal data, and did not choose to adopt a particular QI methodology.
A patient focus group was convened to help staff understand the findings from the baseline ward survey in more detail. Beyond this, patients were not involved in the planning or implementation of the project, but there was little indication that the team initially regarded this as a problem. When the issue of patient involvement was raised in a formative way, team members recognised that it could have been valuable, as long as (in their view) the patients involved were selected carefully.
Alongside the trust’s CQC rating as ‘requires improvement’ and the financial special measures, the ward faced large numbers of unfilled nursing and HCA vacancies, staff turnover and sickness. Between six and eight whole-time equivalent band 5 staff nurse posts were unfilled for the entire study period. Between four and six band 2 HCA posts were also unfilled.
Staff turnover for the 12 months, May 2016–June 2017, was 8.2%, and the staff sickness rate on the ward for the same period was 4.67% (compared with the whole-trust figure of 3.95%).
Patient experience survey and interview results
Baseline survey
Salient results highlighted for the ward included:
-
24% (n = 16) said that they were not able to find a member of staff to talk to about their worries or fears
-
39% (n = 33) said that they were not given enough information about their condition or treatment
-
29% (n = 25) said that they were not given enough notice about when they were going to be discharged
-
73% (n = 16) of those who had an operation or procedure said that staff explained the risks and benefits in a way they could understand
-
75% (n = 63) said that they were ‘always’ treated with respect and dignity
-
83% (n = 49) said that staff discussed whether they might need any further health or social care services after leaving hospital
-
87% (n = 70) ‘strongly agreed’ or ‘agreed’ with the statement ‘When I left the hospital, I clearly understood how to take each of my medications, including how much I should take and when’.
Changes between surveys
Significant changes
A comparison between the baseline and post-intervention surveys revealed no statistically significant changes in the results for this case study. By ‘statistically significant’, we mean that the change was statistically reliable at a confidence level of 0.95 (i.e. the p-value was < 0.05).
Non-statistically significant improvements
For most of the items relating to care from staff on the ward, communication and information, the survey results indicated a positive change in patient experience. These could be attributable to some of the QI interventions such as the ‘It’s OK to ask’ badges and posters, whiteboards, the ‘what matters to me’ tree, patient information racks and boards, staff suggestions board and admissions packs. Questions included the following:
-
For question 8, ‘When you had important questions to ask a doctor, did you get answers that you could understand?’, 56% (n = 43, 95% CI 45% to 67%) said that they ‘always’ got answers they could understand, compared with 50% (n = 41, 95% CI 40% to 62%) in the baseline survey (p = 0.452).
-
For question 10, ‘In your opinion, did the members of staff caring for you work well together?’, the proportion of respondents answering ’yes, always’ increased from 68% (n = 58, 95% CI 58% to 77%) to 74% (n = 64, 95% CI 64% to 83%) in the post-intervention survey (p = 0.368).
-
For question 11, ‘Were you involved as much as you wanted to be in decisions about your care and treatment?’, the proportion of respondents answering ‘yes, definitely’ to this question increased from 42% (n = 37, 95% CI 32% to 52%) in the baseline survey to 51% (n = 42, 95% CI 41% to 62%) in the post-intervention survey (p = 0.277).
-
For question 13, ‘How much information about your condition or treatment was given to you?’, the proportion of respondents answering ‘the right amount’ increased from 62% (n = 51, 95% CI 50% to 71%) to 69% (n = 58, 95% CI 59% to 78%) in the post-intervention survey (p = 0.289).
-
For question 21, ‘Beforehand, did a member of staff answer your questions about the operation or procedure in a way you could understand?’ (asked of those who had an operation or procedure), 78% of respondents (n = 23, 95% CI 60% to 89%) answered ‘yes, completely’, compared with 62% (n = 10, 95% CI 36% to 79%) in the baseline survey (p = 0.220).
By contrast, for some items where improvement was at least partly beyond the control of the ward (e.g. admission to hospital via A&E, some aspects of discharge or care at home), the results remained the same or declined.
Interviews
Baseline interviews
There were many reports from patients of staff kindness and compassion. Some patients recognised that staff were under pressure; the ward was busy and at times appeared chaotic, but nonetheless staff provided the best possible care. Those patients reporting a good experience on the ward were appreciative of the care they received.
The timing of clinical procedures was considered inconvenient at times, especially when these took place during mealtimes. A common concern was distress caused by disruptive patients on the ward who were aggressive and abusive towards the nursing staff and whom other patients found intimidating. Patients described feeling unsafe because of the disturbance, which had an impact on their recovery.
Patients reported mixed experiences of the discharge process; some reported good information and explanation, whereas others reported delays related to pharmacy and a lack of information.
There were some reports of poor care by specific individual staff members.
Post-intervention interviews
There were reports of good care from staff, particularly kindness, compassion and responsiveness to needs. Some patients described good communication between nurses and patients, but less so between doctors and patients and between doctors and nurses. Similar to baseline interviews, there were reports of disturbance by other patients on the ward, which had an impact on feelings of safety, resulting in some individuals feeling frightened. Staff were praised for their efforts in dealing with aggressive and abusive patients.
Some patients felt that they were not listened to by the staff and, in one case, family members had to push for information about their relative’s condition. Written and verbal information provided was reported as variable, especially in relation to discharge planning and reasons for delay. Furthermore, some patients would have liked more information about their condition, the treatments received and their discharge medicines.
A lack of co-ordination of care between hospital and home was also reported.
As noted above, given the recent history of poor morale on the ward, the team chose to focus explicitly on improving staff experience as a first step to improving patient experience.
Specific projects arising from this more general approach to cultural change would inevitably take time to get going and become embedded to the point where they were noticeable to patients and families. Given the focus on tackling staff experience first, it is perhaps not surprising that a significant short-term change in patient experience scores was not evident. However, ethnographic data considered elsewhere in the report suggest that the project generated significant energy and activity among staff on the ward.
Case study 3: description summary
This case study site is a hospital in a large foundation trust. It provides acute and emergency care as well as a range of other services expected in a general hospital.
The CQC Inspection Report published in 2015 gave the hospital an overall rating of ‘outstanding’, with most services rated as ‘good’ or ‘outstanding’. This rating was based on an inspection that found services to be ‘caring, responsive and well-led, and good in providing safe and effective care’. The report notably commented on the implementation of different models of care; strong governance structures and leadership; staff and patient engagement; proactive management of patient flow; and outstanding practice in the collection of real-time feedback at ward level, with timely action and responsiveness regardless of whether the feedback was positive or negative.
There is strong leadership and active engagement with the patient experience office at a strategic level, and the trust is motivated towards improving the quality of care and patient experience.
Case study ward
At the inception of the study, the case study ward was an elderly persons’ rehabilitation unit, which included many patients with dementia and cognitive impairment. Patients were often on the ward for periods of 1–3 weeks as they underwent rehabilitation following surgery, accidents, falls and/or illness. Much of this rehabilitation focused on physiotherapy and took place in a dedicated physiotherapy unit on the ward. However, in December 2016, a trust-level change required the ward to move from being an elderly persons’ rehabilitation unit to being a pioneering nurse-led discharge unit. This change resulted in a higher turnover of patients and a reduction in the number of staff on each shift. It was an unsettling time for staff: the change affected morale on the ward and coincided with two important members of the team leaving.
Previous use of patient feedback at ward level
The ward had previously used the trust’s own real-time survey, to which it had added some questions of its own. In addition, an EBCD methodology was applied throughout the project.
Staff involvement in the project
The core team consisted of the ward manager, the activities and well-being co-ordinator, the head of QI and patient experience, and a patient experience officer. The team had peripheral membership throughout the study, including key contributions from a student on placement with the patient experience team and an executive director. The team did not include any formal patient representation, but ward patients (and former ward patients) contributed to data collection using EBCD methods.
Quality improvement intervention
Initially, the team chose to focus on a pre-conceived area of unmet need (based on nurses’ intuition and experience of working on the ward). This related to the development of a non-clinical role to provide support to patients on the ward.
However, as a direct result of EBCD methods, the aims of this project moved towards three areas of action/improvement, namely improving communication, increasing responsiveness to patients’ needs, and increasing interaction with patients. Several small-scale interventions were implemented to address these issues.
Core tasks for increased communications included:
-
the design of a welcome pack for new patients accessing the ward
-
the design of a discharge pack for all patients leaving the ward
-
the creation of a photoboard to help patients/relatives identify staff (and their role).
Core tasks for reducing waiting times included:
-
the embedding of Ask Before You Leave (and its associated materials) on the ward
-
the monitoring of the call-bell use as extended observation
-
socialised dining in the ‘social room’.
Core tasks for increasing patient interaction included:
-
the introduction of a structured activities timetable
-
the introduction of bespoke activities
-
increased one-to-one time with the activities and well-being co-ordinator.
Challenges experienced
The main challenge was the departure of two members of staff who were key people involved in the project. The change in the remit of the ward altered the patient population and had an impact on the aims and objectives of the project. The enforced reduction in staff following ward reconfiguration, which also coincided with winter pressures, lowered staff morale throughout the ward. The international cyber attack that affected NHS England in May 2017 severely delayed all digital/electronic communications throughout the trust, including those connected to the research team.
Patient experience survey and interview results
Baseline survey
Salient results highlighted for the ward included:
-
35% (n = 6) stated that it took more than 5 minutes to get the help they needed after using the call button
-
53% (n = 8) reported that they were not given clear written information about what they should or should not do after leaving hospital
-
47% (n = 7) stated that a member of staff did not tell them about danger signals they should watch out for when they went home
-
79% (n = 15) reported that they were treated with respect and dignity
-
74% (n = 14) reported that all doctors and nurses asked them what name they prefer to be called by
-
89% (n = 16) reported that staff discussed whether they might need any further health or social care services after leaving hospital
-
94% (n = 15) ‘strongly agreed’ or ‘agreed’ with the statement ‘When I left the hospital, I clearly understood how to take each of my medications, including how much I should take and when’.
Changes between surveys
Significant changes
A comparison between the baseline and post-intervention surveys revealed that one question saw a statistically significant improvement and two questions saw a statistically significant decline in results. By ‘statistically significant’, we mean that the change was statistically reliable at a confidence level of 0.95 (i.e. the p-value was < 0.05).
The question that saw an improvement focused on the responsiveness of staff to call-bell use, with 47% of respondents (n = 16, 95% CI 30% to 62%) saying that the call bell was answered within ‘1–2 minutes’, up from 11% (n = 3, 95% CI 3% to 33%) in the baseline survey (p = 0.013).
The questions that saw a decline in positive responses were 8 and 14:
-
For question 8, ‘When you had important questions to ask a doctor, did you get answers that you could understand?’, a reduction was seen in the proportion of respondents saying ‘yes, always’ to 34% (n = 13, 95% CI 21% to 51%), down from 73% (n = 14, 95% CI 53% to 92%) in the baseline survey (p = 0.009). A significant increase in respondents saying ‘yes, sometimes’ was shown, from 24% (n = 3, 95% CI 9% to 48%) in the baseline survey to 57% (n = 21, 95% CI 41% to 72%) in the post-intervention survey (p = 0.023).
-
For question 14, ‘Did you find someone on the hospital staff to talk to about your worries and fears?’, a decrease in the proportion of people answering ‘yes, definitely’ was seen from 55% (n = 9, 95% CI 31% to 73%) in the baseline survey to 27% (n = 9, 95% CI 15% to 44%) in the post-intervention survey (p = 0.039).
Non-statistically significant improvements
Non-significant improvements were seen in two questions relating directly to the ward’s focus on communication, a major theme coming out of the EBCD process:
-
For question 28, ‘Were you given clear written information about what you should or should not do after leaving hospital?’, 72% (n = 26, 95% CI 57% to 85%) responded ‘yes, definitely’, up from 51% (n = 7, 95% CI 26% to 74%) in the baseline survey (p = 0.167).
-
For question 29, ‘Did hospital staff take your family or home situation into account when planning your discharge?’, 32% (n = 13, 95% CI 21% to 48%) said ‘yes, to some extent’, an increase from 24% (n = 5, 95% CI 8% to 45%) in the baseline survey (p = 0.533).
Interviews
Recruiting patients for interviews was difficult at baseline and particularly post intervention. Several attempts to recruit were made by the researcher and staff on the ward.
Baseline interviews
Comments from the free-text sections of the surveys and interviews were in some cases very positive: staff were kind, caring and compassionate. Further feedback gave examples of staff being responsive to personal needs as well as recognising wider aspects of patients’ lives. Noise on the ward was specifically referred to as disturbing, particularly the frequent call bells and patients demanding attention. However, it was acknowledged that this was not surprising, as many patients on the ward were living with dementia.
Patients also commented on their dissatisfaction with discharge processes. In some cases, patients felt that they had been discharged too early and that the process was rushed. There were also reports of a lack of co-ordination of care between the hospital and social care, resulting in an inadequate care package.
Although staff had identified patient boredom and isolation as a priority for improving patients’ experience on the ward, patients themselves raised no reports or concerns about this.
Post-intervention interviews
Patients gave some feedback in the free-text sections of the survey, but only one patient was successfully interviewed. Good care was referred to, but generally comments related to a less positive experience on the ward. Disturbance at night from other patients was reported to have an impact on sleep. One patient felt they did not receive the help they needed with personal care. Generally, conflicting information seemed to be given to patients about care and assessment from allied health-care professionals, specifically care that was needed at home.
One patient expressed concern about discussions held about ‘do not resuscitate’ orders, and another felt that staff demonstrated a lack of knowledge and understanding of the needs of patients living with dementia. By contrast, another patient reported that she had been treated as if she had dementia; for example, staff had checked her bed to see if she had been incontinent.
Case study 4: description summary
This case study site provides core services and is also a research-focused teaching facility in a large urban area.
In March 2016, the CQC gave the hospital an overall rating of ‘good’, but some services were identified as requiring improvement. The report noted examples of services that had outstanding leadership; staff were dedicated, caring and supportive of each other. There was also overwhelmingly positive patient feedback. The main negative observations by the CQC related to the impact of increasing demand on staff and service delivery.
Case study ward
The case study ward is a large acute medical unit (AMU) that receives patients from the emergency department. Patients typically have short stays on the ward and often arrive with varying degrees of acuity. The AMU’s primary role is to provide rapid definitive assessment, investigation and treatment for patients admitted urgently from the emergency department and/or referred by their GP.
Ward patient feedback
The trust places great emphasis and value on the scores obtained from the various departmental FFTs (outpatients, inpatients, A&E, maternity, etc.). Responses to these FFTs are circulated to the respective wards and are subsequently cascaded to wider (more senior) front-line staff via e-mail. At the time of the study, these were the main forms of patient feedback utilised by the ward.
Staff involvement in the project
A project manager from the patient experience office played a key role in driving the project, and was assisted by devolved decision-making from the ward manager in matters relating to ward logistics. Other team members included nursing staff and a senior health-care assistant. During the project, other staff were integrated into the team, including a pharmacist, a medical registrar and a charge nurse, as well as peripheral participation by at least five junior doctors.
It is important to note, however, that the original core team was convened at very short notice (i.e. 2 days before the first learning community) by the ward manager concerned. The team did not include any form of patient representation (and this was not considered from the outset).
Quality improvement intervention
Initially, the team planned to address a small-scale issue that was driven by a perception of need. However, in the absence of data to support these assumptions, it was decided that an improvement area should be identified from the project baseline survey results. The improvement initiative subsequently focused on a specific part of the patient journey in relation to provision of information and more efficient operational processes. This was developed by and with a multidisciplinary team. Other small-scale interventions included doctor-led disease-specific information for patients.
Core tasks for improved patient experience included:
-
the design of information leaflets for use among those affected by the ward’s most frequent illnesses
-
an attempt to make the AMU discharge process more efficient following a series of successful stakeholder mapping sessions.
Peripheral tasks for improved patient experience included a variety of less successful attempts to identify patient-centred issues for improvement, involving the trial of different approaches to obtaining patient feedback by staff largely unfamiliar with such techniques. Methods used to try to identify such issues included:
-
an adapted form of the ‘15 Steps’ method to consider the ward environment
-
a patient diary that did not produce meaningful data (i.e. it was too positive and did not identify areas for improvement)
-
attempts at conducting patient interviews after discharge.
Challenges experienced
There were some difficulties with sustaining the relationship between the patient experience office and the initial core team/ward staff. This was mainly due to capacity and shift patterns that meant that several project meetings were cancelled.
One of the planned interventions to obtain feedback from patients was not successful.
A flu outbreak severely restricted access to the ward during December 2016–January 2017.
Patient experience survey and interview results
Baseline survey
Salient results highlighted for the ward included:
-
26% (n = 15) reported that they were not able to find a member of staff to talk to about their worries or fears
-
30% (n = 16) reported that they were not told about any danger signals to watch for after they went home
-
22% (n = 11) reported that their family (or someone close to them) were not given all the information they needed to help care for them at home
-
85% (n = 61) stated that they were ‘always’ treated with respect and dignity
-
72% (n = 52) stated that they ‘always’ had confidence in the decisions made about their condition or treatment
-
92% (n = 46) stated that staff discussed whether they might need any further health or social care services after leaving hospital
-
97% (n = 63) ‘strongly agreed’ or ‘agreed’ with the statement ‘When I left the hospital, I clearly understood how to take each of my medications, including how much I should take and when’.
Changes between surveys
Significant changes
A comparison between the baseline and post-intervention surveys revealed statistically significant improvements in five questions, covering experiences in A&E, of operations and procedures, and of discharge. By ‘statistically significant’, we mean that the change was statistically reliable at a confidence level of 0.95 (i.e. the p-value was < 0.05).
Accident and emergency department
For question 3, ‘While you were in the A&E department, how much information about your condition or treatment was given to you?’, the proportion of respondents ticking ‘the right amount’ increased from 71% in the baseline survey (n = 38, 95% CI 57% to 81%) to 88% (n = 58, 95% CI 78% to 94%) in the post-intervention survey (p = 0.024).
Operations and procedures
For question 20, ‘Beforehand, did a member of staff explain the risks and benefits of the operation or procedure in a way you could understand?’, the proportion of respondents ticking ‘yes, completely’ increased from 66% in the baseline survey (n = 15, 95% CI 45% to 82%) to 91% (n = 21, 95% CI 75% to 98%) in the post-intervention survey (p = 0.040).
For question 21, ‘Beforehand, did a member of staff answer your questions about the operation or procedure in a way you could understand?’, the proportion of respondents ticking ‘yes, to some extent’ decreased from 35% (n = 8, 95% CI 18% to 55%) to 10% (n = 2, 95% CI 2% to 27%) in the post-intervention survey (p = 0.045). Taken together with a non-significant increase in respondents ticking ‘yes, completely’, from 57% in the baseline survey (n = 13, 95% CI 37% to 75%) to 81% (n = 17, 95% CI 61% to 93%) post intervention (p = 0.093), this represents a positive shift.
For question 22, ‘Beforehand, were you told how you could expect to feel after you had the operation or procedure?’, the proportion of respondents ticking ‘yes, completely’ increased from 43% (n = 10, 95% CI 24% to 61%) to 75% (n = 16, 95% CI 55% to 90%) post intervention (p = 0.026).
Discharge
For question 27, ‘Were you given enough notice about when you were going to be discharged?’, the proportion of ‘yes, definitely’ responses increased from 50% (n = 36, 95% CI 38% to 61%) to 66% (n = 53, 95% CI 55% to 75%) post intervention (p = 0.042). The proportion of ‘yes, to some extent’ responses decreased from 35% (n = 26, 95% CI 25% to 47%) to 20% (n = 17, 95% CI 13% to 31%) in the post-intervention survey (p = 0.040).
Non-statistically significant improvements
Non-significant improvements were also seen in a number of other questions. These included questions on communication. For example, the proportion of patients reporting that when they had important questions to ask a doctor they ‘always’ got answers they could understand increased from 65% (n = 46, 95% CI 53% to 75%) to 78% (n = 60, 95% CI 68% to 86%) post intervention (p = 0.084). In addition, the proportion of patients reporting that they ‘always’ got answers they could understand from nurses also increased from 65% (n = 44, 95% CI 52% to 74%) to 78% (n = 59, 95% CI 68% to 87%) in the post-intervention survey (p = 0.068).
Non-significant improvements were also seen in the help given to patients with washing, with 82% (n = 29, 95% CI 66% to 91%) of respondents saying that they ‘always’ got the help they needed post intervention, up from 63% (n = 26, 95% CI 48% to 77%) in the baseline survey (p = 0.071). The proportion of respondents who said that they were bothered by noise at night decreased from 24% (n = 18, 95% CI 15% to 34%) at baseline to 16% (n = 12, 95% CI 9% to 25%) in the post-intervention survey (p = 0.191).
Interviews
In many cases, patients had difficulty recalling their experiences of the ward because they had been unwell on admission. Most patients appear to have been transferred to other wards, so it is possible that they were providing feedback about their hospital experiences in general rather than specifically about the AMU. This applies particularly to their reports of discharge, which in many cases was delayed as a result of waiting for medication; in another case, discharge was rushed because of an inadequate home assessment.
Baseline interviews
Many patients commented on how caring and kind staff were to all patients, and felt that they were treated with dignity and respect. Patients highlighted how efficient and professional the staff were, providing reassurance and being attentive to their needs. Good communication was reported in some cases, but others mentioned that they would have liked more information about their medication. It was acknowledged by some patients that the ward was always busy but they felt safe as staff were nearby. In one case, the patient expressed that the ward was too busy and that there were not enough staff attending to patients’ needs.
However, there were comments about disturbance from other patients and their relatives, as well as frequent requests for help and assistance from patients. In one case, a patient was aggressive, which posed a challenge for staff, particularly as the patient’s relatives were also difficult to deal with.
Post-intervention interviews
Largely, patients interviewed post intervention reported positive experiences of being on the ward. Examples included excellent communication and information from all staff; staff being respectful, friendly and caring; and patients feeling that they were able to discuss their worries and fears about their condition. Patients also reported that they had confidence in the staff, that they were monitored frequently and that the staff were well trained.
Unlike in the baseline interviews, there were no reports of disturbance from other patients. However, some felt that there were, on occasion, too many visitors.
Case study 5: description summary
This case study site is a large general hospital providing acute services and renal, stroke and vascular specialties across four main sites. The trust is also an accredited cancer centre providing services to a wider population.
A CQC inspection in June 2015 resulted in an overall rating of ‘requires improvement’, with some specific services assessed as ‘good’. Staff feedback via the national survey placed this trust in the worst 20% nationally, with staff feeling undervalued, facing a lack of support from senior staff, and reporting work-related stress.
In response to the 2015 CQC ratings, the trust now has a well-developed and extensive strategy, supported by the board, for improving patient experience, which is very much a task-driven approach. This includes development of a diverse range of patient feedback mechanisms and sources of data; different approaches to staff engagement, including dissemination of FFT findings and triangulation of patient feedback to staff; and specific small-scale interventions to support a good patient experience. In addition to these, a sub-board-level patient and carer experience and engagement group has been set up. There is also a dedicated QI team within the trust. When the CQC visited again in 2017, it found that significant improvements had been made and rated the trust ‘good’.
Patient experience data are actively discussed with staff in the trust and changes have been made based on the results from patient experience data collections: changes to the way staff interact with patients; changes to the way care is provided to patients; changes to the layout of the hospital, ward or department; and changes to the transactional aspects of care (e.g. cleanliness, catering).
In May 2016, the ward that had been selected as a case study changed name and location within the hospital, meaning that the staffing team and patients moved to a different physical space and the ward assumed a new identity. This occurred after the US-PEx baseline survey but before the start of the ethnographic fieldwork. Because the ward team remained the same, this was not deemed to be a problem: the majority of the survey content focused on either relational aspects of care (staff interactions) or experiences before and after being on the ward, rather than on the physical features of the ward.
Case study ward
This case study ward is a 28-bed acute general medical ward for female patients with an average stay of 5–9 days. There is a high proportion of frail, elderly patients.
Previous use of patient feedback at ward level
Very few ward-level patient experience data were routinely used to inform improvements, aside from limited FFT data.
Staff involvement in the project
The director of nursing nominated the ward for the study; it had been through a difficult period, and its participation in the project was considered to be an opportunity for development. Staff involved in the study were selected based on their capability and motivation towards change. Two nurses and two HCAs led the project.
Quality improvement interventions
At the start of the project, the team felt that they did not have any patient experience data to draw on, apart from some very limited FFT data. Their approach was therefore to start by generating some new data at a ward level and then identify themes to address. They implemented ‘FFT Friday’, a weekly drive to increase their FFT response rate; they asked a colleague from the professional development team to come in and observe the ward and feed back to them; and they asked the same colleague to conduct eight staff interviews exploring views on patient experience. ‘Snapshot’ data from patients were collected on a ‘bubble board’: patients were invited to write how they were feeling about their stay on the ward on paper speech bubbles, and stick these on the board. The ‘bubble board’ was described as both data and an intervention.
In terms of improvements made, the team implemented a number of other small-scale changes. These included sending sympathy cards to bereaved relatives of patients who had died on the ward; implementing a welcome pack; promoting (and auditing the use of) sleep-well packs to address noise at night; and having nurses accompany doctors on ward rounds to help with patient understanding and communication. Some changes were based on patient experience data, and some were from staff ideas based on perceived need.
Beyond these small changes, a more significant development put into place was the mainstreaming of patient experience into the routine work practices of the ward. For example, all new staff are now made aware of the importance of patient experience when they start, patient experience has a regular slot in staff meetings, and patient experience news is posted on a board in the staff area. Core team members report a change not only in their own awareness of listening to the patient’s voice, but also in other staff doing the same, suggesting that the ward culture may be changing in this regard.
Challenges experienced
The main problem from the staff’s perspective has been a lack of time to organise meetings or interventions around patient experience. QI planning meetings have been infrequent and most ideas have been floated and acted on during brief corridor encounters or in staff members’ own time. Further challenges have been the large number of staff vacancies, high staff turnover and reliance on a very high proportion of agency staff. This has made continuity hard to achieve. In spite of these challenges, the team remains committed to acting on patient experience to improve care on the ward.
Patient experience survey and interview results
Baseline survey
Salient results highlighted for the ward included:
-
50% (n = 16) reported that they were not able to find a member of staff to talk to about their worries or fears
-
34% (n = 12) reported that they were not involved as much as they wanted to be in decisions about their care and treatment
-
46% (n = 16) did not feel that they were involved in decisions about their discharge from hospital
-
54% (n = 14) reported that a member of staff did not tell them about danger signals they should watch out for when they went home
-
53% (n = 19) reported that they were treated with respect and dignity
-
51% (n = 18) reported that members of staff ‘always’ worked well together
-
86% (n = 26) ‘strongly agreed’ or ‘agreed’ with the statement ‘When I left the hospital, I clearly understood how to take each of my medications, including how much I should take and when’.
Changes between surveys
Significant changes
A comparison between the baseline and post-intervention surveys revealed that there were statistically significant improvements on eight questions, including care while in A&E, care from staff and noise at night. By ‘statistically significant’, we mean that the change was statistically reliable at a confidence level of 0.95 (i.e. the p-value was < 0.05).
Accident and emergency department
For question 3, ‘While you were in the A&E department, how much information about your condition or treatment was given to you?’, the proportion of respondents selecting ‘I was not given any information’ decreased from 27% in the baseline survey (n = 8, 95% CI 14% to 44%) to 8% (n = 3, 95% CI 2% to 20%) in the post-intervention survey (p = 0.041).
For question 4, ‘Were you given enough privacy when being examined or treated in the A&E department?’, the proportion of ‘yes, definitely’ responses increased from 71% (n = 20, 95% CI 53% to 86%) to 93% (n = 42, 95% CI 83% to 98%) post intervention (p = 0.011). The proportion of ‘yes, to some extent’ responses also decreased from 29% (n = 8, 95% CI 15% to 47%) at baseline to 7% (n = 3, 95% CI 2% to 17%) post intervention (p = 0.011).
Care from staff
For question 7, ‘While you were in hospital did the doctors and nurses ask you what name you prefer to be called by?’, the proportion of ‘yes, all of them’ responses increased from 33% (n = 12, 95% CI 20% to 50%) in the baseline survey to 64% (n = 34, 95% CI 51% to 76%) post intervention (p = 0.004). The proportion of respondents who ticked ‘none of them’ decreased from 25% (n = 9, 95% CI 13% to 41%) to 9% (n = 5, 95% CI 4% to 19%) post intervention (p = 0.048).
For question 10, ‘In your opinion, did the members of staff caring for you work well together?’, the proportion of ‘yes, always’ responses increased from 51% (n = 18, 95% CI 35% to 67%) to 82% (n = 40, 95% CI 69% to 91%) in the post-intervention survey (p = 0.003). The proportion of ‘yes, sometimes’ responses decreased from 40% (n = 14, 95% CI 25% to 57%) in the baseline survey to 14% (n = 7, 95% CI 7% to 26%) post intervention (p = 0.007).
For question 14, ‘Did you find someone on the hospital staff to talk to about your worries and fears?’, the proportion of ‘no’ responses decreased from 50% (n = 16, 95% CI 33% to 67%) to 26% (n = 11, 95% CI 15% to 41%) in the post-intervention survey (p = 0.035).
For question 24, ‘Overall, did you feel you were treated with respect and dignity while you were in the hospital?’, the proportion of ‘yes, always’ responses increased from 53% (n = 19, 95% CI 37% to 68%) to 77% (n = 41, 95% CI 65% to 87%) in the post-intervention survey (p = 0.015).
For question 25, ‘During your time in hospital did you feel well looked after by hospital staff?’, the proportion of ‘yes, always’ responses increased to 74% (n = 40, 95% CI 61% to 84%) from 50% (n = 18, 95% CI 34% to 66%) in the baseline survey (p = 0.019). The proportion of ‘yes, sometimes’ responses decreased from 42% (n = 15, 95% CI 27% to 58%) in the baseline survey to 22% (n = 12, 95% CI 13% to 35%) post intervention (p = 0.049).
Noise at night
For question 17, ‘Were you ever bothered by noise at night from hospital staff?’, the proportion of ‘yes’ responses decreased from 50% (n = 18, 95% CI 34% to 65%) in the baseline survey to 20% (n = 11, 95% CI 11% to 33%) in the post-intervention survey (p = 0.003).
Non-statistically significant improvements
Non-significant improvements were also seen on a number of other questions. These included questions on communication. For example, the proportion of patients reporting that when they had important questions to ask a doctor, they always got answers they could understand increased from 37% (n = 13, 95% CI 23% to 54%) to 51% (n = 25, 95% CI 37% to 65%) in the post-intervention survey (p = 0.208). Similarly, the proportion of patients reporting that they always got answers they could understand from nurses also increased from 44% (n = 15, 95% CI 29% to 61%) to 57% (n = 28, 95% CI 43% to 70%) post intervention (p = 0.243).
Interviews
Baseline interviews
Baseline interviews revealed many reports of positive experiences. Some patients observed that the ward was run efficiently; staff were helpful and personable. There were several examples of care provided in a timely manner and in response to their needs and requests for help. Furthermore, some patients observed that staff were very caring to other patients who needed help. Examples were given about how staff spent time with individual patients talking and ensuring they were cared for. Despite the demands on the staff, the staff were good-natured and attentive. There were good examples provided of staff involving the patient’s family, keeping them informed and encouraging questions. Some patients described better communication with nurses than doctors.
Generally, patients described being cared for with dignity and respect, but one patient had observed a lack of privacy for other patients, especially during personal care.
The discharge process was considered too long for some, in terms of delays waiting for medications from pharmacy and a lack of information. One patient reported that they were discharged with very little assistance to cope at home, especially help with caring responsibilities.
Mostly patients described good examples of information given about procedures and investigations, but there were some examples of staff lacking knowledge of their medications.
There were some reports that information was not given freely, with patients having to ask staff for details of treatments and processes of care.
Some patients were disturbed by other patients, especially those with dementia. Some patients were upset by the presence of so many elderly patients on the ward. Frequent disturbance from staff carrying out procedures, and from lights being left on, was troublesome for one patient. One patient observed arguments between members of staff and between relatives and a health-care assistant.
Post-intervention interviews
In post-intervention interviews, patients reported witnessing kind and compassionate care being given to other patients on the ward. Patients were generally treated with dignity and respect, and staff were approachable and kind. Staff communicated well with one another about patients’ needs. Good leadership from the ward manager was observed by patients.
The age profile of other patients on the ward was a concern for some younger patients. Disturbance by patients with dementia was also upsetting for some, but generally patients reported that staff managed the level of noise well.
There were mixed reports of receiving medications in a timely way; for some patients, pain relief was administered when needed but for other patients there were delays. There were comments about the lack of staff on the ward, which had an impact on the timing of medications and staff’s ability to provide emotional support. Although there were reports of good communication and information, contrasting reports were given of the lack of consistency of information given and patients having to repeat their history, particularly to doctors.
There were other reports of attentiveness to needs and provision of detailed information about the patient’s condition and involvement in decisions.
Reports of delayed discharge due to pharmacy dispensing medication and inadequate information about aftercare were highlighted by some patients, but others reported a more positive experience of discharge, saying that it was straightforward and they were given good information.
Case study 6: description summary
This case study site is a foundation trust providing general and specialist services to a large population. The trust has undergone significant restructuring following regional NHS changes, and staff morale continues to be low. In 2015, the CQC published its Quality Report, acknowledging the relative infancy of the trust (the inspection took place just 5 months after the trust was established) and that a particularly challenging winter period (2014–15) had affected services. Nevertheless, the CQC concluded that the overall rating for the trust at that time was ‘requires improvement’. More specifically, the CQC stated that improvement was required in the areas of safety, effectiveness and responsiveness.
In addition, during the study the trust was placed into ‘special measures’ with regard to finances and resources. As a consequence, all activity requiring funding or additional resources has to be scrutinised at numerous levels in order to justify the spending; this extends to any resources required for the US-PEx project.
Furthermore, the hospital was operating at level 4 of the NHS Escalation Management System (i.e. extreme pressure that may affect patient safety). The hospital appears to have maintained this level of pressure for most of the winter 2016–17 period.
Case study ward
The case study is a general ward that admits predominantly male patients experiencing a wide range of medical conditions.
The ward ran at full capacity throughout the period of fieldwork, which affected staff morale. At the onset of the ward’s participation in the US-PEx study, clinical members reported staff morale on the ward to be ‘low’; this was later confirmed by the Ward Staff Survey conducted as part of the ward’s QI project. The low morale also coincided with a period of less positive feedback from the FFT relevant to the ward.
Ward patient feedback
The patient experience office within the hospital/trust placed greatest emphasis and value on the scores obtained from the various departmental FFTs (outpatients, inpatients, A&E, maternity, etc.). Responses to these FFTs are circulated to the respective wards and are subsequently cascaded to wider front-line staff via e-mail, meetings or poster notification. The case study ward team used these data as the main comparative benchmark for QI in their chosen activities. The baseline survey and interview results were not specifically used for designing improvement initiatives.
Staff involvement
The core team mainly consists of senior nursing staff: two quality nurses, a matron, the ward manager and an associate chief nurse for medicine. All of the nursing team have been involved in QI projects in the past. The team also included a patient representative, a former NHS employee, who had comparatively limited input within the US-PEx project team.
Commitment was given from the patient experience director but the ward staff designed and led the projects.
Furthermore, winter pressures in 2016–17 led to the redeployment of the team champion to a different hospital in the trust, and resulted in two of the team leaving the second learning community (December 2016) after being ‘called back’ to the hospital to provide clinical assistance.
Quality improvement initiatives
This team’s project aimed to improve staff morale and staff participation/involvement in decision-making and communication. It was based on the belief that improving staff experience would lead to the provision of better care and, in turn, to more positive patient feedback.
Specific core initiatives included:
-
a reorganisation of clinical staff communication at shift handover, including more focused discussions at the end and beginning of shifts
-
an amended medicines round for clinical staff.
Peripheral tasks for improved patient experience included:
-
a comparative observation of like-for-like ward activities in another location
-
the introduction of the ward-based ‘It’s OK to ask’ scheme (to improve patient–staff communication)
-
the introduction of a trial run of ‘sleep packs’ for patients
-
the introduction of the trust-wide ‘Purple Bow’ initiative (an end-of-life communications project)
-
the implementation of a trust-wide scheme to improve discharge procedures, including electronic and hard-copy formats
-
a Carer’s Passport
-
an area where staff could use Post-it (3M, St Paul, MN, USA) notes to put forward ideas and suggestions.
However, owing to a variety of pressures relating to reduced capacity, limited resources and limited data, approximately 50% of the team’s original plans had not been fulfilled by the end of fieldwork.
Challenges experienced
Owing to staff capacity levels, organisational distress, redeployed staff and reduced resources, most activities progressed slowly during 2017.
Throughout January–June 2017, most of the scheduled US-PEx meetings were cancelled owing to winter or capacity pressures, despite the enthusiasm and motivation expressed at the meetings that did occur. Redeployment of a key project leader had an impact on the membership and organisation of the US-PEx team.
Patient experience survey and interview results
Baseline survey
Salient results highlighted for the ward included:
-
28% (n = 11) said that they were not able to find a member of staff to talk to about their worries or fears
-
47% (n = 27) said that they were not given written information about what they should or should not do after leaving hospital
-
38% (n = 19) said that staff had not told them about any danger signals they should watch for after they went home
-
47% (n = 26) said that doctors or nurses gave their family, or someone close to them, all the information they needed to help care for them at home
-
83% (n = 52) said that they were ‘always’ treated with respect and dignity
-
74% (n = 29) felt that they ‘always’ got the help they needed with washing
-
84% (n = 36) said that staff discussed whether they might need any further health or social care services after leaving hospital.
Changes between surveys
Significant changes
A comparison between the baseline and post-intervention surveys revealed that there was one statistically significant improvement in results for this case study. By ‘statistically significant’, we mean that the change was statistically reliable at a confidence level of 0.95 (i.e. the p-value was < 0.05). For question 16, ‘When you needed help with washing, did you get it when you needed it?’, the proportion of ‘yes, sometimes’ responses increased from 15% in the baseline survey (n = 6, 95% CI 7% to 29%) to 35% (n = 12, 95% CI 21% to 52%) in the post-intervention survey (p = 0.049).
Two questions showed a statistically significant decline in results:
-
For question 30, ‘Did hospital staff tell you who to contact if you were worried about your condition or treatment after you left hospital?’, the proportion of ‘yes’ responses decreased from 77% (n = 39, 95% CI 64% to 86%) to 56% (n = 22, 95% CI 41% to 71%) in the post-intervention survey (p = 0.044). The proportion of ‘no’ responses increased from 24% (n = 12, 95% CI 14% to 36%) in the baseline survey to 44% (n = 17, 95% CI 29% to 59%) post intervention (p = 0.044).
-
For question 35, ‘When I left hospital, I clearly understood how to manage my health?’, the proportion of respondents that said ‘disagree’ increased from 7% (n = 4, 95% CI 3% to 17%) to 22% (n = 9, 95% CI 12% to 36%) in the post-intervention survey (p = 0.041).
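To illustrate how these comparisons were made, the following is a minimal sketch (in Python) of a two-sided z-test for the difference between two independent proportions, applied to the question 16 result reported above. The denominators (40 at baseline, 34 post intervention) are back-calculated from the reported percentages and counts rather than taken from the report, and the pooled-variance formulation of the test is an assumption, so the resulting p-value is expected only to be close to, not identical with, the reported value of 0.049.

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two independent proportions.

    Uses a pooled estimate of the proportion for the standard error; this
    formulation is an assumption, as the report does not specify it.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - normal_cdf(abs(z)))
    return z, p_value

# Question 16, 'yes, sometimes': 6 of ~40 at baseline vs 12 of ~34 post intervention
# (denominators back-calculated from the reported 15% and 35%, so approximate).
z, p = two_proportion_z_test(6, 40, 12, 34)
print(f"z = {z:.2f}, p = {p:.3f}")  # expected to be close to the reported p = 0.049
```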
Non-statistically significant improvements
A number of questions showed non-significant improvements in the results; although these cannot be taken as definitive evidence that patient experience improved, they are nonetheless encouraging. They include the following.
Care from staff
For question 12, ‘Did you have confidence in the decisions made about your condition or treatment?’, 74% (n = 36, 95% CI 60% to 84%) of respondents said that they ‘always’ had confidence in the decisions made about their condition or treatment, an increase from 67% (n = 40, 95% CI 54% to 78%) at the baseline (p = 0.442).
For question 17, ‘Were you ever bothered by noise at night from hospital staff?’, 86% of respondents (n = 42, 95% CI 74% to 93%) said that they were not bothered by noise at night from hospital staff, compared with 75% (n = 44, 95% CI 63% to 84%) at the baseline (p = 0.152).
Communication
For question 14, ‘Did you find someone on the hospital staff to talk to about your worries and fears?’, 33% of respondents (n = 13, 95% CI 20% to 48%) said that they ‘definitely’ found someone on the hospital staff to talk to about their worries and fears, compared with 23% (n = 9, 95% CI 12% to 38%) in the baseline survey (p = 0.350).
Information about discharge
For question 28, ‘Were you given clear written information about what you should or should not do after leaving hospital?’, the proportion of patients who said that they were given clear written information about what they should or should not do after leaving hospital increased from 53% (n = 30, 95% CI 40% to 65%) in the baseline survey to 61% (n = 25, 95% CI 46% to 75%) post intervention (p = 0.412).
For question 33, ‘Did the doctors or nurses give your family or someone close to you all the information they needed to help care for you at home?’, 56% of respondents (n = 24, 95% CI 41% to 70%) said that doctors or nurses ‘definitely’ gave their family (or someone close to them) all the information they needed to help care for them at home, which is an increase of 9 percentage points (47%, n = 26, 95% CI 35% to 60%) from the baseline survey (p = 0.401).
Interviews
Baseline interviews
Several patients reported a good experience in baseline interviews and provided positive comments about interactions with staff and care provision. Responsiveness and attendance to personal needs were highlighted, and care was reportedly given with compassion and kindness; patients felt that they were treated with dignity and respect. Some patients described the ward as busy but said that this did not have too great an impact on how care was given; others, however, felt that it did affect how quickly calls were responded to. The information given to patients, both verbal and written, was mostly described as good, although some found it variable.
A few patients reported poor attendance by staff to their personal needs; in one case this was reported by a carer of a patient on the ward.
Disturbance at night was referred to by patients, which related to noise from staff but also from a specific patient on the ward.
There were several reports of poor discharge planning, with no information given on the reasons for delay or advice given regarding aftercare. In one case, a patient felt that they had experienced an unsafe discharge, which had resulted in another admission to hospital.
Post-intervention interviews
Similar themes were reported in post-intervention interviews, with slightly more positive comments about staff interactions, care and responsiveness to needs than in the baseline interviews. The impact of the busyness of the ward on experience was reported at baseline but this was not referred to specifically in follow-up interviews.
Patients reported sleep disturbance and noise at night in follow-up interviews, but did not refer to specific incidents. Delayed discharge continued to have a negative impact on patient experience, but the planned intervention to improve the process was put on hold because of organisational distress, and was not implemented.
It is difficult to attribute positive or negative changes in patient experience to the quality improvement activities, given the timing of the surveys and interviews relative to implementation.
Appendix 2 Designing and analysing the patient experience survey
The questionnaire used for the baseline and post-intervention surveys focused on the experience of four areas of the patient pathway of care:
-
referral to service
-
inpatient care
-
discharge
-
support for self-management.
A database of questions was compiled and mapped against the four areas. The questions were selected from extensively tested, reputable sources such as the NHS Adult Inpatient Survey, Oxford Patient Involvement and Experience scale (OxPIE), and previously developed questions about self-management and demographic indicators. A total of 130 questions were compiled during this initial mapping.
Three researchers (EA, EG and JK) independently assigned each question to one of three groups: ‘yes, include’, ‘no, exclude’ or ‘maybe’. The groupings were analysed and those questions that received three ‘yes, include’ groupings were prioritised for inclusion in the questionnaire. Questions that received a mix of positive groupings (such as two ‘yes, include’ and one ‘maybe’) were then discussed and a decision was made about whether or not these should also be considered for the questionnaire.
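As a way of making the selection rule explicit, the short sketch below (Python, illustrative only) encodes the triage logic described above: three ‘yes, include’ ratings prioritise a question for inclusion, and a mix of positive ratings flags it for discussion. The handling of any other combination (e.g. ratings that include a ‘no’) is an assumption made for illustration; the report does not spell out the residual rule.

```python
from collections import Counter

def triage(ratings: list[str]) -> str:
    """Classify a candidate question from three independent researcher ratings.

    Ratings are 'yes', 'maybe' or 'no'. Three 'yes' ratings prioritise the
    question for inclusion; a mix of positive ratings flags it for discussion.
    The rule for any other combination is an assumption made for illustration.
    """
    counts = Counter(ratings)
    if counts["yes"] == 3:
        return "include"
    if counts["no"] == 0 and counts["yes"] >= 1:
        return "discuss"
    return "exclude"  # assumed residual rule, not stated in the report

print(triage(["yes", "yes", "yes"]))    # include
print(triage(["yes", "maybe", "yes"]))  # discuss
print(triage(["no", "maybe", "yes"]))   # exclude (assumed)
```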
Based on the results of this process, a draft of the questionnaire was created and the face validity of the instrument assessed among the wider research group. This led to a small number of content changes to ensure that key components of each area of care were adequately covered.
The final version of the questionnaire, which can be seen in Appendix 3, comprised 44 questions. Table 10 shows the number of questions in the questionnaire by area of care.
Table 10: Number of questions in the questionnaire, by area of care

| Question type | Number of questions compiled during initial mapping | Number of questions included in the final questionnaire |
|---|---|---|
| Referral | 9 | 6 |
| Inpatient care | 65 | 19 |
| Discharge | 13 | 7 |
| Support for self-management | 30 | 4 |
| Demographics | 13 | 6 |
| Other (interview opt-in and other comments free text) | 2 | 2 |
| Total | 130 | 44 |
Appendix 3 Baseline and post-intervention surveys: questionnaire
© 2017 Picker. All Rights Reserved. Reproduced with permission from the Picker Institute Europe.
Appendix 4 Patient experience survey: methodological considerations
Sample size and detecting differences
Ward teams were asked to draw a census of all patients discharged in a given 3- or 4-month period, to whom a questionnaire would be sent (at two sites, a sample of 250 inpatients was selected owing to the much higher number of discharges over the sampling period). Despite this generous sampling period, the before and after sample sizes for sites were small, reflecting the volume of activity in the wards. Thus, the degree of improvement needed for a change to be identified as statistically significant is high. This contrasts with studies with very large sample sizes, such as the NHS Adult Inpatient Survey, in which the very high precision of survey estimates means that even quite small changes in scores may be identified as statistically significant. As shown in the individual case study results (see Appendix 1), the CIs around the results were wide because of the small sample sizes; the sketch below illustrates the effect of sample size on CI width.
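The sketch below (Python) compares the width of a 95% CI around a score of 50% for a ward-sized sample of 40 respondents with that for a sample of 1000, using the Wilson score interval. The choice of interval method is an assumption for illustration, although it appears consistent with the intervals quoted in Appendix 1, and the specific sample sizes are likewise illustrative.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion successes/n (illustrative choice)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# A score of 50% on a ward-sized sample versus a much larger survey sample
# (both sample sizes are illustrative, not taken from the report):
for n in (40, 1000):
    lo, hi = wilson_ci(n // 2, n)
    print(f"n = {n:4d}: 95% CI {lo:.1%} to {hi:.1%} (width {(hi - lo):.1%})")
# With n = 40 the interval is roughly 30 percentage points wide, compared with
# about 6 points at n = 1000, so only large changes register as significant.
```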
There were 36 questions in the survey for which the change in the proportion of responses could be compared between the baseline and post-intervention surveys for each case study site. As we used z-tests to compare individual pairs of proportions (i.e. comparing the responses to individual response options within a question), a total of 108 individual tests were carried out for each case study site to determine whether any change in the proportion of responses was statistically significant. On the assumption that there are no underlying differences, we would expect 5% of these tests to be significant just by chance at the 95% confidence level (p < 0.05), that is, approximately five statistically significant results per case study site arising by chance alone (the sketch following this paragraph illustrates this expectation). At two of the sites (case study sites 1 and 3) only one statistically significant result was found, while at case study sites 2 and 6 three statistically significant changes in the responses were found. At case study site 4, there were five questions for which the change in the proportion of respondents selecting one of the response options was statistically significant, so it could be argued that, given the volume of testing carried out, we would expect this number of tests to be significant ‘just by chance’. Similarly, at case study site 5, where eight statistically significant results were found, five of these eight tests could have been significant simply by chance. This highlights the caution needed in interpreting the patient survey results. As discussed in each of the case study survey results (see Appendix 1), a number of questions showed an increase in the proportion of patients responding positively between the baseline and post-intervention surveys, which might have been statistically significant had there been a larger number of respondents.
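The expectation of chance findings under multiple testing can be made concrete with a short calculation. The sketch below (Python) computes the expected number of ‘significant’ results among 108 tests at p < 0.05 when no true change exists, and the probability of observing at least a given number of such results. Treating the 108 tests as independent is a simplifying assumption (response options within a question are not independent), so the figures are indicative only.

```python
from math import comb

ALPHA = 0.05   # significance threshold used in the report (p < 0.05)
N_TESTS = 108  # 36 questions x 3 response-option comparisons per case study site

# Expected number of 'significant' results per site if there were no true changes
expected = ALPHA * N_TESTS
print(f"Expected chance findings per site: {expected:.1f}")  # 5.4

def prob_at_least(k: int, n: int = N_TESTS, p: float = ALPHA) -> float:
    """P(at least k of n tests reach significance by chance), treating the tests
    as independent (a simplifying assumption, since response options within a
    question are not in fact independent)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

for k in (1, 3, 5, 8):
    print(f"P(at least {k} significant results by chance) = {prob_at_least(k):.2f}")
```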
Questionnaire design
The questionnaire used for the baseline and post-intervention surveys focused on the experience of four areas of care:
-
referral to service
-
inpatient care
-
discharge
-
support for self-management.
The same questionnaire was used for each case study site. The questionnaire was designed before knowing what the case study sites would be and what ward and types of interventions they would choose to focus on. This approach was beneficial because it meant that all sites used a common questionnaire, and the content of the survey was based on existing evidence around the elements of care likely to be important to the majority of patients. This was useful in providing sites with evidence about areas where improvements would be beneficial. However, the ability of the questionnaire to pick up changes in patient experience as a result of quite specific initiatives (such as welcome packs) merits consideration. If changes narrowly address particular elements of people’s experiences – and if changes in these do not lead to changes in other elements of care that are covered in the questionnaire – then the survey would not be likely to show differences.
Timing
The sampling months used for the baseline survey were January, February and March 2016 (with one site also including discharges in April 2016 due to a small sample size). The sampling period for the post-intervention survey was originally planned to cover the same period in 2017, but was deferred to March, April and May 2017 to allow for delays in the implementation of the interventions caused by a very difficult winter. Although ideally the post-intervention survey would have been conducted over the same calendar months to eliminate any seasonality effect on people’s responses, the risk of potentially not capturing the impacts of the interventions was considered to outweigh this limitation.
Appendix 5 Topic guide for the patient interviews
Introduction
-
Background to the project and purpose of the interview: to investigate patient experiences of care.
-
Emphasise confidentiality – all personal details to be removed from transcripts so no individual can be identified.
-
Interview to last about 1 hour.
-
Questions from participant about the interview?
-
Obtain verbal consent to turn on tape recorder.
-
Signing of consent form (if not already completed).
Opening narrative
-
Tell me about your experience of x ward?
Referral
-
If your stay in hospital was planned, did you receive any information before you were admitted?
-
Can you tell me the types of things the information covered?
Prompts:
-
Was it helpful/relevant?
-
Was any information missing/would you have liked any more information on anything?
Inpatient care
-
What was the ward environment like?
Prompts:
-
Noise, privacy, cleanliness, lighting, sleep, food, etc.
-
What did you think of the care provided?
Prompts:
-
Medication, dressings
-
Help with personal care – washing, eating, toileting
-
Dignity and respect
-
How did the staff treat you? (Prompt for different staff groups.)
Prompts:
-
Communication
-
Involvement in decisions
-
Involvement of family
-
Treated as an individual with own needs/preferences/values respected?
-
Able to discuss worries and fears?
-
Did you feel you got enough information about your condition and treatment while in hospital?
-
What was visiting like for you/your family members?
-
Can you describe to me some examples of really good quality of care while you were there?
-
Can you tell me about some things you thought could be improved?
Discharge
-
Can you tell me about the few days leading up to your discharge from hospital?
-
When you were being discharged did you receive any written information?
Prompts:
-
Can you tell me the types of things the information covered?
-
Was it helpful? Would you have liked any more information?
Support for self-management
-
When you got home from hospital did you feel able to look after yourself?
-
Did you get any support from health or social care staff/or voluntary organisations?
-
(If yes) what did this support look like?
Closing
Interviewer to wrap up by emphasising the importance of the information gathered for the research project and by thanking the participant for their involvement.
Appendix 6 Pro forma
Reflections
Group dynamics
Is there an obvious leader?
Are all team members equally vocal?
How does the patient/carer fit in (if there is one)?
How does the team ‘feel’ (cohesive/hierarchical/relaxed/engaged, etc.)?
Are different views expressed in relation to patient experience data and the project? If so, how are these managed? Whose prevails?
Does being given distributed leadership for choosing how to use patient experience data appear to motivate staff?
Engagement with US-PEx information and materials
What kinds of questions do the trust teams ask?
Which kinds of information seem to be most useful/relevant/interesting?
Which kinds of information are ignored/rejected?
Do the trust teams challenge information from US-PEx experts?
What comments are made about the US-PEx resource book?
Do people refer to the materials, and, if so, how?
Formulating a plan
What are the triggers to discussion?
How are new sources of information weighed in relation to team’s own experience?
What kinds of data/QI methods are discussed/not discussed?
Do they focus on transactional (access, waiting times, food, noise) or relational (dignity, empathy, emotional support, involvement) aspects of care?
How easy do teams find it to produce a plan?
What assumptions form the basis of plans?
Is consensus reached, and if so, how?
Are concrete steps planned to implement the project?
Appendix 7 Information sheet and consent form
Appendix 8 Qualitative coding and visual mapping
Developing and applying a common coding framework
In the final phase of all fieldwork, a series of ethnographer data sessions aimed to harmonise the way in which data were analysed within the team. These data included documentary sources, semistructured interviews, field-based observations and individual field notes. The analysis focused on the production of a common coding framework (for use within NVivo 10) that would aim to include all aspects of the data generated from the six case study sites. Following a series of data sessions dedicated to formulating a suitable coding framework, the three ethnographers identified 13 domains (or ‘parent nodes’) as the key areas for analysis. These domains, which emerged from a shared primary analysis of each respective data set, included ‘context’, ‘organisational culture and practices’ and ‘quality improvement’ (plus 10 others). Thematic analysis then established a wider framework consisting of 58 separate codes (or ‘child nodes’) across the 13 domains. Each of these 58 child nodes represented a shared theme or topic that was discussed or observed during fieldwork (interviews and observations) and was consistent across the various data sets generated by the three ethnographers. In addition to producing a common coding framework, the ethnographers agreed summary descriptions of each parent and child node and collected these together in a common codebook for reference. Once the codes and codebook had been validated for analytical relevance by the principal investigator, each ethnographer created an NVivo file that reflected the key domains and associated fields; a schematic representation of this structure is sketched below.
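For readers unfamiliar with hierarchical coding frameworks, the sketch below (Python, purely illustrative) shows one way the parent/child node structure and codebook descriptions could be represented outside NVivo. Only the three parent-node names quoted above are taken from the study; all child nodes, descriptions and field names are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A code in the framework: a parent node (domain) or a child node (theme)."""
    name: str
    description: str                      # agreed summary description from the codebook
    children: list["Node"] = field(default_factory=list)

# Only the three parent-node names below appear in the report; the descriptions
# and child nodes are hypothetical placeholders for illustration.
codebook = [
    Node("Context", "hypothetical description",
         [Node("Ward setting", "hypothetical child node")]),
    Node("Organisational culture and practices", "hypothetical description",
         [Node("Leadership and hierarchy", "hypothetical child node")]),
    Node("Quality improvement", "hypothetical description",
         [Node("Barriers to QI", "hypothetical child node")]),
    # ...plus 10 further parent nodes and, in total, 58 child nodes in the study
]

total_children = sum(len(parent.children) for parent in codebook)
print(f"{len(codebook)} parent nodes and {total_children} child nodes in this sketch")
```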
Ethnographic mapping and analytical inspiration
As noted in Chapter 2, Study methods, one ethnographer (SP) was responsible for the conduct and management of fieldwork at three case study sites, while the others worked on two sites and one site, respectively. Managing three sites created a voluminous and thematically diverse data set. To deal with this complexity, Stephen Parkin devised a system of note-taking and information management to keep the various sites separate, and during a period of reflection began sketching the ‘evolution’ of a QI project at one particular location in his field-note journal.
The need to sketch the evolution of one particular QI project emerged from the development and progress of a complex, multifaceted, multicomponent project designed by the ward team concerned. More specifically, the relevant team had devised a QI project consisting of three aims and nine objectives and used a variety of (mainly) qualitative QI techniques to reach these various goals. To summarise the various workstreams and methods attached to this project, Stephen Parkin began charting the ward team’s activities and progress as an evolutionary and longitudinal timeline. This timeline noted a variety of developments, including positive and negative influences, and ultimately produced a visual representation of the ward’s shared efforts. After recognising the visual impact of this chart (in terms of summarising the work conducted by the ward team), he repeated the exercise for two further case studies. This produced three very different A4 notebook sketches representing the different aims and objectives of the three case studies concerned, and also visually portrayed three different ways in which the respective ward teams had approached the task of conducting QI on the front line of NHS services. Stephen Parkin then reproduced the sketches in the form of three large-scale process maps, using colour-coding and various symbolic representations to characterise the assorted processes, progress, barriers and outcomes of the three case study sites’ projects. These hand-crafted maps were subsequently recreated using Visio (Microsoft Corporation), a diagramming program used for constructing process maps. The creation of colour-coded, symbolic ethnographic process maps helped to produce an ‘at-a-glance’ description of entire improvement projects and single workstreams conducted on the three wards. The approach was subsequently developed and refined by the team in the data sessions and a dedicated ‘mapping’ meeting. Each of the ethnographers contributed to this process and all three produced versions of the maps for their sites.
Whereas conventional process mapping seeks to improve efficiency within organisations and business settings, in this case the result was an ethnographic representation of process mapping, to portray (visually and schematically) the ethnographic object within each of the three locations. Similarly, the collapsed data from these three maps also informed and complemented the range of data sessions and other meetings held throughout the lifetime of the US-PEx study. They helped to identify positive/negative QI, effective methods of working as a ward team and the assorted negative/positive influences on project outcome (all of which are consistent with conventional process mapping). Perhaps more simply, an ‘analytical inspiration’105 unpacked the implicit and explicit narrative contained within the three maps (both collectively and individually) and helped to inform and develop the ‘team ethnography’ approach to the shared understanding and interpretation of how front-line staff engage with QI projects.
Appendix 9 Visual mapping example
Appendix 10 Themes on inpatient ward experience
These themes came from a discussion with our lay panel members, who drew on their own recent inpatient experience or that of a family member. We asked them to think about what ‘touchpoints’ they thought would be important to share with staff. ‘Touchpoints’ are about how patients and carers feel about their interactions with staff and services. They can include examples of good practice that we can learn from, but also areas where there is room for improvement.
We have used this set of themes to guide a search through existing interviews on www.healthtalk.org to create a ‘trigger film’ about inpatient ward experiences. Trigger films are used in EBCD to stimulate joint discussion between patients, family members and staff about what the local priorities for improvement should be, which they then work together to implement. The film is available online for case study sites to use if they wish [URL: www.healthtalk.org/peoples-experiences/improving-health-care/trigger-films-service-improvement/inpatient-medical-ward-experiences (accessed 18 June 2019)], but it does not necessarily cover every point and its purpose is just to ‘trigger’ further discussion locally. The lay panel would like to share this background document too in case it is useful.
Arriving in hospital: the ward is a strange new place
-
Admissions: be alert to what happened to people before they got here (delays, anxiety, bureaucratic hurdles).
-
We may have been wrenched unexpectedly from caring responsibilities, tasks at home, work, or pets who need looking after.
-
We need information – a guide about what to expect (preferably before admission, or early on if it is an emergency) – staff need to appreciate that we don’t know what the system is, what to do or what the ropes are (e.g. where to find a towel, when to shower, when the lights are turned out, how the TV works).
-
If we have to bring a list of medication with us, we need to know in advance; we can’t always remember a string of complex names.
-
Remember that for the patient the ward is a completely alien and traumatic environment, however familiar it is to staff – we feel vulnerable.
-
Staff may not notice noise, lights on at night, or the worry of seeing/hearing other very sick people.
-
Being moved from one ward to another without notice is disorientating.
-
It’s often difficult to know who is who on the ward, with unfamiliar uniforms and badges that are hard to read. Not just ‘hello my name is’ but what your role is.
-
Patients who come into hospital from nursing homes are often sent with staff who do not know the details of the person and can’t tell staff much about them.
Communication between patient (and carers) and health professional
-
It helps to have eye contact and really good, clear explanations.
-
Handouts of what’s been said to help you remember are useful, including for those of us with some degree of memory loss.
-
Online information may work for some, but one size doesn’t fit all, and some older people will not use the technology.
-
If talking to medical students, please engage the person in the discussion as well.
-
Don’t turn ‘hello my name is’ into a tick-box exercise.
-
Having English as a second language matters whether you are a patient or a member of staff.
Care/caring/feeling like a person
-
We need normal human conversation; nurses may be run ragged, but other staff on the ward may have more time.
-
We are people first; the things that made us who we are in the outside world are still true in hospital.
-
Sympathy and kindness may come from the most unexpected quarters and they matter a lot.
-
Caring as well as nursing matters – emotional and social interaction, not just practical tasks.
-
Importance of touch.
-
Importance of recognising faces and remembering names.
-
‘Diamond’ nurses – nurses who stand out for their care and compassion – can transform experience.
-
Privacy around personal hygiene.
-
Continuity of carer, building confidence and trust.
-
Timing of pain relief and other medication – when you need it, not when it’s the next drugs round.
-
Don’t strip people of normal responsibility and ability to self-manage own condition(s).
-
Being able to use own phone and laptop in hospital helps you feel normal.
-
Loss of personal effects – valuables, practical things such as teeth and phones, personal things with sentimental value – is very important. Trivialising how much they mean to the person and saying things like ‘it’s not a hotel’ adds to distress.
-
Teeth and hearing aids are important for communication; important to remember many medical ward patients will have some hearing loss, or may have difficulty speaking as a result of stroke, or be visually impaired. Staff communication training?
-
Difficulty communicating because of dementia is also important – staff sensitivity to confusion and anxiety helps; patience, reassurance.
-
Other conditions or disabilities may also need to be taken into account.
-
It’s hard to give feedback on the spot if you feel it will affect your care.
-
Small acts of care, kindness and putting things right are everyone’s responsibility, no matter how grand they think they are.
-
Patients (and visiting family) can offer such small acts to each other.
-
Food choices may be unhelpful for those with allergies and the options to choose from can be confusing. Staff need sensitivity about when/whether to ask people about food choices.
-
Make sure food, water, buzzers, light switches and TV remote are within reach, especially for people with limited mobility.
Involving carers as appropriate
-
Patients have a right to make their own decisions and may not have family, or may not want family involved at some points.
-
But patients who want their family there and involved in decisions and care should be able to have them involved.
-
Who are ‘protected’ mealtimes for? You may want family there to help with food and drink. Longer visiting hours?
-
Patients may not want to hear some information, but carers may do.
-
Carers have information and emotional needs of their own (Carers UK see it as a triangle).
Communication and coordination across team
-
Conflicting messages from different team members, and the need for co-ordination between them.
-
Being on the ‘wrong’ ward; need for communications between medical team and host ward.
-
Making sure staff know about other coexisting conditions (and allergies) and enable you to self-manage (e.g. diabetes and insulin).
-
Getting tests and assessments done in sequence so results are available in time for meetings with the doctor.
-
Having notes readily available, not in the archive – and reading them.
Discharge
-
Involve the patient in decisions; don’t just tell them it’s time to go.
-
Try to get discharge prescriptions sorted promptly so going home is not delayed.
-
Co-ordination with primary care (GP, etc.), equipment, etc.
-
Patients need a ‘what next’ in writing with them when they go home e.g. ‘see GP within 7 days’, ‘outpatient appointment letter will come by post’, ‘no follow-up needed’, etc.
-
Make sure discharge information is accurate.
-
Patients need a ‘what to watch for’ in writing e.g. ‘some pain is normal’, ‘if X happens speak to or see your GP’, ‘if Y happens phone 111’, etc. Nowadays, patients are often discharged when they are not 100% well, and need to know when to worry (and when not to worry).
Appendix 11 ‘Quick Wins: Tips for Ward Improvement’ (making your life and patients’ lives better): developed by the lay panel to support participating teams
Appendix 12 ’Tips for Involving Patients in Ward Improvement’ (working together to improve wards for you and your patients): developed by the lay panel to support participating teams
List of abbreviations
- A&E: accident and emergency
- AMU: acute medical unit
- CI: confidence interval
- CQC: Care Quality Commission
- EBCD: experience-based co-design
- FFT: Friends and Family Test
- GP: general practitioner
- HCA: health-care assistant
- HSDR: Health Services and Delivery Research
- INQUIRE: Improving NHS Quality Using Internet Ratings and Experiences
- IT: information technology
- NIHR: National Institute for Health Research
- PFCC: patient- and family-centred care
- PhD: Doctor of Philosophy
- PPI: patient and public involvement
- PREM: patient-reported experience measure
- QI: quality improvement
- R&D: research and development
- US-PEx: Understanding how Front-line Staff use Patient Experience Data for Service Improvement
Notes
Supplementary material can be found on the NIHR Journals Library report page (https://doi.org/10.3310/hsdr08130).
Supplementary material has been provided by the authors to support the report and any files provided at submission will have been seen by peer reviewers, but not extensively reviewed. Any supplementary material provided at a later stage in the process may not have been peer reviewed.