Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 13/07/68. The contractual start date was in July 2014. The final report began editorial review in January 2017 and was accepted for publication in June 2017. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
None.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2018. This work was produced by Keen et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Introduction
A number of developments at the turn of the millennium highlighted problems with the quality and safety of acute hospital services. The Institute of Medicine’s landmark 1999 report, To Err Is Human, highlighted high adverse event rates in hospitals in the USA. 1 The likelihood that NHS hospitals also had high adverse event rates was raised in An Organisation With A Memory, published the following year. 2 The 2001 Kennedy report on high mortality rates in cardiac surgery at Bristol Royal Infirmary humanised the problem. 3 Put starkly, many adults and children who were operated on died when they could and should have lived.
In the intervening years it has become clear that it is difficult to make substantial and sustained improvements in the quality and safety of hospital services. As a result, and real improvements in some localities notwithstanding, it is generally agreed that acute hospitals still need to provide higher-quality and safer services. This point was brought home in the two reports by Sir Robert Francis on the failings in some wards and departments at the Mid Staffordshire NHS Foundation Trust,4,5 published in 2010 and 2013 respectively, and subsequently by the Kirkup report on maternal and infant deaths at University Hospitals of Morecambe Bay NHS Foundation Trust, published in 2015. 6 This research study was commissioned following the ‘After Francis’ call for proposals issued in 2013 by the National Institute for Health Research Health Services and Delivery Research (HS&DR) programme.
The problems have generated a range of policy responses over the last 15 years. A recurring theme concerns the need for cultural change in NHS trusts, moving away from a ‘blame culture’ and towards a culture in which staff have the confidence to report mistakes and are able to learn from them. 7 National bodies, including the Care Quality Commission (CQC) and NHS Improvement, have been created and given responsibility for the oversight of quality and safety of services across the NHS. Our interest in this study is in another long-standing policy prescription: the generation and use of data on the quality and safety of services, and investment in the information technology (IT) needed to manage the data.
High-quality data
There are different views, in the NHS and in academic circles, on the data that are needed to monitor and manage the quality and safety of health services. One is that doctors, nurses and other clinicians are always concerned with quality and safety. Patients’ records, updated in the course of clinicians’ work, reflect the quality and safety of services provided. Thus, albeit at the risk of oversimplification, we might take the following scenario as an example: if a patient’s temperature and blood pressure are normal, the most recent pathology tests are normal and the patient says that she is happy with the care she has received, then the treatment and care were of reasonable or good quality. There is no distinct category of data that reflect quality and safety; rather, the data are part and parcel of everyday service delivery.
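To make this first view concrete, the inference it implies can be written as a simple rule over routinely captured data. The sketch below is purely illustrative – the field names and thresholds are our own invented examples, not drawn from any NHS system – and it deliberately reproduces the oversimplification acknowledged above.

```python
# Purely illustrative: the 'quality is implicit in routine data' view as a
# naive rule. Field names and thresholds are invented for the example.

def care_appears_adequate(observations: dict) -> bool:
    """Crude inference: normal observations plus a satisfied patient are
    treated as a proxy for reasonable or good quality of care."""
    temperature_normal = 36.1 <= observations["temperature_c"] <= 38.0
    bp_normal = 90 < observations["systolic_bp_mmhg"] < 140
    pathology_normal = observations["latest_pathology"] == "normal"
    patient_satisfied = observations["patient_reports_satisfied"]
    return (temperature_normal and bp_normal
            and pathology_normal and patient_satisfied)

print(care_appears_adequate({"temperature_c": 36.8, "systolic_bp_mmhg": 120,
                             "latest_pathology": "normal",
                             "patient_reports_satisfied": True}))  # True
```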
A second view is held by practitioners and academics with interests in service improvement. They argue that data play a vital role in improvement projects, perhaps most obviously in the ‘study’ phase of ‘plan-do-study-act’ cycles. That said, few authors talk in any detail about data or information systems. 8,9 The 2013 report by Donald Berwick, the respected US physician, is one of the exceptions to the general rule. He identified a range of routine data that ward staff – and wider clinical teams – needed to investigate unwarranted variations in services and to support service improvement (Table 1). 10 Berwick argued that these data were not routinely available to ward or directorate staff in NHS trusts, and that trusts also lacked appropriately trained staff to analyse and present the data – staff with expertise in data analytics. His arguments are based on quality management principles and, as such, stress the importance of learning from adverse events through root cause analyses or ‘deep dive’ reviews. Accounts of adverse events are, therefore, an important source of evidence to inform service improvements.
Table 1 Data identified in the Berwick report as needed to support improvement

- The perspectives of patients and their families
- Measures of harm
- Measures of the reliability of critical safety processes
- Information on practices that encourage the monitoring of patient safety
- Information on the capacity to anticipate safety problems
- Information on the capacity to respond to and learn from safety information
- Data on staff attitudes, awareness and feedback
- Mortality rate indicators
- Staffing levels
- Data on fundamental standards
- Incident reports
- Incident reporting levels
The second Francis report provides an example of a third view, which is that the quality and safety of services should be monitored by trust managers and external agencies,3 who therefore need appropriate performance data. In recent years, the NHS has required all hospital trusts to provide an increasing number of indicators. Current submissions from trusts include the number of complaints and of incidents (where a patient has been harmed in the course of treatment and care), and data for the NHS Safety Thermometer. The Thermometer includes the number of patients who have developed severe pressure ulcers, experienced a fall, experienced a venous thromboembolism (VTE) and developed a urinary tract infection following the use of a catheter.
There have been a number of studies on the use of data to manage hospital performance, including the recent HS&DR study by Mannion et al. 11 There have also been reports that draw attention to the large volumes of routine data that are captured, and the limited use made of them by the trusts and other organisations that have access to them. 12 There are also many studies of individual indicators, notably those on mortality. 13 We have not, however, found any published studies of the processes involved in producing data sets for NHS performance management purposes. Indeed, given the marked differences in the views about the types of data that reveal the quality and safety of services, and the people who are in a position to review and act on those data, it was not at all clear what data trusts were actually capturing and using, or whether or not their uses of data were changing over time.
Information technologies in acute hospitals
Investments in IT systems were recommended by the Bristol Inquiry in 2001,14 which argued that they were a prerequisite for providing the data that clinical teams needed to manage services, and for managers to monitor those services and ensure that they were safe. The Wanless review15 on the future of the NHS, published in 2002, also recommended substantial increases in IT investment. This led to the creation of the NHS National Programme for IT, which had an initial budget of around £7B, eventually increasing to > £10B. The programme had a vision of making patients’ records available anywhere in England: if a Manchester resident fell ill in Norfolk, NHS staff in Norfolk would be able to access that person’s clinical records. However, a succession of National Audit Office reports between 2006 and 2011 concluded that billions of pounds had largely been wasted. 16 In particular, new, integrated IT systems for acute hospitals were not delivered on time, and even today only a minority of trusts have implemented them. The vision of an integrated, England-wide IT service has never been realised.
The result was that, at the beginning of this decade, most hospitals found themselves with ageing systems and a pressing need to upgrade and replace them. 17 Many trusts made rapid progress once the National Programme was officially abandoned in 2011, and implemented systems across most wards and departments. Even so, having been held back for so long, most trusts needed to make further substantial investments at the time this study started in 2014.
The literature: evidence and theory
Our interest in this study is in the data and IT systems that are used to monitor and manage the quality and safety of services. There are a number of literatures that could be drawn on. Taking our cue from the ‘views’ outlined above, one focus of current investments is on acute hospital wards. There is a rich literature on human–computer interaction in health care. 18 However, most studies focus on accident and emergency departments and intensive care units rather than on acute wards. 19,20 There is also a growing literature on the use of mobile devices, including laptops and tablets, on wards. 21 This study contributes to the literatures on whiteboards and mobile devices on acute wards. It is worth noting that wall-mounted systems are typically referred to as ‘dashboards’, or ‘clinical’ or ‘quality dashboards’, in the literature. There is a risk of confusion in this report, because the term ‘dashboard’ is also used to refer to the graphical displays used in board reports in the NHS. Following the use of terms in the NHS, we refer to whiteboards on wards, and dashboards in board reports, from Chapter 2 onwards.
We are also interested in hospital-wide systems: in the kinds of systems that NHS trusts need in order to provide data to managers. Most trusts have eschewed the single hospital-wide – or ‘big bang’ – solutions favoured under the NHS National Programme. They have instead opted to develop communication networks and integration engines – in effect, software that allows clinicians to access data from several departmental systems via a single screen. They build on what is there, rather than replacing existing systems in a single hospital-wide implementation.
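To illustrate the pattern, the sketch below shows – in outline only – what an integration engine does: it queries several existing departmental systems and merges the results into a single view. All class and method names are our own invented examples; real engines rely on messaging standards and interface layers rather than direct calls.

```python
# Minimal, hypothetical sketch of the 'integration engine' pattern: a thin
# layer over existing departmental systems, rather than a replacement for them.

class DepartmentalSystem:
    """Stand-in for an existing system (e.g. PAS, pathology, pharmacy)."""
    def __init__(self, name, records):
        self.name = name
        self.records = records  # patient_id -> data held by this system

    def lookup(self, patient_id):
        return self.records.get(patient_id)


class IntegrationEngine:
    """Presents data from several systems via a 'single screen'."""
    def __init__(self, systems):
        self.systems = systems

    def combined_view(self, patient_id):
        # Build on what is there: query each system in turn and merge the
        # results, leaving the underlying systems unchanged.
        return {s.name: s.lookup(patient_id) for s in self.systems}


pas = DepartmentalSystem("PAS", {"123": {"ward": "7A", "admitted": "2015-04-01"}})
pathology = DepartmentalSystem("pathology", {"123": {"latest_result": "normal"}})
engine = IntegrationEngine([pas, pathology])
print(engine.combined_view("123"))
```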
There are a number of traditions of studies on these systems. 22 For example, some writers have drawn on structuration theory to understand the deployment of IT systems in organisations. Barrett et al. 23 and Beane and Orlikowski24 built on this work, and critiques of it, and used practice theory to explore the ways in which health-care organisations and IT systems shape one another. Other writers work in a very different tradition, whereby organisations are viewed as complex systems, and IT systems are technologies that can be used to address complexity. 25 Additionally, other writers have taken a more instrumental view of organisations, arguing that it is possible – and useful – to identify barriers to, and facilitators of, organisational change. In this study, our approach has more in common with the tradition represented by practice theory than with approaches based on complexity or on barriers and facilitators. On the latter, we note that our fourth study objective refers to barriers and facilitators. For reasons we discuss in Chapter 11, we did not find thinking in terms of barriers and facilitators helpful in this study.
A science and technology study
We made our decisions about the nature of the study in the context of these literatures, and in the light of the comments we have made about data and IT systems. Specifically, our task was to investigate and understand developments at the levels of both the hospital ward and the board. This ruled out a narrow focus on individual technologies such as electronic whiteboards. More positively, we were able to draw on the conceptual and methodological developments in institutional approaches, such as practice theory, and on the critiques of those theories that have developed over the years. We opted for the Biography of Artefacts approach, which is located in the tradition of science and technology studies. 26,27 This approach appears to have developed in parallel with structuration, practice and other institutional theories, with which it shares a number of features. It starts from the observation that IT systems in organisations are implemented over many years. New functions are added periodically, and existing systems are linked to one another and to new systems. Systems thus develop in piecemeal fashion, producing digital infrastructures that are deeply embedded in the day-to-day work of an organisation.
We use the term ‘infrastructure’ throughout this report. An infrastructure is an amalgam of a number of previously separate components. Siskin provides a useful description:
. . . infrastructures are never built from the centre with a single design philosophy. Rather they are built from the ground up in modular units, their development an oscillation between the desire for smooth, system-like behaviour and the need to combine capabilities no single system can provide.
Siskin, p. 628
As the quotation implies, the ways in which infrastructures are used can change over time. As a result, if we want to understand any IT system in an organisation – understand why it looks the way it does today – we need to understand its history. This led us to design a study that stays close to the technologies of interest and to the working practices associated with them. It is, in part, a ‘hidden history’ of the people who capture and validate data, who prepare dashboards for board papers and who upload data sets to NHS Digital. It is also a story of what happens when the data arrive in different places: hospital directorates, board meetings and national and local agencies. We are not aware of any previous studies of the NHS trust infrastructures that produce the data used to monitor and manage the quality and safety of services.
Aims and objectives
In our proposal, we described our research aims and objectives as follows:
The research has two aims. The first and principal aim is to establish whether or not ward teams in acute NHS trusts have the information systems they need to manage their own work, and to report on that work to trust boards and other stakeholders, in the post-Francis environment. The research will focus on the design and implementation of a key technology, ward-level dashboards. Dashboards are already the preferred NHS vehicle for integrating information from diverse sources, and are being actively promoted by NHS England and NHS Digital (formerly the Health and Social Care Information Centre). As earlier sections show, dashboards will need to be redesigned in the light of the Francis, Keogh and Berwick recommendations. The second aim is to establish the extent to which ward-level dashboards provide a basis for achieving the openness, transparency and candour envisaged by Francis, and supported by Keogh and Berwick. As the reports emphasise, although there are examples of excellent practice, the NHS as a whole needs to undergo a culture change, moving from low- to high-trust working practices. The extent of sharing of detailed information about performance will be an important source of evidence about that culture change.
Keen et al. 29
There are four research objectives:
- Design. We will assess the extent to which trusts are able to integrate activity, quality, outcome and cost information in dashboards, to enable ward teams to manage their services effectively and to improve services over time.
- Implementation and use. We will evaluate the impact of the use of dashboards on clinical and management practices at ward level.
- Governance. We will assess the extent to which dashboards provide data that are valuable to other local stakeholders, including trust boards, Healthwatch and commissioners.
- Barriers and facilitators. We will identify the barriers to, and facilitators of, the effective redesign and use of dashboards.
We referred to dashboards in the text reproduced here, and throughout the proposal, but, as we noted above, we make a distinction in this report between whiteboards used at ward level and dashboards used at board level. It should also be noted that we address both aims and the first three objectives in the report, but we do not discuss barriers and facilitators. We explain our technical concerns with these terms in Chapter 11.
The structure of this report
In Chapter 2, we set out the study design and methods. In Chapter 3, we outline the development of national arrangements for data collection and IT over the last 30 years. In Chapter 4, we present the methods and findings of the first phase of the study: a 2014 telephone survey of senior trust nursing managers about their use of whiteboards and dashboards, and a review of trust board papers from January 2015. In Chapters 5–10, we set out the main findings of the study, starting with a brief overview of the four study sites and following with mini-biographies of developments on wards, of data and technology infrastructures, of board quality committees, of directorates (also termed clinical service or business units in NHS trusts) and of external bodies, including commissioners and regulators. In Chapter 11, we discuss our findings and identify implications for practitioners and recommendations for researchers.
Chapter 2 Study design and methods
Key points
- A telephone survey of 15 acute hospital trusts and a survey of the board papers of all acute hospital trusts in England were undertaken in 2014 and early 2015, respectively.
- We then observed the use of information systems in four acute hospital trusts over an 18-month period in 2015 and 2016.
- We used a number of methods, including direct observation of the use of whiteboards and other technologies on wards, observation of board quality committees, semistructured interviews and an analysis of board papers.
- Normalisation process theory was used to direct our fieldwork.
- The Biography of Artefacts approach was used to analyse our findings.
Introduction
This chapter describes the study design and methods. It has three main sections, reflecting the three phases of the study: scoping and selection, field research at four sites, and analysis and interpretation. As Figure 1 emphasises, the fieldwork and analysis were undertaken iteratively: they are presented separately here for clarity.
Scoping and site selection
The study was approved by the University of Leeds Faculty of Medicine and Health Research Ethics Committee (see Appendix 1). We identified 18 acute trusts that were within a reasonable travelling time – 75 minutes each way – of our university base, and thus were potential sites for the in-depth phase of our study. Research governance approval was obtained from 15 of the 18 trusts; we were unable to obtain it from the other three. We undertook one interview in each of the 15 trusts: 11 with chief nurses and four with senior colleagues whom chief nurses nominated.
Data collection
All interviews were semistructured and undertaken by telephone. Each interviewee was asked to describe the ways in which dashboards and whiteboards were currently used in their trust, and future plans for their design, deployment and use. The consent form and information sheet are in Appendix 2, and the interview topic guide is in Appendix 3. Two members of the research team (CG and JK) conducted the 15 interviews between September and November 2014. The interviews were audio recorded and then transcribed.
Analysis
The transcripts were analysed using framework analysis. 30–32 The framework approach allowed us to investigate transcripts in two ways, namely by identifying themes that emerged from the data and by exploring themes derived from our research questions. The researchers first listened to the audio files and read the transcripts to familiarise themselves with the data. Two members of the research team (CG and EF) undertook the initial coding of a sample of transcripts and identified emerging themes. These were then shared and discussed with the research team, and an initial set of themes was agreed. The full set of transcripts was then coded using NVivo 10 (QSR International, Warrington, UK). The coded text was organised into a matrix with rows representing interviewees and columns representing key themes. The matrix was reviewed by members of the research team; column headings were refined and text was moved within the matrix to develop consistent themes.
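As an illustration of the matrix described above, the sketch below builds a small interviewees-by-themes table of the kind used in framework analysis. It assumes the pandas library, and the theme labels and cell text are invented for the example; the actual coding in this study was done in NVivo.

```python
# Illustrative sketch of a framework-analysis matrix: one row per interviewee,
# one column per theme, cells holding summarised coded text. Theme labels and
# cell contents are invented examples.
import pandas as pd

themes = ["current whiteboard use", "dashboard use", "future plans"]
interviewees = [f"Interviewee {i}" for i in range(1, 16)]
matrix = pd.DataFrame("", index=interviewees, columns=themes)

# As coding proceeds, text is placed in (and moved between) cells so that
# consistent themes develop across the columns.
matrix.loc["Interviewee 1", "current whiteboard use"] = (
    "Magnetic whiteboards at the nursing station, updated at handover")

print(matrix.head(3))
```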
Review of acute NHS trust board papers
The NHS Choices website listed 156 NHS acute trusts in England in November 2014. We reviewed the board papers that we were able to obtain for January 2015 and recorded whether or not they included data on quality and safety, including the NHS Safety Thermometer (data on falls, pressure ulcers and other adverse events), waiting time and other data included in the NHS Constitution, number of incidents, number of complaints and patient experience data (the NHS Friends and Family Test and patient-reported outcome measures). We also recorded whether or not the papers included workforce data, Monitor’s risk assessment framework, a balanced scorecard (a method for presenting data that Monitor had recommended for foundation trusts (FTs) since the mid-2000s) and any data that were presented even though they were not requested or required by national bodies.
Selection of main fieldwork sites
One purpose of the telephone survey was to select four sites for the main field study. Site selection was partly pragmatic and partly purposive. It was pragmatic because we could select only from sites that were within reasonable travelling distance of Leeds – given the volume of proposed fieldwork – and that were willing to participate. In the proposal, our purposive sampling strategy was based on the assumption that trusts would be designing and deploying new information systems, with ward whiteboards as their visible manifestations. We would therefore select trusts on the basis that they had formally agreed on implementation plans. We also undertook to select a mix of FTs and non-FTs on the basis that they had different governance arrangements and might, therefore, be expected to use different data in different ways. The telephone survey, reported in Chapter 4, revealed that two trusts had already implemented real-time (electronic) ward management systems. We therefore decided to amend our sampling criteria to include sites with these systems.
Monitoring of publications
We did not undertake a formal literature review as part of this study. We did, however, monitor and collect relevant publications throughout our study. We used publication alerts (see Appendix 4) and we also used a Twitter (Twitter, Inc., San Francisco, CA, USA) account to follow a number of feeds, including Digital Health Intelligence and the Health Foundation, to identify relevant publications – for example by the CQC or NHS Improvement – that might not be identified by publication alerts.
Field study in four acute hospital trusts
We designed this phase of the study as a prospective, iterative case study, focusing on two wards – one surgical and one medical – in each of the four acute NHS trusts. Given the focus of the ‘After Francis’ HS&DR call, and of Sir Robert’s second report, we excluded intensive care units from the study. More positively, the fact that this was an ‘After Francis’ study focused our attention on the management of the quality and safety of services on wards, and upwards ‘from ward to board’. We named the four trusts Solo, Duo, Trio and Quartet to protect their identities.
There are many types of case study approach, ranging from Yin’s scientific approach to more naturalistic ones. 33 Our approach was towards the naturalistic end of the spectrum, on the basis that the best way to understand implementation is to spend time observing the people doing the implementing. 34 As a result, we were principally interested in the working practices of nurses and nurse managers, although we did also observe and interview other clinicians on wards, and we observed and interviewed a range of staff elsewhere in the four trusts. We judged that focusing on wards alone would be outside the scope of the call. Accordingly, we did not propose to observe clinician–patient interactions or detailed data capture in medical and nursing records; that would be an interesting study, but one undertaken in response to a different call for proposals.
Ethics approval was obtained from the University of Leeds Faculty of Medicine and Health Ethics Committee in July 2014 (see Appendix 5). The observations and interviews undertaken at the four sites are set out in Tables 2 and 3. We were able to interview almost everyone we approached. We were not, however, able to interview one trust informatics director, one medical director, one member of an information team, two junior doctors, two nurse managers (both towards the end of the fieldwork) and representatives of one Healthwatch and three clinical commissioning groups (CCGs). We do not think that the omission of most of these interviews had a significant impact on the conduct of the study or on our analysis. We do, however, note that reliance on one CCG interview limits what we can say about the role of CCGs. Two CCGs refused to talk to us – one replied ‘remove us from your records’ – and we were not able to get a response from a third. We do not know why some CCG staff responded in the way that they did; their reactions to our approaches stand in marked contrast to the positive responses from the great majority of people whom we approached.
Table 2 Observations and interviews at the four sites, by site

Intervention or procedure (as described in study proposal) | Solo | Duo | Trio | Quartet | Total
---|---|---|---|---|---
Observation of meetings where dashboards are used in divisional and trust board meetings | 18 | 12 | 8 | 13 | 50
Semistructured interviews with trust managers, CCG staff and Healthwatch representatives | 7 | 7 | 8 | 7 | 29
Semistructured interviews with staff involved in the design of dashboards | 2 | 5 | 10 | 6 | 23
Observation of meetings about the design of dashboards | 0 | 0 | 5 | 0 | 5
Semistructured interviews with ward staff using dashboards | 8 | 10 | 10 | 9 | 37
Observation of the implementation and use of dashboards on wards (hours) | 13 | 22 | 17.5 | 19.5 | 79
Table 3 Observations and interviews at the four sites, by fieldwork activity

Fieldwork activity | Breakdown of fieldwork numbers | Total fieldwork
---|---|---
Observation of quality management meetings (number of meetings) | 35 | 50
Observation of directorate meetings (number of meetings) | 15 |
Interviews with NEDs (non-executive directors)/medical directors (number of interviews) | 10 | 29
Interviews with chief nurses (or equivalent) (number of interviews) | 10 |
Interviews with CCG/Healthwatch representatives (number of interviews) | 3 |
Interviews with directorate lead nurses/matrons (number of interviews) | 6 |
Interviews with IT/informatics (design) staff (number of interviews) | 23 | 23
Observation of informatics meetings (number of meetings) | 5 | 5
Interviews with ward managers (number of interviews) | 15 | 37
Interviews with ward staff (number of interviews) | 22 |
Observation of handovers/whiteboards/multidisciplinary team (MDT) meetings/ward meetings (number of observation hours) | 79 | 79
Interviews with regulators (number of interviews) | 7 | 7
Normalisation process theory
A number of established theoretical frameworks have been used to guide the design of studies of technologies in organisations, including health-care organisations. Our interest was in the implementation of new ways of working associated with IT. In our proposal, we argued that normalisation process theory provided the appropriate focus and had a track record of successful use in case studies in the NHS. 35,36
We used the theory to guide us towards phenomena of interest and, thus, to design our data collection methods. The theory rests on three main arguments. First, it proposes that practices become embedded in social contexts as a result of people working, individually and collectively, to implement them. Second, implementation is operationalised through four generative mechanisms – coherence, cognitive participation, collective action and reflexive monitoring. 35,36 If those involved in the implementation of dashboards can identify coherent arguments for adopting them, are engaged in the process of implementation, are able to adapt their work processes to use dashboards (or dashboards to fit in with practices) and judge them to be valuable once they are in use, then the dashboards are more likely to become embedded in routine practice. Third, embedding new ways of working is not a ‘one-off’ process, but requires continuous investment by the parties involved in implementation.
Figure 2 shows how the components of the theory are related. The lower half of the figure emphasises the importance of understanding working practices in local contexts. The upper half draws attention to institutional arrangements: the ways in which prevailing values and norms influence the behaviour of trust staff. The theory, and in particular the four generative mechanisms, provided a framework for designing the fieldwork. For example, it focused our attention on the value of observing ward working practices and, thus, evaluating the extent to which whiteboards were integrated into them (collective action). Similarly, it encouraged us to ask interviewees about their experiences of implementation and of using new technologies (cognitive participation and reflexive monitoring).
The theory also encouraged us to consider what it would mean to move from left to right across the diagram. The main point here, we felt, was that new technologies would become progressively embedded in people’s working practices. For example, as we describe in Chapter 7, when electronic whiteboards were introduced at Solo, some staff were initially sceptical about them. On one ward, technological problems led to the whiteboards being withdrawn and later reintroduced, with mixed results. Viewed in the light of normalisation process theory, the initial experience involved limited movement from left to right, and the later experience was one in which different members of staff moved different distances to the right. One would not say that the whiteboards had been fully embedded in local working practices by the end of the observation period. In Trio, in contrast, the whiteboards were fully embedded throughout the observation period; staff frequently used them and had found ways to integrate them into routines.
Data collection
Using a combination of methods, we collected data at the four sites over an 18-month period between April 2015 and September 2016. One focus was on direct observation of working practices; that is, we were interested in the use of information systems ‘as performed’ rather than ‘as imagined’ (i.e. how they were reported in interviews). We also used semistructured interviews and an analysis of site documentation to capture information about practices that we could not observe directly, for example key decisions made in meetings that we were not able to attend. The combination of methods draws on the work of Crabtree et al.,38 Crosson et al. 39 and Ventres et al. 40 The consent form and a representative information sheet are in Appendix 6. The interview topic guides were developed iteratively in the course of the fieldwork.
Observation of handovers and the routine use of whiteboards
We discussed with ward managers where and when we should observe the use of whiteboards, and then undertook an initial phase of ethnographic observation to establish that we could observe practices effectively without getting in the way of staff going about their work. The chosen locations were in, or near, nursing stations. During this period we got to know staff and made sure that they were comfortable with our presence. Team members recorded their observations in field notes, which were written up in detail as soon as possible after data collection. The conversations with ward managers informed a decision to observe each of the eight wards approximately once per month, during a morning handover meeting and for up to 1 hour after handover, as staff began their shifts.
Given our aims and objectives, we were interested in when and how whiteboards – the outward manifestations of the information systems – were used. We were also interested in the sources and uses of information more generally. This covered the data used in handovers, including ‘soft intelligence’ passed on from one shift to the next, and messages from senior managers. In addition, the observers occasionally asked staff to explain their actions ‘on the spot’ when it seemed important for the study, for example why those involved in a handover meeting had spent so long discussing a particular topic.
Observation of meetings
A range of meetings was observed, including board-level quality meetings, directorate meetings and IT design and development meetings. At all meetings, team members took contemporaneous notes, focusing both on the substance of discussions and on the deep assumptions informing them, for example whether boards were acting as performance managers, assurance managers or in some other mode. These notes were written up as soon as practicable after meetings.
Site documents
Local staff were asked to provide relevant documents. Most of these were meeting papers and minutes. The principal documents used in the analysis were the papers for the board-level quality meetings, discussed in the board mini-biography, and for directorate meetings, discussed in the directorate mini-biography. We also collected a number of other documents, including IT project plans and Quality Accounts.
Semistructured interviews
Semistructured face-to-face interviews were conducted for each of the mini-biographies. An initial series of interviews was conducted at ward, directorate and board levels in the spring and summer of 2015, using topic guides. The guides were developed during the course of 2015, partly to reflect our own improving understanding of the work of the trusts, and partly to reflect the extension of our interview programme to include trust information teams (interviews conducted in late 2015 and early 2016) and then external bodies – regulators, CCGs and Healthwatches. The numbers of interviews are set out in Tables 2 and 3. The interviews were audio recorded and then transcribed.
Quantitative data
We collected hospital- and ward-level quality and safety data, drawing on internal trust papers, Quality Accounts and other trust publications, and NHS Digital. The main hospital-level data were mortality indices, reported incidents and complaints, the NHS Safety Thermometer, the NHS Staff Survey and the NHS Friends and Family Test.
Roles of the patient and public involvement group and steering group
We had a patient and public involvement (PPI) group and a steering group for the project. Both met during the course of the project, commenting on our research plans in 2014 and 2015 and on our emerging findings in 2016. Both had a substantive influence on the conduct of the main fieldwork phase of the study. At the first meeting of the steering group, in late 2014, members gave us a clear steer about the focus of our work: we should focus on data on the quality and safety of services. Moreover, it would make sense to focus on a number of specific measures. This may appear to be a trite point: a study undertaken on acute wards, in the wake of the Francis report, would naturally focus on quality and safety. We had, however, initially assumed that we would also cover cost and outcome data. The point being made was that we should be less concerned with cost and outcomes than with quality and safety measures.
Our PPI group reinforced this point. The members of the group were recruited from the University of Leeds School of Healthcare’s own PPI group. A member of the study team (RR) sent an explanatory note to one of that group’s meetings and explained the study in person, inviting expressions of interest. Our PPI group members were those who expressed interest following the School of Healthcare meeting. We held four meetings with the group and, throughout the study, members made detailed comments on our research plans and emerging findings. Our relationship with the group was mediated by Claire Ginn – the patient representative on the study team – and Elizabeth McGinnis. We asked the group, at the second meeting in the spring of 2015, what data we should focus on. We told them the measures that we were considering: incidents, complaints, mortality, the NHS Safety Thermometer and vital signs (reflected in the National Early Warning Score (NEWS)). The PPI group took the view that these measures were appropriate, but also recommended the addition of two topics, namely nutrition and pain management. Their argument was that if a ward manager or a more senior manager had access to all of these data, they would be able to make overall judgements about the quality and safety of services on a given ward. In practice, this had a significant effect on the ward mini-biography (see Chapter 6) and the board mini-biography (see Chapter 8). In both, we noted when nutrition and pain management were mentioned, along with the other measures. The PPI group also commented on drafts of the ward and board mini-biographies at our fourth meeting in October 2016.
Biography of Artefacts and practices
We assumed at the start of the study that we would be observing the design, implementation and use of discrete technologies over time. We further assumed that ward whiteboards and board-level dashboards would be part of the same IT systems, with data captured on wards being available to trust managers. However, the findings of the telephone survey and the early observations at the four sites indicated that the situation on the ground was more complicated than we had anticipated. In our early fieldwork, we found that:
- There was a separation, technologically, between real-time ward management systems – systems that make data immediately available to all users of a system once they have been entered – and the systems used to manage data for management meetings. The ward systems did not, as we had assumed, provide the primary ‘feeds’ for management reports (see the sketch after this list).
- There was also a distinction between data used on wards and elsewhere, and the IT used to capture, store, manage and present them. The same data were available on different technologies – magnetic whiteboards, digital whiteboards and tablets – across the four trusts.
- National bodies – including NHS England, the Health and Social Care Information Centre (HSCIC; now known as NHS Digital) and regulators – play an important role in determining what data are captured by trusts and how they are structured and submitted to NHS Digital. That is, it did not make sense to assume that trust informatics teams – and suppliers – had a free hand to develop discrete systems; they had to take account of existing data and IT infrastructures.
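The first of these points can be illustrated with a deliberately simplified sketch. The names and data below are invented; the point is only that entries on a real-time ward system become visible immediately, while board and national reports travel by a separate capture-and-validation route rather than being fed directly from the ward system.

```python
# Hypothetical sketch of the separation described in the first point above:
# two distinct routes for data, not a single feed from ward to board.
from datetime import date

ward_whiteboard = []    # real-time: entries visible to all users at once
monthly_returns = []    # separate route: validated data for management reports

def record_on_ward(entry):
    """Entered once on the ward system, immediately visible on every screen."""
    ward_whiteboard.append(entry)

def submit_monthly_return(entry):
    """A distinct process: data are captured, checked and submitted
    separately for board reports and national returns."""
    if entry.get("validated"):
        monthly_returns.append(entry)

record_on_ward({"date": date(2015, 6, 3), "bed": "4", "note": "falls risk"})
submit_monthly_return({"month": "2015-06", "measure": "falls", "count": 2,
                       "validated": True})
print(len(ward_whiteboard), len(monthly_returns))  # 1 1
```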
Normalisation process theory had proved to be valuable in the design of our fieldwork. We realised, however, that it assumes that a single, discrete technology is being implemented. As the points above imply, this was not the case in this study: there were a number of technologies being implemented. The theory is not suited to the complicated mix of technologies that we found. Moreover, we were dealing with a large-scale technology, and thus groups of people who were engaging with it in very different ways – at ward and board levels – in different parts of an organisation. We therefore needed to identify an analytical framework that allowed us to address our aims and objectives, and that retained the key features of the approach set out in our protocol – but that was suited to the nature of the technologies that we found at the sites.
We considered a range of alternatives, but the Biography of Artefacts approach, developed by Pollock and Williams26 at Edinburgh, was the only one that we felt dealt with the development of large-scale systems over time. As we noted in Chapter 1, the approach stems from the observation that many IT systems in organisations are implemented over many years: new functions are added periodically, existing systems are linked both to one another and to new systems, and systems thus develop in piecemeal fashion, producing digital infrastructures that are deeply embedded in the day-to-day work of an organisation.
Pollock and Williams argue that it is simply not possible to study any large-scale organisational technology in its entirety. The pragmatic solution is to observe developments at a number of ‘key points’ where significant things happen, for example meetings in which system developers meet users to discuss the design of a system, or in which ward nurses use systems in the course of their work. The usual field methods are ethnographic, with researchers effectively undertaking ‘mini-ethnographies’ at each of the key points. In this study, we drew particularly on a narrative approach, whereby informants’ stories provide insights into how they are going about achieving their aims, and how the narrative unfolds over time. 41 The authors further argue that if we want to understand any IT system in an organisation, and understand why it looks the way it does today, we need to understand its history. The method therefore involves the development of a number of ‘mini-biographies’ based on observations made, over time, at each of the key points. A Biography of Artefacts is, then, made up of a number of mini-biographies. The result is an account of developments from an unusual perspective. For example, there are a number of studies on the use of performance management data by trust boards. However, we are not aware of any that, as in the current study, ask how the data get there and why those arrangements have developed, as well as how they are used.
We decided to use the Biography of Artefacts approach in the autumn of 2015. We were committed in our original protocol to understanding developments over time, and the decision to produce mini-biographies did not have a substantial effect on our fieldwork. That said, our realisation that the technologies were more complicated than we had expected led us to expand the scope of our fieldwork in two directions. The first was to ‘go behind the scenes’ to understand how trusts transformed data from a range of systems into consolidated reports for boards and other audiences. The second was to explore what happened to data sets when they were submitted to NHS Digital and sent onwards to other organisations, including the CQC and CCGs.
Chapter 3 National infrastructures
Key points
- Technologies for handling national data submissions have developed over the last 30 years.
- They were initially developed to manage routine activity data but, more recently, have been used to manage a growing volume of quality and safety data.
- Developments have been piecemeal, resulting in hybrid infrastructures: amalgams of different technologies.
- Current IT arrangements support the submission of data sets to national bodies, rather than supporting trusts in managing their own affairs effectively or in sharing data with one another.
- Some recent policies advocate ‘real-time management’: these, too, envisage national monitoring of the quality and safety of services.
- Nurses’ data and IT needs have been barely visible throughout this period.
Introduction
This chapter discusses the government policies, and the national data and IT infrastructures, that have shaped the development of information systems in the study sites. It became apparent early in the study that national policies had had a very significant influence on the technologies and practices that we were observing. There were, for example, many data sets on the quality and safety of services that trusts were required to submit routinely to NHS Digital and other national bodies.
We present developments in national infrastructures, unfolding over a period of 30 years, albeit with the major changes occurring since the early 2000s. This is consistent with the Biography of Artefacts approach, which we outlined in Chapter 2. Developments are presented in three ‘clusters’, which have emerged at different points in time and are characterised by different ways of thinking about the nature and purposes of data and technology infrastructures. The three clusters are: performance management from the late 1970s onwards; a hybrid of New Public Management-style inspection and advocacy of local governance of quality and safety from 1997 onwards; and ‘real-time management’ since 2012.
First cluster: managerialism, 1979–97
The Conservative administration that came to power in 1979 argued that traditional bureaucracies, including the NHS, were inefficient. They believed that efficiency could be improved by the introduction of a range of measures, some of which were borrowed from private firms, including the use of performance management, the devolution of authority to local managers and the use of market-like mechanisms to commission public services. 42,43 These policies led to the introduction of institutional arrangements that are familiar today, including the separation of commissioning and provision of services. In the context of this study, a key development concerned national data collections. Hospitals submitted a Hospital Episode Statistics (HES) data set and a set of performance indicators (the latter on paper) from 1987 onwards (Professor Paul Aylin, based on evidence paper submitted to Bristol Inquiry, personal communication, Imperial College London, 2015). 44 HES included mortality data, as hospitals had to record all discharges, and death was one category of discharge. HES data sets were initially extracted monthly from hospitals’ Patient Administration Systems (PAS) onto tapes or compact discs (CDs), which were sent (by post or courier) to Regional Health Authorities. 45
There was one other policy that led to the routine collection of data on quality in this period: the Patient’s Charter of 1991. This introduced a maximum waiting time target of 18 months from GP referral to hospital admission. Hospitals were required to report the number of patients who were waiting more than 18 months.
The first significant national IT infrastructure investments were made in the 1990s, prompted in part by the perceived need to move large data sets between locations for contract monitoring in the then-new NHS internal market. 46 Contracts for a national internal network, NHSnet, were signed in 1994. From 1996, hospitals submitted data sets to Regional Health Authorities electronically via a national service, the NHS-wide Clearing Service (known as ClearNET).
Second cluster: centralising performance management, 1999 onwards
A Labour government was elected in 1997. From 1999 onwards it decided to pursue managerial policies, continuing in the broad direction set by the previous Conservative administration. At the same time, policy-makers became concerned about the quality and safety of NHS services, prompted by official reports that highlighted far higher hospital mortality rates than anyone had realised, as well as a scandal in cardiac surgery services at Bristol Royal Infirmary. 1,2,47,48
There were a number of policy responses, notably the creation of national bodies that would need data on trusts to do their work. Two new bodies were announced in The New NHS:49 the National Institute for Clinical Excellence (NICE) and the Commission for Health Improvement (CHI) (later succeeded by the Healthcare Commission and then the CQC). The latter was to be responsible for governance of the quality of services in NHS organisations. These organisations were subsequently joined by Monitor, which started its work in 2004 as the regulator of the (then) new foundation trusts. 50
The National Patient Safety Agency was created in 2001; NHS organisations were required to report clinical incidents to it. There was a substantial overhaul of professional regulation, which, among other things, led to a revised system for the licensing of doctors and a National Clinical Assessment Authority for investigating poor performance. The net effect of policies in this period was twofold: a step change in the number of centrally mandated data ‘returns’ in the years after 1997, and the creation of new agencies that would use them (Table 4).
Table 4 Policies, new agencies and new indicators, 1997–2004

Year | Policies/reports | New agencies | New indicators
---|---|---|---
1997 | The New NHS: Modern, Dependable 49 | Announced creation of NICE and CHI | 
1999 | Modernising Government (all public services) 51 | | 
2000 | The NHS Plan 52 | | Staged reduction of maximum waiting time from 18 to 6 months
2000 | NHS Cancer Plan 53 | | A set of referral/diagnosis-to-treatment targets
2001 | Reforming Emergency Care 54 | | Thrombolysis within 20 minutes in accident and emergency; maximum of 4 hours from arrival to admission
2001 | | National Patient Safety Agency established | Reporting of incidents/development of the NRLS (National Reporting and Learning System)
2001–2 | | | CHI star ratings (0–3 for hospitals) published
2002 | Payment by Results (tariffs) announced 55 | | 
2004 | | CHI becomes the Healthcare Commission; Monitor starts work | 
2004 | NHS Improvement Plan 56 | | RTT (referral-to-treatment) target
Expansion and consolidation
In the period from 2004 onwards there were countervailing tendencies. One was a centralising tendency, as the new regulators got down to work. The CQC from the beginning undertook inspections of individual trusts and collected a range of data directly from them, using them to publish ratings (0–3 stars) based on a basket of indicators. 57,58 It also used routine data sets to ‘scan’ NHS trusts, for example looking for outliers on key variables such as waiting times, and, in so doing, built up data sets of indicators. In 2009, it introduced Quality and Risk Profiles, which amalgamated the indicators and information gathered in inspections to create ‘risk scores’ for each trust. These were provided to inspectors as a series of dials (dashboards) in advance of an inspection.
The National Patient Safety Agency also developed its reporting arrangements – the National Reporting and Learning System – during this period. 59 Additionally, Monitor started its work in 2004. Initially, it focused on financial measures, but later it included a limited set of quality measures. Overall, then, there was a substantial increase in the volume of data on quality and safety that NHS organisations were required to submit to national bodies.
The other tendency was to develop local management arrangements, drawing on Total Quality60 and Lean61 approaches, among others. For example, the Productive Ward initiative,62 a rare nursing-focused initiative in this account, drew on Lean thinking and was thus intended to improve quality and reduce costs simultaneously. Interest in the use of nurse staffing data, including ratios of nurses to patients, increased in this period. However, staffing data aside, it appears that limited attention was paid to the data that were needed to drive local initiatives or to the IT systems needed to support them. There was no local equivalent of the national data submission requirements.
There was a clear focus on data in High Quality Care For All,63 widely known as the Darzi report, published in 2008. It recommended a move away from centrally driven performance management towards more local ownership of the quality of services. The report argued that quality had three components: clinical effectiveness, patient experience and patient safety. These were used to develop the NHS Outcomes Framework, which was first published in 2010. There were five ‘domains’, each linked to one of the three components. 64 Public Health England now commissions the Framework, which currently has over 60 indicators. The Darzi report also recommended the publication of Quality Accounts:
to help make quality information available, we will require, in legislation, health-care providers working for or on behalf of the NHS to publish their ‘Quality Accounts’ from April 2010 – just as they publish financial accounts. These will be reports to the public on the quality of services they provide in every service line – looking at safety, experience and outcomes.
p. 51,63
The first Quality Accounts were published in 2010. 65
2010 onwards: policy turbulence
A Conservative–Liberal Democrat coalition government was formed in 2010. In the following 3 years it abolished a number of bodies created in the previous decade, including the National Patient Safety Agency, whose functions were transferred to NHS England.
The coalition introduced major structural changes in the NHS, set out in the Health and Social Care Act 2012. 66 From 2013 onwards, five organisations were formally responsible for the governance of the NHS: NHS England, the CQC (responsible for quality), Monitor (responsible for market regulation), the Trust Development Authority (TDA) (responsible for overseeing trusts’ preparation for foundation status, particularly in relation to their governance) and Public Health England. Later, in 2015, the patient safety function of NHS England was brought together with Monitor, the TDA and other smaller bodies in a single new organisation, NHS Improvement. This was the patient safety function’s third ‘home’ in 5 years. In 2016, a new Healthcare Safety Investigation Branch was created, which started work in 2017.
The HSCIC was also created by the new act, merging the NHS Information Centre – which managed HES and other data sets in the 2000s – and Connecting for Health, the agency that had been responsible for the NHS National Programme for IT. HSCIC was legally responsible for acquiring data from all NHS organisations (i.e. it was to manage data collection on behalf of other national organisations and CCGs). The net effect of these structural changes was, again, to reinforce centralisation of data submissions.
At the same time, concerns about the safety of services intensified, prompted in part by clear evidence of poor treatment and care at Mid Staffordshire NHS Foundation Trust from 2006 onwards. Robert Francis (now Sir Robert) chaired an inquiry and published a report in 2010, setting out many harrowing experiences of patients and their carers. A second report was commissioned on the management and regulation of services at the trust, which was published in 2013. It emphasised three points that are relevant to this account. First, it criticised the trust board, the Strategic Health Authority and the regulators (including the CQC and Monitor), but concluded that the existing arrangements needed to be strengthened rather than changed. Second, the report noted that NHS trusts published very limited information about their performance, and argued that problems would be less likely to occur if trusts were required to publish more. This echoed wider developments, notably in cross-government open data policies. 67 Third, and echoing the Bristol Inquiry from 12 years earlier, Francis argued that better use could and should be made of data and IT systems:
The recording of routine observations on the ward should, where possible, be done automatically as they are taken, with results being immediately accessible to all staff electronically in a form enabling progress to be monitored and interpreted. If this cannot be done, there needs to be a system whereby ward leaders and named nurses are responsible for ensuring that the observations are carried out and recorded.
Recommendation 243, p. 111,5
These recommendations were accepted by the government in its formal response to the report in November 2013. Yet again, the overall arguments were in favour of centralising data submissions and, by implication, the capability to monitor and manage quality and patient safety from the centre.
There were some voices in favour of localism. In the wake of the Mid Staffordshire report, the government commissioned a report from the respected US clinician and analyst Donald Berwick. His 2013 report identified a role for local data and IT services:
Patient safety cannot be improved without active interrogation of information that is generated primarily for learning, not punishment, and is for use primarily at the front line. Information should include: the perspective of patients and their families; measures of harm; measures of the reliability of critical safety processes; information on practices that encourage the monitoring of safety on a day to day basis; on the capacity to anticipate safety problems; and on the capacity to respond and learn from safety information . . . Most health care organisations at present have very little capacity to analyse, monitor, or learn from safety and quality information. This gap is costly, and should be closed.
p. 27,10
This report was, however, the one exception to the general trend towards centralisation of authority.
Information technology policies 1997–2012
There was little overlap between data and IT policies in this period. Earlier policies prevailed in the first few years: the one significant change involved explicit support for integrated electronic patient records, presented as part of a move to more patient-centred care. 68,69 There was, however, a marked shift in the nature of IT policy-making in 2001 and 2002. A series of reports paved the way for a decision to provide funding for an ambitious IT infrastructure for the whole of the NHS. 70–72
The implementation of the NHS National Programme for IT – as it was termed – did not go well. The biggest problem concerned the flagship of the programme, namely the five contracts for electronic health records (EHRs), which were worth > £5B. 73 Five years into the programme, in 2008, systems had been implemented in a handful of hospitals in the south of England and in single departments in two hospitals in the north. There were calls for a fundamental review, and even abandonment of the programme. 74 Initially, NHS organisations waited for these systems – they had been firmly told that they had to – but many eventually decided to pursue their own plans. A new policy was published in 2008, which tacitly endorsed these decisions, encouraging trusts to focus on implementing the ‘clinical five’: five key systems that all acute hospitals were deemed to need. 75
The main bright spot for policy-makers was the N3 network, the successor to NHSnet. N3 worked, in the straightforward sense that GP and hospital (including pathology department) systems could link to it and send messages over it. The success of N3 is significant in this report, because it extended the technology infrastructure in a way that reinforced the ‘pathways’ from trusts to national bodies but did not provide infrastructure for other data flows, for example between trusts.
A new policy, The Power of Information, was published in 2012. 76 There was continuity with the arguments that had underpinned the NHS National Programme for IT, notably in its focus on IT and on maintaining a clear separation from data and other policies. However, the policy also had novel features, notably its aim to promote:
a culture of transparency, where access to high-quality, evidence-based information about services and the quality of care held by government and health and care services is openly and easily available to us all.
p. 5, Department of Health and Social Care,76
This echoed the arguments in the Francis report and in open data policies, noted earlier.
Third cluster: real-time governance
At the same time as later ‘second cluster’ policies were published, the government published policies with two distinctive characteristics. The most important of these policies is the Five Year Forward View,77 published in 2014. Rather than emphasising competition, performance management and other managerial policies, the Five Year Forward View talks of the NHS as a ‘social movement’, and of the importance of engaging with local communities, with organisations in a locality working closely together. It talks, too, of the devolution of authority to groups of organisations working together, with the freedom to develop new working practices appropriate to the populations they serve. This implied distinctive data flows between organisations rather than upwards from trusts to national bodies. Subsequently, in early 2016, the Secretary of State for Health announced IT investments totalling £4.2B, with a significant proportion of this to be used to ensure that systems could share data with one another. 78,79
A second new theme concerned the use of clinical data in ‘real time’: using data on services while memories of those services are still fresh, as opposed to using them for retrospective reviews weeks or months after the activities described. Thus, the second Francis report,5 introduced earlier, stated that:
All healthcare provider organisations, in conjunction with their healthcare professionals, should develop and maintain systems which give them:
Effective real-time information on the performance of each of their services against patient safety and minimum quality standards;
Effective real-time information of the performance of each of their consultants and specialist teams in relation to mortality, morbidity, outcome and patient satisfaction.
Recommendation 262, p. 113,5
Similarly, a new IT policy, Personalised Health and Care 2020,80 proposed that all patient records would be real-time and interoperable by 2020.
Finally, the government published a clinical utilisation review in 2015. 81 This argued that NHS trusts should implement IT infrastructure for collecting and analysing real-time vital signs data. These data would, it was envisaged, be made available to NHS England in real time (i.e. presumably within a few hours). Few details were given about the ways in which data would be used, but the text implied that managers at NHS England would be able to monitor the work of nurses and doctors in the course of a shift.
Discussion
This brief account emphasises the long period over which the current infrastructures have developed. In the main, the infrastructures that have been implemented support centralisation: the flow of data sets from trusts to NHS Digital and other bodies. This has happened in spite of the fact that policy-making for data and policy-making for IT have been largely separate throughout the last 30 years. In later chapters we will explore the nature and extent of the overlap between the two, in practice, in trusts.
The nature of the third cluster, with its emphasis on the use of real-time data, is not yet clear: there is not enough policy or experience to date. It appears, however, to have some of the characteristics of Digital Era Governance, wherein IT systems are integrated and used to enable staff in different services – and different trusts – to co-ordinate their work. 82 This contrasts with the centralising tendencies of developments in earlier periods. We will also explore the extent of the moves towards the use of real-time data in trusts.
Chapter 4 Telephone survey and board paper review
Key points
-
This phase of the study comprised a telephone survey of 15 acute NHS trusts, undertaken in 2014, and a survey of the content of board papers of all acute trusts in England, undertaken in January 2015.
-
The telephone survey revealed that two sites in our region were already using electronic ward whiteboards.
-
The board paper survey showed that all acute NHS trusts used dashboards to present data on the quality and safety of services.
-
The surveys, taken together, indicated that national bodies had a substantial influence on the data used by boards: most of the data presented to boards were those collected for submission to national bodies.
Introduction
This chapter presents the findings of a telephone survey of senior nurses, undertaken between September and November 2014, in 15 NHS acute hospital trusts in northern England. We also surveyed the routine quality and safety data presented in papers tabled for board meetings in all 156 acute trusts across England in January 2015.
Telephone survey
Dashboards were used by all 15 of the trusts interviewed and were described as a graphical means of displaying data on the performance of wards and departments, and, more broadly, as key components of trust management information systems. Most participants told us that dashboard information was collated manually from a number of operational systems, including PAS and Datix, the incident-reporting software widely used across the NHS. Reports were typically assembled manually by central performance management teams, and retrospective reports were circulated monthly to wards, directorates and trust boards. The assembly of board reports was easier in two trusts that had real-time data capture and reporting systems.
Regulators
The focus of our questions was on ‘ward to board’ information systems, but many of our interviewees stressed the importance of regulators in determining the information that their trusts collected and used. All trusts in the sample were required to report to national bodies including Monitor, CQC and NHS England.
Several participants pointed out that compiling data for national reporting was onerous:
The information colleagues are so busy doing the national reports and making sure things are validated that there’s very little time for development.
Site A
We were told how regulators scrutinised trusts’ data collection and reporting:
We had an external review of our governance arrangements earlier in the year as part of Monitor’s commitment to 3 yearly reviews of governance, and part of the review of the governance was, is the information that we provide to people fit for purpose and does it help, you know, with planning and you know improving services? So that review took place and made some recommendations about some of the dashboards, particularly the dashboards that go to the board of directors.
Site A
Some interviewees noted that reports to regulators were aggregated, and that this could mask significant variations within and between trusts.
Participants noted that, historically, reports were sent to commissioners, whereas now they were available to other trusts and to the general public. Participants shared a concern that data might be misinterpreted:
. . . it’s all in public now . . . the devil’s in the detail sometimes and actually sometimes a dashboard can almost be too simplistic.
Site B
For example, local circumstances that made some targets difficult to achieve, such as a trust having a prison in its catchment area, might not be apparent in nationally published data. On the other hand, participants said that the availability of key national targets allowed trusts to compare their performance with that of others and to benchmark themselves. Indeed, one trust indicated that it reviewed other trusts’ information to see where improvements had been made and how it, in turn, could improve.
Trust boards
All participants told us that dashboards were important tools for trust boards. Board members used a number of dashboards covering a wide range of services and topics, including mortality, infection control and the nursing workforce. Dashboard data presented at board level tended to be for activity 4–6 weeks in arrears; for example, data for the month of May would be reviewed at a July meeting.
Most interviewees told us that dashboards provided warning signs, highlighting specific wards or topics that needed monitoring or action. Trusts used colour-coded dashboards – red, amber and green – to highlight areas that needed attention. Many of the trusts reported that dashboards were used to identify poorly performing services, which could be ‘escalated’ for senior management review if necessary. One interviewee described dashboard review as a ‘trigger or a proxy for escalation’.
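The colour-coding logic that interviewees described can be sketched briefly. In the sketch below the metric, thresholds and escalation rule are hypothetical, chosen by us for illustration rather than drawn from any of the trusts:

```python
# Illustrative RAG (red-amber-green) rating for a single dashboard metric.
# Metric, thresholds and escalation rule are hypothetical.

def rag_rating(value: float, amber_threshold: float, red_threshold: float) -> str:
    """Return 'red', 'amber' or 'green' for a metric where higher is worse."""
    if value >= red_threshold:
        return "red"    # flagged for senior management review ('escalation')
    if value >= amber_threshold:
        return "amber"  # needs monitoring
    return "green"

# Example: falls per 1000 occupied bed-days on one ward.
print(rag_rating(6.2, amber_threshold=4.0, red_threshold=6.0))  # -> 'red'
```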
Interviewees were aware that dashboards could be viewed as opportunities for learning:
[Dashboards are] not something that they see as a way of kind of something to beat them up with but actually something to celebrate that’s good and something to say you’re doing a really good job.
Site C
Less positively, there were concerns that summaries could give falsely reassuring information:
Ward 1 might be doing really badly and ward 99 might be doing exceptionally well and you end up with an average that’s good or worse than that in terms of post Francis, results that look like the organisation is doing very well, so then the board is reassured but if you get in to the detail you might find you’ve got two or three areas that you should be concerned about.
Site C
Furthermore, some participants said that, at board level, problems might be masked by the lack of detail in reports:
It is very easy to fall down the trap of just looking at the crude numbers without understanding the clinical area, so for example an area will come out with a high number of falls but it’s going to have because it’s an area that has got a high risk of falls . . . so it’s very easy to say Ward A is brilliant, look they’ve had no falls, when it’s a paediatric ward . . . I think in the wrong hands the information can be quite dangerous.
Site B
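A small worked example illustrates the masking effect that these interviewees described; the wards, rates and target below are hypothetical:

```python
# Hypothetical falls rates per 1000 occupied bed-days, illustrating how
# a trust-wide average can mask an outlying ward.
falls_rates = {
    "Ward 1": 9.5,   # well above the (hypothetical) target of 5.0
    "Ward 2": 2.0,
    "Ward 99": 1.5,  # exceptionally good
}

trust_average = sum(falls_rates.values()) / len(falls_rates)
print(f"Trust-wide average: {trust_average:.1f}")  # 4.3 - looks reassuring

# Only the disaggregated, ward-level view reveals the area of concern.
for ward, rate in falls_rates.items():
    if rate > 5.0:
        print(f"{ward}: {rate} - should be reviewed")
```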
Other participants were also wary of dashboards containing too much information for the board to understand or use:
There’s always an appetite at the board for more and more information, and if I was a vindictive person or wanted to hide something the one way to deal with that is to give the board more and more information.
Site D
Finally, trusts with real-time data capture and reporting told us about the ways in which they used their systems:
Every Friday, myself, the medical director, the director of governance, director of risk and quality, we meet to go through harm, to understand the root cause and how it’s reported, and then the learning, and the actions that are taken, and then follow that through with what we call . . . it’s like a recipe card . . . it’s a very high-level visual message board which goes out to the whole trust.
Site E
Board paper survey
These comments, and the comments about regulators, prompted us to review the papers for board meetings at all 156 acute trusts in England for January 2015, in the week beginning 16 March 2015. One hundred and fifty-two trusts published full sets of board papers, consistent with post-Francis reporting requirements. We requested papers from the other four trusts, two of which sent them by return and two of which did not send them within 4 weeks of our request. Almost all trusts (152 of the 154 sets of papers reviewed) had papers on waiting times, the NHS Safety Thermometer, serious incidents, complaints and patient experience measures. Almost all (153 out of 154) had dashboard-based workforce reports. A total of 115 out of 154 trusts reported risk assessment framework data, and 58 out of 154 presented balanced scorecards. Eight out of 156 trusts reported data items that were not requested or required by any national body.
Wards
Our interviewees reported considerable variation between trusts in the availability of management information to ward staff. A majority reported that management reports were prepared by a central team, were limited in scope and were 4–6 weeks in arrears. A minority told us that dashboards were actively used by ward staff. One participant attributed this active use to the content being nurse driven: nurses had had a big influence on how their information systems were designed and used:
What you have to do is make them intelligent and useful to the user, something that they feel they understand their business better, and also be prepared to invest where it’s needed if it tells you something you need to invest in that you do it in a responsive way so staff feel listened to and you engage them in what’s important to them as well as what’s important to the organisation.
Site E
The engagement of ward staff with management issues was an ongoing issue across all trusts. For example:
What we’re looking at is how we absolutely get that down in to ward level and them actually using it and being able to interpret the data and what it’s telling them.
Site F
Interviewees offered a number of reasons why information was not being used on wards, including clinical staff struggling with the perceived low value of retrospective information, problems with local IT systems, lack of staff education and training to use information, and earlier versions of dashboards having been poorly received, colouring renewed attempts to engage staff. A number of interviewees told us that their ward staff viewed dashboards as management tools, and as tools for control rather than as a means to support their work.
As noted above, there were differences between trusts with real-time systems and those without them. Where real-time systems were available, interviewees told us that emerging risks could be addressed as they arose. These trusts had implemented risk-based management systems, for example by ensuring that all admitted patients had NEWS and other risk metrics recorded. These could be viewed on screen and risks could be proactively managed:
When you go on to our wards you’ll see a single computer screen that’s got a patient’s bed number and then there’s a tick or a cross against all the bits and pieces that need to be done, like have they had VTE assessment . . . and it’s very obvious where the gaps are . . . I need to know very quickly where they’re not doing stuff so I can ask questions.
Site D
Discussion
We found that board-level dashboards are integral parts of acute NHS trusts’ management information systems. All acute NHS trusts used dashboards to summarise information for board-level meetings. The evidence suggests that trusts are working within a centralised NHS reporting system, using data for retrospective review.
In most of the 15 trusts, in contrast, the overall picture was of ward managers receiving limited, retrospective management reports. They were not in a position to manage risks in real time in the manner envisaged by Berwick and others. That said, two trusts told us that they had deployed real-time ward management systems and were able to manage quality and safety risks proactively in this way. Four further trusts told us that they had plans to implement such systems. The fact that ward systems had already been implemented influenced our decision to focus the main fieldwork on observation at four acute hospital trusts. We described the effect of this decision on our sampling strategy in Chapter 2.
The surveys provided initial indications of the developing governance arrangements and shed some light on our aims and objectives. They suggested that authority was exercised in two ways. First, there was authority to determine what data were collected and reported, and the findings indicate that national bodies determine the majority of data items reported at trust board level. Viewed in historical context, this is not surprising. Ever since they were introduced in the late 1980s, performance indicators have been designed and managed by national bodies. The authority to define the performance framework still resides with national bodies. This point is consistent with the account of national data and technology infrastructures in Chapter 3.
Second, we found that board members receive a great deal of detailed data on wards and departments, presented in a wide range of graphical formats. There was general agreement that dashboards were valuable, both for trust boards in monitoring performance and for non-executives and governors holding boards to account. Some weaknesses of dashboard-based reporting were also noted, including the risk that aggregation of data masked important information about outliers.
Finally, the importance of IT infrastructure in capturing quality and safety data and making them available to managers and ward teams is not stressed by either Francis or Berwick, but is highlighted by these findings. This point also influenced our thinking about the fieldwork that we were to undertake in four trusts in phase 2; we felt that we should investigate the nature and role of IT infrastructures for handling quality and safety data in trusts.
Chapter 5 The four trusts: infrastructures and performance
Key points
-
We identify four distinct technology ‘development paths’, concerning data used on wards, national data submissions, data processing systems and real-time ward management systems; these were progressively drawn together into broader information infrastructures.
-
The four trusts were at different points on a technology development trajectory, progressively integrating previously separate technologies.
-
The overall performance of all four trusts, judged on a range of quality and safety metrics, improved in the course of the study.
Introduction
The four mini-biographies in Chapters 6–10 describe the artefacts and practices in different areas of four acute NHS trusts, and at national and local agencies. In drafting the mini-biographies, we came to realise that, although the Biography of Artefacts approach has analytical advantages, the artefacts – the technologies – as a whole can be difficult to ‘see’. It is easy to get lost in the detail. Accordingly, this chapter provides an overview of developments at the trusts, designed to make it easier to make sense of the accounts in the following chapters. We present material that would usually be found in the final chapter of a report, but we feel that it is more helpful to present it here. The overview comes in two parts. The first sets out four developments in data and in IT that we observed across the mini-biographies. The trusts were at different stages on a development path, progressively integrating previously discrete technologies. The second provides comments on the performance of the four trusts between 2014 and 2016: these data provide useful context for the observations and comments in the following chapters.
Four data and information technology development paths
The main aim of the study, set out in Chapter 1, was to establish whether or not ward teams in acute NHS trusts have the information systems they need to manage their own work and to report on that work to trust boards and other stakeholders. To address the aim, we needed to know what information systems they were using and how they were using them. This section outlines four distinct but related technologies that we observed. As we will see in later chapters, these have gradually been drawn together to create the information infrastructures that are the focus of this study.
Patient-level data and managing wards
Most of the data on the quality and safety of services, including the routine data that appear in management reports, are captured on wards in the course of treatment and care provided to individual patients. Staff on hospital wards have historically recorded a wide range of patient data on paper in their medical and nursing records. In the context of this study, which focuses principally on nursing and on quality and safety, the key data included patients’ care plans, vital signs (body temperature, blood pressure, heart rate (pulse) and respiratory rate) and risk assessments. The risks varied from patient to patient, but could include the risks of developing a pressure ulcer or of experiencing a fall while in hospital.
These are part of a large array of data, comprising test results, records of treatments undertaken, drugs administered and free text. The last of these might include a range of comments and observations, including conversations with patients and relatives. We will see in Chapter 6 that some of the individual patient data were used to manage the work of a ward, notably by nurses identifying patients with the most significant clinical risks in handovers and patient safety huddles.
National data submissions
We will also see, particularly in Chapter 7, that there has long been parallel data capture on wards, largely separate from patients’ records. These parallel arrangements originated in the capture of administrative data for national submissions, going back to the 1980s; we outlined them in Chapter 3. We noted that many submissions are mandatory, that the volume of data concerning the quality and safety of services has grown over the last 15 years and that the arrangements have developed in piecemeal fashion.
Basic administrative data (patient name, admission date and so on) are typically entered by clerks, usually before a patient arrives on a ward. Trusts have had to develop arrangements for submitting data on incidents, using the commercial Datix software (Datix®, London, UK), and the measures in the Safety Thermometer. These data are manually entered into a computer system, increasingly a laptop or tablet, or recorded on paper and then re-entered into a computer.
It is worth stressing that the great majority of these data have to be submitted to national bodies as reports of transactions. The preparation of these data is undertaken in trust information teams (see Chapter 7). Thus, routine data on incidents, for example, report the number of incidents, rather than details of the incidents or whether or not trusts have learned from them and made changes designed to prevent similar incidents. These transaction data are the basis of most of the routine data sets and reports on the quality and safety of services that are used by trust boards (see Chapter 8), directorates (see Chapter 9) and national and local agencies (see Chapter 10).
Hospital information technology systems: data processing systems
Acute hospitals first deployed PAS in the 1970s and have continued to rely on them ever since for the recording of outpatient appointments, admissions and discharges, and a range of other transactions.
Hospitals have also progressively deployed administrative IT systems in departments, including pathology, radiology and operating theatres. Most trusts purchased systems from commercial suppliers and adapted the software to meet their local practices. They are data processing systems: essentially, repositories where data on events are stored. Moreover, in most trusts they have been purchased in piecemeal fashion over many years, use proprietary software and have not been designed to be integrated with one another.
This broad class of systems has provided the context for the development of EHR systems. By and large, these systems, including those offered to NHS trusts under the NHS National Programme for IT, are data processing systems. The result for the NHS as a whole is a situation where most trusts have PAS and a range of departmental systems. This study is not directly concerned with these systems: their significance lies in the fact that they provide an extensive IT backdrop to those that are of interest to us.
Real-time ward management systems
This study is particularly interested in real-time ward management systems, which typically comprise mobile devices (laptops and tablets) and wall-mounted electronic whiteboards. They mark a departure from the legacy data processing systems, and hence a departure in the ways in which trust staff think about the role of IT in their work. The details of the systems (set out in Chapter 6), therefore, provide insight into the changes. The approach of three of the four trusts was to design systems around patient data: patient data were the basic ‘building block’. Details about patients (e.g. which wards they were on, who their lead consultant was) could then be aggregated so that clinicians could view data in any of several ways (by ward, by the patients under a consultant’s care and so on). This approach dissolves the conventional distinctions between EHRs and other systems, including an earlier generation of nursing systems.
Viewed in this context, the systems we describe in Chapter 6 were components of broader developments: they could be used to extend the automation of data collection on wards, notably to include data that are of value to nurses. They were also different in kind from data processing systems: they were designed to provide ‘real-time’ support to clinicians. They could, for example, be used to set times for future nursing tasks, such as the next set of observations or risk assessment. They could also be used for trust-wide real-time tasks, notably for real-time bed management.
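The ‘building block’ design can be sketched as follows. The record structure and field names are our illustration of the general approach, not any trust’s actual design:

```python
# Sketch of a patient-centred design: each record holds patient-level data,
# and ward or consultant views are simple aggregations over the same records.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    name: str
    ward: str
    consultant: str
    news_score: int    # latest National Early Warning Score
    next_obs_due: str  # time the next vital signs observations are due

patients = [
    PatientRecord("Patient A", "Ward C", "Dr Smith", 1, "14:00"),
    PatientRecord("Patient B", "Ward C", "Dr Jones", 5, "12:30"),
    PatientRecord("Patient C", "Ward D", "Dr Smith", 2, "15:00"),
]

# A ward view (what an electronic whiteboard displays)...
ward_view = [p for p in patients if p.ward == "Ward C"]
# ...and a consultant view, derived from the same underlying records.
consultant_view = [p for p in patients if p.consultant == "Dr Smith"]
```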
The four trusts
The four trusts adopted different IT strategies. At the start of the study, Solo had been developing a comprehensive, integrated IT infrastructure for more than a decade. The bulk of the work had been undertaken by an in-house informatics team. It had two main hospital-wide systems. The first was a real-time system that staff could use to manage wards and departments, as described above. The second was a hospital-wide management system that included a wide range of management reports. Some data – for national submissions – were entered manually into the management system. A range of data were fed from the real-time system to a data warehouse, where they could be formatted for data sets and board reports and made available on the management system.
Both Duo and Trio had a number of departmental systems, as described above. At the start of the study, both had committed to real-time systems of the kind described above, with patients as the ‘building blocks’, allowing for a range of views of data. Both managed developments largely in-house. Up until 2014, Duo had focused more on the EHR component of its developments, which was in use in a number of clinical specialties. As we describe in Chapter 6, there were two pilot initiatives, one in each of the two wards that we observed during 2015 and 2016, and each focusing on a subset of nursing data.
Trio had made more progress with introducing real-time systems at the start of the study. All wards in the trust had electronic whiteboards and tablets, and the system was used by the bed management team. During the study, new functions were added to the system and, in the background, the trust was planning, and then executing, the integration of the real-time and EHR systems.
Quartet had adopted a different strategy in the period before the study, deciding on an NHS National Programme EHR system, which was introduced in 2014. This system was, in effect, an addition to existing data processing systems. Quartet did not have real-time systems at the start of the study. It did, however – in common with the other three trusts – have a data warehouse.
The four trusts: performance
This section presents summary data on the performance of the four trusts. The data were obtained from a number of trust and national sources. It is worth noting that some data were difficult to locate – even for us – and difficult to interpret. For example, data on the same topic, such as serious incidents, could be reported in slightly different ways in board papers and Quality Accounts, due in part to the use of different cut-off dates for counting them. This point aside, however, the data show that, overall, the four trusts showed improvements in key national indicators during the three financial years of 2013–14, 2014–15 and 2015–16.
Mortality
The Summary Hospital-level Mortality Indicator (SHMI) improved at Solo, Trio and Quartet from 2013 onwards. At Duo, Quartet and Trio, the Hospital Standardised Mortality Ratio (HSMR) reduced.
Complaints
Between 2013 and 2016, in all sites, the number of formal complaints reduced; in Solo and Duo, the number of formal complaints decreased by > 40%.
Safety Thermometer
Results from the Safety Thermometer showed that, in October 2016, Solo and Duo were slightly above the national average in the percentage of patients who received harm-free care, while Trio and Quartet were slightly below the national average. The percentage of reported harm-free care increased at all four trusts between 2013 and 2016.
The NHS Safety Thermometer data indicated a decrease at all four trusts in new pressure ulcers, that is, those developing 3 days or more after admission. The data also showed a downward trend in falls resulting in harm in all four trusts over the 3 years.
All four trusts also reported a reduction in the proportion of admitted patients being treated for new VTE. Duo and Trio data showed that a high proportion of patients (> 90%) received a risk assessment for VTE in 2016, a figure that had improved between 2013 and 2016.
Serious incidents
The number of reported serious incidents increased at Solo, Duo and Quartet between 2013 and 2016. Duo’s figures increased by > 60%, Solo’s more than doubled and Quartet’s increased more than fourfold. There was little change in the number of serious incidents reported at Trio over the 3 years. We note that there is a widely held view in the NHS that an increase in incident reporting implies an increase in openness or honesty.
NHS Staff Survey
The results from the NHS Staff Survey showed that, in all four trusts, the majority of staff ‘agreed’ or ‘strongly agreed’ with the statement ‘the care of patients/service users is my organisation’s top priority’. Over 65% of staff in all of the trusts ‘agreed’ or ‘strongly agreed’ with the statement ‘I am satisfied with the quality of care I give to patients/service users’. Between 2013 and 2016, the number of staff in Solo, Duo and Trio who strongly agreed with this statement grew; in Quartet the number was similar in each year. By contrast, in all four trusts, fewer than 35% of staff ‘agreed’ or ‘strongly agreed’ that ‘senior managers act on staff feedback’.
Friends and Family Test
In Solo, Duo and Quartet, an increasing number of people said that they were likely to recommend the trust to friends and family between 2013 and 2016. Over 92% of inpatients in all four trusts would recommend their services in 2016.
Chapter 6 Wards mini-biography
Key points
-
Two wards were observed in each of the four trusts.
-
We observed the use of whiteboards during shifts, handovers, safety huddles and ward meetings, and interviewed a range of staff.
-
The four trusts were on a ‘technological trajectory’, with Quartet the least automated and Trio and Solo the most.
-
The introduction of electronic whiteboards and mobile devices went smoothly on some wards, and far less so on others.
-
Broadly, the move to digital technologies was viewed positively, although there were some dissenting voices.
Introduction
This chapter presents the ward mini-biographies, the first of four mini-biographies of developments at the four trusts. Developments at each of the trusts are presented separately, starting with the trust that had not introduced real-time systems at the start of the study, and ending with the trusts that had. The mini-biography presents evidence from our observations of the use of whiteboards, of handovers and of ward meetings, from interviews with ward staff, and comments on the information culture on the wards more broadly.
Quartet
Fieldwork was undertaken on two wards between July 2015 and July 2016. Ward A had the same ward sister throughout, while Ward B had a new ward sister halfway through our fieldwork. Staffing establishments on both day and night shifts deteriorated on both wards between June 2015 and May 2016. In September 2016, the proportion of registered nursing shifts filled as planned during the day was around 80%. The results of the Friends and Family Test showed that > 99% of patients would recommend the two wards’ services. During the observation period the trust extended its use of the Electronic Health Record, purchased in 2014, and in early 2016 introduced a new review application; both are described below.
Using whiteboards
Each of the two Quartet wards had two dry-erase whiteboards, positioned in a nurses’ room across the corridor from the nursing station. One of these was a patient whiteboard of the kind developed in the national Productive Ward initiative in the 2000s. 83 For each patient, it listed their name, consultant, diet, estimated date of discharge and risk assessment completion dates. On both wards, magnetic symbols were also used, placed next to a patient’s name to highlight key nursing issues, for example that someone had dementia, had a cannula inserted, was at risk of falling or had had a fall. The whiteboards were updated after handovers and then throughout the day. They were updated promptly when a new patient was admitted (observed September 2015, September 2015 A, March 2016; Quartet ward sister, Quartet ward sister).
Throughout the observation period, the dry-erase whiteboards were used in the same way on both wards: their use was firmly embedded in staff working practices. Nurses, doctors, therapists and porters typically used the patient whiteboards ‘at a glance’, often looking for patients’ locations or to check key clinical information before they went to see them (Quartet ward sister). This was confirmed in an interview with a junior doctor (Quartet junior doctor). We also observed medical staff discussing patients around the boards.
Nursing handovers
Both wards’ handovers took place in a private room and typically lasted 45–60 minutes. Both wards had paper handover sheets, which were updated by the nurse in charge on a computer before handover meetings (observed September 2015, March 2016, Quartet ward sister). The sheets listed information for each patient, including their general status, estimated date of discharge, alerts (e.g. dementia or allergies), recommendations (e.g. scans or blood tests) and risks (e.g. risk of falls or pressure sores) (observed February 2016).
The conduct of handovers was broadly similar throughout the fieldwork period, with the nurse managing the handover and covering a number of topics, including the clinical status of new patients, key risks (e.g. falls, pressure ulcers), when patients’ vital signs observations were due and tasks to be undertaken on the next shift. NEWS scores were not recorded on the handover sheets; patients with high NEWS were, however, discussed during the meetings. Other information that was not on handover sheets was discussed, including patients’ emotional status (observed September 2015, September 2015 A, February 2016, March 2016, April 2016, Quartet ward nurse). We observed that lead nurses wrote copious notes on the handover sheets during the meetings (observed March 2016).
In the last 3 months of our observations, Ward A added a new heading to the bottom of the handover sheet: ‘2 minute safety briefing (tick list)’. It was not discussed during handovers, but the ward sister told us that the list was there as a reminder to staff of things to be aware of or to check during a shift, for example checking that risk assessments are up to date (observed April 2016).
Near the end of our observations, Ward B changed the management of handovers. The nurse handing over would audio record everything she would have previously said in handover, before the next shift arrived. That nurse was able to provide care while the new staff listened to the audio file, and then answered any questions at the end of handover. The ward sister told us that recording handover was a much more efficient use of staff time; she felt that it had been well received on the ward (Quartet ward sister).
Using electronic records
We noted in Chapter 5 that Quartet had purchased a traditional EHR in 2014, designed to record transactions rather than actively manage patients. Each ward had two computers in the nursing room and one at the nursing station, plus additional laptops on wheels (LoWs). Before this, the majority of nursing data on patients had been recorded on paper and kept at the ends of patients’ beds.
When our observations began in 2015, some routine patient data were recorded in the EHR, including NEWS and pressure ulcer assessment data. Towards the end of our fieldwork, in mid-2016, more data were recorded on the EHR, including patient care plans, risk assessments and nursing evaluations (Quartet ward sister). VTE assessments were recorded both on the EHR and on paper. Falls assessment data were recorded on the EHR for a few months, but then staff reverted to paper records (observed February 2016, March 2016, Quartet ward nurse). At the end of our fieldwork, the sister in Ward A said she printed paper copies of the EHR data: they were kept at the end of patients’ beds (Quartet ward sister).
Views about the EHR were mixed. A nurse told us that he liked the system:
People can access it from different access points and see the same information.
Quartet, non-executive director
A health-care assistant (HCA) observed that it was useful to be able to record particularly sensitive data on the system. For example:
When the care plans used to be at the bottom of the bed, you had to be a bit careful about what you were writing, because they [patients and visitors] could read it.
Quartet, HCA
Less positively, staff found data entry time-consuming (Quartet ward nurse, Quartet ward sister), a problem compounded by the need to maintain paper records alongside the EHR (Quartet 51, Quartet 53, Quartet 55). Even at the end of our fieldwork, staff told us that they preferred care plans on paper rather than on the EHR. The falls care plan was easier to access on paper: ‘we can look at them at any time now, now it’s on paper it’s a lot better’ (Quartet ward nurse). At the end of our fieldwork, the sister in Ward B also said ‘if we had to revert back to manual care plans I think the entire nursing workforce would breathe a sigh of relief’ (Quartet ward sister). There were also misgivings about the LoWs throughout the period of observation: laptops were slow, they broke down and Wi-Fi access to the hospital network was patchy within wards (Quartet ward sister, Quartet ward sister, Quartet ward sister).
A new application
At the start of the fieldwork, in 2015, ward sisters at Quartet received monthly reports, approximately a month in arrears, on a range of measures including incidents, complaints, pressure ulcers, falls and staff sickness. In early 2016, the trust introduced an electronic application that was to replace this process. The ‘app’ was accessible on smartphones or tablets provided by the trust, and supported reviewers in capturing quality and safety data on the ward in real time. The app was designed so that the review was done by a sister from another ward, in contrast to previous years, when matrons collected audit data.
The data collected on the app covered various aspects of quality and safety on the ward, including whether the whiteboard data were up to date, the medicine trolleys were tidy and stocked, patient buzzers were answered promptly and, at mealtimes, the red tray system was in place. The questions asked via the app were designed by a group of nurses and varied slightly between departments. The app also allowed the reviewer to take photographs as evidence; these were captured alongside the other data collected. For example, if a reviewer saw that a medicine trolley on the ward was untidy or not properly stocked, they could take a picture and then write notes and suggestions for improvement. Once the questions were completed, the app provided an instant report presented in the style of an interactive dashboard with dials and graphs. The report was also sent instantaneously to the directorate matron and head of nursing. The ward sisters printed off the report and displayed it on the ward so that all nursing staff could view it (Quartet ward sister). Early comments in interviews were positive; the app provided immediate rather than delayed information.
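The review workflow that the app supported can be sketched in outline; the question wording, scoring and recipients below are illustrative only, not the trust’s actual design:

```python
# Hypothetical ward-review checklist of the kind captured on the app.
questions = [
    ("Whiteboard data up to date?", True),
    ("Medicine trolleys tidy and stocked?", False),
    ("Patient buzzers answered promptly?", True),
    ("Red tray system in place at mealtimes?", True),
]

score = 100 * sum(answer for _, answer in questions) / len(questions)
report = {
    "ward": "Ward A",
    "score_percent": score,
    "actions": [q for q, answer in questions if not answer],
}

# The app sent its report on instantly; here we simply print the recipients.
for recipient in ("directorate matron", "head of nursing"):
    print(f"Report to {recipient}: {report}")
```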
Ward meetings
In 2015, both wards held monthly meetings in the communal family area of their wards. They were led by the ward sister, there were no papers and they lasted around an hour (observed June 2015, August 2015, October 2015). Most meetings focused almost exclusively on quality and safety issues, with three recurring themes (observed August 2015). First, the ward sister relayed messages from other meetings, including directorate meetings, and from the nursing hierarchy, for example about a serious incident elsewhere in the trust (observed June 2015, August 2015, October 2015). Second, ward performance was discussed using monthly reports (observed June 2015, August 2015, October 2015). Third, these were fora where staff raised problems and concerns. We observed that conscious efforts were made to encourage staff to speak up (observed June 2015, August 2015, October 2015).
In 2016, Ward B began holding longer meetings, lasting 3 hours, attended by nurses and HCAs who were not on shift. The ward sister told us that these meetings were easier for staff to attend because they were relieved from operational pressures (observed April 2016). In the meetings, copies of the nursing metrics report were distributed. Data were included on falls, tissue viability, nutrition, pain management, vital signs, complaints and Friends and Family Test reports. The metrics were discussed at length (observed April 2016).
Information culture
The sister on Ward A believed that there had been a change in the reporting culture in the ward in the last year, including a major focus on introducing the (new legal) duty of candour:
If someone fell or was harmed you would get informed verbally . . . [now] we send a letter out so there is hard evidence and it’s on Datix . . . so that has changed in the last year definitely, there has been a big drive on duty of candour.
Quartet, ward sister
In the past, when there had been a complaint or incident, it was common for the ward sister and staff to get defensive (Quartet ward sister). This had changed when they had received a large number of complaints in a short time, which made them think ‘we have got something wrong, we need to do something about it’ (Quartet ward sister). Sharing information for learning was viewed as important:
You learn from other examples that you can go back and feedback, because you want your patients to be safe so I think that’s what it is all about, sharing practice, complaints, your patients’ falls . . . it’s learning, learning from other areas because whatever happens in another in-patient area can happen in this inpatient area.
Quartet, ward sister
Datix was perceived to be a useful tool, as it automatically ‘escalated’ incidents by sending an alert to the matron, head of nursing and risk management team (Quartet ward sister). Conversely, a junior doctor told us that she would not automatically use Datix to report a problem:
I’ve not done any [reporting of incidents] since I’ve been here, I’d probably speak to . . . my consultant first.
Quartet, junior doctor
She did, however, support the view that there was a positive culture:
They seem fairly good and they’re fairly open to you saying if you’ve seen something that you’re not happy with.
Quartet, junior doctor
Finally, both ward sisters said that wards received a great deal of information. All ward staff received the Trust Patient Safety Bulletin (introduced in 2015), the Communication Bulletin and summary Datix reports (Quartet ward sister, Quartet ward sister). The safety bulletin was perceived to be useful: ‘it makes you think you aware of things, that’s a good idea or we better watch out for that’ (Quartet ward nurse).
Duo
Fieldwork was undertaken on two wards between July 2015 and July 2016. Both wards had the same ward sisters throughout our observations. In September 2016, the proportion of registered nursing shifts filled as planned during the day was around 90%. Ward C consistently performed well on its nursing metrics (100%), whereas Ward D was seeking to improve its scores throughout the period. Results from the Friends and Family Test showed that > 91% of patients would recommend the wards’ services.
Duo was one of the three sites that were developing real-time patient management systems, largely in-house. Our interest was in the use of these systems for managing wards. Both wards introduced electronic whiteboards before our fieldwork began in 2015. The whiteboards presented summary patient data in rows for each patient on the ward, including their name, their consultant, estimated discharge date, job lists and risk assessment data (covering falls, infections and pressure ulcers). The latter indicated both a patient’s risks and whether or not their latest timetabled assessment had been completed. Patient data could be input from any terminal or mobile device on the ward and were available almost immediately on the whiteboards. In addition, mobile technologies were deployed on the two wards during the period of observation, providing a valuable opportunity to observe their introduction. Different functions were introduced on each ward, and technological developments on each one are presented separately below.
Ward C
Electronic whiteboards, dry-erase whiteboards and tablets
The electronic whiteboard on Ward C was located in a nurse’s room. It was accessed infrequently by nursing staff throughout our fieldwork (observed August 2015, December 2015, April 2016). There was also a dry-erase whiteboard, located opposite the nursing station, which duplicated many of the data on the electronic board. Magnets were used to highlight patients’ risks, for example of a fall or developing a pressure ulcer. This whiteboard was used more frequently by the ward clerk and nursing staff to identify a patient’s location and during patient safety huddles.
Ward C was selected to trial tablets during the course of fieldwork: nine tablets were introduced in November 2015, one for each member of staff on a shift. Their introduction was viewed positively. For example:
If we were to take a phone call, we can update on here any information immediately so it’s straight on the whiteboards, the doctors can see straight away, all of the team can see, and if we’re asked any questions we’ve got all the information available.
Duo, ward nurse
If I need to check something I’m not having to go down to the doctor’s office . . . get through the doctor’s notes, everything’s on here so I know for example if they’ve been for a test.
Duo, junior doctor
Towards the end of the observation period, the ward sister believed that the key change was that everyone on the ward had access to the same data, in contrast to the past:
What I found is when I collected printed out [handover sheets] . . . everybody had written completely different information.
Duo, ward sister
He believed that there had been a reduction in instances of patients and carers being given inaccurate information. Conversely, nurses and HCAs reported, during our observations, that the tablets logged them out too quickly. This was confirmed by nursing staff in our interviews:
It logs you out a bit too quickly so although it would be quicker to record information on here at the moment we’re having to log in quite regularly.
Duo, ward nurse
Nursing handovers and patient safety huddles
At the beginning of our observations, handovers were held in a staff room, and they typically lasted 45–60 minutes. Handover sheets were printed from a computer before meetings (observed August 2015). When the tablets were introduced in November 2015, the printed sheets were no longer used: all patient data were recorded, and thus available, on the tablets. However, the length and management of handover meetings did not change. For example, staff discussed patients at risk (e.g. of a fall or a pressure ulcer), patients with high scores on the NEWS, patients with infections, jobs that needed to be done on the next shift and patients with special nutritional requirements.
When the tablets were introduced, we observed some initial problems during handovers. One concerned logging on to the tablets. Some staff chose not to obtain login details (observed December 2015, January 2016, February 2016); one HCA said in January 2016: ‘I do not like them [tablets] so, I do not have a login’. Some staff were reluctant to move away from paper forms and reverted to paper when the ward sister was not on shift (observed January 2016). By June 2016, we observed that nursing staff seemed to have overcome or worked around these problems, and use of the tablets was embedded in handovers.
From the start of the fieldwork, Ward C had patient safety huddles every morning, lasting around 10 minutes and using the dry-erase whiteboard. Huddles were multidisciplinary, involving nurses, HCAs, physiotherapists, doctors and discharge facilitators (observed July 2015, December 2015). The briefings included admissions and discharges, patients at risk of falls and patients with high NEWS (observed July 2015, December 2015).
Ward D
Electronic whiteboards and tablets
Ward D had an electronic whiteboard in the centre of the ward, opposite the nursing station. We observed, throughout the observation period, that the whiteboard was an important way to view data ‘at a glance’; nurses, HCAs and doctors used it to check when patients’ observations were next due and to check the location of patients when they came onto the ward (observed January 2016, March 2016, June 2016).
Health-care assistants began to enter vital signs data on tablets at the bedside – colloquially referred to as e-observations – in July 2015. The electronic whiteboard would immediately show the NEWS, which was calculated automatically, whether it was increasing, decreasing or stable, and when the next observations were due. We observed nurses and HCAs using the electronic whiteboard frequently to check when patients’ observations were due (observed March 2016, June 2016).
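The automatic calculation can be illustrated with a simplified sketch based on the published NEWS (2012) scoring bands. It covers only three of the seven NEWS parameters, and is our illustration rather than the trust’s code:

```python
# Simplified NEWS calculation from vital signs, using the published
# NEWS (2012) bands for three of the seven parameters; a real system
# also scores temperature, oxygen saturation, supplemental oxygen and
# level of consciousness.

def score_respiratory_rate(breaths_per_min: int) -> int:
    if breaths_per_min <= 8: return 3
    if breaths_per_min <= 11: return 1
    if breaths_per_min <= 20: return 0
    if breaths_per_min <= 24: return 2
    return 3

def score_pulse(beats_per_min: int) -> int:
    if beats_per_min <= 40: return 3
    if beats_per_min <= 50: return 1
    if beats_per_min <= 90: return 0
    if beats_per_min <= 110: return 1
    if beats_per_min <= 130: return 2
    return 3

def score_systolic_bp(mmhg: int) -> int:
    if mmhg <= 90: return 3
    if mmhg <= 100: return 2
    if mmhg <= 110: return 1
    if mmhg <= 219: return 0
    return 3

partial_news = score_respiratory_rate(22) + score_pulse(95) + score_systolic_bp(105)
print(partial_news)  # 2 + 1 + 1 = 4
```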
The system was prone to crashing, and at the beginning of 2016 it was taken out of service for several weeks, meaning that vital signs data could not be recorded electronically. During this time, staff reverted to recording information in patients’ paper notes. Staff told us that they were frustrated that the system had gone down (observed January 2016), but one nurse said that it had also saved them time, as there was one fewer place to record data. The ward sister also said that because the ward had a high turnover of patients, updating the system could be ‘so time-consuming, it’s unreal’ (Duo ward sister). At the end of our fieldwork, there was still resistance to using the system from some clinical staff. A junior doctor we interviewed felt that the electronic observations made it harder for him to access a patient’s vital sign data:
When you compare how easy it is to just go to a patient’s bed, pull out the chart from the end of the bed, it just seems a bit of a time waste to have to go and look at the tablet and find out what the password is, and you know carry that around with you . . . it’s not slick yet and I suppose a lot of us are struggling to see what the benefit of it is.
Duo, junior doctor
The reporting culture also seemed to differ among medical staff.
At the end of our fieldwork, the system was back in use and was not crashing as often as before (Duo ward sister). Indeed, the electronic whiteboard had become an important source of data for some staff (Duo ward sister, Duo ward sister, Duo ward nurse, Duo HCA, Duo junior doctor). For example, a nurse told us:
I refer to it [the electronic whiteboard] so many times in a day. Obviously it tells me all my patients, that they’re in a bed, the consultants they’re under which is helpful, and then it has the discharge bit . . . so you can have a quick glance and see what’s going on.
Duo, ward nurse
The junior doctor observed that:
I think a lot of us don’t realise how much we use it, because occasionally you walk onto a ward and the whiteboard will be down and you’re completely clueless then as to where the patients are and it’s a bit of a nightmare.
Duo, junior doctor
Handovers and patient safety huddles
The handovers on Ward D took place at the nursing station, next to the electronic whiteboard. The management of handovers did not change during our fieldwork. The ward had its own paper handover sheets: that is, the sheets were not printed out from the system. All nurses had copies of the sheets and took notes during handovers (observed August 2015, January 2016, January 2016 A, March 2016, June 2016). Each patient was discussed individually, with topics including their NEWS, nursing risks (falls, pressure ulcers), nutritional status, tasks in the following shift and referrals made (e.g. to a physiotherapist). During the handover, the nurses would occasionally look at the electronic whiteboard, mainly to check a patient’s NEWS (observed January 2016, June 2016). The handovers on Ward D were consistent in style and content during our observations.
At the beginning of 2016, Ward D introduced patient safety huddles. The huddles had a format very similar to that on the other ward, except that they took place around the electronic whiteboard. The huddles were deemed to be useful because staff ‘know just a little bit more about what’s going on’ with all of the patients on the ward, and hence knew whether or not colleagues might need help during a shift (Duo ward nurse).
Ward meetings and monthly reports: both wards
Both wards regularly held meetings throughout the study (observed July 2015, January 2016). Discussion topics on both wards included the metrics on their ward dashboards, actions needed to respond to incidents and complaints, sharing lessons from incidents on other wards and sending messages upwards to senior managers and downwards from senior managers to ward staff. From February 2016, Ward C changed the format of meetings to one that was less reliant on a formal agenda and designed to encourage staff to be more actively involved in addressing issues and concerns (observed February 2016).
Both wards received monthly nursing dashboards – the dashboards were discussed in the board and directorate mini-biographies – that included data on the NHS Safety Thermometer measures, Friends and Family Test scores, incidents, complaints and staffing (Duo ward sister, Duo ward sister). Both sisters valued these, partly to monitor their own performance over time and in relation to other wards in the trust, and partly to learn from the wards that were doing well:
If they’re doing something really well that we’re possibly failing on, then you can speak to that ward and say well actually what are you doing different from us, so it’s all about sharing knowledge.
Duo, ward sister
The sister in Ward C told us that this had been part of an improvement process:
We were in escalation before it came in, and all the dashboard did was put it in writing, or put it visual so the staff could see, and the staff got engaged with it really quickly, cause they could see what they were doing wrong . . . where we needed to make improvements.
Duo, ward sister
Less positively, it was suggested that the metrics were not a good representation of the quality of a ward’s services:
It’s all based on paperwork, it’s not based on how we are as nurses. I could be absolutely fantastic at paperwork and I could get every single piece of paperwork in, but I could be awful to my patients.
Duo, ward nurse
Information culture
There were three indications of the information-related culture on the wards. The first concerned candour:
Something’s gone wrong here, we’re really sorry, but let’s see what’s gone wrong and let’s see if we can rectify it. Or even if it’s for the next patient, let’s make sure it doesn’t happen again to somebody else.
Duo, ward sister
Additionally the ward sister stated:
That’s how the NHS is now, we’re expected to hold our hands up and there’s no hiding about it, there’s no shying away from it, be honest.
Duo, ward sister
That said, a junior doctor told us:
I don’t know any of my colleagues who would routinely fill out a Datix, if there was something which needed to be escalated I think they would escalate it in a different way . . . I would have absolutely no idea how to fill out a Datix myself.
Duo, junior doctor
The second indication of the information-related culture on the wards was that one nurse and one HCA we interviewed told us that they did not usually get feedback on incidents they had reported, unless the incident had been classed as serious (Duo ward nurse, Duo HCA). The third indication, more positively, was that ward staff received support from their matrons and other trust staff when they were failing on some of their metrics. Ward nursing staff were given:
Lots of support, it was never made to feel that you’re failing and you’re rubbish, it was let’s get you out of this.
Duo, ward sister
Trio
Fieldwork was undertaken on two wards between July 2015 and July 2016. Ward E had the same ward sister throughout our fieldwork, while Ward F had an acting deputy ward sister, with a new ward sister in post near the end of our study. In September 2016, the proportion of planned registered nursing staffing filled during the day was between 87% and 98%. The results of the Friends and Family Test showed that > 97% of patients would recommend the two wards’ services.
Electronic whiteboards and tablets
In 2014, the trust introduced a real-time ward management system. The system was initially available to ward staff via a terminal in each ward, and soon afterwards via electronic whiteboards and mobile tablets. The latter were introduced on both wards around 1 year before our fieldwork began. Previously, the wards had used computers on wheels, but we were told that there had been problems with battery power and the reliability of Wi-Fi links to the trust network (Trio ward sister, Trio HCA).
Ward E’s whiteboard was located on a wall opposite the nurses’ station, while Ward F’s was positioned in a room leading off the nurses’ station. The whiteboards showed the ward layouts in schematic form, including bed bays, with rectangles representing beds. Each rectangle showed the name of the patient in that bed, their NHS number and trust-wide icons, each representing an aspect of the patient’s status, such as risks of pressure ulcers and falls. A second screen showed ward-specific icons appropriate to the types of patients treated on that ward (e.g. orthopaedics, respiratory) (observed June 2015, July 2015, August 2015).
The screen with the trust-wide icons was developed by senior and ward-level nurses and, being trust-wide, changes had to be signed off formally. However, the screens with ward-specific icons could be changed more readily (Trio ward sister, Trio ward sister, Trio ward sister). Ward staff told us that the informatics team was very helpful throughout the implementation of the system, with a ward sister reporting:
They would come up to the ward and say, you’ve asked us to tweak this, this is what we have done, is there anything else you think of? Or, if there is let us know we will come up.
Trio, ward sister
Changes to existing icons, or new icons, were typically produced promptly: ‘they will be there on that day’ (Trio ward sister). Both wards were still adding and editing their icons throughout our observations. For example, the icon of an apple to represent ‘nutritional status’ was added in April 2016 (observed April 2016).
We observed that, on both wards, nurses, HCAs, junior doctors and consultants accessed the whiteboards very frequently. The boards were especially heavily used between 8 a.m. and 9 a.m., when clinical staff would surround them and sometimes queue to view them (observed June 2015, July 2015, August 2015, January 2016, February 2016, April 2016). There were no substantive changes in practice during the 13 months of observations. That said, our observations and interviews suggested that clinical staff perceived advantages over the earlier paper-based arrangements. Data that had been recorded in paper records or on a magnetic whiteboard were now available on the ward system (observed June 2015, July 2015, August 2015, January 2016, February 2016, April 2016). A ward sister observed that:
You’ve got this huge thing telling you . . . it’s just easier to see, it’s so much clearer . . . you can see people’s blood pressure dropping . . . we’re just more aware, I just think it’s really good.
Trio, ward sister
A junior doctor observed that:
. . . you will see this patient is not feeling good . . . without having to go through the paperwork . . . just rush to the patient.
Trio, junior doctor
Both wards had the use of five tablets, one of which was permanently positioned at the nurses’ station. Once staff had logged in, they could see each patient’s data, the same data as were available on the electronic whiteboards (observed July 2015, April 2016, Trio ward nurse). HCAs on both wards used the tablets to record vital signs observations at the bedside; the system automatically calculated patients’ NEWS. They then set when the next observation was due, and this information was also available across the ward almost immediately. We also observed that, when tablets were not available, staff would write down the patient’s observations on their handover sheet and then input them onto a tablet when one became available (observed July 2015, August 2015, February 2016).
Consultants and junior doctors would take tablets with them on their ward rounds (observed July 2015). Although in this study we did not observe staff in bed bays, staff told us that they used the tablets to access and capture patient data, instead of going to and from the computer at the nursing station (Trio ward sister, Trio ward sister). Data captured on the tablets were available almost immediately and could be viewed on other tablets or on the ward whiteboards.
Staff were positive about the tablets. Across both wards, HCAs believed that entering data onto the tablets saved them time. Other staff told us how useful it was that the ward system held a large amount of easily accessible patient data (Trio ward sister, Trio ward nurse, Trio ward sister). On the other hand, two problems were mentioned. One was that there were too few devices (Trio ward sister, Trio ward sister, Trio ward nurse, Trio ward sister, Trio HCA). The other concerned the difficulties experienced when the system crashed. The system did not go down often, or for very long, but there were problems when it did (Trio ward sister, Trio HCA, Trio junior doctor): ‘if that screen goes down you cannot see when your patient’s obs [observations] are due, what they were before, or anything’ (Trio HCA).
Nursing handovers
Handovers on Ward E took place around the electronic whiteboard and on Ward F in the staff room. Both wards’ handovers typically lasted between 35 and 45 minutes. Both wards throughout our study used a paper printout of data from the real-time ward management system. The data on the printout included patient details (bed, name, age, clinician), admission/progress (arrival date, doctor’s note, reason for admission) and any risks (e.g. risk of sepsis, dietary requirements). Each ward staff member had a copy of the handover sheet, and nurses and HCAs made notes on their sheets throughout handover meetings.
Nurses managing handovers also discussed information that was not available on the handover sheets, such as jobs that needed to be done (i.e. ‘his dressings need changing’) or how a patient was feeling (‘his scores are fine but he says he doesn’t feel very well this morning’) (observed June 2015, July 2015, August 2015, February 2016, April 2016). They highlighted any data on the sheets that had changed, and updated the system after handover. On Ward E, staff would often update the system during handover:
. . . when they’re handing over if what they’re telling me doesn’t correlate with what’s on my handover sheet then I can actually alter it you know, and often I’ll stand there at handover as they’re telling me I’m altering things.
Trio, ward sister
Towards the end of our fieldwork, Ward F introduced a safety briefing at the end of handover (observed February 2016). This involved highlighting, using the electronic whiteboard, which patients were acutely ill and the patients with the greatest needs (e.g. needed assistance with eating). The patients had already been discussed at handover; the briefing was designed to reinforce nursing priorities (Trio ward sister).
Wider value of the real-time system
Managers could access ward data, in real time, without visiting wards. The ward sisters told us that this had become increasingly important in managing the ward (Trio 38, Trio ward sister, Trio ward sister). At the start of the fieldwork in 2015, both ward sisters told us that the middle managers or bed manager might ring if they had patients who were ‘medical outliers’ or had a high NEWS score (observed June 2015, Trio ward sister, Trio ward sister). The managers would then offer advice or get a member of clinical staff to come and review the patient. One of the ward sisters said that this could be helpful, and better than nursing staff having to ‘chase the doctors up’ (Trio ward sister).
Nearing the end of our fieldwork, patients’ acuity levels – over and above their NEWS scores – were available on the ward management system. For example, the system showed whether patients needed ‘basic’ nursing care or more resources, for example end-of-life care. The sisters agreed that this had been a helpful development (Trio ward sister). The bed management team now took the acuity of the patient into account, and not just the NEWS:
You could have someone who is perfectly up and about but they could have a very high NEWS score . . . a NEWS score doesn’t mean to say they need a lot of nursing input . . . but when you’ve got the acuity it’s the [evidence of the need for] physical hands-on nursing.
Trio, ward sister
Furthermore, the ward sister in Ward F said that the ward would now struggle without the electronic system. She told us:
Information is out there with the doctors, with the site managers, with the hospital at night teams so they know where they need to focus on, which I think is really good, rather than the wards acting in isolation.
Trio, ward sister
The system did not replace relationships between ward staff and managers, and managers continued to visit wards. Towards the end of our fieldwork, the sister on Ward E told us that these visits could be very helpful:
. . . It’s nice sometimes just have the support that they can see that you really are stressed . . . there doesn’t seem to be anybody sort of coming down and slating you, it would be to come down and support you.
Trio, ward sister
Ward meetings and monthly reports
There were limited opportunities to observe ward meetings. Neither ward was holding ward meetings when we started in mid-2015 (observed November 2015, Trio ward sister). Towards the end of our fieldwork, however, each ward had scheduled meetings every 3–4 months. The agendas in the meetings included discussions about recent incidents and complaints, problems reported by staff, updates on staff training and messages from senior management (observed November 2015, Trio ward sister). Both ward sisters felt that the meetings were valuable vehicles for passing trust messages to ward staff.
The wards received monthly quality nursing dashboard reports. They were distributed by e-mail, printed out and displayed in the nurses’ room on each ward (Trio ward sister, Trio 44). The dashboard included a variety of indicators, such as quality of nursing documentation, incidents and timeliness of vital sign observations. The design did not change during the fieldwork. The sister on Ward E said that the dashboard:
Ensures that we are maintaining the standards, improving standards, seeing if there’s anywhere that we’ve gone wrong and looking at what we need to improve.
Trio, ward sister
It also reassured her when the data were consistently good, or prompted a reaction when there was a recurrent problem that had been highlighted by monthly trend data (Trio ward sister).
Information culture
Both ward sisters felt that Datix was valuable. When a staff member recorded an incident on Datix, a number of people were alerted, including the quality matron and risk facilitator. The ward sisters were then supported by the quality matron and risk facilitator to complete an evaluation of the incident. After their evaluation, both ward sisters said that they liked to give feedback personally to the staff member who had initially reported the incident, sometimes at handovers or ward meetings.
One of the ward sisters told us that there had been a change in reporting culture in 2016, which had stemmed from an incident on the ward that had not been escalated appropriately. The learning from this episode made staff more aware of the importance of reporting incidents (Trio ward sister). During a ward meeting, staff said that they wanted to record incidents or their concerns on Datix so that their voices were heard and managers in the trust were made aware of their problems (observed November 2015).
Although staff were happy to report incidents, one of the nurses on Ward E told us that the trust could be better at sharing learning from incidents on other wards:
We have raised that before that we’re not always aware of any serious untoward incidents that may have occurred elsewhere, cause it can be benefited trust-wide.
Trio, ward nurse
Solo
Fieldwork was undertaken in two wards during a 6-month period, from February 2016 to July 2016. The sister on Ward G was relatively new in post, whereas the sister in Ward H had been in post for several years. In September 2016, the proportion of planned registered nursing staffing filled during the day was 96–98%. Ward G was classed as an enhanced supervision ward. The results of the Friends and Family Test showed that 93–97% of patients would recommend the two wards’ services.
Electronic whiteboards and mobile devices
Solo had a real-time ward management system that supported users in managing nursing and medical work. One feature was electronic observations (e-observations). On both wards, the HCAs and nurses recorded observations on laptops. The system reminded staff when the next observations were due. If the observations were overdue, the system highlighted those patients in red. When no laptop was available, the HCAs wrote the observations on a piece of paper and then accessed a computer to input the data.
Nurses also recorded patient care plan data and assessments on the system, including nutrition assessments and Waterlow scores (pressure ulcer risk). The system alerted nurses when the assessments were due.
Junior doctors used the real-time ward management system for three main clinical functions. First, they monitored patients’ status on a daily basis. When doctors logged in to the system, they could look at patients’ NEWS scores and identify patients who were deteriorating. Second, junior doctors requested and viewed blood tests, scans and X-rays on the system. A junior doctor told us that the system was much more efficient than the previous setup, which involved calling the laboratory to find blood test results. Third, junior doctors used the system to obtain real-time data before ward rounds. A junior doctor doing a ward round could type in the name of the consultant and identify which patients needed to be visited.
Junior doctors told us that information was in a single place:
Other trusts where I’ve worked in, you’ve got one system for bloods, one system for scans, you’ve got one system for NEWS, one you’ll need one password for, you’ll need a different password for another, you’ll need a card for another and it’s just a pain.
Solo, junior doctor
Equally, users encountered some problems. Staff told us that the laptop batteries were old and did not hold their charge, and one ward sister said:
The batteries are absolutely rubbish so you can’t unplug the laptop from the wall and walk round the bay and do your observation because the battery doesn’t last that long.
Solo, ward sister
Another common problem was that the supply of IT equipment, in particular the number of computers and laptops, was insufficient.
Both wards had electronic whiteboards installed in the second half of 2015. The whiteboards displayed a list of patients on the ward and their details, including their bed number, their consultant’s name, NEWS, their care plan data (nutrition, pressure ulcer risk and falls risk), VTE status, any allergies, whether they were waiting for multidisciplinary team review or test results, and their discharge information. Each ward had two whiteboards, one near the nurses’ station and one in the staff room. During the 6-month observation period, the electronic whiteboard in Ward H’s staff room was not switched on, but staff already had access to the trust system via laptops.
Ward staff used the whiteboard as a ‘visual reminder’ of outstanding tasks for that shift. The ticks and crosses next to a patient’s name indicated whether or not the care plans and observations were up to date. For example, when there was a cross on the falls column, the nurse was reminded to reassess the patient. Nurses on Ward G said that they did not look at the whiteboard as much as they should because most of the data (i.e. NEWS, risk of fall, nutrition) were duplicated on their handover sheet. Other staff said that the whiteboards were not installed in an optimum place, especially in the staff room, as staff were in there only at lunchtime and during handover: ‘It was a waste of time putting it in there . . . I do not remember the last time it was even switched on let alone used’ (Solo ward nurse).
Nursing handovers
At the start of the observation period, handovers on both wards lasted around 45 minutes and were held in staff rooms. All the nurses starting their shift had a paper handover sheet, which included patient history (e.g. diabetes or depression), diet, patient assessment (e.g. falls risk), medication given, jobs that needed to be done (e.g. ultrasound, dressing change) and NEWS. There was also a verbal handover from the nurses from the night shift, for example if a patient had been experiencing pain overnight.
Broadly speaking, the management of handovers on Ward H remained the same during the observation period. Towards the end of the observation period, however, handover length on Ward G was reduced to 25 minutes on average. The nurses spent less time briefing colleagues. They were encouraged not to read out data, such as a patient’s previous history, that were already included in the handover sheet; instead, they communicated only information about the patient’s overnight progress and reminded staff about any outstanding actions.
After the handover, both wards had a 5-minute safety briefing. The ward sisters used the safety briefing to highlight relevant patient safety tasks to the staff. For example, if the e-observations had not been completed on time, the ward sister reminded staff to complete them. Similarly, any important information that the ward sister received from board level was passed on to staff. One of the ward sisters said that this was the place to discuss any concerns or incidents with staff. Both sisters told us that handovers were important because regular ward meetings were being cancelled as a result of operational pressures.
Monthly reports
The ward sisters received a monthly ward nursing dashboard report. It included patient experience data, incident data and information on whether or not observations were done on time. The sister on Ward G told us that he would review the dashboard and identify areas that staff needed to know about, and raise issues at handovers.
The ward sisters also used data to provide assurance to their senior managers. For example, the ward dashboard was used at directorate meetings and in one-to-ones with matrons to discuss data in relation to, among other topics, patient experience, complaints and serious incidents. During meetings, discussion topics included ward action plans and ward performance.
The sister on Ward H told us that she believed that the nursing dashboard was supposed to enable senior managers to identify wards that were at risk, and then provide support. However, she said that, because her ward always scored highly on its early warning trigger tool, the escalation process was not helpful:
I had to have a meeting with matron and somebody from corporate nursing to look at the figures and about what we are doing, we realise that a lot of it was beyond my control and it lies within the trust so it sort of gets washed to one side.
Solo, ward sister
Information culture
Solo wards used the web-based Datix system to report and escalate incidents and adverse events. Before the online system was in place, nurses completed written Datix forms. Staff were encouraged to report incidents, such as falls or medication errors, as soon as possible, while these were still fresh in their minds. Other adverse events, such as staff shortages, were also meant to be reported, but when staffing levels were poor and nurses were busy they sometimes postponed completing the forms and forgot to fill them in. There was little urgency to report staffing issues because nurses felt that it ‘does not make a difference’ to staffing levels. One nurse also said that staff were not aware that senior managers read Datix forms:
I’ve had an incident . . . with an aggressive patient that I sent a Datix up. I wasn’t really ever expecting anything back but . . . I did actually get an e-mail back . . . from the directorate manager . . . at least it meant somebody had read it!
Solo, ward nurse
This nurse was surprised to find out that senior managers looked at Datix forms, and said that if other nurses and HCAs were aware that their senior managers were reading the forms, they might be encouraged to report incidents.
Commentary
The four site accounts between them present a ‘direction of travel’ in the introduction of electronic whiteboards and mobile devices on wards. The account of experiences at Duo emphasises that the deployment of new technologies, although often anything but straightforward, can be achieved. Indeed, the systems at Solo and Trio, and to an extent at Duo, were successfully embedded in wards’ working practices.
The technologies have, in general, been viewed positively in relation to the paper and dry-erase technologies that they are replacing. We have found evidence that, in general, nursing staff are willing and able to engage with new technologies and value them. They are willing and able to capture data electronically and use them in the course of their work.
We were struck by the stability of working practices on the wards, notably in handovers and, where we were able to observe them, patient safety huddles. Across the trusts, similar data were used in handovers and huddles throughout the period of observation. Put another way, it appears that the new technologies did not disrupt these practices when deployment went smoothly. Conversely, when deployment went less well, notably on one ward at Solo, there were suggestions that there might be risks to patients’ care.
The site accounts suggest that the move to a paperless NHS – a current NHS England IT objective – is not a straightforward matter. It does not simply involve the substitution of paper with digital media. Rather, the move to digital media has been gradual. It has involved the incremental development of information infrastructures, with different components of the infrastructures often partially replacing paper.
Finally, the site accounts provide some evidence on the impact of technologies on the quality and safety of services. We found a range of views about the systems, with a number of strongly positive statements but also misgivings, the latter notably on the part of some junior doctors. Thus, ‘successfully embedded’ is not quite the same as ‘universal support’ for the new technologies.
Chapter 7 Mini-biography: data and information technology infrastructures
Key points
- The trusts devoted considerable resources to capturing data and preparing data sets for national and local agencies.
- They used data required by national agencies as the basis for internal reporting of quality and safety.
- Other types of data were increasingly used in reports; notably, ‘raw’ mortality data were used instead of national indicators.
- Informatics teams at trusts that had successfully deployed electronic whiteboards and mobile devices on wards were able to give detailed accounts of the processes from their perspectives.
- The IT developments on wards were part of much broader developments in trust IT infrastructures.
- There were distinct development paths for data and IT systems, and for real-time and management IT systems.
Introduction
This chapter focuses on the work of the information and informatics teams in the four trusts, and thus on the work of staff who rarely appear in policy documents or in accounts of developments in the applied health services research literature. Theirs are the ‘secret histories’ referred to in Chapter 1. As we explained in Chapters 3 and 5, a number of distinct development paths have come together to create information infrastructures in acute NHS trusts. In this chapter we explore the infrastructures in more detail.
Information teams and data warehouses describes the development of board reports and data sets for submission to external bodies. Development of board reports and data sets sets out the processes involved in managing the four types of quality and safety data that we listed in Chapter 2: incidents, complaints, the NHS Safety Thermometer and mortality data. The reporting arrangements were substantially determined by national bodies for reasons set out in Chapter 3 and, as a result, the experiences of the sites were broadly similar. Finally, we describe the development of the IT infrastructures used to support ‘live’ ward systems and ‘offline’ systems for managing data sets for board reports and national submissions.
Information teams and data warehouses
This section and the following one (Development of board reports and data sets) describe the work of trust information teams and the technologies that they used to generate data sets for board meetings and for submission to external agencies. We found that information teams were organised by function – indeed, effectively by data set – so that there were individuals or small teams responsible for mortality data, for generating reports from Datix and so on. They managed data sets once they had been captured in wards and departments. This involved three main activities. The first was to validate data, a task undertaken in collaboration with nursing and other staff who were responsible for recording them. The second was to use data to prepare graphs and tables for board and other meetings. The data were typically presented as trends, so that each new month’s data were added as they became available. The third was to transfer – sometimes re-enter – data from trusts into national systems. Information teams were, then, the ‘secret’ mechanisms by which data captured in wards and departments were transformed for use by boards and external agencies.
To undertake these activities, information teams relied on data warehouses: computer servers holding the data sets used for management purposes. Historically, data sets were held on separate servers and were often managed by their own dedicated teams. This arrangement made increasingly less sense in the period before the study, when trusts were expected to produce management reports each month using a wide range of data. Throughout the study period, subsets of data were transferred from ‘live’ systems into the warehouses:
So what changed? Some of it was perhaps the creation of a data warehouse. I think we have a really robust data warehouse where transactional data is fed in to the data warehouse every 15 minutes . . .
Solo, senior informatics lead
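The interviewee describes the feed only in outline. As a minimal sketch, assuming each transactional row carries a last-updated timestamp, an incremental 15-minute feed could look like the following; the table and column names are invented for illustration and bear no relation to Solo’s actual schema.

```python
# Sketch of an incremental feed from a 'live' system into a reporting
# warehouse. All table and column names are invented for illustration.
import sqlite3

def incremental_feed(live, warehouse):
    # High-water mark: the newest timestamp already held in the warehouse.
    (since,) = warehouse.execute(
        "SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM obs_fact"
    ).fetchone()
    rows = live.execute(
        "SELECT patient_id, ward, news_score, updated_at "
        "FROM observations WHERE updated_at > ?", (since,)
    ).fetchall()
    warehouse.executemany(
        "INSERT INTO obs_fact (patient_id, ward, news_score, updated_at) "
        "VALUES (?, ?, ?, ?)", rows
    )
    warehouse.commit()
    return len(rows)   # how many rows this 15-minute run moved

if __name__ == "__main__":
    live, wh = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    live.execute("CREATE TABLE observations (patient_id, ward, news_score, updated_at)")
    wh.execute("CREATE TABLE obs_fact (patient_id, ward, news_score, updated_at)")
    live.execute("INSERT INTO observations VALUES (1, 'E', 3, '2016-04-01T09:00')")
    print(incremental_feed(live, wh), "rows transferred")  # -> 1 rows transferred
```

Copying only rows changed since the previous run keeps each transfer small, which is presumably what makes so frequent a schedule workable.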
Development of board reports and data sets
We outlined the development of national data and IT infrastructures in Chapter 3. We noted that a number of national policies and official reports, including the second Francis report, advocated more effective board oversight of the quality and safety of services. All four trusts began to develop reports on the quality and safety of services about 4 years ago, between 2012 and 2013. Initially, monthly reports were modest, comprising key indicators set out in a few sides of A4 paper.
Developments were influenced by historical experiences. One of these concerned the tendency of clinical staff to be sceptical about the quality of data. Solo and Duo both stressed the importance, to them, of developing a ‘single source of truth’, authoritative statements about the performance of the trusts each month, presented in a consolidated set of graphs, charts and tables. At Solo:
I would say also, really important, was the data quality, because for quite a long time . . . you’d end up showing a clinician some data and they would end up saying that’s wrong . . . I can remember having a discussion with a gynaecologist and looking at her data and it said that she’d extracted a wisdom tooth.
Solo, senior informatics lead
Board reports, and the dashboards they included, increased substantially in length between 2013 and 2016; the trends in the contents of reports are discussed in Chapter 8 and details are provided in Appendix 7. Viewed from the perspective of the teams that prepared reports, there have been a number of influences on the development paths at the trusts. Several of our interviewees noted that early reports had presented data that were being collected for national submission, for example NHS Safety Thermometer data. Trio told us that it initially reported Patient At Risk scores, and then in 2013 replaced them with NEWS, when this was introduced as a standard national indicator. There had been experimentation with presentation of data along the way:
When we started [collating data], a few years ago now, we SPC’d everything [used a Statistical Process Control method], and it almost seemed like overkill, so we’ve tried to move back away from that . . . I think it’s just getting the balance.
Solo, informatics lead
Once trusts were producing longer reports, the sheer volume of data led some of our interviewees to reflect on their purposes. For example:
You had to plough through a report which was quite lengthy to find out what are the real issues, where now we, our prime aim is this is what we need to tell you about, these are areas where we feel vulnerable or are concerned about. So we pinpoint [issues for] the reader.
Quartet, non-executive director
More positively:
. . . they don’t read all 92 pages, and that is not the intention. The intention is to say actually I hope that 90 plus percent of what you might need to know – and might being the important word – is here. So . . . you then have got an easy place to access that information.
Solo, senior informatics lead
Another view was that reports should set out narratives that executive directors and non-executive directors (NEDs) could follow easily in meetings and use to highlight issues that needed to be addressed:
I think it’s about a story. So the board could receive a report about 18-week performance . . . saying this is how we’re performing for 18 weeks . . . it’s about typing of letters efficiently, about how the admin [administration] teams are booking things effectively. You know what I mean? The whole shebang in terms of how we deliver on that performance indicator.
Trio, senior nurse manager
We then walk through the business of that unit in terms of patient experience, so what have you shown in your audits of Friends and Family Test, your hygiene audits, hand washing, whatever that may be. Any complaints, what have we learned, what are the issues?
Quartet, NED
Interviewees also emphasised the growth, over the last 3 years, in the number of reports that they produced on a weekly or monthly basis. The details varied between sites, but a typical list included monthly board reports, monthly reports to directorates and reports for weekly performance meetings. There were also reports on specific topics, for example mortality (see Mortality).
Publication and external relationships
All of the trusts produced a Quality Account each year (mandated by NHS England). They also published board papers, and a range of other documents, for example in ‘open and honest’ sections on their web sites. The majority of the data that left trusts, however, were sent to NHS Digital and other bodies. Our interviewees stressed the number and diversity of submissions, and the extent of external scrutiny, from NHS England, CQC, NHS Improvement, CCGs and Commissioning Support Units. This created work for trust teams:
They [the Commissioning Support Unit] have a large team, whereas at the moment we’ve got three analysts doing all the SUS [Secondary Uses Service data sets], all the national returns, all the waiting lists, and then we get these new data sets put on us as well . . . they’ll challenge that, we’ll go into it and we’ll have to go back to them and give a reason as to why. I mean some of them . . . there’ll be multiple patient spells on the same day. But the problem might be that the other in-patient spell is at another trust. Which is fine, they can challenge it, but we can’t see their data.
Trio, information team
Similarly, they were required to submit almost identical data sets separately to NHS Digital (formerly HSCIC) and Public Health England:
Whenever we put that into an e-mail [to Public Health England] and say, ‘Can you not go to HSCIC . . .’ they say ‘No’. It has to be in a specific format which they need.
Trio, informatics lead
Trusts also sent data sets to CCGs to monitor local contractual agreements [Commissioning for Quality and Innovation (CQUIN) schemes], which involve payments for the achievement of agreed quality targets. Trusts cannot legally send personally identifiable patient data to CCGs. Thus, the reports contained aggregated data, with sensitive – personally identifying – information, such as details of very rare conditions, removed. More information on the data and relationships is set out in Chapter 10.
The situation today: pros and cons
Our evidence suggests that the arrangements for producing routine reports are well established in the four trusts. The advantages are captured in Box 1, which sets out an account of the annual cycle of indicator selection at Trio. It conveys the point that trusts are able to review, and change, the indicators that they use to manage the quality and safety of services. They are able to use a wider range of measures than those submitted to NHS Digital, and there was a widespread view that the management and use of data were improving. Thus, one interviewee observed that:
. . . we’re at the point that the data is meaningful, and it’s trusted, and I think that just allows you to go to a different level. So you could have the technical infrastructure and be able to do all the whizzy reporting that you wanted to do, but actually if your data and your information isn’t believed and owned then you can’t move on.
Solo, senior informatics lead
Box 1 Annual cycle of indicator selection at Trio
[There are] 20 or so indicators there that we as a trust pick each year as our priorities for that coming year . . . each year we’ll go through this discussion with the owners [of indicators] to say have we actually embedded this quality indicator, are you happy with the results you see each month in the quality report? And the feedback you get from your day to day activities, have we achieved this? If they say yes, then I’ll ask them for suggestions as to where do we need to go in terms of quality . . . one of our other priority projects in terms of mortality is looking at sepsis. So my suggestion to the chief nurse was, if we feel this is embedded, how about we move this to another mortality-related issue, i.e. the screening of sepsis and the appropriate action and antibiotics taken as a result.
. . . [another example is] Friends and Family. The feeling from the owners of that indicator . . . was that focusing on the response rate was fine, but it actually didn’t help in terms of understanding what the patient was telling us in terms of quality. So that patient was telling us X, Y and Z . . . the focus wasn’t really on what they were saying, it was on how many said it . . . So the suggestion from the team that oversee the Friends and Family Test for this coming year’s indicator, 16–17, was that we refocus that away from the response rate, to focus on how much that feedback was positive, and the feedback that wasn’t positive . . . we can then do a deep dive and start understanding what is it that we can do differently to impact on patient experience.
The indicators for the next financial year are then formally approved by the board quality group, which includes NEDs, and then by the trust governors and, finally, by the trust board.
Less positively, however, there was a feeling that hard-working staff were trapped in a poorly designed system:
Once everyone has met that target and proved that processes are done . . . you can move onto the next one, but what tends to happen is they just grow and grow and grow.
Duo, informatics lead
There will be a national working group that’s decided . . . that we must have a three side form of VTE requirement assessments done, and that will be signed off and become national policy . . . And everybody will say ‘It’s just a minute’ . . . but actually there’s 17 of those and 17 of those becomes 17 minutes.
Duo, informatics lead
There were also a number of comments about the time staff had to spend on verifying data and on producing reports. For example:
. . . out of the 20 days in a month which a person works . . . 18 of those days at the moment are about data verification . . . now that’s coming down, and gradually we’ve got to get that down to 3 or 4 days.
Duo, informatics lead
Incident reporting
We turn now to the first of four indicators that we monitored during the study. All four trusts used Datix software to record incidents and the actions taken following them, and had used it for different lengths of time. For example, Trio had used it since the early 2000s, while Quartet had used it since 2013. They had all moved from paper to PC-based recording of incidents:
We used to use a paper incident form for recording of incidents so everything was on paper . . . we made the decision, a number of years ago now, to scrap paper . . . papers were sat on desks for weeks and weeks and weeks.
Trio, senior nurse manager
Quartet had used another IT system before purchasing Datix:
Before that we had our first electronic incident reporting system called [brand name], and it was such a detriment to staff, it was slow, it was time consuming. Since we got rid of [brand name] and replaced it with Datix our incidents have gone up 50%, if not a little bit more . . . when we were on [brand name] people, staff, would phone us and say ‘it’s crashed and I’m not going to do it’ . . . I think that they thought it was going into a black hole, and nothing was done with the information.
Quartet, information team
Datix offers a range of modules. The trusts used different combinations of these, but all used the incidents and complaints modules. The members of information teams who used Datix were positive about it.
Reporting an incident
When an incident occurs, the person reporting the incident accesses Datix and fills in a form. It includes a series of questions, including the location and type of incident. The arrangements for alerting colleagues when an incident occurs were broadly similar across the four sites: Datix includes a facility to send e-mail alerts to specific members of staff, for example ward managers, matrons and risk management teams.
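Datix is proprietary software and we can describe only its observable behaviour. The sketch below illustrates the alert-routing idea in schematic form; the roles, addresses and matching rules are invented for illustration and are not Datix’s actual configuration.

```python
# Schematic sketch of incident alert routing: a new incident record triggers
# e-mail alerts to staff whose subscriptions match its location and severity.
# All names and rules below are invented; they are not Datix internals.
from dataclasses import dataclass

@dataclass
class Incident:
    location: str    # e.g. "Ward E"
    category: str    # e.g. "fall", "medication error"
    severity: str    # initial assessment: "low", "medium" or "high"

# Each subscription: (recipient, locations covered, severities of interest).
SUBSCRIPTIONS = [
    ("ward.manager@trust.example", {"Ward E"},           {"low", "medium", "high"}),
    ("matron@trust.example",       {"Ward E", "Ward F"}, {"medium", "high"}),
    ("risk.team@trust.example",    None,                 {"high"}),  # None = all wards
]

def alert_recipients(incident):
    return [address
            for address, locations, severities in SUBSCRIPTIONS
            if (locations is None or incident.location in locations)
            and incident.severity in severities]

print(alert_recipients(Incident("Ward E", "fall", "high")))
# -> all three addresses; a 'low' incident would alert the ward manager only
```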
Risk managers used their judgement to make an initial assessment of the seriousness of an incident, for example high, medium or low. Interviewees reported that they followed NHS guidance on definitions of incidents and serious incidents during the review process. Incidents were coded to reflect the initially assumed source of the incident, for example the implementation of care, appointment delays or the involvement of a medical device. This required training:
I do coding and grading training . . . when you tell the manager of that area . . . and you say to them . . . these are the types of incidents blah de blah and these are the codes and they go ‘ooh, really’. So that then triggers them, so then they relay that at the ward meeting, ‘you should be reporting this’ . . .
Trio, information team
More generally, teams took the view that they needed to maintain good relationships with clinicians:
. . . corporate functions have often seen, been seen . . . as ivory tower services . . . all they ever do is beat you over the head with a big stick . . . actually we need to be going out there and being more supportive, and it’s building the relationships and my staff being very visible and being seen out there . . .
Duo, informatics team
An incident judged to be minor will not generate an alert to a risk manager, but a more serious one will. In this mini-biography, we are primarily interested in the latter. Incidents that require management attention are referred to the relevant clinical specialists, for example tissue viability nurses for pressure ulcers, for further review.
Possible serious incidents are reported to senior managers, up to and including the chief nurse and medical director. Trust general managers and risk managers will then make a further judgement on whether an event is an incident or a serious incident. If it is the former, then the trust undertakes an investigation detailing what happened, why it happened and what actions were taken. The results of the local investigation are documented in Datix. If it is decided that the incident is serious, then it is reported to external organisations, including the relevant local CCG, initially by telephone or e-mail (an interviewee at Quartet noted that it was possible for CCG staff to access a trust’s Datix, but the CCG had requested that the trust copy and paste files into their management systems).
Investigations into serious incidents were typically completed within 60 days, as mandated by NHS England. Timelines were monitored by CCGs. Datix was used to record an audit trail during an investigation. Following a serious incident, a CCG would monitor the investigation and the trust would send its report, and proposed actions, once it was completed.
Lists of incidents were compiled into reports that were circulated to relevant managers on a daily or weekly basis (or both). Data were made available for weekly risk/incident review meetings, monthly board meetings and directorate reports. At Quartet, for example, the risk management team produced an ‘overview report’ and a ‘narrative report’ for directorates, both containing summary data about all the incidents reported in the previous month. The risk management team also sent data to the information team so that they could produce ‘insight reports’ for board meetings.
Finally, a member of the risk management team manually transferred (copied and pasted) the data from a trust’s Datix system to STEIS (Strategic Executive Information System, a national system which was managed by NHS England and is now managed by NHS Improvement).
Learning from incidents
The details of arrangements for reporting back to the team involved in an incident varied between trusts. In Duo and Quartet, once a case was closed, the person who reported the incident received an e-mail containing details of the investigation and an agreed action plan. Relevant managers also received feedback in the form of a report generated via Datix. At Solo and Duo, this function was added in December 2015, allowing the risk management teams to compile automatically a report describing the investigation and the action plan. Before then, the risk management teams had to e-mail ward managers to provide feedback.
When asked if they believed that they were still under-reporting, a respondent at Quartet said that she felt that they might be missing ‘a handful’ of incidents a year, against a total of several thousand recorded:
. . . I think that governance is quite high on our agenda anyway. And we have trigger lists in certain areas as well just to encourage reporting, so one area was underreporting . . . and CQC came in and did say that [a clinical service] was under-reporting, so we actually made a trigger list and said that . . . these kind of incidents, you should be reporting these, harm or not.
Quartet, information team
Interviewees believed that their trusts were committed to learning from incidents. For example:
We used to wait until a claim was settled and then do an action plan . . . It’s settled the consultant left and retired and is now living in Australia . . . so what we do now is that we learn from the very beginning.
Trio, quality lead
Complaints
Across the trusts, the arrangements for the initial handling of complaints did not change during the course of the study. Patients submitted comments, concerns, compliments and complaints, typically by letter, to the chief executive or another member of staff. All complaints were dealt with initially by a trust complaints team. The teams were typically co-located with, or part of, teams that also managed other patient-facing activities, for example volunteering and the administration of the Friends and Family Test.
Once the team received a complaint, it assessed its seriousness. Depending on the trust, complaints were rated as high or low risk, or as high, medium or low risk, to the trust; the scales differed but served the same purpose. The seriousness determined how a complaint was handled. Sometimes the view was taken that the issues raised could be dealt with straightforwardly, and a response was developed with the relevant clinical team. On other occasions, in contrast:
Where there are clinical issues that are raised where you would be worried about the implications, and really feel that we need to get on top of it as an organisation, and make sure additional harm isn’t going to be delivered to other patients, this kind of thing, it goes upstairs to a quality meeting. They then do an overview and they will advise [the directorate] in terms of does an investigation need to be undertaken, this needs to be an SUI [a Serious and Untoward Incident], this needs to be a level 2, can we make sure that duty of candour has been followed.
Duo, quality lead
Throughout this process, a complaints handler maintained a detailed record in Datix. A case file tracked progress to date, including any e-mail correspondence between the complaints team and the directorate, and the names of the people the complaints handler had spoken to. There had been a move away from paper:
It wasn’t that they weren’t computer literate. I think they just all liked the safety of a file but they had mounds of files everywhere.
Trio, informatics lead
If a complaint was deemed to be serious, the complainant would be phoned (at Trio) or would receive a letter acknowledging that the complaint had been received and giving details about the formal complaints process and the expected time scales. The complaint handler would liaise with senior nurse managers to identify a manager, often a matron, to manage the complaint. The manager would then identify an appropriate person, often another matron, to undertake the investigation. After completing the investigation, the investigator drafted a letter to the complainant. The letter was reviewed by a senior trust manager and the manager of the complaints team before it was sent.
Although the administrative arrangements have been stable, the perceived importance of complaints data changed over time:
Complaints is a big one for us . . . before we didn’t really measure how quickly the complaints were turned around and whether it was a formal response required or a telephone conversation . . . [now] we have the turnround time reported, and the themes . . . have we responded in the right way and if not why not, and the themes are they reducing or are they the same themes month on month, and what does that relate back to then? So that’s been a huge turnround for us in terms of the complaint reduction and how our teams are managing complaint responses.
Quartet, information team
One of the things [the chief nurse] was really keen to do right from the get go when she came into post was to improve our complaints responses, and it was all about trying to work on the defensive nature of some of the responses that we were delivering, and trying to change the culture around that. So a whole heap of work has gone on . . . to do things like develop very close relationships with the [directorates], so that we are seen as a helpful supportive function here, to enable them to develop a different style of writing. We do it through complaints master classes, through targeted training . . .
Duo, quality lead
Trust managers monitored the management of complaints closely. The complaints manager would meet regularly – often weekly – with senior managers, typically the chief nurse, the medical director and a claims manager. Trusts used internal performance measures, such as the number of complaints received, the time taken to resolve complaints and the number of complaints reopened after a ‘response’ letter had been sent. Complaints were always reported on in board papers. At Quartet:
We produce a quarterly learning from experience report, which is a narrative report which pulls the themes from [directorate monthly reports] and it’s about 16 pages but that goes to the commissioners, that goes to [the board] it goes to anywhere anybody wants to access it.
Quartet, information team
NHS Safety Thermometer
The details of data collection varied between the trusts but did not change during the course of the study. For any one ward, a matron or ward manager collected data for the Safety Thermometer survey on one morning in the second week of every month. Some trusts collected data from the case notes of 10 patients, following national guidelines. Others collected them from every patient on the ward, and hence across the whole trust, at the time of the survey. Some of the trusts recorded NEWS scores, or the responses to patient experience questions, or both, at the same time. In other trusts with electronic whiteboards on all wards, ‘live’ NEWS scores were available to all staff.
At Quartet, for example, the survey data were recorded on paper and then entered into a Microsoft Excel® (Microsoft Corporation, Redmond, WA, USA) spreadsheet by a member of the information team, who then uploaded the data to SharePoint (Microsoft Corporation, Redmond, WA, USA). A matron checked the spreadsheet for any mistakes or missing data. Reviewing the data month on month was also felt to support improvement:
We’ve seen a reduction in falls and that’s only because we see them month on month, and we’re questioning about what other actions do you need to put in place. We developed a falls group [with] our director of nursing to make sure it was an area we focussed on specifically. So again we’ve seen a reduction in falls and the way they are managed.
Quartet, information team
A staff member from the informatics team collated all the spreadsheets uploaded to SharePoint. If missing data were found, the spreadsheets were sent back to wards. Data from the spreadsheets were then manually transferred to the UNIFY 2 system (Department of Health and Social Care; NHS Digital), and submitted to NHS Digital. The design of the UNIFY 2 forms meant that data had to be typed in; they could not be copied and pasted. Reports were also sent to board committees and directorates. It was possible, with some effort, to monitor a trust’s performance against other trusts:
. . . we submit that to UNIFY. [A colleague] gets the information from websites, national websites where they publish . . . what has been submitted nationally and then [she] does reports which compare us, as a trust, how we are doing against other trusts for our safety thermometer.
Trio, information team
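The collation step described above turned on spotting gaps in wards’ returns before the data were keyed into UNIFY 2. A minimal sketch of such a missing-data check follows; the column names are invented, as we did not see the trusts’ actual survey templates.

```python
# Illustrative missing-data check on a collated Safety Thermometer return.
# Column names are invented; rows with gaps are listed so the spreadsheet
# can be sent back to the ward, as described above.
import csv

REQUIRED = ["ward", "survey_date", "pressure_ulcer", "fall", "cauti", "new_vte"]

def rows_with_gaps(path):
    incomplete = []
    with open(path, newline="") as f:
        for row_no, row in enumerate(csv.DictReader(f), start=2):  # row 1 = header
            if any(not (row.get(col) or "").strip() for col in REQUIRED):
                incomplete.append(row_no)
    return incomplete

# e.g. rows_with_gaps("safety_thermometer_april.csv") -> [5, 12] means rows
# 5 and 12 must be completed before the data are typed into UNIFY 2.
```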
Mortality
When a patient died, ward staff recorded details on the PAS or the trust electronic patient record system. Data on all episodes, including those that included a death, were then extracted to the trust’s data warehouse. More detailed information was recorded on a form, either on paper or on a tablet.
Once data were available to an information team, extensive checking was undertaken. Corrections were made, for example to rectify an ‘out of range’ number. Clinical coding was also checked, using Healthcare Resource Group grouper software. If episodes were identified as ‘ungrouped’, they were sent to the trust coding team for review and editing, and checked again the following day. Solo told us that it checked patient demographic data against demographic data held on the NHS Spine (NHS Digital) so that it was possible to assign patients to CCGs for contracting purposes. Recorded comorbidities were ‘pulled through’ from the operational system to the management system and all codes were sent to the discharging consultant to check. Solo had designed a system that presented likely comorbidities associated with the main diagnosis to coders and consultants (Solo, informatics lead).
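The checking described here – out-of-range corrections and the referral of ‘ungrouped’ episodes to the coding team – can be pictured as a set of simple validation rules applied to each episode. The sketch below is illustrative only; the field names, ranges and rules are our assumptions, not the trusts’ actual checks.

```python
# Illustrative validation rules for extracted episode data. Field names,
# ranges and rules are assumptions for the purpose of the sketch.

def check_episode(episode):
    problems = []
    if not 0 <= episode.get("age", -1) <= 120:
        problems.append("age out of range")
    if not episode.get("hrg_code"):          # grouper produced no HRG
        problems.append("ungrouped - refer to coding team")
    if episode.get("died") and not episode.get("date_of_death"):
        problems.append("death recorded without a date of death")
    return problems

print(check_episode({"age": 67, "hrg_code": "EB10Z", "died": False}))  # -> []
print(check_episode({"age": 999, "hrg_code": "", "died": True}))
# -> all three problems flagged, for review before the next day's re-check
```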
Mortality rates are a major national NHS indicator and are closely scrutinised by national bodies. All of the trusts told us that they devoted considerable efforts to ensuring that their data were correct and to interpreting them.
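For readers unfamiliar with the indicators discussed in the rest of this section: the HSMR and the SHMI are both standardised ratios of observed to expected deaths. In outline,

\[
\text{SHMI} \;=\; \frac{O}{E}, \qquad E \;=\; \sum_{i \in \text{admissions}} \hat{p}_i ,
\]

where \( \hat{p}_i \) is the modelled probability of death for admission \( i \) given its case mix (age, diagnosis grouping, comorbidity and so on). A value above 1 indicates more deaths than the case-mix model predicts. The two indicators differ mainly in coverage: the HSMR covers a ‘basket’ of diagnosis groups and in-hospital deaths, and is conventionally rescaled so that 100 is the expected level, whereas the SHMI covers all admissions and includes deaths within 30 days of discharge. This difference in coverage is one reason why, as Box 2 describes, the indicators can give different signals about the same trust.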
Box 2 sets out developments in the ways in which Trio reviews its mortality data. Some of the concerns expressed at Trio were confirmed at other trusts. For example:
I was in a meeting with some clinicians earlier this week where . . . we were talking about the national attribution of activity and if we use Dr Foster’s [data] what shows under each consultant . . . if a patient has been under three consultants nationally it can only show as having been under one . . . that may or may not have been the consultant who did the operation on them . . . clinicians might say, ‘well I do 100 of those a year! . . . and it’s showing me as doing 10!’ Or, ‘I don’t do any of those ever! I don’t know how to do them. And I’ve done five of them according to the data’.
Duo, informatics lead
The trust began to review mortality data in detail several years ago. Initially, it used the HSMR, produced by a private firm, Dr Foster. This provided reports for a ‘basket’ of conditions. The data had been difficult to interpret locally. The trust subsequently used the RAMI instead, also produced by a private firm, CHKS, on behalf of the trust. This used data on all deaths for all diagnostic codes, and was slightly more helpful.
When the SHMI was first published, the data shed a new light on the trust’s performance: it became clear that the trust had a high SHMI. The SHMI also proved to be difficult to interpret. A mortality review group was created, and found the following:
We found that it was quite hit and miss. One month it would be amber, then next month it would be green again. And so it was confusing . . . By the time we’d started pulling case notes, we had the latest QRP [Quality Risk Profile from CQC] that said we are in green again. So it was a very confused picture.
The trust had continued to use SHMI data. It purchased the HED software (Healthcare Evaluation Data, developed by University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK), which provided an estimate of the SHMI 2 months before data were published by (what is now) NHS Digital. The mortality review group produced detailed monthly reports, which grew to 50 pages in 2013, although these were scaled down to 20 pages of data per month by 2016:
It’s fair to say we understand SHMI very well now, we know exactly what contributes, we know exactly which comorbidities tick boxes, we know exactly what the risk ratings of certain things are and how that impacts negatively or positively on the SHMI.
There was, however, a problem with interpreting SHMI data that were still several months in arrears. It was not possible to link the index to the activity that it describes:
If you were thinking about a quality improvement process – which is what we’ve always wanted for mortality – if you can’t link SHMI to actual patients, to actual case notes, to actually get clinicians involved, to actually tell us what was going wrong . . . there was a big flaw in the process.
A further problem was that SHMI relied on diagnoses at admission rather than discharge:
We were focusing our clinical groups on the SHMI diagnosis groups, again, there was a flaw in the process because they were being asked to look at patients with cardiac conditions. When the cardiologists looked at it, they were not actually cardiac conditions at all, they were something else. So we were gearing up all our experts . . . we were getting all these intelligent people involved in looking at case notes that were not actually part of their daily [work] . . . it was a flaw in our process.
Every time we got a clinician to look at a set of notes that was supposed to be a death in the area, and they found it was not, it reinforced their own personal views that SHMI was a load of garbage, the coding was a load of garbage, and all that it did was put up a barrier between our clinical staff and the support staff.
Recently, Trio had decided to use its own ‘raw’ mortality data:
Firstly, crude mortality’s pretty much instant, so in the month of March we can access February’s deaths, so it’s very timely. Secondly we can focus on the episode of care we are interested in . . . so we wanted the cardiologists to look at cardiology patients, we wanted respiratory clinicians to look at respiratory patients. And then tell us what the issues are . . . we deliberately focused the crude mortality on the final diagnosis . . . we give them an Excel spreadsheet, which they can click on it any time in the month and it’ll give them their trend information, it’ll give them their . . . it’ll allow them to double click on information right down to session details.
We pure and simply looked for which was the 20% of the issue that was causing us 80% of the deaths. So within that we have got about six groups that we focus on predominantly when we focus on the crude data.
RAMI, risk-adjusted mortality index.
Mortality reporting
At Trio, the information team provided ‘real-time’ reports, presented on dashboards, in weekly reports to the mortality review group (chaired by the medical director). Duo also produced weekly reports. Quartet produced a monthly report which was sent to directorate managers and the head of quality and governance.
A large activity data set, including mortality data, was submitted to NHS Digital every month using the Secondary Uses Service (SUS). Trio told us that an in-house team had written several thousand lines of code to manage the submission of SUS data sets to NHS Digital. One advantage of having in-house coding – which each of the trusts did – was that trusts could modify their code as NHS Digital requirements changed: for example, the requirement from April 2016 to upload the new maternity, and children and young people, data sets, comprising some 500 new data items.
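One reason in-house submission code is comparatively easy to adapt is that the mapping from local warehouse fields to national data set items can be held as configuration rather than hard-wired logic, so new items extend a table rather than the extract code. The sketch below illustrates the idea; all field names, including the maternity item, are invented and do not reflect the actual SUS specification.

```python
# Illustrative sketch only: a configuration-driven mapping from local
# warehouse fields to national data set item names. All names are invented.
SUBMISSION_SPEC = {
    "episode_id": "local_episode_key",
    "admission_date": "adm_date",
    "discharge_date": "dis_date",
    # April 2016: hypothetical new maternity item, supported by extending
    # the mapping rather than rewriting the extract logic.
    "birth_weight_grams": "baby_weight",
}

def build_submission_row(local_record: dict) -> dict:
    """Rename local warehouse fields to national data set item names."""
    return {national: local_record.get(local)
            for national, local in SUBMISSION_SPEC.items()}

print(build_submission_row({"local_episode_key": "E123", "adm_date": "2016-04-01",
                            "dis_date": "2016-04-05", "baby_weight": 3400}))
```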
Ward whiteboards and mobiles: design and deployment
The four trusts were at different stages of development of their systems at the start of the fieldwork, as described in Chapter 5. Solo was able to provide information about developments that had taken place in the 2 years before the start of the fieldwork. Quartet deployed mobile technology in 2014, but this was used primarily to access an EHR system: it did not have electronic whiteboards and tablets during the study period. The text in this section is therefore weighted more to Duo and Trio than to Solo and Quartet.
Solo had long had its own development team:
We have our own in-house development team, so the application is developed in-house and therefore we have massive control over the way that we do data.
Solo, senior informatics lead
As a result, when Solo decided to develop real-time ward systems, it was able to undertake the work in-house. Trio had an in-house team with a background in pathology systems. At Trio in 2013:
Ours was a portal to start with, it was a front end that accessed a load of other systems, notably to start with pathology systems and diagnostic systems, but then we moved into the ward management areas, and clinical management areas.
Trio, informatics lead
As we took that into the web scenario, we went from a thick client to a web front end, we realised that there were a lot more things we could do, and that all the information we were gathering had a lot more uses.
Trio, informatics lead
At Duo, there was a legacy of a number of separate systems, including management and audit systems as well as data processing systems, and a pressing need to integrate the data held in them. Duo also had some in-house development expertise, supported by individual contractors. The need for mobile solutions was recognised, as clinical staff were mostly ‘on their feet’. It was also felt that too many data were being captured for ‘corporate’ rather than clinical purposes (Duo, IT05). There was also a plethora of forms:
We’re able now to articulate exactly what going paperless means, there’s 30, 40, 50, 60, 70 different forms . . . we’ve got a programme of work around that, that’s going to show the progress as we work through, it feels like we have a model for transformation.
Duo, informatics lead
Once the whiteboards and mobile devices were in use on some wards, informatics teams began to see their potential. At Trio:
It didn’t take long to realise that if we made a system that topographically represented a ward, and you actually put patients into bed, you would revolutionise the way the hospital could work. We didn’t know at the time that we were doing it, but we kind of accidentally did it, we did revolutionise the way the hospital worked, because suddenly, we knew where all the patients were, we knew which consultant they were under, we knew how many beds were free in each ward, and a whole variety of other bits of information . . .
Trio, informatics lead
At Duo:
We’ve always had the concept of a single patient and a multipatient view . . . Mrs Smith was on ward 96 that day, who else was on ward 96 that day? . . . that data was in PAS in a separate system that no clinician, no doctor, no nurse ever logged on to, it was just the secretaries . . . The second you create a common bed number and you say to the nurses, ‘if you just put 1 to 24 in, in that column’, and sort it by bed number, you’ve now got a list of patients sorted by bed number, that will happen automatically in the background for you.
Duo, informatics lead
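The ‘common bed number’ idea in this account is worth making concrete: once every ward records bed numbers in a shared column, an ordered multipatient view falls out of the data with a simple sort. A minimal sketch with invented records:

```python
# Illustrative sketch only: deriving a ward list ordered by bed number from
# records that share a common bed-number field. All data are invented.
patients = [
    {"name": "Patient C", "ward": "96", "bed": 3},
    {"name": "Patient A", "ward": "96", "bed": 1},
    {"name": "Patient B", "ward": "96", "bed": 2},
]

def ward_view(records, ward):
    """Return the patients on one ward, ordered by bed number."""
    return sorted((r for r in records if r["ward"] == ward), key=lambda r: r["bed"])

for row in ward_view(patients, "96"):
    print(row["bed"], row["name"])
```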
The sites also highlighted the range of responses staff had to new systems:
. . . we held up [tablets] and said, ‘if you all had one of these, would you be happy? Hands up who would use this’, and actually not many did.
Duo, informatics lead
The first ward was first generation, where we spotted all the flaws, to develop the version which would be more useful to people . . . it was staged, people knew it was coming, so if they were interested they could go to a ward that already had it. When it came they were trained, within a week they all hated it because it was electronic, but then after they actually engaged with it and used it. It was a novelty, so for about 2 weeks they really wanted it, really enjoyed it, and then after that it was the norm, it was what they used.
Trio, informatics lead
Broadly, younger nurses were enthusiastic about the systems and older nurses were more wary. Sites used a number of strategies to encourage use, not least making the systems as simple to use as possible and targeting tasks that were time-consuming and error-prone for nurses, for example by designing systems to calculate NEWS scores automatically. Thus:
All the staff could get involved with it, it would be live information all the time and like you say it wasn’t information that wouldn’t be required by any external agencies or whatever else, but it was pertinent to the trust, or to the ward individually.
Trio, informatics lead
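Automatic NEWS calculation is tractable because the score is a banded lookup over a handful of physiological observations. The sketch below follows the bands published in the original NEWS (2012) guidance; it is illustrative only, and the trusts’ actual implementations, some of which may now use the revised NEWS2 bands, are not documented here.

```python
# Illustrative sketch only: NEWS scoring against the published NEWS (2012)
# bands. Not a description of any trust's actual implementation.

def band(value, bands):
    """Return the points for the first (upper_limit, points) band that fits."""
    for upper, points in bands:
        if value <= upper:
            return points
    raise ValueError(f"no band for {value}")

def news_score(resp_rate, spo2, on_oxygen, temp_c, systolic_bp, heart_rate, avpu):
    score = band(resp_rate, [(8, 3), (11, 1), (20, 0), (24, 2), (float("inf"), 3)])
    score += band(spo2, [(91, 3), (93, 2), (95, 1), (float("inf"), 0)])
    score += 2 if on_oxygen else 0
    score += band(temp_c, [(35.0, 3), (36.0, 1), (38.0, 0), (39.0, 1), (float("inf"), 2)])
    score += band(systolic_bp, [(90, 3), (100, 2), (110, 1), (219, 0), (float("inf"), 3)])
    score += band(heart_rate, [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (float("inf"), 3)])
    score += 0 if avpu == "A" else 3  # V, P or U all score 3
    return score

# A patient with mildly deranged observations scores 2 + 1 + 0 + 0 + 1 + 1 + 0 = 5:
print(news_score(resp_rate=22, spo2=94, on_oxygen=False, temp_c=37.2,
                 systolic_bp=105, heart_rate=95, avpu="A"))
```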
Sites emphasised the importance of senior management and clinical support:
If he had . . . slagged us off, and said why is the trust spending money on our salaries and everything else, we would have sunk, but the fact that he did spend time with us made it work. It comes back to relationships.
Trio, informatics lead
More generally, informatics team support to clinical teams was viewed as crucial:
Normally with IT you ring up, get a ticket, and they get back to you in a few days and it gets fixed. So we tried to do that in a more timely way, if they rang us up with a problem [and] it was a bit more urgent than a screen not working, we would normally go straight out to them, rather than saying we’ll call you back in 3 days or whatever.
Trio, informatics lead
I think it’s great from a clinical user perspective, it’s brilliant that I as a clinical user can have a conversation with a developer directly, and we can have that kind of one-on-one conversation about what’s going to work for me, what’s going to be useable for me, what do I want out of the system and how I want it to work.
Duo, senior clinician
Development teams worked iteratively with clinical teams on the design of the systems:
I think once that mentality got out there, about the [white]boards, people would come to us saying, ‘could you do this?’, and we would give them a solution, they would take that back to their [management] group, they would agree on taking forward a development, and we would get a request from the board to start that development . . .
Trio, informatics lead
You have to make some quick decisions to change things on the basis of user feedback within a very fast timeframe, which would be challenging within a department where you didn’t have the very specific clinical involvement that we’ve got . . . we have a clinical team who are embedded within the informatics department and that’s been crucial to this happening.
Duo, quality lead
Value of the new systems
Interviewees offered a number of examples of the value of the systems:
People like the diabetic nurses who went between wards, they probably wouldn’t know where the patients were without a system like this. So those staff grew [used] to updating information on the screens for other staff . . . If you have been off for 2 weeks, say, as a speech therapist, you come back on a Monday morning, you haven’t got a clue where your patients are. But if your colleagues from the previous 2 weeks have been inputting information into the system, you just click on speech therapy, in progress, and a list of patients and their locations comes up.
Trio, informatics lead
. . . every ward every month does an audit, an infection control audit . . . we log each ward that does it, who does it, what their answers are . . . who scores well all the time, who doesn’t score well all the time and they can graph that.
Trio, senior nurse manager
. . . how long have they had C. diff [Clostridium difficile]? We can track back to this date, who that patient was next to. So we’ve done reports for them where we can just let them go through by half an hour or 10 minutes, 5 minutes, 1 minute, whatever you like for 3 days . . . It’s been used in the other way round as well, where clinical incidents have occurred on wards and one of the reasons given for things going wrong was that the ward was incredibly busy, well with this you can find out if the ward [really] was very busy.
Trio, informatics lead
What you can see is what beds you’ve got open, what bays you’ve got open, that tells me that there’s somebody in a side room who’s got an infection prevention because it’s got a [symbol].
Solo, informatics lead
Some of the value stemmed from developments that had not been anticipated:
. . . you discover teams building their own functionality out of components that you have not mentally put together at all. And a classic example of that was when we did go to meet the junior doctors to say, ‘right, we’d like to introduce [the ward system into] handover’, and they went, ‘what do you mean? We’ve been using it for months’. ‘Really? How are you doing it?’ ‘Oh, we found this column no one else was using and we worked out if you put stuff in that column you can then put that column in a special view, and it’s a handover list.’
Duo, informatics lead
Dual systems: pros and cons
The trusts were also at different stages of integrating their IT systems. From a user perspective, the trusts with electronic whiteboards had two systems, one for real-time management and the other for reporting. In 2016, they were at a point where they could see the advantages of more integrated systems, with screens available to clinical staff fed by the ‘live’ systems, including the ward systems:
. . . we are actually turning up with proper quality metrics around things, and that’s showing the investment that we are putting into the data warehouse . . . we are articulating quality and risk far better than [we were] before . . . we are trying to get to the point where we are providing insight and intelligence, not just data.
Duo, informatics lead
There were also frustrations:
So how can we get the data right at the point of entry . . . what happens at the moment is that we go through this hideous verification exercises where the data comes in, our team tracks it, says it’s wrong, goes back to the clinical teams and says can you fix your data. It comes back and it still isn’t right so it goes through this hideous cycle. The warehouse is now picking up on some of these issues.
Duo, informatics lead
. . . I think there is almost like an information overload now, the information that we have got is incredible now . . . but what you actually do with it is a different thing. You can relay the information, you can produce it on graphs or whatever, but most of the information is designed to be used in real time, it is meant to be ‘now’.
Trio, informatics lead
At Duo, which was piloting tablets on some of its wards, the real-time systems needed high-capacity infrastructure:
We underestimated the volume of transactions that would be going on. We, our technical team, had underestimated the volume of transactions because obviously [at our trust] you can have eight to 10 people per ward using it at the same time potentially, and the number of transactions going through it can be significant.
Duo, informatics lead
Looking ahead, interviewees believed that the new systems had great potential:
[An online management report shows that] 60% of your NEWS were done within routine time. Whilst that’s useful to have and useful for us to observe and understand if there’s areas of concern, what I feel that we should be doing more of is looking at the live information and saying actually . . . why has Ward [X] not done their assessments today? Because it’s at that time that you can change your practice to deal with patient safety.
Solo, informatics lead
In a perfect world we’d have statistical modelling going on in the background . . . and going, ‘guys, your SPC [Statistical Process Control] chart’s just wandering off, you haven’t broken your system yet, but you’re about to break the system’. And this place has a long-established track record of not doing that . . .
Duo, informatics lead
ADL [Activities of Daily Living] is not the . . . ideal way to do nursing so [our senior nursing team is developing a] modern way of nursing which is more diagnosis led . . . that in turn [means that you develop] your care plans differently.
Trio, senior nurse manager
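The statistical process control aspiration quoted above (Duo) can be made concrete with a small sketch: control limits derived from a baseline period, plus a run-length rule to catch drift before a limit is actually broken. The data, the 3-standard-deviation limits and the eight-point run rule are illustrative conventions, not a description of any trust’s system.

```python
# Illustrative sketch only: background SPC monitoring of a ward indicator.
# Limits are mean +/- 3 standard deviations from a baseline period; the
# eight-point run rule is one common choice among several. Data are invented.
from statistics import mean, stdev

def spc_alerts(baseline, recent, run_length=8):
    centre, sigma = mean(baseline), stdev(baseline)
    upper, lower = centre + 3 * sigma, centre - 3 * sigma
    alerts = []
    for i, x in enumerate(recent):
        if x > upper or x < lower:
            alerts.append(f"point {i}: {x} outside control limits ({lower:.1f}, {upper:.1f})")
    # A long run on one side of the centre line signals drift before a limit breaks.
    side = ["above" if x > centre else "below" for x in recent]
    for i in range(len(side) - run_length + 1):
        if len(set(side[i:i + run_length])) == 1:
            alerts.append(f"run of {run_length} points {side[i]} the centre line from point {i}")
            break
    return alerts

baseline = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13]  # e.g. weekly falls
recent = [14, 15, 15, 16, 15, 16, 17, 16]            # creeping upwards
print(spc_alerts(baseline, recent))
```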
Commentary
In this chapter we have traced the developments, over time, of three distinct technologies and their associated practices:
- data concerning the quality and safety of services, which are captured and managed on a number of different IT systems
- IT infrastructure for managing data for board and other meetings and for national data submissions
- IT infrastructure for real-time use in wards, in the form of electronic whiteboards and mobile devices.
We are struck by the fragmented nature of the systems used to capture and manage quality and safety data for national submissions. There is no obvious rationale for this, beyond national organisations’ requirements that trusts report data separately, using different national systems (SUS, STEIS, UNIFY). The fragmentation leads, among other things, to multiple recording: for example, a severe pressure ulcer will be recorded in the patient’s records, in the Safety Thermometer and as an incident in Datix.
We are also struck by the time costs to trust staff of collecting and managing data for external organisations. As we discuss in Chapter 10, it is not clear why so many data analytics staff are based outside trusts rather than in trust information teams.
More positively, the four trusts had data warehouses that linked the three technologies; most of the data we have discussed were stored in them, including a subset of data from the ward systems. The warehouses were the sources of the data for internal and external reporting. The trusts were beginning to use the data in the warehouses locally, most strikingly to review mortality data 2–3 weeks, rather than 4–5 months, after patients had died. The challenge ahead is to release staff to work on data analysis rather than on validation and submission to national bodies.
This mini-biography also reveals two networks of relationships. The first comprises the relationships between information teams and clinical staff: considerable effort goes into assuring the quality of the data reported internally and externally, and this requires ongoing liaison. The second comprises the relationships between informatics teams and clinical staff during the design and deployment of whiteboards and mobile devices. Both have been integral to the developments we have described.
Chapter 8 Board practices mini-biography
Key points
- This mini-biography explores change over time in board-level uses, and the value, of dashboards for oversight of the quality and safety of services.
- The overall direction of travel was towards more, and more informative, information systems.
- Dashboards were perceived to be valuable in providing effective oversight and, in particular, in proactively managing organisational risks.
- Conversely, there were also comments in some trusts about the volume of data that had to be reviewed.
- Trusts perceived themselves to be more open and transparent in 2016 than in 2013.
Introduction
This chapter presents a mini-biography of board-level practices across the four case study sites. It aims to provide insight into changes over time in the use of two main artefacts: data and dashboards. It focuses on the first three objectives set out in Chapter 1, concerning the design, use and value of information infrastructures.
The mini-biography draws on the following sets of data in the study sites: observations of the board and board-level quality committees; documentary evidence from dashboards, papers and minutes presented at these meetings (June 2015–October 2016) and selected indicators contained in the trust-level dashboards and reports, over the time period from April 2013 to July 2016; and interviews with chief nurses and NEDs from each site at two time points (July 2015, June 2016), and chief medical officers at one time point (July 2016) from two of the sites.
This is a long chapter and, in order to orient readers to the material that follows, we make some introductory remarks here. In the views of the people we observed and interviewed, all four sites were aiming to enhance oversight of the safety and quality of care, provide assurance on performance (externally and internally) and pursue openness, transparency and the legal duty of candour. They emphasised that they were promoting a cultural shift in reporting. This would be achieved through good governance, the pursuit of harm-free care reliably delivered and the management of risk. Two sites (Solo and Quartet) also emphasised enhancing internal and external organisational accountability, and Duo interviewees placed stress on enhancing organisational performance. The overarching theme in Trio was somewhat different, emphasising the promotion of harm-free care at the bedside.
Dashboard data content changes over time
Looking across the four sites, the overall direction of travel was towards the provision of more, and more detailed, data, produced by the information infrastructures described in Chapter 7. Two contrasting narratives are presented first, for Trio and Quartet, which, as we have seen, pursued distinctive technology development paths but had features in common in reporting to board-level meetings.
Trio
In 2013, reports coming to the board were limited, as they were:
Predominantly numbers focused, very number focused and, unless you really understood it, you didn’t know what it was telling you . . . There was so much hidden.
Trio, senior nurse manager
By 2015, the board received a detailed 60-page set of papers, many of which included dashboards and an accompanying narrative designed to aid understanding of the dashboard and data. These included a nursing dashboard that provided data about the performance of wards. It enabled the chief nurse to ‘look at [a] high level or . . . drill down ward by ward, indicator by indicator’ (Trio senior nurse manager).
Quartet
Historically, there had been ‘a lack of oversight’ (Quartet senior nurse manager) at the trust. A review of data quality in 2014 ‘prompted the change to the new look for this information report’ (Quartet NED). Much then changed in the information infrastructures, viewed from the perspective of the board. The result was a 100-page Integrated Performance Report (IPR), covering performance, quality and safety each month.
Insight into selected indicators and changes over time
Insight into selected indicators included in the dashboards and their change over time comes from two sources, namely (1) our analysis of a quarterly sample of meeting papers and reports presented to the board between April 2013 and April 2016 and (2) documents presented in meetings of the quality committees in each site, each with a brief related to quality, safety, performance and/or patient experience and a general remit of providing oversight and assurance. The committees commonly met monthly for 2 hours. Membership varied, with those in Solo, Trio and Quartet having NEDs as members, and sometimes as chair, whereas Duo’s comprised executive members of the board and other senior directors. They also varied in size, with those in Trio and Quartet being quite large.
Tables 13–16 in Appendix 7 present detailed evidence from board papers about the indicators that we focused on in this study, namely serious incidents, complaints, mortality, NHS Safety Thermometer, pain management, patient nutrition and vital signs/NEWS. Our observations can be summarised under the following headings:
Overall change over time
- In one site (Trio), no change over time was evident.
- In three sites (Solo, Duo and Quartet), the sample of board reports showed considerable change over time, primarily in content, modes of presentation and depth of commentary. Further insight into the form of these changes is summarised in Table 17 in Appendix 7 for the quality metrics dashboard, provided to the Duo committee every 2 months.
Timeliness
- Complaints reports have become available progressively earlier in three sites (Solo, Trio and Quartet).
- The timeliness of Safety Thermometer data varied by site and, in Trio, decreased over time. Across the three consistently reporting sites, the data tended to be broken down and reported at trust level, hospital level (Solo and Trio), and directorate and ward level (Duo).
Non-reported or variably reported data
- Pain management and nutrition data were the least commonly presented, appearing sporadically in some instances and not at all in others. When these indicators were presented, trust and hospital compliance figures, or pursuit of, for example, the nutrition care pathway, were shown.
- Serious incidents were not routinely reported in Trio and were not presented in Duo until April 2014. In Solo and Quartet, they were reported and, over time, the gap between the reporting period and the time when reports became available decreased.
- Mortality data were not presented in Solo, but a report from the mortality review group was provided in board papers.
- Vital signs/NEWS data were not presented in one site (Quartet) and not presented until April 2015 in Solo.
- Safety Thermometer data were presented in Quartet only from July 2014 to July 2015.
Detail and clarifying narrative
- Complaints data, in all sites, were accompanied by qualitative information and, in two sites (Solo and Trio), by an action plan.
- Mortality data, particularly in Trio, were accompanied by informative qualitative commentaries.
- Vital signs/NEWS data were benchmarked in Duo against the trust’s internal threshold.
Our meeting observations showed continuing concerns over inconsistencies in particular data, over whether trust data corresponded with other survey findings, and over the need for changes in the way some data were presented to avoid the danger of misinterpreting the figures. Table 5 provides insight into one site, Trio, where the least change in the data presented occurred over our observation period.
Feature | Comments |
---|---|
Inconsistencies in data | This arose in relation to the mortality report. The committee queried why different rates presented a different picture (e.g. the crude mortality rate vs. SHMI/HSMR). It discussed whether or not part of the reason could be traced to different time periods covered by the data sources (crude mortality rate being 2 months in arrears vs. SHMI being 10 months in arrears) |
External surveys correspondence | NEDs expressing disquiet that trust data did not always correspond with external surveys on the same topic; for example, in an external survey on food, the majority of patients marked the food as ‘poor’, whereas the trust’s own survey indicated that patients were happy with the food and that its quality had improved |
Presentation of change needed to avoid misinterpretation | The committee felt that the mini-scale trend graphs for mortality did not make sense and could be misleading, and noted the need for changes in the way the ‘executive summary’ trends were presented |
The key points arising from our observations of quality committee meetings showed a variety of continuing concerns in their discussions. These ranged from issues of timeliness and accuracy, the data presented and seeing data ‘in context’ (both national and health-care practice) (Quartet), to the length of dashboard information packs, reports on particular indicators and data duplication (Solo). Table 6 provides some illustrations.
Feature | Comments |
---|---|
Timeliness and accuracy | (Solo) NEDs questioning why they only read about serious incidents that happened last year. Such outdated figures were unhelpful for assurance. The medical director commented that the trust always acted promptly on serious incidents, but the report takes more time to prepare. (Quartet) ‘A constant battle’ to ensure that data were received in as timely a manner as possible so that members had sufficient time to read the report; members’ receipt of weekly reports/e-mails described as timelier than the dashboard. Concerns over increase in accident and emergency numbers, ‘but these figures don’t show this as they are out of date’ |
Dashboard information pack | (Solo) NED members expressing the view that the information pack, although more advanced over the last 12 months, took many hours to digest. It contained ‘far too much data’; it was difficult to work out how well the trust was performing and/or identify and focus on quality/safety priorities. In contrast, NEDs highly appreciated the nursing dashboard; it gave a succinct overview of what they needed |
Data presentation | (Quartet) Monthly figures only giving a snapshot, not a ‘full picture’; the need for dashboards to show more data points, over a longer period of time, to help see if something is ‘truly noise or just a trend’; and the need for statistical process control charts (their provision is now being planned) alongside the interventions taken |
Data in context and the need for benchmarking and measurable targets | (Quartet) The need to put trust data in the national context; importance of an accompanying data narrative to avoid misunderstandings, especially for NEDs who were less likely to have a clinical background; and RAG ratings potentially being misleading. (Solo) NEDs continually asking for clinical benchmarking data (e.g. medical report). Some targets and measures were challenged by group members (e.g. a NED queried the ‘what we will do?’ statements and asked how ‘improvement’ would be measured) |
Fitness for purpose of dashboards for governance change over time
Oversight
Across the four study sites, informants pointed to enhanced oversight of quality, safety and performance, in a more integrated manner, compared with 2013–14. The dashboard was providing more detailed and fuller insight. Enhanced oversight was described as having been supported by developments in the information infrastructures. In two sites (Solo and Quartet), informants explicitly depicted their trust as ‘on a journey’, with the Solo informant stating ‘. . . and we started at a relatively low level of information’ (Solo NED). In Duo, informants explicitly drew attention to the dashboards and dashboard-derived data used in reports as enhancing insight into ward performance metrics (also evident in Solo and Trio) and as providing greater insight into incidents on wards and preventative action plans. In Trio, which had a highly developed information infrastructure, enhanced oversight was explicitly linked to gains in assurance and the pursuit of harm-free care. Across sites, NED informants pointed to their increasing challenge of the data, reports and executive board members.
To provide further insight into informants’ perceptions of enhanced oversight now compared with 2013–14, two contrasting portrayals are presented, for Trio (because of its explicit focus on ‘harm-free’ care) and Quartet (because of informants’ perceptions of the length of the journey the trust needed to embark on to ensure corporate governance).
Trio
Interviews demonstrated a common view of the value of the information provided by the dashboards to the board in supporting oversight of the trust’s performance, quality and risk-focused strategies. The board focused on risk to patients:
Taking the risk route initially allows us to understand very clearly where is the impact and therefore where is your positive outcomes . . .
Trio, senior nurse manager
In addition, a highly valued nursing dashboard was in place. This also contained risk icons, which helped to:
Join the dots for people (on the wards) . . . putting a flag in the system . . . (and) then trigger somebody to do something.
Trio, senior nurse manager
At the same time, the dashboards were seen as aides, ‘a tool that should support us to do the right thing’ (Trio senior nurse manager).
Informants indicated that it was not just the volume of data that was assisting board oversight but also:
Through the technology that we have, at ward level . . . through [the] IT system.
Trio, senior nurse manager
The board was now able to explore questions, such as:
Have we got the right workforce with the right skills in the right place to deliver . . . ? And therefore ensure that patients are safe?
Trio, senior nurse manager
Moreover, it was argued that the systems enabled staff to respond to problems as they arose:
. . . because we’ve got access to that information, to be able to detect, for example, a deteriorating position in a ward . . . much more speedily . . . we can respond and put measures in place to recover that position.
Trio, senior nurse manager
A further factor enhancing board oversight was the role played by the NEDs. A NED informant pointed to their being:
Very proactive . . . asking for new things to be looked at . . . [acting as] catalysts . . . there to challenge . . . stimulate . . . and just ask the daft question.
Trio, NED
This was a contrast to the past, when ‘nothing was properly discussed [by the] board’, and issues were being brought to the board by the executives, ‘. . . and you as non-execs were [only] allowed to ask the odd question’ (Trio NED).
The dashboard information, supported by the IT system, was thus ‘enabling the strategic conversations at the board to be a lot stronger and more robust . . .’ (Trio senior nurse manager).
However, whether or not there were sufficient data for oversight was more nuanced:
It is impossible to be sure . . . because you don’t know what you don’t know.
Trio, NED
Moreover, it was important to look beyond the numbers to:
Not tak[e] statistics as being the end of something . . . We learned the lesson . . . that implementation is more important actually than the [IT and dashboard] facility itself. Indeed, without [the] IT system, we could say, we’ll go back to nursing process and . . . handwritten care plans . . . [The] IT system will . . . help us to make sure that those actions, those nursing interventions, are also evidence based.
Trio, senior nurse manager
Quartet
In comparison with 2 or 3 years ago, the dashboard-derived data, included in the lengthy 300- to 400-page set of papers covering performance, quality and safety, are now seen by the board. Aiding enhanced governance were the presentations made by directorates to the governors. This was in contrast to the past, when the information provided to the board was depicted as problematic:
It was very difficult to get a sense of connection with what was really happening on the wards.
Quartet, NED
A spur for change was:
Lack of information but also a genuine desire to really find out what the world (in the trust, wards, etc.) was really like.
Quartet, NED
At the same time, the board was currently querying, ‘were we collecting the data in the right way?’ (Quartet NED) and raising concerns over data timeliness (e.g. using indicators from 2–3 months earlier) and the need for ‘more information nearer real-time’ (Quartet NED), as much was still recorded on paper.
The NEDs’ role in the board had also supported the pursuit of enhanced governance. They pushed through a review of clinical audit in order to identify areas of concern and potential risk. A corporate risk register has also been established. Both developments, in principle, provide a database to strengthen board oversight. In addition, a NED informant pointed to their ‘ask[ing] some reflective questions’ (Quartet NED). At the same time, NEDs commented on the difficulties faced in digesting the lengthy set of papers, expressing doubt over whether or not members had the time to read ‘every single page’, forgetting what had been read at the start, and noting the danger of ‘too much information’ (Quartet senior nurse manager). However, overall, the dashboard-derived data provided to the board were perceived:
In terms of giving me assurance . . . giving me the language to ask a question. It does its job.
Quartet, NED
Illustrative extracts are provided in Table 7 for the other two sites, Solo and Duo.
Solo | Duo |
---|---|
As a trust now compared to where we were pre-Francis, we are in a much stronger position in terms of the quality and quantity of the information that we get. And you can always ask for more (Solo NED). We’re at the point . . . that the data is meaningful, and it’s trusted . . . That just allows you to go to a different level [of analysis and interpretation] (Solo senior informatics lead) |
The current dashboards and associated quality metrics are seen as providing the board and senior managers with ‘a broader footprint’ (Duo senior nurse manager) and integrated picture of performance, quality, safety and finance Before, in essence, the board did not know what they did not know: ‘we didn’t report serious incidents openly as much as we do now’ (Duo senior nurse manager). Neither was the board sighted on ward performance: now, ‘the board literally . . . see ward by ward performance metrics’ (Duo senior clinical manager) |
Further enabling oversight was the creation of a risk register and follow-up of lessons learned (e.g. from serious incidents) | Subsets of the dashboards are seen and discussed at other levels in the organisation, and generally by a ‘more diverse group’ (Duo senior nurse manager); ‘more people round the table tend to generate most discussion’ (Duo senior nurse manager) |
The NEDs play an active role. They get together prior to board meetings to identify issues to follow up at meetings. ‘Part of gaining assurance is, you give a good prod and you just see what the reaction is’ (Solo NED) | A NED commented, ‘there’s never enough [information], there are always other things that you would want to know about’, while cautioning against ‘piling in more routine report requirements’ and a danger of ‘people feeding the beast’ (Duo NED). ‘It’s not . . . about having more information, it’s [having the right information]’ (Duo senior nurse manager) |
Transparency, openness and legal duty of candour
Some differences were evident between the sites in relation to openness and transparency. In two sites (Trio and Solo), informants suggested that their trust was open and transparent in 2013 and, in Trio, ‘much more so now’ (Trio senior nurse manager), commenting:
We have pushed and welcomed and actively sought out scrutiny in terms of welcoming overview and scrutiny visits, CCG visits, peer reviews from other sites, Healthwatch ‘Enter and View’ visits.
Trio, senior nurse manager
Informants in both of the other sites (Duo and Quartet) also spoke of now being more open when things go wrong.
Further insight is provided in Table 8 into these two contrasting site pairs.
Solo | Quartet |
---|---|
I suppose, we’ve maybe become more transparent only because our level of sophistication at providing the data to prove that we’re being transparent is better (Solo informatics lead). We’ve always had a culture which is . . . tell us [be]cause then we can sort it out and we can learn from it . . . It’s better that we know than you hide it (Solo informatics lead). However, a NED informant raised a concern with the ‘defensive culture’ (Solo NED) of some senior managers: ‘the word bullying has come up from time to time’ (Solo NED) | ‘We are getting better’ (Quartet NED) at getting safety-related messages going up and down the organisation, ‘but we are still not good enough’ (Quartet NED). ‘We have been on a journey . . . Probably [the] pace of journey towards transparency and openness has really sort of sped up over the last 6 months’ (Quartet NED) |
Trio | Duo |
As a board, we’re really clear, open, transparent, we don’t cheat, we don’t manipulate, we deal with the consequence if there’s an issue (Trio senior nurse manager). The conversations that I’m having with the heads of nursing and with the matrons . . . on a weekly basis . . . [Their content is] about what staff are telling them and what they’ve raised. So, I can see the shift in openness and being transparent (Trio senior nurse manager). However: ‘some groups in the organisation . . . feel it is incredibly threatening to have this level of challenge’ (Trio senior nurse manager). While ward staff feel more ‘able to be open and honest and raise issues . . . I think we’ve still got a bit [more] work to do’ (Trio senior nurse manager) | A ‘gradual [cultural] change in confidence to speak out’ (Duo senior nurse manager). ‘A much better shift to a safety culture . . . [which] has to come from the leadership of the organisation’ (Duo senior clinical manager) |
A variety of ways were pursued to enhance openness and transparency. For example, in Quartet, informants pointed to change being driven by the board, helping to set the culture for the whole organisation. Initiatives included process changes relating to the reporting of incidents and responsiveness to complaints; development of weekly safety bulletins for staff, documenting what happened and learning from it; provision of feedback from the board in a written form, cascaded to and from line managers; a monthly open-forum meeting with the chief executive, encouraging staff to raise any issues they wanted; and a ‘Listening and Action’ initiative. Similar initiatives have been developed in the other sites. In Duo, there is a ‘Speak Out’ campaign, appointment of whistleblowing leads and a ‘Speak to Matron’/’Speak to Sister’ initiative, each aimed at widening the routes for staff to ‘let us know’ (Duo senior nurse manager) about quality and safety concerns. In Trio, an ‘open and honest’ driving improvement project has been operationalised, safety huddles have been adopted on many wards and a weekly staff newsletter has produced accounts of learning from incidents or good practices on wards.
Greater similarity was evident across the sites in relation to the duty of candour. All sites pointed to the development of supportive processes and systems to enable the duty of candour, for example:
We’ve got a system in place . . . wherever possible try to have those . . . open and honest conversations with patients and/or their carer.
Quartet, senior nurse manager
Illustrative extracts are presented in Table 9.
Site | Quote |
---|---|
Solo | For example, serious incidents: appointing one senior member of staff as chief inspector and rolling out a programme of training, inter alia in root cause analysis, for all internal inspectors. However, cultural change was patchy (Solo senior nurse manager) |
Duo | Now ‘we are actually standing behind . . . the duty of candour’ (Duo senior nurse manager) in reporting complaints and responses, in the way they are written, each being read by the chief nurse and chief medical officer before dispatch. ‘[If] something happened, we are being open about it’, both publicly and to the board. ‘Even just this last year . . . [there has been] a “step-up” in what we do as an organisation’, with dashboard data ‘guiding people to look at the data differently and ask the right questions’ (Duo senior nurse manager) |
Trio | ‘[We are] much [more candid now], [it is] a different world’ (Trio NED). ‘Everybody tells each other the truth . . . and people are prepared to say I don’t know . . .’ For example, complaints: ‘respond[ing] straight away . . . leads to no formal complaint’. More work was needed, particularly with the doctors, who were noted as ‘sometimes [being] very, very slow . . . to respond to formal complaints’ (Trio senior nurse manager) |
Quartet | A cultural change in reporting and complaint responses, although further work is needed for responses to be ‘less defensive, I suppose . . . a different style of writing’ (Quartet senior nurse manager). ‘I feel that we deal with all incidents that get raised, particularly bad ones as candidly as we possibly can. So if service is bad we will do something about that’ (Quartet NED) |
Practices: use and perceived value of dashboard data changes over time
Use of supplementary data: information in context
Across all sites, informants pointed to board members accessing other information and adopting approaches to see the dashboard data ‘in context’. Foremost among these were ward safety walkarounds. Informants pointed to a scheduled programme of safety walkarounds and associated initiatives to extend senior staff visibility at ward level. Examples included a ‘Listening to NEDs’ programme (Quartet) and an initiative at Trio asking all senior managers, including clinicians, to help on the ward at lunchtimes, providing them with additional insight into patient experiences and ward care delivery. Other examples of supplementary information were evident in our observations of the quality committees and are presented in Table 10.
Site | Quote |
---|---|
Solo | Insights obtained by NEDs from other members (e.g. chief nurse and medical officer), giving a sense of ‘what was going on’ at ward level. A NED who, outside the board, explored benchmarking data or reports from NHS England (Solo, NED) |
Duo | Members’, or other committee members’, own knowledge of the ward/directorate/topic under focus; other information highlighted in committee papers; and external presentations to the group |
Trio | The ‘patient story’, presented largely in the patient’s own words, read out at each meeting; and external surveys from other organisations and ward safety walkarounds |
Quartet | Presentations from directorate ward managers focused on quality and safety, staffing, performance and future challenges; and ward walkarounds undertaken within the ‘Listening to NEDs’ programme |
Non-executive directors emphasised the value of their trust/ward walkarounds and the ways in which these enhanced their understanding of the meaning of the dashboard data:
. . . You will go round the wards and see charts [on ward, on ward boards].
Solo, NED
When I do walkaround[s] and I chat to staff on the wards, you know they’re aware of the [safety] publication and all the relevant issues that it raises.
Duo, NED
NEDs [are] looking and watching them [the nurses] use the various tools for quality . . . such as reminders around investigations . . . around observations, flagging out those patients most at risk, keeping a track of who’s under who.
Trio, NED
We used [these] to get out on the wards, see what was happening, find out what the big issues were for people there.
Quartet, NED
You triangulate what you are receiving with . . . what people actually say and talk about.
Quartet, NED
As a NED informant in Trio observed, ‘NEDs are useful . . . because we can talk to anybody’ (Trio-6). Indeed, a NED in Quartet remarked:
I set more stall by the face-to-face stuff than I do by the data because I think . . . you can manipulate data.
Quartet, NED
The value of the walkarounds centred on seeing the IT systems and dashboard information used in the context of health-care practice on the ward, and on the opportunities provided to talk to nurses and patients. In addition, a NED informant in Duo commented that the walkarounds provided evidence to reinforce his perception that the trust was:
Getting smarter at doing that sort of thing [disseminating of messages on lessons learned from incidents].
Duo, NED
Perceptions of the use and value of the dashboard information
Looking across the four sites, within the context of our 2-year observation of meetings of the quality committees, there was evidence of the ongoing use of, and value placed on, the dashboards and the dashboard-derived data in the reports the committees received. This was despite the concerns that committee members had about the data, outlined earlier in this chapter. Moreover, informants and observations indicated the value of dashboard data, and the data warehouse, in providing an audit trail and enabling trends over time to be identified.
Whether or not their use was increasing over time was, however, difficult to establish. In part, this was due to the limited observation period and, in part, to our interview strategy. Rather than interviewing a sample of committee members, our approach involved asking selected board members (chief nurses, chief medical officers and a NED) to reflect on changes in the fitness for purpose of the dashboards and dashboard-derived data in achieving the board’s key remit of oversight, openness, transparency and the duty of candour. Some additional examples of the use that members of the various quality committees made of the dashboards and dashboard-derived data are summarised in Table 11.
Site | Quote |
---|---|
Solo | (Quality committee) The dashboards were referred to throughout and framed the committee’s discussion; use of the added ‘summary page’ to focus the topics of meetings; NEDs using this to spot key issues for further exploration; use demonstrating recognition of the importance of seeing things in context; and NEDs strongly valuing the nursing dashboard, as it provided a succinct overview of what they needed to see. (Board) The dashboard report providing clues for further exploration, ‘a clue that . . . there’s something not quite right . . . What you need to do is go and understand . . . [and] end up looking at it properly’ or (if indications of a deterioration) ‘monitoring it’; and using the data warehouse to audit (Solo senior informatics lead) |
Duo | (Quality committee) Using the ward nurse dashboard to point to areas of action taken, and action to be taken, with a view to ensuring that these were disseminated throughout the trust; and show change over time in ward performance |
Trio | (Quality committee) Use of dashboard data to guide further work, e.g. detailed analysis of the trust’s high SHMI; use of the dashboard data and executive summaries to assist discussions, e.g. to highlight wards RAG-rated ‘red’, enabling NEDs to seek assurance that action plans were in place. (Board) Members reflecting on how to make the best use of the ‘wealth of the information [that is] in there . . . What are key questions that we want to ask to then exploit the wealth of data?’ (Trio senior nurse manager). However, a NED’s concern that ‘[while] we’re getting more and more detailed reports . . . we were doing less and less with them’ (Trio NED). (Nursing dashboard) Used to develop preventive practices and gain early insight into possible risks (Trio senior nurse manager); used ‘. . . to map skills and competencies on to the roster so you can see the template of skill mix available to you’ (Trio senior nurse manager); and used for audit trail (Trio senior nurse manager) |
Quartet | (Quality committee) Assisting discussion and enhancing quality and safety governance. Executive summary used to spot problems and structure the meeting, beginning with red RAG-rated areas; NEDs using the data to identify the need for more action plans: ‘stop analysing and start some action’ (Quartet NED). (Board) NEDs using the dashboard-derived data to ‘[get] the broad view . . . then we’d burrow down into the areas that are still on matters of concern’ (Quartet NED). However, a query about whether ‘reliable interpretation [application] of a process’ was in place and being ‘assured that everything has been done to assure that reliability’ (Quartet NED). Use for audit trail (Quartet senior nurse manager) |
Key challenges and work for the future
Informants in Solo and Trio drew explicit attention to factors that they perceived had supported the changes in IT developments and the use of the dashboards and data. In essence, these centred on staff enthusiasm and clinician engagement. For example, in relation to the electronic systems, this led, as a Solo informant stated, to ensuring that the IT team ‘understand the processes’ (Solo senior informatics lead) involved in health-care delivery. In Trio, this included enthusiasm from all staff for the IT system and its aim of supporting care at the bedside. Moreover, ‘we want to keep the engagement and the enthusiasm at the coal face’ (Trio senior nurse manager). Another example relates to the firm engagement of staff with initiatives to encourage them to raise issues and to encourage complaints reporting and responsiveness to problems, although there was a recognised need ‘to get the clinicians on side [in the same way] as we have got the nurses on side’ (Trio NED). Further work was still needed to encourage staff to raise their concerns internally, rather than first externally or on social media. As is evident in the subsection above (Transparency, openness and legal duty of candour), changes in raising issues and in complaints reporting and responsiveness were also of central importance in the other sites, but in the interviews these were not directly identified as enabling factors by the informants.
Across sites, a number of key challenges were outlined by informants (Table 11). There were some similarities. Solo, Trio and Quartet identified a need for more work with particular staff groups, in particular medical staff, to gain their engagement with dashboard metrics. Solo and Quartet raised the need to reconsider the amount of dashboard data and, in Quartet, to move towards ‘more real-time data’ (Quartet 5). Informants from two sites (Duo and Trio) pointed to a need to enhance ways of facilitating feedback from the board to wards and to address a perceived middle-management blockage. Currently, a board member commented, the chief medical officer and chief nurse:
Have a tendency which annoys some people, to try and sort of avert the layers and just go straight to the bottom with things which have come out of the board as being a problem.
Trio, NED
A similar issue was pointed to by a NED informant in Quartet: a need to gain the engagement of directorates with the remit of the quality committee. More generally, a NED in Quartet raised the wider issue of a need to embed ‘excitement’ with a quality and safety culture in the trust.
Some similarities across sites were evident for future work. Informants in Solo, Duo and Trio pointed to the need to better exploit the potential of the data and dashboards, and to address the volume of data presented to the board. Duo and Quartet identified a need for further work to engage ward staff with whiteboards and mobile devices and, in Duo and Trio, the need for more work to ensure feedback from the board to wards. More generally, an aspiration in Solo, expressed slightly differently in Trio, was to be able to use the data warehouse and data for modelling and thus for preventative and proactive planning.
Review of findings for Chapters 6–8
The mini-biography in this chapter set out to provide insight into changes over time in the board-level practices across the study sites. The heart of the study concerns the generation and use of data ‘from ward to board’, and we are now in a position to review the main flows of data within the trusts. Key findings are listed by research question.
Research question 1: design
Over the last 3–4 years, across the four sites, the overall direction of travel has been towards the provision of better information infrastructures in the form of more, and more informative, validated data. These data are now presented, in all sites, in trust-level dashboards and dashboard-derived data in accompanying reports. In all but one site, the information infrastructures were developed by the site’s informatics team. They were designed to generate data for monitoring and oversight, alongside aiding and supporting health-care delivery on the wards.
Our analysis of selected indicators included in the dashboards over the study period (see Chapter 7) provided evidence of variation across the sites and limited coverage, if any, in the dashboards or other data, or reports derived from these, for two of our selected indicators (pain management and nutrition), and limited information on a third (vital signs). In Trio, there was little change over time, although considerable change was evident in the other three sites. Across all the sites, our meeting observations showed a range of continuing concerns over the dashboard data, ranging from issues of data inconsistencies, accuracy and timeliness, to dangers of misinterpreting the data and the importance of seeing data ‘in context’.
Research question 2: implementation and use
Across the sites, our observations of quality committee meetings provided evidence of the ongoing use and value placed on the dashboard and dashboard-derived data used in reports. Informants also indicated the value, and potential use, of dashboard data to provide an audit trail (for incidents or complaints) and to explore trends over time in quality, safety and performance.
Across all sites, informants pointed to a variety of ways through which board members accessed other information, setting board-level data ‘in context’. Foremost were ward safety walkarounds, providing insights into patient experiences and ward staff views. Our observations also pointed to the value placed on presentations to the group and patient stories.
Research question 3: governance
Governance
Across the four sites, informants pointed to enhanced oversight over quality, safety and performance in 2016 compared with 2–3 years earlier. Moreover, the dashboard provided an integrated perspective and enabled more detailed and extensive insight, at the level of both the trust and the ward. NEDs explicitly stressed acting as catalysts and challenging executives over presented reports and data. Solo and Quartet depicted themselves as having ‘been on a journey’. Across the sites, this ‘journey’ was ongoing, driven and supported by developments in information infrastructures. In Trio, with highly developed systems, more effective oversight was perceived to support the pursuit of harm-free care.
Informants were thus depicting their dashboards as being ‘fit for purpose’ in enabling and supporting the board’s key governance role, while also stressing the importance and value of ‘seeing the data in context’. The ward walkarounds aided insight, supporting assurance and oversight, and provided an opportunity to put patient and ward contextual detail into board quality committee members’ interpretation and use of the dashboards.
Openness and transparency
Across sites, interviewees felt that their trust was ‘open and transparent’, and that it was publishing much more information than in the past. Informants in Solo and Trio perceived their trust as having been ‘open and transparent’ in 2013, and perhaps even more so in 2016 (Trio). Informants in the other two sites (Duo and Quartet) depicted the trust as being more open and transparent now, and as being more open, both to the board and in the public section of its meetings, when things went wrong.
A variety of methods were pursued to enhance openness and transparency, centred in general on changing the culture so that trust staff felt able, and had increased confidence, to report concerns. A wide range of initiatives were promoted, including weekly staff safety bulletins, a ‘Speak Out’ initiative and the widening of the routes for staff to ‘let us know’ (Duo senior nurse manager) about quality and safety concerns. Although things had improved over the past few years, further work was needed for staff to believe that there were ‘safe spaces’ where concerns could be raised and that these would be acted on.
Legal duty of candour
Across trusts, respondents felt their trust was firmly committed to the legal duty of candour. Informants pointed to the development of supportive processes and systems to enable this. These included a more timely and early response to an initial informal complaint, and speedier responsiveness to formal complaints, along with greater attention being given to the way these responses were written.
Overall aim and narrative
The four sites have embarked on journeys towards a common aim, centred on enhancing oversight of the quality and safety of services. Substantial movement in this direction is evident, although informants recognised that more work was needed to achieve and sustain their aims.
The narrative that has unfolded at the sites centres on effective governance, the pursuit of harm-free care – reliably delivered – and the proactive management of clinical and organisational risks. Evidence for this includes:
-
Better data: NEDs had substantially more, and more relevant, data to undertake their oversight and governance activities; executive team informants pointed to their being able to ask for reports on specific issues, drawing on data in their data warehouses.
-
NEDs’ proactive roles: emphasis on oversight and gaining assurance; NEDs acted as catalysts, using data to prompt questions to executive team members; seeking detailed action plans and then following these up; going out and about on ward walkarounds and talking to staff and patients.
-
Emphasis on learning from incidents: focus on learning lessons and adoption of working practices that reliably do this; change in staff confidence and ways to speak out and escalate problems.
-
Better processes and systems: systems and processes have been designed to support the publication of data about performance, to support the legal duty of candour and to encourage staff to escalate problems and speak out if necessary.
It should be noted that this way of describing the developing narratives emphasises their positive aspects. It should, therefore, be interpreted in the light of the routine quantitative data presented in Chapter 5, which show that performance improved on a range of measures at all four sites during the period of the study. Finally, we also detected concerns that progress could be compromised by pressures on NHS resources. These had not had a significant impact by the end of the observation period (autumn 2016), but were clearly on the minds of senior managers.
Commentary
Across the four sites, there was evidence of perceptions of enhanced governance and board oversight over quality and safety, and of productive use of the dashboard and dashboard-derived reports. There was also general agreement that the trusts were, overall, more ‘open and transparent’ than in 2013, and that they were making progress in meeting the legal duty of candour. However, informants pointed to the need for more timely feedback from the board to wards, with two sites experimenting with a more devolved management approach in the course of the fieldwork.
The observational and interview data support the interpretation that developments in the data available to boards, and their use, form at least part of an explanation for the enhanced governance and oversight capability. Other factors were also important, including the following: in two sites, staff enthusiasm and engagement with IT developments; emphasis on systems to support ward care and its management; impetus from the Francis inquiry for greater openness, transparency and the duty of candour; board members’ ward safety walkarounds; and the development of systems and supporting processes for reporting and responding to incidents and complaints, to ensure the legal duty of candour.
The plot that underlies the narratives of these changes over time provides evidence of a shift from classic performance management to a more supportive approach. Over time, there was increasing emphasis on managing clinical and organisational risks84,85 and an orientation emphasising the quality and safety of services. 86,87
Chapter 9 Directorate mini-biography
Key points
-
This mini-biography explores matrons’ use of data.
-
Matrons used information systems for assurance about the quality and safety of services.
-
More broadly, matrons used a range of informal processes to manage their services; routine data were just one source of information.
Introduction
This mini-biography explores the use of dashboards and data recorded at ward level in the work of matrons at directorate level in the four sites. It aims to provide insight into changes over time in the use of two main artefacts – data and dashboards – and the issues of openness, transparency and the legal duty of candour. The findings are based on data drawn from 12 observations of matron meetings with their ward sisters (2015 and 2016) and interviews with six matrons (2016). As the data set and time perspective are quite limited, the findings are indicative and should be read and interpreted in the context of, in particular, the ward mini-biography (see Chapter 6). The following sections explore the matrons’ approach and ways of working; the matrons’ use and perceived value of dashboards and other information in their work; and the matrons’ perspectives of changes over time in relation to the issues of openness, transparency and the legal duty of candour.
Overall, the findings point towards an inferred, directorate-level, cross-site aim centred on ‘the delivery of good/high quality and safe patient care’, with a current additional emphasis, explicitly mentioned in one site (Trio) and implicitly in another (Duo): ‘and, now, in terms of money’. This aim was amplified by the informant in Duo: ‘efficient [patient] care with all that encompasses . . . [about] standards’. The indicative cross-site plot being pursued at directorate level is concerned with ‘using the data, gaining assurance and enabling and supporting’. It can be summarised as using information to enable the matron to gain assurance about, and to support ward sisters in, the delivery of safe and high quality patient care.
Matrons’ approach and ways of working
Across sites, the matron responsible for a set of wards or services held monthly matron meetings with their ward sisters. The matron also participated in matron meetings (building on ward reports prepared for her by the ward sisters); one-to-ones with her line manager and/or head of nursing; monthly directorate meetings; and quality assurance, performance and risk meetings with senior managers, during which areas of concern and/or potential risk might be escalated. The matron also validated at least some of the data that ward sisters entered, for example regarding complaints, incidents or NEWS. This aspect of the matron’s work (for further insight, see Chapter 7) was explicitly mentioned by informants in only two sites (Solo and Duo) and implicitly in another (Quartet). As one matron stated, ‘we have to validate it (NEWS) every month . . . as matrons’ (Solo matron). This was reiterated for Datix and the compassion audit by one of the ward sisters in Duo: ‘the compassion audit is done by matrons’ (Duo ward sister).
The matron aimed to hold monthly meetings with the ward sisters, but across sites several meetings were cancelled and/or had variable attendance. Minutes of the meetings were taken and distributed a week or so before the next meeting. The meetings primarily took the form of ward sisters passing on information to the matron; joint discussion of problems and concerns; providing an opportunity to share learning; the matron giving feedback and support to ward sisters; and the matron passing on information from directorate and other meetings, including the board. Data collected throughout the observation period exemplified inter alia:
-
ward sisters feeling able to discuss problems or incidents with colleagues and receiving supportive advice (Solo and Quartet)
-
using patient stories to promote discussion and encourage reflection on potential learning (observed 13 January 2016, Solo; August 2015, Duo; 2 September 2015, Quartet)
-
sisters giving feedback and updates and sharing areas of concern for their ward, for example ‘outliers’ and bed pressures (observed 13 January 2016, Solo); problems with e-whiteboards, leading them to resort to paper recording (observed 25 February 2016 and 9 September 2015, Quartet); communication problems occurring during patient handover from another ward; and the use of agency staff (observed 4 August 2015, Duo)
-
sharing learning (e.g. from a serious incident) (observed 4 May 2016 and 13 January 2016, Solo; 4 August 2015, Duo) or passing on learning from other teams (e.g. on the use of electronic rostering systems)
-
reflection on ways that e-information was used (observed 4 August 2015, Duo) and coaching of sisters to encourage ‘the habit of accessing it’ (Solo matron)
-
matron providing positive feedback and ‘praise’ when key performance indicators had improved in the previous month
-
matron reminding sisters what they could do ‘better’, for example regarding pressure ulcers, falls and complaints (observed 20 July 2016, Solo), and encouraging sisters to use e-whiteboards within handovers (observed 4 May 2016, Solo) and in the daily safety briefing (observed 17 February 2016, Solo)
-
matron passing on information on new initiatives [e.g. matron visibility ‘Blue Thursday’ (the matron to do a half shift on the ward)] and information from external presentations to the meeting, for example on infection control (observed 8 June 2016, Solo) and deep cleaning (observed 4 August 2015, Duo).
Use and perceived value of dashboards and other information
Data accessed: use and perceived value
The components of the information infrastructures at the sites are described in Chapters 5 and 7. In some directorates, in some of the sites, matrons accessed the real-time ward systems from time to time. For example, the informant in Solo indicated that:
The [real-time] dashboard perhaps is more useful for ward sisters actually, rather than me . . . The dashboard for me gives, could I use the word, superficial . . . But it gives me an ‘at-a-glance’ view of how a ward is doing.
Solo, matron
At the same time, she commented:
You can look now on any of my wards remotely . . . gauge how busy they are.
Solo, matron
The same matron used the trust-wide management system for management purposes:
To prepare for my assurance performance meetings . . . and to see how a ward is doing . . . If [the ward is] struggling, see what we can do to support [it].
Solo, matron
Another matron also used the real-time system:
I can go on to C [the system] . . . I can look and see the live data. . . I can see if obs[ervations] are up to date, that sort of thing. I can go in to patients’ record . . . [and then] raise issues with the charge nurse or whoever is on duty.
Solo, matron
The real-time data system was also used in Trio:
Any point in time I know what’s happened in our ward in terms of acute dependency of patients . . . Using the whole menu of information you’ve got.
Trio, matron
In contrast, in Quartet, informants (Quartet matron; Quartet matron) indicated using the range of static, non-real-time electronic databases to gain an overall as well as a corporate perspective. For example, they used Datix to access information about incidents and then discuss them with the relevant ward sister:
. . . then I’ll contact [the ward sister] and we discuss them and then close or action them.
Quartet, matron
Ward visits
In addition to the real-time data and routine reports, the matrons’ regular ward visits provided further information. They were used to follow up on issues of concern, sometimes identified in data. A Solo informant expressed the common perception of the value of the ward visits:
I think visits to the wards are immensely important and they give lots and lots of information. And they allow you to set the tone and the standard that you expect of all your staff.
Solo, matron
The performance indicators acted as ‘a sort of guide’ (Solo matron) to what to look for and to discuss with the sister and her staff.
It is really important not to just rely on the data.
Duo, matron
The ward visits enhanced the visibility of the matron:
The visibility of the matron is important. So, for me, that’s about staff knowing who they are and who they are accountable to but, more importantly, our patients and visitors.
Trio, matron
Management ‘by walking about’ was seen as essential by informants across the sites. As an informant in Quartet observed:
It’s assurance isn’t it? It’s about assurance and feeding that assurance in and making sure your checks are in place and you have adequate feedback. So you would feed up [escalate an issue] and feed [messages and learning] down . . .
Quartet, matron
Added value of data and appropriateness
Reflecting on the data available to them, the informants drew attention to their added value in contrast to what had been available a few years ago. The overall perception across sites was captured by a Trio informant:
I think we are better at managing the information and I think we are better at knowing why we’ve got the information so I think it’s focused us more . . . If something that I want . . . isn’t there, if I ask for it, I get it.
Trio, matron
A definite change, explicitly contrasting ‘now’ with ‘before’, was thus evident. For example, at Duo:
I actually think the data stuff is a bit better . . . I don’t feel that it is quite as swamping as perhaps it’s been in the past, I don’t feel that you get asked for as much data.
Duo, matron
However, a Quartet informant was somewhat more wary:
Well, it’s good if these areas, these things [data systems] can communicate with each other.
Quartet, matron
When asked what they might like to see changed, the overall impression given was that systems were valued and appropriate to matrons’ needs. However, some concerns were raised, including a desire to identify the ‘intelligence’ behind the data, potential overlap in the different systems and the potential value of additional ‘early warning’ indicators (Solo matron). Typically, comments pointed to the situation being better now than it had been 3 years earlier:
I think I have got enough data . . . I think it’s about prioritising that data and knowing which bits to look at and when and to do things with [it].
Duo, matron
I’m really confident that I’ve got the data that I need and the intelligence that I need in terms of direct observation to have a pretty good idea about that ward.
Trio, matron
Acting on the data
Two aspects are explored below. The first relates to the use of the RAG (red, amber, green) ratings universally employed within the sites. We were interested both in the approach adopted if a ward was performing well (e.g. a green rating for the last 3 months) and in that adopted if it was struggling (red rated or apparently heading towards red). The second aspect relates to the matron escalating issues of concern.
If a ward was continually rated ‘green’, then light-touch monitoring would ensue (see also Chapter 8 for material on monitoring). A different approach was adopted if a ward was struggling. As an informant from Solo outlined:
If one of my wards was struggling a little . . . I’d be speaking to colleagues or speaking to other ward sisters and saying ‘how come, you know, we’re struggling here, what can we do differently, what are you doing?’
Solo, matron
The matron would offer support to the ward sister to uncover what the problems were and whether or not, and how, they could be resolved, or at least ameliorated. An interviewee at Duo echoed this approach for a ward ‘in escalation’ (continuing red-rated and needing to improve its performance):
We talk about it, so they talk about it with their teams, we talk about it to them, or head of nursing talks about it to us in a matron’s meeting.
Duo, matron
An unforeseen, if positive, consequence of the use of dashboard RAG ratings was pointed to at Duo:
[Ward] teams are competitive as well. They don’t want to be the one that aren’t achieving . . . [They begin to ask] if other people can do it, then we can do it, and what is stopping us from doing it?
Duo, matron
This provides suggestive evidence that ward nursing staff trusted the data, seeing them as reliable and engaging with them.
Turning to the management of topics that were raising concerns within a trust, issues would be escalated to a multidisciplinary directorate risk meeting. If appropriate, the issue would be added to the corporate risk register, which would then come to the attention of the board. For example, an informant in Solo pointed to:
A monthly review (at the directorate risk meeting) of Datix. We’d look at anything outstanding, things that the directorate need to pick up on, actions that haven’t been completed.
Solo, matron
However, the informant observed:
If [it’s] a kind of an ‘in the moment’ risk or real concern, we would actually deal with [it] in the moment. You’re not going to wait for a risk meeting to flag that up.
Solo, matron
Another example comes from Trio, referring to a performance management dimension. If there were pressures on a ward:
We would then put that ward in a turnaround position . . . [we would] need . . . an action plan . . . If they don’t improve, we might have a change of leadership.
Trio, matron
Similarly, an informant from Duo talked through the process that was followed if a ward was not meeting the trust’s standards:
The first stage is, you look at it locally and you come up with a plan. But if you get to stage 4, then you would go to meet with an executive and with your team to talk about how they can support you, what things have you done, what things are you struggling with, how can they support you.
Duo, matron
In addition, at the matrons’ meeting, issues relating to escalation were explored. An informant from Duo commented:
So . . . everybody has a broader sense [of how things are] . . . [and] where people are in terms of escalation, who’s doing well as well, the bits that we need to improve on. [We] try and pick out some actions from that and try to make it multiprofessional in our approach . . . It’s not being reactive. It’s trying to be proactive and identifying things already . . . [And if necessary] we are able to filter things back up.
Duo, matron
This was echoed by a Trio informant who stated that ‘we are trying to be proactive’ (Trio matron) about potential risks and thus avoid harm or unsafe care for patients.
Across all four sites, if an issue was escalated from a ward to directorate, steps would be taken to support wards to improve their performance. For example, an informant from Duo commented:
[We] help facilitate those changes . . . A process to make sure that they were alright, talk about what else we need to do and how we do it, regular feedback sessions.
Duo, matron
At the same time, some things were outside of a directorate’s gift to resolve. For example, if an issue had been escalated to the corporate risk register:
The directorate has to evidence . . . the actions [that have been] taken in response to that risk.
Solo, matron
From boards to wards
Information that a trust board wanted to be passed down to ward level would be communicated from matrons to ward sisters, and via them to their staff. One informant described it as ‘a definite cascade of information’ (Trio matron). The information passed on included information and feedback from senior leaders and matrons’ meetings; new initiatives; results of spot check environmental audits; and lessons learned in other directorates about incidents or complaints. In addition, there might be ‘shared learning’ meetings. This was mentioned in Quartet, where reference was also made to trust-wide initiatives, such as staff newsletters:
Everybody in that team [in my ward] should know, for example, feedback from governance meetings, and then that should link into your safety huddles, your handover, to try and reduce that risk again.
Quartet, matron
A variety of methods might be used to pass on information. Regular matrons’ meetings with ward sisters and their one-to-ones were universally used (Solo matron, Duo matron, Trio matron, Quartet matron). Another approach was the use of e-mail:
[There] may be things where I might do some sort of summary and send it out via e-mail.
Solo, matron
This might lead to the subsequent display of the information:
The results [of performance reviews] are actually e-mailed on to the lead nurses and the lead nurses display it.
Quartet, matron
Openness, transparency and the legal duty of candour
Across sites, compared with 2013–14, there was a general perception that key recommendations made by Sir Robert Francis, concerning openness, transparency and candour, had been taken seriously. In part, this was linked to the greater availability and publication of data, and in part to the increased visibility of members of the executive team in the organisation. Informants reported a changing culture, reinforced by matrons’ professional responsibilities to patients. As informants from Solo, Duo and Trio commented:
I try to be a realist, and just say. This is what it is. So, I don’t like to hear where I’m being told that, ‘well actually, your areas are alright because you’ve got this this this and this.’ I want to reflect accurately . . . I’ve got a duty to not only my patients but my staff to reflect things accurately, that’s how I see it.
Solo, matron
I think personally I have always been very open and transparent I will always say sorry for things because I think that is really important for people to hear that.
Duo, matron
That’s [openness and candour] embedded in our systems and processes now.
Trio, matron
At the same time, an informant from Quartet observed:
We scrutinise ourselves too much . . . But I think we have got to, you know . . . I think we have taken that to gold standard . . . [But] with duty of candour . . . you can’t say you’re ever too open . . . We’ve got to be honest with patients and family . . . I think the culture is less than desirable sometimes . . . But then there is a bit of blame.
Quartet, matron
Concluding comments
Matrons used real-time systems – in the sites where they were available – and routine reports. These were just one source of information: matrons also regularly visited wards, held meetings with ward sisters and/or had one-to-ones with them. Matrons worked collaboratively with ward staff to address and resolve problems, reflect on lessons learned and effect changes in care delivery. That is, while performance data were available and used, the overall management style was consensual as well as performance driven. The publication of performance data across wards, in the form of RAG ratings, had the unforeseen (to us) consequence of encouraging competition between wards and spurring staff to improve their metrics.
If we take a step back, and consider this chapter in a wider context, we can make two further points. The first is that, combined with our observations about senior managers ‘managing by walking about’ in Chapter 8, data played a significant role in management but did not replace ‘informal’ sources of information gleaned from face-to-face contacts. The second is that directorates were – for nurses, at least – the places in the trusts where the information systems came together. Matrons used both the real-time and management systems, they passed information up and down their organisations and they integrated formally and informally acquired information in the course of their work.
Chapter 10 Local and national agencies
Key points
-
This mini-biography explores the use of trust data by local and national agencies.
-
Trust data sets were submitted primarily to NHS Digital, which, in turn, provided data sets to other agencies.
-
There was extensive oversight of the performance of trusts, which relied on data sets generated initially by the trusts themselves.
-
CCGs and Healthwatch made relatively limited use of these data sets, relying more on information that they gained through local relationships with trusts.
Introduction
This chapter explores the value and use of trust dashboards and data by local service commissioners and Healthwatch, and the ways in which local and national agencies use the data provided to them by NHS Digital (previously the HSCIC). The chapter is in four parts. The first, building on Chapters 3 and 7, provides an overview of the data flows from trusts to national agencies. The second explores the data needs of local agencies, that is, CCGs and Healthwatch, and their use of these data for quality and safety oversight and assurance. The third considers national agencies’ use of data to monitor quality and safety in trusts. The fourth and final section looks at indicative evidence on national agencies’ collaborative working on quality and safety oversight and assurance of trusts.
The findings are based on eight interviews, conducted in July 2016, in five organisations: NHS Improvement (the merged Monitor and TDA, n = 1); CQC (n = 1); two regional offices of NHS England (n = 2); CCG (n = 1); and Healthwatch (n = 3). As the local data set is limited, findings at that level are indicative only.
Data flows from the trusts
Figure 3 is a high-level schematic representation of data flows from trusts. For the set of key indicators we focused on in this study (see Chapter 7), data were submitted by trusts to NHS Digital. Some National Clinical Audit data sets were submitted to Public Health England. NHS Digital, after validating the data sets, provided data sets to other agencies: NHS England, CQC and NHS Improvement. The first two were able, legally, to request specific data sets from NHS Digital, but NHS Improvement could not, and relied on HES data sets. Trusts did not provide patient-level data to CCGs or Healthwatches: this is not permissible under the Health and Social Care Act 2012. As the interviews explored in this chapter show, little data flowed back to trusts as feedback from NHS Improvement or from CQC inspections.
Local agencies and quality and safety data
In pursuit of their commissioning roles, CCGs need data to monitor and assure the quality and safety of the services they commission. Our CCG informant depicted this as:
. . . Understanding the quality impact on service users and to any changes in clinical pathways.
CCG
The CCG was able to use data published by the trust, and the trust also provided workforce dashboards, a nurse ward dashboard-derived report (reporting a number of indicators) and improvement plans for health-care-acquired infections. The CCG then:
. . . Concentrated [its] efforts on [identifying and working with] the right indicators.
CCG
This CCG thus did not ask the trust to provide bespoke reports, but rather:
[It] worked and continue[s] to work on accepting the information that’s been presented to their board . . . We receive a suite of documents . . . based on our forward plan . . . our most useful data is what the trust is delivering to their own board.
CCG
Healthwatches have broad remits, spanning the health, social care and voluntary sectors, including care homes, and are financed by the local authority using non-ring-fenced, centrally provided monies. Their emphasis was on representing the patient’s voice:
[We] basically try and put people at the heart of health and social care.
Healthwatch
Healthwatch was, in essence:
Doing . . . an assurance validation . . . and [we] always publish our report.
Healthwatch
Our informants argued that they added value:
Deliberately bringing more of the services of patients . . . of people’s feedback to what, traditionally, quality and performance committees tended to not put/give a lot of time on their agendas . . . They all looked at their KPIs [key performance indicators] . . .
Healthwatch
. . . The data that people can give us can be so powerful that people [service providers] want to do the change that you’re suggesting. You know, the work that we’ve done with hospitals, with CCG, the doctors – we bring them an issue and we tell them how it’s affecting people and they want to change.
Healthwatch
Patient stories were a key resource. The Healthwatch in Duo had a database of individual patient feedback, and the Healthwatch in Trio conducted an:
Ongoing survey . . . people can go on the website and tell their story.
Healthwatch
As the Solo informant commented, Healthwatch also has:
. . . A strong idea about what are the issues that are frequent and . . . the issues that are severe . . . We [try] to balance the two. So, one person’s story probably isn’t going to be enough . . . But if it is more than that . . .
Healthwatch
In addition, Healthwatches conducted surveys on specific issues and also reviewed data on the NHS ‘patient portal’, My NHS, thus triangulating data about local trusts from different sources. Another informant pointed to the need to have strong links with advocacy providers:
So we are picking up the data that they’re finding out about and again it helps with that triangulation.
Healthwatch
Healthwatches also received limited specific data, on request, from their local trusts. Generally, our Healthwatches relied on public board reports and the annual quality accounts, on which they had to comment, and papers for committees they attended, for example quality committees at Solo and Trio. A Solo informant, reflecting on the meetings, identified a key challenge:
How do you sift through the data to actually have some intelligence about what’s happening?
Healthwatch
In addition, in Healthwatch’s preparation for an ‘Enter and View’ visit:
[We] ask [the trust] for every bit of information from the whole directorate or CSU [Commissioning Support Unit], and pick a sample.
Healthwatch
Use of data for oversight of quality and safety
There was similarity in the approaches adopted by the CCG and the three Healthwatches. The CCG informant depicted this as:
A mutually respectful relationship of openness and transparency . . . It’s been more of a peer review support process rather than anything too heavy handed.
CCG
Similarly, the Solo and Duo Healthwatch informants spoke in terms of:
Quite an ongoing relationship . . . a fairly partnership approach.
Healthwatch
A reasonably good relationship.
Healthwatch
The Trio informant, reflecting on the trust’s changed view over the last few years, commented:
[They now] see us [as] a critical friend . . . respond well to our recommendations . . . And [recently we have] done some projects with them.
Healthwatch
Their approaches permeated Healthwatch ‘Enter and View’ visits, for example giving feedback to quality matrons or making:
A report available online . . . actions agreed with the trust . . . [And] getting action plans back. And we always do a follow up on those actions.
Healthwatch
The CCG used trust data to gain assurance, for example on its forward plan and achievement of CQUINs, and expected the trust:
To identify to us areas where they’re seeking internal assurance.
CCG
It scrutinised these data to identify:
Key lines of inquiry . . . Then we put the key lines of inquiry back to the trust in advance of the meeting, and they come back with a written response to that.
CCG
The CCG then followed these up. The informant depicted its approach and use of data as forensic:
I think the providers understand that CCGs have a role in being forensic and seeking assurance . . . I think it’s a journey in terms of seeking assurance.
CCG
The Trio Healthwatch informant illustrated one way that a Healthwatch used data received from the trust. Referring to the ‘Enter and View’ process, they commented:
[We give] immediate feedback . . . After the first report, they produced quite a comprehensive action plan based around our recommendations . . . We’ve used that to design the next ‘Enter and View’. So, we can monitor any improvements and areas where things haven’t improved.
Healthwatch
However, one informant queried where to take issues of concern:
Is it to the quality surveillance group? Is that the right place to have those kind of discussions? Is it to the patient experience team (in the trust)? Like, ‘if we hear about this again we will be whistleblowing to the Care Quality Commission?’. How do you deal with these sort of issues?
Healthwatch
National agency use of data
Although the three national bodies, NHS England, CQC and NHS Improvement, hold slightly different remits, each draws on trust data submitted to NHS Digital. To make sense of their multiple data sources (e.g. NHS England draws on over 30 data sources – NHS England regional team), both NHS England and CQC have developed analytical tools to aid their work. NHS England has developed three national dashboards (an Acute and Specialist trust quality dashboard, comprising 44 indicators; a Mental Health dashboard; and a Primary Care dashboard) and a ‘quality risk profile’, while CQC, refining the approach of intelligent monitoring, relies on a comprehensive surveillance model. In part, the latter model was perceived as an aid to the provider:
It’s for the provider to understand what it is that we expect from them, and they structure their thinking about quality assessment around it, and so the shared view of quality is in part intended to prompt providers.
CQC
Moreover, the CQC informant continued:
A lot of organisations had constructed their own governance and assurance mechanisms around our indicators . . . I was quite surprised . . . I think that in itself reflects a shift in tone and expectations.
CQC
NHS England: quality surveillance groups
Informants from NHS England told us that the national quality dashboard was used to assist the regional and national teams to oversee quality and safety and to support locality quality surveillance groups (e.g. NHS England North, one of four regions, has nine localities) (NHS England regional teams). At least part of the rationale for the Acute and Specialist Trust dashboard was to assist quality surveillance groups:
The idea being that they [and] it would just really identify any risks that might affect quality at an early stage.
NHS England regional team
Each locality had a quality surveillance group, comprising representatives of the national agencies, CCGs and Healthwatches. It used the national dashboards and quality risk profile for its work, exploring trends against standards and outliers, together with local intelligence provided at the quality surveillance group.
An NHS England informant provided further insight into how data were then used for surveillance, as prompts for further investigation and for promoting good practice:
If there really is a concern, there are a couple of reds and it’s beginning to look a bit, you’ve had it for two or three quarters in a row . . . what then happens [is that] there will be a discussion at a more local level . . . in, say, a regional meeting . . . Then it would be actioned to take back to the local quality surveillance group and further discussions will take place to start mining into some of those areas.
NHS England regional team
This would then take place within the quality surveillance groups at locality level:
The expectation in terms of assurance conversations at a lower locality level is that there will be discussions on more real time data and sort of quantify what’s happening at a locality level.
NHS England regional team
The value of the national, region-specific dashboard was expressed in the following way:
By showing the trend data, performance against standards and the outliers, taking those together with local intelligence provided at the quality surveillance groups, it helps them to further investigate and share the good practice.
NHS England regional team
If points of concern were picked up:
Where we see . . . a particular trust has an extremely low staffing level, that would trigger a conversation with the CCG quality leads to make sure they’re aware of it. Is it actually an issue and, if it is, are they doing anything around it?
NHS England regional team
The CCG was expected to report back at the next quality surveillance group meeting:
What you’re trying to get at, isn’t it, how do people actually then make change and improve quality as a result.
NHS England regional team
All your regulators and commissioners round the table . . . come up with a score . . . a collective surveillance level for each provider . . . But I concentrate on data to do with my trust [be]cause that’s where I carry the accountability.
CCG
A Healthwatch informant observed:
Our deal is with NHS England we each get . . . that wonderful 5,000,000 Excel spreadsheet . . . Against each organisation that we look after, there is a little section that says ‘Healthwatch’ as well as Monitor, the TDA, the Health Education England, the commissioners . . . We will put our information in that . . . so everyone monitoring . . . will see it.
Healthwatch
Healthwatch’s input was to provide a more local picture. However, some scepticism was expressed about meeting attendance:
There’s a bit of me that thinks, if we update the quality surveillance reports, well, do we really need to be there? But there is the political end of being seen to be there.
Healthwatch
In contrast, another informant pointed to the potential value of their quality surveillance group’s input:
We already identified ‘continuing health-care’ was an issue. The quality surveillance group have supported that.
Healthwatch
The informant further noted that, later, this same issue came up in CCG board papers.
Care Quality Commission and inspections
The CQC informant outlined the process surrounding the use of the surveillance models and the inspection process. For example, the chief inspector would be:
Briefed on what we’re seeing as being particularly powerful indicators at sector level, not a particular provider level.
CQC
In preparing for a trust’s inspection:
On what we call ‘day zero’ which is the first day of an inspection . . . the first time that the inspection team come together, the analyst prepares . . . or presents a presentation, that takes [them] through the highlights.
CQC
Other data to inform the inspection process came from NHS England, the trust and Healthwatch:
The [NHS England] Ops and Delivery and Nursing people . . . collate that data along with other qualitative information they’ve got and they put it together in a pack . . . They submit it for each Care Quality Commission inspection . . . And then we get risk summits which are triggered by the quality surveillance group and sometimes in other ways as well.
NHS England regional team
The person who prepares and is responsible for the inspection data pack will also look at the [trust’s] board papers.
CQC
Moreover, in the context of a trust inspection:
The provider should be giving us their view of quality using their information, so we will go in to a provider and we will say are you safe, are you effective, are you well led?
CQC
Healthwatch informants told us:
We will give the Care Quality Commission everything we have done with that organisation in the last . . . 18 months or so.
Healthwatch
Any reports we do we share with Care Quality Commission . . . And [for an inspection], we provide what information we’ve got.
Healthwatch
National agency feedback to trusts
Direct feedback to the trusts from the local NHS England quality surveillance groups was via the CCG. Our CCG informant pointed to her action subsequent to a local quality surveillance group meeting:
I update my providers on the discussions at the quality surveillance group and what surveillance level they are on.
CCG
In contrast, feedback from CQC was direct to the trust, and was published. However, the Trio Healthwatch informant observed:
But . . . I didn’t really see anything change until the Care Quality Commission required improvement.
Healthwatch
NHS Improvement also provided some feedback to trusts on the data they used. The informant told us that their regional improvement teams:
Send back [data] to providers for their use, so that they can see where they stand compared to their peer group.
NHS England regional team
Moreover, in the future, the Patient Experience dashboard that NHS Improvement is developing (NHS Improvement Patient Experience Headline Tool)88 ‘. . . will be available to providers’ (NHS England regional team).
National agencies’ collaborative working
The core issue was summarised by the CCG informant:
Nobody seems to be agreed nationally, and this is a big issue across all quality surveillance groups. Nobody can define the definition of assurance . . . We’re all using different measures, aren’t we?
CCG
This echoes issues raised in Chapter 2. The informant continued more positively:
[We are] starting to get a collaborative [view].
CCG
Some indications of movement to greater collaboration were provided in the interviews with the national agencies. For example, the NHS Improvement interviewee told us:
We[’re] integrating, we’re going into an integrating phase, and certainly the focus currently is on technology integration.
NHS England regional team
Not only was there movement towards technology integration, but an NHS England informant also indicated that there was work towards having one data source for quality and a common definition and set of indicators across national agencies:
. . . The other key players around quality . . . the Care Quality Commission and NHS Improvement, we may well try and work towards having just one data source that we will tap into with a common set of indicators . . . There’s a National Quality Board that’s working as a working group around measuring quality . . . On that group is everybody on the system, so HSCIC, NHS Improvement, Department of Health [and Social Care] everybody that’s working around measuring quality in some way . . . They’ll be working towards having a common definition of quality . . . a common set of quality measures. But it’s a long process.
NHS England regional team
Concluding comments
As we saw in Chapter 3, national policies have substantially determined the data sets that acute NHS trusts are required to submit to NHS Digital. Chapter 7 described the considerable effort involved in providing those data sets. In this chapter, we have seen that NHS Digital, in turn, provides data sets to other national agencies and to CCGs. We need to stress that we conducted a relatively small programme of interviews, and were able to interview only one CCG representative.
However, if we put the evidence from the three chapters together, we can make three points. The first is that the data sets are extensively curated after they leave trusts: it seems reasonable to say that NHS information infrastructures extend far beyond them. The second is that the information infrastructures are designed for ‘one-way’ data flows. Data sets flow out of trusts, but we found only limited examples of data flowing back in, for example in reports after CQC inspections. In principle, national agencies might have produced comparative data sets that would allow trusts to compare themselves with one another but, as we saw in Chapter 7, these are not produced for the data sets underpinning the NHS Outcomes Framework. National Comparative Audits produce comparative data for a range of clinical conditions, but are managed separately from the data sets of interest here. Third, and finally, the evidence of Chapter 3 and the last five empirical chapters indicates that the four trusts were being extensively performance managed. Healthwatch organisations might demur from this conclusion, but the number of external organisations, and the volume of data they review, emphasise that substantial resources are devoted to surveillance rather than, say, to data-driven quality improvement.
Chapter 11 Discussion
Key points
-
Ward teams have the data they need to manage their work – but the combinations of IT that they use to manage it vary from trust to trust.
-
Trust staff believed that their trusts had always been open, or were more open in 2016 than they had been in previous years. Trusts published far more data about the quality and safety of services in 2016 than in 2013.
-
Trust boards have the data they need for effective oversight of the quality and safety of services.
-
The current data and IT infrastructures are amalgams of technologies.
-
The infrastructures support the centralisation of control over the quality and safety of services.
-
We have also found evidence for an emerging alternative model, involving the decentralisation of control to ward teams.
-
Implications for research and for health care are identified.
Introduction
In this chapter, we discuss our findings. In the next section we address our aims and objectives, and then outline a Biography of Artefacts, pulling together the findings of our surveys and mini-biographies. We then comment on a broader question raised in our proposal, concerning the centralisation or decentralisation of control over decisions about the quality and safety of services. Finally, we set out implications for practice and recommendations for research.
Review of aims and objectives
The main aim of this study was to establish whether or not ward teams in acute NHS trusts have the information systems they need to manage their own work, and to report on that work to trust boards and other stakeholders.
We conclude that ward teams at the four trusts in the main field study had the information systems they needed to manage their own work effectively. Each of the trusts captured and used similar data, but used different combinations of IT to do so. Solo and Trio had already introduced real-time ward management systems at the start of the main fieldwork, and staff had integrated them into their working practices. Duo offered us the opportunity to observe the introduction of mobile devices, some aspects of which went smoothly, while others did not. Quartet was introducing components of recognisable real-time ward management technology towards the end of our observation period.
We assumed, at the start of the study, that data captured in ward systems, and used locally to manage wards, would have a sort of secondary life. They would be used to prepare quantitative reports – presented on dashboards – for board and other trust meetings, and data sets for submission to other organisations. This was not the case. There was a distinction between two broad types of data. One was data used in the routine monitoring of a patient’s status, including vital signs, risk assessments and care plans. These data were attached to patients throughout their stays, and little ‘escaped’ into management reports. They were data that only really made sense in the context of the care of a patient, or the management of a ward on a particular shift. The other was data forming part of the parallel systems that we described in Chapters 5 and 7, including data on mortality, incidents and the NHS Safety Thermometer. These data are counts of adverse events, or of possible adverse events in the case of deaths in hospital, and are data that national bodies believe to be indicators of quality and safety.
Taking the two types of data together, along with the IT systems used to capture and manage them, we can say that wards report comprehensively on their work to boards and to external agencies. The way in which they do this, however – across hybrid infrastructures and involving considerable staff time – raises questions, which we discuss below.
First objective: assess the extent to which trusts are able to integrate activity, quality, outcome and cost information in [whiteboards], to enable ward teams to manage their services effectively and to improve services over time
The ward mini-biography demonstrates that nurses are able to integrate data in order to manage wards. The data were available via real-time systems (at three trusts) to support discussions in handovers and patient safety huddles, and via information teams, which produced monthly reports – including trend data – for managing ward performance over time. Comparative data on ward performance, presented on dashboards, appear to have encouraged some wards to improve their performance relative to others in their trusts. Ward sisters and matrons also used management reports to identify issues, such as pressure ulcers or falls, that merited focused action.
The ward mini-biography also describes the shifts in the combinations of technologies on wards that were used to capture data. Trusts have been progressively rationalising the technologies they use over time, so that data are recorded in a smaller and smaller number of (paper and digital) technologies. Indeed, viewed as a story of progressive integration over time, the wards are on a journey towards a point where the majority of the data they need are available on a few screens. Those screens present data that allow staff to monitor clinical risks, and in doing so, mark a break with the data processing systems that have dominated NHS IT for decades.
Second objective: evaluate the impact of the use of [whiteboards] on clinical and management practices at ward level
We presented data in Chapter 5 that showed that the quality and safety of services improved in all four trusts between 2013–14 and 2015–16. The evidence presented in Chapters 6–9 suggests that it is reasonable to attribute at least some of the improvement to the availability and use of data. As we have already discussed, these data were not only available on whiteboards, but were drawn together from different sources using different IT systems.
We are struck by the stability of handovers, within and across trusts: whatever technologies are available, handovers are managed in similar ways, using similar data. In some wards, new technologies appear to have been embedded in working practices with relatively little fuss, while in others deployment appears to have been a bumpy ride. Put another way, the new technologies have not disrupted key nursing meetings. To set against this, we also note that interviewees on some of the wards felt that new technologies disrupted other aspects of their work.
There were positive comments about Datix software in managing both incidents and complaints data, and more generally about improvements in their overall management. There were, however, also comments to the effect that there was still a need to ‘close the loop’, to ensure that lessons were learned at ward level.
Third objective: assess the extent to which dashboards provide data that are valuable to other local stakeholders, including trust boards, Healthwatch and commissioners
All four trusts have developed comprehensive reporting arrangements, with boards now having detailed routine data on quality and safety available to them every month. The situation at the end of 2016 can be contrasted with that in 2013, when boards at the four trusts received very limited routine quality and safety data.
The phrasing of this objective implies that data are captured on wards and aggregated for use by trust managers and external bodies. As we have seen, this is not what happens. Indeed, our evidence suggests that, in general, senior managers do not use the real-time systems to monitor or manage wards. They were clear that the real-time systems were for operational management, with matrons only rarely intervening on the basis of monitoring ‘live’ services.
Our evidence shows that boards’ quality committees used routine data for assurance rather than for performance management. By and large, committees discussed issues that were already being dealt with elsewhere, typically in the relevant directorate. It also shows that dashboards are just one source of information for board members. Executive directors and NEDs have developed a range of strategies over the last 3 years, including ward visits and meetings where staff can raise issues or concerns, for assuring and improving quality and safety.
We saw in Chapters 3 and 10 that trusts submit a large volume of data to NHS Digital and other agencies. We have not studied the work of these agencies in any detail, but we have struggled to identify any value in the submission of transaction data on quality and safety: numbers of incidents and complaints, NHS Safety Thermometer data and so on. Constructive relationships with external bodies have, rather, been based on members of those bodies engaging directly with trust staff, as in the case of Healthwatch organisations.
Fourth objective: identify the barriers to, and facilitators of, the effective redesign and use of dashboards
The ‘After Francis’ HSDR call indicated that research teams should identify barriers to, and facilitators of, improving the quality and safety of services. We are not persuaded that it is helpful to think about barriers or facilitators in acute NHS trusts, or indeed in organisations more generally. This is because the terms are too vague to be of analytical value. For example, the barriers to change could include a lack of money or other resources, a genuine disagreement about the best way to achieve a change in working practices or a breakdown of trust between those advocating change and those who are expected to change the way they work. Put simply, many things can be a barrier to change.
That said, it is possible to identify reasons why developments have taken, or, more accurately, are taking, months and years rather than days and weeks. Any change in working practices takes time, with a new way of working having to be integrated into current practices without disrupting them. We saw in Chapter 7, for example, that staff need time to get used to a new technology before they begin to see that there are new ways of exploiting it. There are also technological reasons why developments take time. At Duo and Quartet, for example, the communication networks needed to be upgraded before other developments could go ahead.
Looking ahead, the trusts are acutely aware of the potential of the data in their data warehouses. Exploiting these data, to improve the quality and safety of services, will require clinical staff and information teams to be relieved of the current – considerable – burden of data collection, validation and preparation of data sets and reports. The burden might be reduced through automation of data capture and management, or a reduction in the volume of data handled, or a combination of the two. It will also require current staff to learn how to exploit the data, a process that is likely to involve a combination of formal training, less formal ‘getting their heads round’ how to use them, and implementing and testing improvements. All of these steps will take time.
Turning to facilitators, even a cursory reading of the literature on institutional change suggests that there is a long list of potential facilitators, including money, incentives, leadership and staff commitment. Again, the list is too long and too general to be of value here or in practical settings. More positively, we are in a position to comment on the successful deployment of real-time systems. First, commitment from trust executives has been crucial. The negative experiences of the NHS National Programme for IT made many trust executives very cautious about IT investments: this caution has given way to a determination to support more and better IT solutions.
Second, we were struck by the clarity of thought of senior executives: they had the ability to visualise alternative ways of working, and in particular how nursing practices might change as a result of using IT. Third, the trusts were able to execute the necessary changes, albeit to differing extents. As the infrastructure and board mini-biographies describe, success hinged on effective working relationships between senior managers, senior clinicians, ward nursing teams and informatics and information teams.
Open, transparent and candid?
The second aim was to establish the extent to which ward-level whiteboards provided a basis for achieving the openness, transparency and candour envisaged by Sir Robert Francis.
The wording of this aim, in common with the third objective, implies that data presented on whiteboards would be published more widely. We now understand that data are captured in a range of technologies at ward level. This point made, we are in a position to address this aim.
Our interviews, across mini-biographies, indicate that trust staff believed that they had responded positively to the legal duty of candour – they talked to patients and carers when something had gone wrong, and apologised when it was appropriate to do so. They also believed that their services, and their trusts more generally, were open and transparent – in simple terms, willing and able to publish performance data. Some respondents argued that their trusts had always been open and transparent, while others stated that they were more open and transparent than they had been 3 or 4 years earlier. This view is supported by the increase in the volume of data published by trusts on their websites in the last 4 years, notably in papers for board meetings. Our findings indicate that there was a marked increase over a short period, in 2013 and 2014. These data complement data that were already being published, for example in Quality Accounts.
To set against this undoubted change, we wonder who the publications are aimed at: to whom are trusts open and transparent? Indeed, do the two terms actually mean anything? Some of our interviewees noted that no patient or relative had ever commented to staff on the data on the information boards outside wards, even when those boards clearly stated that the wards were understaffed. At the other end of the scale, national bodies have long had access to comprehensive data about the performance of acute trusts, and we saw in Chapter 10 that regional surveillance groups do just that: engage in surveillance of trust performance, albeit using data that describe activities that are 2 or 3 months old. We wonder whether, in practice, ‘open and transparent’ and ‘surveillance’ are synonyms.
It is possible that the most convincing rationale for publication is concerned with accounting – trusts are publicly funded and should publish accounts of the work they do. If, the argument runs, individuals and voluntary groups that were concerned about the performance of Mid Staffordshire NHS Foundation Trust had had access to reliable data in 2006 and 2007, they could have influenced the course of events in some way. If this is the right argument, then this study raises four issues that merit some thought. First, we have already commented on the costs of capturing data and preparing them for publication. Second, as noted in Chapter 5, it is surprisingly difficult to find meaningful quality and safety metrics in Quality Accounts and published board papers. Even when relevant data can be found, the numbers in different documents do not always tally (for good technical reasons – but citizens may not realise this). An issue noted in Chapter 10, concerning the lack of national agreement on measures of quality and safety, compounds the problem. The current set of measures enables boards, which are used to interpreting them, to arrive at an overall assessment of services, but citizens may not be able to interpret the data as easily.
Third, we have also noted that the data sets submitted to national bodies are of a particular type: data about transactions. It is as if the centre of the NHS were more concerned with data on the number of people involved in road accidents than with reports on the ways in which local authorities are proactively managing the risks of accidents and reducing accident rates.
There is a deep and, as far as we can see, unexplored assumption that ‘counting data’ can be accessed and used by anyone who is interested in quality and safety, and that they will come away with useful insights.
Fourth and finally, the ‘open and transparent’ assumption in the second Francis report is that the CQC and other bodies will respond earlier in the future if they detect signs of problems. Some of our board-level interviewees commented positively on experiences with the CQC and Monitor (now part of NHS Improvement), noting that one or the other had required them to focus on a key aspect of the quality or safety of their services. However, we come away from the fieldwork with a distinct impression that national bodies continue to performance manage the four trusts: relationships had not shifted from the ‘low trust’ relationships of the past to the ‘high trust’ relationships assumed by the second Francis report. ‘Open and transparent’ reporting was thus a double-edged sword, with trusts tending to feel the punitive edge rather than the supportive and encouraging one. Putting these four points together, it is not clear to us that the case for the ‘value of openness’ has yet been made.
Patient and public involvement
Our PPI group made valuable contributions to the study. They commented on our study design and methods, and later commented on drafts of the ward and board mini-biographies. They reminded us to focus on whether data reflected the experiences of patients. More specifically, the group encouraged us to extend the range of data items that we monitored. This directed us towards one of our findings. Some data, including patients’ nutritional status, are typically ‘stuck’ to patients – relevant to particular patients at particular times. Such data were recorded in patient notes and mentioned at handovers and safety huddles if patients had particular needs. Other data, including NHS Safety Thermometer data, were the opposite – captured for use by managers and external agencies. A third type of data, including NEWS scores and incident and complaint data, sat somewhere in between the first two, used both in the course of a ward shift and for real-time monitoring of activity across a ward. This point led us to understand that there is no such thing as ‘quality and safety data’. Rather, different types of data are used to infer the quality and safety of services, depending on the context.
Study design: strengths and weaknesses
We used normalisation process theory to guide the design of the main phase of the fieldwork. 35,36 It proved to be very helpful, particularly in articulating the questions that we needed to ask in the course of our fieldwork. Our view is that using the theory was a strength of the early phase of the field study: it gave us confidence that we were asking an appropriate set of questions about the different phases of technology design and deployment. Equally, as noted in Chapter 2, we found that the theory was not suitable for the technologies that we were studying. This was not a weakness of the theory, but rather a mismatch between the theory and the technologies (information infrastructures) that we were observing.
We used the Biography of Artefacts approach to analyse the data presented in Chapters 6–10. 26 The main strength of the approach is that it provides an effective means of studying large-scale organisational phenomena, where there are important developments in more than one part of an organisation. We found that it was possible to study a complex phenomenon in several different places, across four acute trusts, and arrive at meaningful conclusions. We also found that the narrative, or biographical, approach generated valuable insights about the development of information infrastructures.
To set against this, we found that we had to develop some of the technical features of the approach. We made three main additions to published accounts of the approach, concerning the role of direct observation of working practices (many published biographies rely on a combination of documents and interviews), the combination of evidence from different sources (observation, interview, documents) and the integration of accounts from multiple sites. The net effect is that we have found the Biography of Artefacts approach very helpful. However, our findings should be interpreted in light of the fact that we were developing aspects of the approach during the course of the study.
More broadly, our use of the Biography of Artefacts approach involves trade-offs with other approaches. The approach emphasises the importance of change, over time, in large-scale infrastructures. As a result it de-emphasises detailed features of technologies and working practices that would be highlighted in other studies. For example, a human–computer interaction study would highlight the detailed design of screens in ward systems, or the fine details of the presentation of data in board papers – we have relatively little to say about these important features of systems. Similarly, we might have used practice theory89 and focused on the ways in which working practices produce particular cultures on wards or in board-level meetings. Indeed, a study of this kind could be valuable in a post-Francis NHS context. We would, however, have had to give up our focus on information infrastructures in order to pursue a practice theory-informed study.
Turning to our field methods, our reflection on the interviews is that most of our interviewees were giving us straightforward accounts of developments as they saw them. Interviewees were typically open about difficulties that they were facing. Similarly, we did not encounter any difficulties observing board quality committees or working practices on wards. To set against this, it is clear that the mini-biographies paint a relatively positive picture, particularly in the sites that made substantive progress with their real-time ward management systems. This jars with published accounts of deep-seated problems with quality and safety in acute NHS trusts. We do not have a good explanation for this gap, but we can offer three speculative reasons. The first is that the study focused principally, if not exclusively, on nurses, who may have benefited more from the new technologies than other staff groups. If we had focused more on consultants, or possibly allied health professionals, the tone of the report might have been different.
Second, the three sites with developed information infrastructures were genuinely proud of their achievements. As we comment later, the ward-level developments in particular mark a departure from the IT experiences of the 2000s, described in Chapter 3. Put another way, some credit is due to trusts that have begun to overturn the NHS’s reputation as a very difficult place to deploy IT systems. Third, as the data in Chapter 5 indicate, the sites really did make progress in improving the quality and safety of services during the study period. The evidence of our observations and interviews is consistent with the quantitative indicators. Although the quality of our data, and of the routine data, can both be questioned, they paint a consistent picture. Thus, the perspective of a trust manager is, quite reasonably, that services have been improving. This is not inherently in conflict with the observation that there is clearly a long way to go, on many fronts, before we can say that acute trusts are safe places to be a patient.
A Biography of Artefacts
The mini-biographies set out the direction of developments at the four trusts between 2013 and 2016. Taken together, they portray the development of infrastructures for capturing and using data about the work of acute hospital trusts. Our evidence shows that data and technology infrastructures have developed over many years to support the movement of data from ward to board, and beyond to external agencies. Moreover, the data are of a particular type, about transactions, or what might be termed data for ‘counting and accounting’. Viewed from this perspective, trust data warehouses and board dashboards are developments of established arrangements.
Taking the long view encouraged by the Biography of Artefacts approach, however, real-time ward management systems mark a departure. The departure can be characterised as a step change from data processing to real-time clinical systems, and from transactions – data for ‘counting and accounting’ – to data used to actively manage clinical risks. We appear to have been observing ‘interim technologies’, which have a useful shelf life but are eventually superseded. 90 The most interesting example is electronic whiteboards. It seems possible that wards will eventually move to a situation where all staff have a mobile device for capturing and accessing data, and will no longer need whiteboards.
Trust data and IT infrastructures, and the national infrastructures that surround them, embody two distinct assumptions about the nature and purposes of quality and safety data. The first assumption is that top-down performance management is the most effective strategy for improving the quality and safety of services. This was one of the key assumptions made in the second Francis report on Mid Staffordshire NHS Foundation Trust. We conclude that top-down performance management dominates other approaches, such as clinical risk management and quality improvement, in the relationships between external agencies and trusts. However, we have identified a number of reasons to doubt that routine data on quality and safety are being used directly for performance management within trusts.
The second assumption is that the infrastructures support the proactive management of the quality and safety of services, as outlined in Chapters 6 and 7. Two developments suggest that the infrastructures are being used in new ways: they are being harnessed to support local, real-time monitoring of quality and safety. The first, and most marked, development is that whiteboards and mobile technologies on wards make it easier to manage clinical risks proactively, that is, to manage patients’ risks rather than manage performance using transaction data. The focus on nurses’ information requirements can be contrasted with the historical focus on systems, including electronic records systems, to support doctors’ decision-making. The real-time systems have been developed largely or entirely by in-house informatics teams, working closely with ward nurses and other clinical staff, supported by trust boards. The relevant data – vital signs, risk assessments and so on – are integral to patient treatment and care.
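The data underpinning this first development are highly structured, which is what makes real-time risk monitoring feasible. As a minimal illustration – a sketch of the general technique, not any of the four trusts’ actual implementations – the fragment below computes an aggregate early warning score from a set of vital signs, using banded thresholds of the kind published for the NEWS; the function names, the exact bands and the escalation threshold are all our own assumptions.

```python
# Illustrative sketch only: a simplified NEWS-style aggregate of the kind a
# real-time ward system might compute when a nurse records vital signs.
# Bands follow the published NEWS thresholds; trusts' own systems may differ.

def band(value, bands):
    """Return the score of the first (lower, upper, score) band containing value."""
    for lower, upper, score in bands:
        if lower <= value <= upper:
            return score
    raise ValueError(f"value {value} falls outside the defined bands")

def news_score(resp_rate, spo2, on_oxygen, temp_c, systolic_bp, pulse, alert):
    score = band(resp_rate, [(0, 8, 3), (9, 11, 1), (12, 20, 0), (21, 24, 2), (25, 99, 3)])
    score += band(spo2, [(0, 91, 3), (92, 93, 2), (94, 95, 1), (96, 100, 0)])
    score += 2 if on_oxygen else 0
    score += band(temp_c, [(30.0, 35.0, 3), (35.1, 36.0, 1), (36.1, 38.0, 0),
                           (38.1, 39.0, 1), (39.1, 45.0, 2)])
    score += band(systolic_bp, [(0, 90, 3), (91, 100, 2), (101, 110, 1),
                                (111, 219, 0), (220, 400, 3)])
    score += band(pulse, [(0, 40, 3), (41, 50, 1), (51, 90, 0), (91, 110, 1),
                          (111, 130, 2), (131, 300, 3)])
    score += 0 if alert else 3  # AVPU: anything other than 'Alert' scores 3
    return score

# A ward whiteboard might then flag any patient whose score crosses an
# escalation threshold (e.g. >= 5, prompting an urgent clinical review).
print(news_score(resp_rate=22, spo2=94, on_oxygen=False,
                 temp_c=37.2, systolic_bp=104, pulse=95, alert=True))  # -> 5
```

In a deployed system the thresholds would be configurable and each recorded observation would update the ward display, rather than printing a value.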
Second, the four trusts have data warehouses, home to data from a number of ward and departmental systems (or from the integrated clinical system at Solo). These warehouses are environments where teams can undertake analyses on data sets that are ‘almost live’. They can run analyses on data from the last 24 hours, or last week – recent enough for staff to be able to recall the events the data relate to and act on it. The most striking example we came across was the analysis of ‘raw’ mortality data at Trio: the trust used to wait months for national performance indicators, but now analyses its own data much earlier, starting within days of the end of each month. This example hints at the potential currently locked up in trust data warehouses.
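The Trio example can be made concrete with a second sketch. The code below is a minimal illustration under stated assumptions – a hypothetical `discharges` table, not Trio’s actual warehouse schema or tooling – showing the kind of ‘almost live’ query that computes crude monthly mortality by specialty within days of month end, rather than waiting months for national indicators.

```python
# A minimal sketch of an 'almost live' data warehouse analysis, assuming a
# hypothetical 'discharges' table with one row per discharged patient.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the trust data warehouse
conn.execute("""CREATE TABLE discharges (
    discharge_date TEXT, specialty TEXT, died_in_hospital INTEGER)""")
conn.executemany("INSERT INTO discharges VALUES (?, ?, ?)", [
    ("2016-10-03", "General medicine", 0),   # illustrative dummy rows
    ("2016-10-14", "General medicine", 1),
    ("2016-10-21", "General surgery", 0),
    ("2016-11-02", "General medicine", 0),
])

# Crude mortality by month and specialty, available as soon as the month closes.
query = """
SELECT substr(discharge_date, 1, 7) AS month,
       specialty,
       COUNT(*) AS discharges,
       SUM(died_in_hospital) AS deaths,
       ROUND(100.0 * SUM(died_in_hospital) / COUNT(*), 1) AS crude_mortality_pct
FROM discharges
GROUP BY month, specialty
ORDER BY month, specialty
"""
for row in conn.execute(query):
    print(row)
```

Crude rates of this kind are not case-mix adjusted, which is precisely why it matters that staff can still recall the recent events behind the numbers when interpreting them.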
The second assumption, and the evidence supporting it, chimes with arguments in substantial academic and practitioner literatures on clinical risk management, quality management and patient safety initiatives. They all emphasise the importance of clinical teams taking responsibility for their services. We can summarise here by saying that trusts have been implementing a version of democratic experimentalism. 91 The term was coined to draw attention to two features of decentralised governance arrangements. They are democratic because they involve negotiation within frontline teams, and between those teams and senior managers. They are experimental in the sense that teams have the freedom to innovate – to improve services. Teams have to provide evidence that they are providing safe, high-quality services, and are improving them over time: this can be achieved, in part, by reporting agreed metrics to senior managers. In the ideal version of democratic experimentalism, teams are autonomous and can negotiate the ways in which they will demonstrate their effectiveness. In the version that we have observed in the four trusts, authority at the local level sits within a set of centrally determined reporting arrangements. The local arrangements have developed in spite of, rather than because of, the central data straitjacket.
It is tempting to believe that the tension embodied in the data and IT infrastructures can be resolved. The development of real-time ward management systems can be encouraged, and the systems perhaps increasingly used as sources of data for clinical audit and for management reports. The management of data sets for reporting could be streamlined over time, and the costs associated with managing them reduced. If this is to happen, we think that strategic choices will have to be made.
Logics of control
This line of argument brings us to our final points, concerning the strategic choices about the management of the quality and safety of services. There are three broad options. The first is to continue with the current arrangements. This study provides clear evidence that national bodies have a major influence on the data that trusts capture, use in their own board reports and submit to NHS Digital and other national bodies. They have also had a major influence on the NHS-wide technology infrastructure, which, intentionally or not, strongly favours upwards reporting rather than the horizontal sharing of data between organisations. It is difficult to imagine why any of the interested parties would wish to continue with the current technological arrangements.
The second option is democratic experimentalism, whereby authority, and responsibility, lies principally with clinical teams. Trusts, as publicly funded bodies, will always be required to account for their work to national bodies, and indeed upwards to Parliament. However, they would do so within a much-revised set of governance arrangements, whereby IT investments would be focused on supporting the work of frontline staff, and data for reporting to boards and beyond would be a by-product of frontline data capture (as envisaged in the 1992 Information Management and Technology Strategy, but never implemented). This option would recognise the importance of relationships between clinicians and informatics teams, and the fact that information requirements change over time, necessitating ongoing changes to the data captured and reported.
The third option is the information panopticon. A panopticon, a concept conceived by the philosopher Jeremy Bentham in the late eighteenth century, is a prison with cells arranged around a central well, from which prisoners can be observed at all times. Zuboff92 extended the concept, arguing that IT systems can act as information panopticons, used to gaze down on workers – clinical staff, in our case. The NHS policies Personalised Health and Care 2020,93 and the clinical utilisation review,94 reflect this line of thought. They envisage an NHS in which real-time data, presumably data about today’s or, at worst, yesterday’s services, continually feed a central ‘brain’, which is able to monitor services. There have been a number of such centralised initiatives in the past, including attempts to manage wars from a continent away,95 to manage nuclear missile silos across the USA from a central command96 and the Cybersyn experiment in Chile in the early 1970s,97 where all industrial production was to be managed centrally. These experiences were not, it has to be said, encouraging. The fact that data were made available to central command in real time – perhaps a few hours later, or the next day – disguises the point that the data were ‘stripped down’, removed from the context in which they were captured, and hence much more difficult to interpret than they would have been at ground level. In the NHS, the evidence of a long series of official inquiries, including the second Francis report, does not suggest that national agencies respond rapidly, or sometimes at all, even when there is clear evidence of unsafe practices.
It is not for us to say which of these options should be pursued. We can, however, point to the considerable progress made by the four trusts. The second and third options would both build on the important move to real-time management of wards, but in very different ways, involving different technology infrastructures and radically different governance arrangements.
Implications for health care
- Real-time ward management systems have been developed largely in-house, using agile methods and with ward nurses closely involved. They mark a significant departure in thinking and practice from the NHS’s historical reliance on commercially available data processing systems.
- The trusts are acutely aware of the potential locked in the data sets in their data warehouses, and in their informatics and information teams. Their capacity to exploit the potential is currently very limited: most of their time is committed to preparing national data submissions. Trusts need to be able to free up staff time if they are to achieve data-driven quality improvements.
- The development of real-time management systems presents the NHS with a strategic choice. Will sustained, substantive quality and safety improvements be achieved by centralising authority, and hence the flow of data, to boards and to external agencies? Or will they be achieved by the clinical teams caring for patients, supported by real-time management systems? The decision will have a major effect on the future development of these critical infrastructures.
Recommendations for research
- The growth in the use of mobile technologies on wards for the management of clinical risks, as well as for recording patients’ status and treatment, may have an effect on the quality and safety of services. Their effectiveness needs to be established.
- Similarly, there has been a significant growth in the use of electronic whiteboards on acute wards in the last 3 years. This study raises the question: are these interim technologies that will disappear when mobile devices are ubiquitous, or do they have an important role to play in monitoring the quality and safety of services?
- National data submissions have developed in piecemeal fashion during the last 2 decades. A number of reports have drawn attention to the time that clinicians spend recording data or searching for it, but this is the first study that has highlighted the opportunity costs of managing data for national submissions. The overall design of national data submissions, and their costs and value, merit review.
- Acute trusts now have data warehouses, which appear to have considerable potential to support analyses of current performance and modelling options for service improvements. The scope for greater local exploitation of these data sets needs to be evaluated.
Acknowledgements
We are grateful to the participants in the 15 trusts who took part in the telephone survey reported in Chapter 4, and to the four trusts and external agencies that took part in the main field study, reported in Chapters 5–10. We thank them for their time and commitment.
We thank our steering group members – Professor David Cottrell, Dr Tracy Finch, Professor Russell Mannion and John Varlow – for their advice and support throughout the project.
We also acknowledge the valuable comments and guidance from our PPI group – Peter Dransfield, Jean Gallagher, Laraine McNichol and Manoj Mistry.
We thank Professor Robin Williams and Dr Mark Hartswood, and the members of the Social Informatics Cluster at the University of Edinburgh who attended the seminar on 1 July 2016, for their comments and clarifications on the Biography of Artefacts approach.
Finally, we would like to thank David Brennan and Anna Halliday for their administrative support during the project, and Andrew Meggs for formatting the final report.
Contributions of authors
Justin Keen (Professor of Health Politics, University of Leeds) led the project, was the first author of the final report and participated in all aspects of the research.
Emma Nicklin (Research Assistant, University of Leeds) contributed to the data collection, analysis and the writing of the final report.
Andrew Long (Professor in Health Systems Research, University of Leeds) contributed to the study design, data collection, analysis and the writing of the final report.
Rebecca Randell (Senior Research Fellow in Health Informatics, University of Leeds) contributed to the study design, overall conduct of the study and drafting of the final report.
Nyantara Wickramasekera (Research Assistant, University of Sheffield) contributed to data collection, analysis and report writing.
Cara Gates (Research Assistant, Leeds Beckett University) contributed to data collection, analysis and report writing.
Claire Ginn (Senior Manager, Analytical Services, NHS England) was a contributing member of the project team, which met bimonthly to review and advise on the study design, report writing and study dissemination outputs. She also helped to run the PPI meetings.
Elizabeth McGinnis (Clinical Co-ordinator, University of Leeds) was a contributing member of the project team, which met bimonthly to review and advise on the study design, report writing and study dissemination outputs. She also helped to run the PPI meetings.
Sean Willis (Senior Nurse, Leeds Teaching Hospitals Trust) was a contributing member of the project team, which met bimonthly to review and advise on the study design, report writing and study outputs.
Jackie Whittle (Head of Nursing, Leeds Teaching Hospitals Trust) was a contributing member of the project team, which met bimonthly to review and advise on the study design, report writing and study outputs.
Data sharing statement
We are prepared to make pseudonymised interview transcripts available to researchers, subject to them obtaining the appropriate ethics approvals. Researchers should contact Justin Keen in the first instance.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care.
References
- To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 1999.
- An Organisation With a Memory. London: Department of Health and Social Care; 2000.
- Learning From Bristol. Cm 5207(I). London: The Stationery Office; 2001.
- Independent Inquiry Into Care Provided by Mid Staffordshire NHS Foundation Trust. HC375-I. London: The Stationery Office; 2010.
- Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. HC 898. London: The Stationery Office; 2013.
- Kirkup B. The Report of the Morecambe Bay Investigation. London: The Stationery Office; 2015.
- Hard Truths. The Journey to Putting Patients First. Cm 8777-1. London: The Stationery Office; 2014.
- Vincent C, Burnett S, Carthey J. Safety measurement and monitoring in health care: a framework to guide clinical teams and healthcare organisations in maintaining safety. BMJ Qual Saf 2014;23:670-7. https://doi.org/10.1136/bmjqs-2013-002757.
- Deeny SR, Steventon A. Making sense of the shadows: priorities for creating a learning healthcare system based on routinely collected data. BMJ Qual Saf 2015;24:505-15. https://doi.org/10.1136/bmjqs-2015-004278.
- A Promise to Learn – A Commitment to Act. London: Department of Health and Social Care; 2013.
- Mannion R, Freeman T, Millar R, Davies H. Effective board governance of safe care: a (theoretically underpinned) cross-sectioned examination of the breadth and depth of relationships through national quantitative surveys and in-depth qualitative case studies. Health Serv Deliv Res 2016;4.
- Bardsley M. Understanding Analytical Capability in Health Care. London: Health Foundation; 2017.
- Jarman B. Hospital standardised mortality ratios – their use and misuse. Med Leg J 2015;83:72-9. https://doi.org/10.1177/0025817215583211.
- Learning from Bristol: The Report of the Public Inquiry into Children’s Heart Surgery at the Bristol Royal Infirmary 1984–1995. London: The Stationery Office; 2001.
- Wanless D. Securing Our Future Health: Taking A Long-Term View. London: Her Majesty’s Treasury; 2002.
- The National Programme for IT in the NHS: An Update on the Delivery of Detailed Care Records Systems. HC 888 Session 2010–12. London: The Stationery Office; 2013.
- Keen J, Margetts H, Hood C. Paradoxes of Modernisation. Oxford: Oxford University Press; 2010.
- Fitzpatrick G, Ellingsen G. A review of 25 years of CSCW in healthcare. Comput Supported Coop Work 2013;22:609-65. https://doi.org/10.1007/s10606-012-9168-0.
- Randell R, Greenhalgh J, Wyatt J, Gardner P, Pearman A, Honey S, et al. Digital Healthcare Empowering Europeans. Amsterdam: IOS Press BV; 2015.
- Dowding D, Randell R, Gardner P, Fitzpatrick G, Dykes P, Favela J, et al. Dashboards for improving patient care: review of the literature. Int J Med Inform 2015;84:87-100. https://doi.org/10.1016/j.ijmedinf.2014.10.001.
- Free C, Phillips G, Watson L, Galli L, Felix L, Edwards P, et al. The effectiveness of mobile health technologies to improve health care service delivery processes: a systematic review and meta-analysis. PLOS Med 2013;10. https://doi.org/10.1371/journal.pmed.1001363.
- Hartswood M, Procter R, Rouncefield M, Slack R, Voss A, Ackerman MS, et al. Resources, Co-Evolution and Artefacts. New York, NY: Springer; 2008.
- Barrett M, Oborn E, Orlikowski W. Creating value in online communities: the sociomaterial configuring of strategy, platform, and stakeholder engagement. Inform Syst Res 2016;27:704-23. https://doi.org/10.1287/isre.2016.0648.
- Beane M, Orlikowski WJ. What difference does a robot make? The material enactment of distributed coordination. Organ Sci 2015;26:1553-73. https://doi.org/10.1287/orsc.2015.1004.
- Sittig D, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010;19:i68-74. https://doi.org/10.1136/qshc.2010.042085.
- Pollock N, Williams R. Software and Organisations. Abingdon: Routledge; 2008.
- Hyysalo S. Health Technology Development and Use. Abingdon: Routledge; 2010.
- Siskin C. System: The Shaping of Modern Knowledge. Cambridge, MA: MIT Press; 2016.
- Keen J, Nicklin E, Long A, Randell R, Wickramasekera N, Gates C, et al. Information Systems: Monitoring and Managing from Ward to Board. Study Protocol n.d. www.journalslibrary.nihr.ac.uk/programmes/hsdr/130768/#/ (accessed April 2018).
- Pope C, van Royen P, Baker R. Qualitative methods in research on healthcare quality. Qual Saf Health Care 2002;11:148-52. https://doi.org/10.1136/qhc.11.2.148.
- Pope C, Ziebland S, Mays N. Qualitative Research in Health Care. London: Blackwell Publishing; 2006.
- Ritchie J, Lewis J. Qualitative Research Practice: A Guide for Social Science Students and Researchers. Thousand Oaks, CA: Sage; 2003.
- George A, Bennett A. Case Studies and Theory Development in the Social Sciences. Cambridge, MA: MIT Press; 2006.
- Campbell D, Brenner M, Marsh P, Brenner M. The Social Contexts of Method. London: Croom Helm; 1978.
- Finch T, Mair F, O’Donnell C, Murray E, May C. From theory to ‘measurement’ in complex interventions: methodological lessons from the development of an e-health normalisation instrument. BMC Med Res Methodol 2012;12. https://doi.org/10.1186/1471-2288-12-69.
- Murray E, Burns J, May C, Finch T, O’Donnell C, Wallace P, et al. Why is it difficult to implement e-health initiatives? A qualitative study. Implement Sci 2011;6. https://doi.org/10.1186/1748-5908-6-6.
- May C, Finch T. Implementation, embedding, and integration: an outline of Normalization Process Theory. Sociology 2009;43:535-54. https://doi.org/10.1177/0038038509103208.
- Crabtree BF, Miller WL, Stange KC. Understanding practice from the ground up. J Fam Pract 2001;50:881-7.
- Crosson J, Stroebel C, Scott J, Stello B, Crabtree B. Implementing an electronic medical record in a family medicine practice: communication, decision making and conflict. Ann Fam Med 2005;3:307-11. https://doi.org/10.1370/afm.326.
- Ventres W, Kooienga S, Vuckovic N, Marlin R, Nygren P, Stewart V. Physicians, patients, and the electronic health record: an ethnographic analysis. Ann Fam Med 2006;4:124-31. https://doi.org/10.1370/afm.425.
- Elliott J. Using Narrative in Social Research. Qualitative and Quantitative Approaches. London: Sage; 2005.
- Pollitt C, Bouckaert G. Public Management Reform: A Comparative Analysis. Oxford: Oxford University Press; 2004.
- Griffiths R. NHS Management Inquiry. London: Department of Health and Social Security; 1983.
- Korner E. First Report of the Steering Group on Health Services Information. London: Department of Health and Social Security; 1982.
- Black D. Data for management: the Körner report. Br Med J 1982;285:1227-8. https://doi.org/10.1136/bmj.285.6350.1227.
- Keen J, Margetts H, Hood C. Paradoxes of Modernisation. Oxford: Oxford University Press; 2010.
- Crossing The Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
- Learning from Bristol. Cm 5207(I). London: The Stationery Office; 2001.
- The New NHS: Modern, Dependable. Cm 3807. London: Department of Health and Social Care; 1997.
- Short Guide to NHS Foundation Trusts. London: Department of Health and Social Care; 2003.
- Modernising Government. London: The Stationery Office; 1999.
- The NHS Plan. London: Department of Health and Social Care; 2000.
- The NHS Cancer Plan. London: Department of Health and Social Care; 2000.
- Reforming Emergency Care. London: Department of Health and Social Care; 2001.
- Briefing: Payment by Results. London: The King’s Fund; 2007.
- The NHS Improvement Plan: Putting People at the Heart of Public Services. London: Department of Health and Social Care; 2004.
- Bevan G. Setting Targets for Health Care Performance: Lessons from a Case Study of the English NHS. London: National Institute for Economic and Social Research; 2006.
- Commission for Health Improvement. Acute Trust Overview 2003. http://webarchive.nationalarchives.gov.uk/20050301192907/http://www.chi.nhs.uk/Ratings/Trust/Overview/acute_overview.asp (accessed 26 June 2017).
- Building a Memory: Preventing Harm, Reducing Risks and Improving Safety. London: NPSA; 2005.
- Donabedian A. An Introduction to Quality Assurance in Health Care. New York, NY: Oxford University Press; 2003.
- Womack J, Jones D. The Machine that Changed the World. London: Simon and Schuster; 2007.
- The Productive Ward: Releasing Time to Care Programme. Warwick: NHS Institute for Innovation and Improvement; 2009.
- High Quality Care For All: NHS Next Stage Review Final Report. London: Department of Health and Social Care; 2008.
- NHS Outcomes Framework: At-a-Glance. London: Department of Health and Social Care; 2016.
- NHS Digital. Quality Accounts n.d. http://content.digital.nhs.uk/qualityaccounts (accessed 26 June 2017).
- Health and Social Care Act 2012. London: The Stationery Office; 2012.
- Cameron D. PM Speech on Life Sciences and Opening up the NHS 2011. www.gov.uk/government/speeches/pm-speech-on-life-sciences-and-opening-up-the-nhs (accessed 26 June 2017).
- Building the Information Core: Implementing the NHS Plan. London: Department of Health and Social Care; 2001.
- Information for Health. London: Department of Health and Social Care; 1998.
- Craig D, Brooks R. Plundering the Public Sector. London: Constable; 2006.
- Wanless D. Securing Our Future Health: Taking A Long-Term View. London: HM Treasury; 2002.
- Delivering Twenty First Century IT Support for the NHS: National Strategic Programme. London: Department of Health and Social Care; 2002.
- Nicholson D. Public Accounts Committee. The National Programme for IT in the NHS: Progress Since 2006. HC153. London: The Stationery Office; 2009.
- Cross M. Computer scientists call for audit of NHS IT programme. BMJ 2006;332. https://doi.org/10.1136/bmj.332.7547.930-c.
- Health Informatics Review. London: Department of Health and Social Care; 2008.
- The Power of Information. London: Department of Health and Social Care; 2012.
- Five Year Forward View. London: NHS England; 2014.
- BBC. NHS to Get £4bn in Drive for ‘Paperless’ Health Service 2016. www.bbc.co.uk/news/health-35514382 (accessed 23 April 2017).
- Edwards N. Sustainability and Transformation Plans: What We Know So Far. London: Nuffield Trust; 2016.
- Personalised Health and Care 2020. Leeds: National Information Board; 2014.
- Launch of The National Clinical Utilisation Framework: Guide for Clinical Commissioning Groups. London: NHS England; 2015.
- Dunleavy P, Margetts H, Bastow S, Tinkler J. Digital Era Governance. Oxford: Oxford University Press; 2006.
- NHS Institute for Innovation and Improvement. The Productive Ward n.d. http://webarchive.nationalarchives.gov.uk/20150401091939/https://www.institute.nhs.uk/quality_and_value/productivity_series/productive_ward.html (accessed 14 April 2018).
- Power M. The Risk Management of Everything: Rethinking the Politics of Uncertainty. London: Demos; 2004.
- Power M. Organized Uncertainty: Designing a World of Risk Management. Oxford: Oxford University Press; 2007.
- Vincent C, Amalberti R. Safer Healthcare. Strategies for the Real World. London: Health Foundation; 2016.
- Hollnagel E. Safety-I and Safety-II. The Past and Future of Safety Management. Farnham: Ashgate; 2014.
- Patient Experience Headlines Tool. n.d.
- Feldman M, Orlikowski W. Theorizing practice and practicing theory. Organization Science 2011;22:1240-53. https://doi.org/10.1287/orsc.1100.0612.
- Peacock R, Moore J, Keen J. Interim realities. Public Manag Rev 2012;14:1109-24. https://doi.org/10.1080/14719037.2012.657836.
- Sabel C, Heckscher C, Adler P. The Firm as a Collaborative Community. Oxford: Oxford University Press; 2006.
- Zuboff S. In the Age of the Smart Machine. New York, NY: Basic Books; 1989.
- Personalised Health and Care 2020. London: Department of Health and Social Care; 2014.
- NHS England. Launch of the National Clinical Utilisation Review Framework n.d. www.england.nhs.uk/commissioning/wp-content/uploads/sites/12/2015/08/cur-ccg-guide.pdf (accessed May 2018).
- Halberstam D. The Best and The Brightest. New York, NY: Ballantine; 1993.
- Schlosser E. Command and Control. London: Penguin; 2014.
- Medina E. Cybernetic Revolutionaries. Cambridge, MA: MIT Press; 2011.
Appendix 1 Letter to University of Leeds Medicine and Health Research Ethics Committee
Appendix 2 Consent form and information sheet
Appendix 3 Interview topic guide
Appendix 4 Publication alerts
We did not identify a literature review as one of our ‘deliverables’. We did, however, undertake a search, at the start of the study, of papers published since 2000. The search covered PubMed, MEDLINE, Web of Science, CINAHL, ProQuest, Science Direct and the British Nursing Index (BNI). As noted in Chapter 1, there were very few relevant papers, and most of them were identified in the 2014 systematic review by Dowding et al. 20
We also set up publication alerts so that we could identify relevant new papers published in the course of the study, covering the databases listed above, and also using RSS feeds and Twitter feeds for relevant organisations, for example Digital Health Intelligence and the Health Foundation.
Domain | Acute hospital wards | ‘Hospitals in context’
---|---|---
Information systems | How formal information systems, including dashboards, are used on wards to monitor and manage services | The use of management information, including dashboards, and performance indicators more broadly
IT | The implementation and use of IT systems on wards | The use of IT systems across hospitals and beyond, including IT integration
Organisation and culture | Management of wards; ward culture | Hospital culture; monitoring hospital performance
Appendix 5 Letter from University of Leeds Medicine and Health Research Ethics Committee
Appendix 6 Consent form and information sheet
Appendix 7 Board paper analysis: Tables 13–17
Board reporting of serious incidents, complaints, mortality, Safety Thermometer, pain, nutrition and vital signs/NEWS data, by time period:

April 2013–April 2014
- Serious incidents: data 1–2 months in arrears; data source identified as Datix; data show trust and hospital figures; visual images include a few bar graphs, plotted line graphs and tables; the visual images were accompanied by a text box briefly describing the cause of each SI reported in the previous month; in the appendix each SI is summarised, followed by the identified recommendations from the root cause analysis. January 2014: Medical Directors’ Report – each SI is summarised, followed by recommendations from the RCA.
- Complaints: data 1–2 months in arrears; data show hospital and directorate figures; visual images include plotted line graphs and tables; the visual images are accompanied by a short paragraph explaining the data and the key points. January 2014: Patient Experience Quarterly Report – shows the number of complaints from the previous quarter, and highlights the main themes from the complaints and examples of learning and action plans.
- Mortality: data show trust and hospital figures for SHMI, RAMI and CMR; data 1 month in arrears (CMR), 6 months in arrears (RAMI) and 5–10 months in arrears (SHMI); a few visual images, including plotted line graphs, bar charts and SPC charts; data are plotted monthly, over long periods of time (13–17 months); data are benchmarked against other trusts’ data (SHMI); SHMI data are accompanied by qualitative paragraphs explaining how the trust is ranked in comparison with other trusts; other visual images are accompanied by a paragraph explaining the data and their source.
- Safety Thermometer: data 1 month in arrears; data show trust and hospital figures; data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.); data are presented in many plotted line graphs; the data are presented monthly, over 10–15 months; alongside each graph there is an explanation of what the graph shows.
- Pain: no data.
- Nutrition data: data 1–2 months in arrears; data show trust and hospital figures; visual images include plotted line graphs and tables; data show compliance with nutrition risk assessments; data are benchmarked against the trust’s internal targets; little to no qualitative data. January 2014: Patient Experience Quarterly Report – features the findings of the internal beverage and food trial.
- Vital signs/NEWS: no data.

April 2014–April 2015
- Serious incidents: data 1 month in arrears; data source identified as Datix; data show hospital figures; visual images include bar graphs; the bar graphs are accompanied by a text box briefly describing the cause of each SI reported in the previous month. January 2015: Serious Incident Summary Report – incidents are presented in detail for learning and dissemination.
- Complaints: data 1 month in arrears; data source identified as Datix; data show hospital and directorate figures; many visual images, including plotted line graphs and tables; the visual images are accompanied by a short paragraph explaining the data and the key points. Regular Patient Experience Quarterly Report – shows the number of complaints from previous quarters, and highlights the main themes from the complaints and examples of learning and action plans.
- Mortality: data show trust and hospital figures for SHMI, RAMI and CMR (trust figures); data 1 month in arrears (inpatient deaths), 4 months in arrears (RAMI) and 10 months in arrears (SHMI); visual images include plotted line graphs, bar charts and tables; data are plotted monthly, over long periods of time (≥ 36 months); data are benchmarked against other trusts’ data (RAMI) and national averages (SHMI); qualitative information describes the SHMI and RAMI indicators and the analysis of the trust’s performance.
- Safety Thermometer: data 1 month in arrears; data show trust and hospital figures; data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.); data are presented in many plotted line graphs; the data are presented monthly, over 12–21 months; data are benchmarked against the national targets; alongside each graph there is an explanation of what the graph shows (excluding January 2015 – no data).
- Pain: no data.
- Nutrition data: no data. April 2014: CQC Essential Standards of Q&S – dashboard/table showing whether or not the trust was meeting nutritional needs.
- Vital signs/NEWS: January 2015: 6-monthly review of progress with the trust’s Quality and Patient Safety priorities for 2014–2015 – one plotted line graph showing the percentage of routine observations completed on time on all wards; the trust’s target is also plotted on the graph.

April 2015–April 2016
- Serious incidents: data 1 month in arrears; data source identified as Datix; data show hospital figures; visual images include bar graphs and tables; no qualitative data accompany the visual images.
- Complaints: data 1 month in arrears; data source identified as Datix; data show hospital and directorate figures; many visual images, including plotted line graphs, bar graphs and tables; the visual images are accompanied by a paragraph explaining the data, key points, action plans and any comments/concerns about complaints. Regular Patient Experience Quarterly Report – shows the number of complaints from previous quarters, and highlights the main themes from the complaints and examples of learning/action plans.
- Mortality: data show trust and hospital figures for SHMI, RAMI and CMR (trust figures); data 1 month in arrears (inpatient deaths), 4–7 months in arrears (RAMI) and 10 months in arrears (SHMI); many visual images, including plotted line graphs, bar charts and tables; data are plotted monthly, over long periods of time (≥ 12 months); data are benchmarked against other trusts’ data (RAMI) and national averages (SHMI); qualitative information describes the SHMI and RAMI indicators and the analysis of the trust’s performance.
- Safety Thermometer: data 1 month in arrears; data show trust and hospital figures; data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.); data are presented in many plotted line graphs and tables; the data are presented monthly, over 12 months; data are benchmarked against the national targets; before the visual images there is a summary of the ST measures and results for the current month, and alongside each graph there is a small commentary.
- Pain: no data.
- Nutrition data: no data.
- Vital signs/NEWS: data 1 month in arrears; data show trust, hospital and department figures; summary tables and plotted line graphs; data show NEWS completed within 1 hour of the prescribed time, the percentage of ED admissions with a NEWS score, weekly ED admissions with a NEWS score, patients with no NEWS, patients with NEWS and the percentage with NEWS; data are benchmarked against the trust’s internal target; no qualitative data.
Time period | Serious incidents | Complaints | Mortality | Safety Thermometer | Pain management | Nutrition data | Vital signs/NEWS | |||
---|---|---|---|---|---|---|---|---|---|---|
April 2013–April 2014 | No data |
Data 3–4 months in arrears Data show trust, directorate and specialty figures Visual images include tables and plotted line graphs Data are benchmarked against the internal trust target The visual images were accompanied by qualitative text explaining the trusts trends and themes in complaints and updates/actions with the complaints indicator |
No data October 2013: Dr Foster Report – trust HSMR and SHMI figures. Data are 10 months in arrears and are benchmarked against the national targets. Qualitative data include key findings and analysis published by Dr Foster |
October 2013: Dr Foster Report – trust HSMR and SHMI figures. Data are 10 months in arrears and are benchmarked against the national targets. Qualitative data include key findings and analysis published by Dr Foster | No data | No data | No data | No data | ||
October 2013: Dr Foster Report – trust HSMR and SHMI figures. Data are 10 months in arrears and are benchmarked against the national targets. Qualitative data include key findings and analysis published by Dr Foster | ||||||||||
April 2014–April 2015 |
Data 1–2 months in arrears. Data show trust and directorate figures. Visual images include tables and bar graphs. Data show SIs that year, categorised by directorate, and SI occurrence per 1000 bed-days (a sketch of this rate calculation follows the table). Data are benchmarked against the national target. The visual images are accompanied by qualitative data, including root cause analyses, descriptions of SIs, key recommendations made and actions taken | Data 2–7 months in arrears. Data show trust, directorate, ward and specialty figures. Vast number of visual images, including plotted line graphs, bar graphs, SPC charts and tables. Data are presented over a long time period. The visual images are accompanied by text covering a summary of the data, data trends, how the data are monitored, data targets and explanations of changes in the data | No data | Data 1 month in arrears. Data show trust ST figures. Data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.). Data are presented in an overview RAG table and bar graphs. The data are presented monthly, over 13 months. Alongside each image is a comments box for any comments on the data, targets, interventions and improvement plans (October 2014 only) | Data 1–2 months in arrears. Data show trust and directorate figures for pain management assessment compliance, alongside other nursing metrics. Visual images include bar graphs and RAG ‘overview of change’ tables. The tables and graphs are accompanied by qualitative text explaining the data, what the data are being used to do and the ongoing development of the nursing metrics (no data for April 2014) | Data 1–2 months in arrears. Data show trust, directorate and ward figures for nutritional assessment compliance, alongside other nursing metrics. Visual images include bar graphs and RAG ‘overview of change’ tables. The tables and graphs are accompanied by qualitative text explaining the data, what the data are being used to do and the ongoing development of the nursing metrics (no data for April 2014) | Data 1–2 months in arrears. Data show trust, directorate and ward figures for vital sign assessment compliance, alongside other nursing metrics. Visual images include bar graphs and RAG ‘overview of change’ tables. The tables and graphs are accompanied by qualitative text explaining the data, what the data are being used to do and the ongoing development of the nursing metrics (no data for April 2014)
April 2015–April 2016 | Data 1–2 months in arrears. Data show trust and directorate figures. Visual images include tables, bar graphs and SPC charts. Data show SIs that year, categorised by directorate, and SI occurrence per 1000 bed-days. Data are benchmarked against the national target. The visual images are accompanied by qualitative data, including root cause analyses, descriptions of SIs, key recommendations made and actions taken. July 2015: Learning from experience – a review of the methods for sharing learning and lessons from serious incidents and complaints | Data 2 months in arrears. Data show trust figures. Visual images include SPC charts, bar graphs and tables. A small amount of text accompanies the images, stating the trust’s improvement goals in relation to complaints (excluding July 2015 and January 2016) | No data. Mortality Review Group papers/minutes; the papers include a summary of the mortality rates presented to the Mortality Review Group | Data 1–2 months in arrears. Data show trust ST figures. Data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.). Data are presented in an overview RAG table and bar graphs. The data are presented monthly, over 13 months. Alongside each image is a comments box for any comments on the data, targets, interventions and improvement plans | No data. July 2015: Inpatient Survey 2014 – percentage of patients reporting experiencing pain; the statistics are accompanied by an action plan | No data | No data
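The serious incidents column above reports ‘SI occurrence per 1000 bed-days’ benchmarked against a national target. As a minimal sketch of that calculation, with invented monthly figures and an illustrative target (none of the values below come from the study trusts):

```python
# Illustrative sketch only: an SI rate per 1000 occupied bed-days, flagged
# against a benchmark. All figures and the target are invented examples.

def si_rate_per_1000_bed_days(si_count: int, bed_days: int) -> float:
    """Serious incidents per 1000 occupied bed-days."""
    if bed_days <= 0:
        raise ValueError("bed_days must be positive")
    return 1000 * si_count / bed_days

rate = si_rate_per_1000_bed_days(si_count=12, bed_days=28_400)  # example month
NATIONAL_TARGET = 0.5  # hypothetical benchmark, per 1000 bed-days

status = "above" if rate > NATIONAL_TARGET else "at or below"
print(f"SI rate: {rate:.2f} per 1000 bed-days ({status} target)")
```

Normalising by bed-days, rather than reporting raw counts, is what allows the benchmarking against other directorates and the national target described in the cells above.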
Time period | Serious incidents | Complaints | Mortality | Safety Thermometer | Pain management | Nutrition data | Vital signs/NEWS
---|---|---|---|---|---|---|---
April 2013–April 2014 | No data | Data 1–4 months in arrears. Data source is identified as Datix. Data show trust, hospital and specialty figures. Many bar graphs, plotted line graphs, SPC charts and tables. Data are presented over a long time period. Data are benchmarked against other hospital sites and other specialties. The visual images are accompanied by qualitative text outlining trends in the data, examples of complaints, complaint themes identified and actions taken as a result | Data show trust and hospital figures for HSMR, SHMI, provisional SHMI, RAMI and CMR. Data 4–5 months in arrears (HSMR); 3–4 months in arrears (CMR/RAMI); 10 months in arrears (SHMI); 5–6 months in arrears (provisional SHMI). Vast number of plotted line graphs, bar charts, SPC charts, web graphs and tables. Data are plotted monthly, over long periods of time. Data are benchmarked against other trusts’ data and national targets. Each visual image has a commentary paragraph below it, highlighting the overall trends in the data. January 2014: Mortality Improvement summary progress report – a summary of the trust’s latest position on mortality improvement | Data 1 month in arrears. Data show trust and hospital figures. Data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.). Data are presented in many RAG tables and bar graphs. The data are presented monthly, over 2–8 months. Qualitative text surrounds each image, highlighting any comments regarding the data; the comments are followed by any actions being taken in this area by the trust | No data | No data. April 2013: Nutrition Report. October 2013: action plan from the inpatient survey addressing patients’ responses to Q21 (‘How would you rate the hospital food?’) and Q23 (‘Did you get enough help from staff to eat your meals?’); each question has an action plan setting out how to improve results and how the trust will monitor them in future | April 2013 only. Data 2 months in arrears. Data show trust and hospital figures. Summary table and plotted line graphs. Data show whether or not the patient observations have been recorded and whether or not a completed NEWS score has been recorded with each set of observations (a sketch of this completeness check follows the table). Qualitative text accompanies each image, highlighting any information about the indicator and any comments regarding the data
April 2014–April 2015 | No data | Data 2 months in arrears. Data source is identified as Datix. Data show trust figures, trust targets and mean averages. Visual images are mainly plotted line graphs, SPC charts and tables. Data are presented over a long time period. The visual images are accompanied by text outlining trends in the data and comments on the most recent month’s performance | Data show trust and hospital figures for HSMR, SHMI, provisional SHMI, RAMI and CMR; provisional SHMI shows data by diagnostic group. Data 4–5 months in arrears (HSMR); 3 months in arrears (CMR/RAMI); 10 months in arrears (SHMI); 5–6 months in arrears (provisional SHMI). Vast number of plotted line graphs, bar charts, SPC charts, web graphs, pie charts and tables. Data are plotted monthly, over long periods of time. Data are benchmarked against other trusts’ data and national targets. Each visual image has a commentary paragraph below it, highlighting the overall trends in the data. Each month there is also the Mortality Improvement summary progress report, a summary of the trust’s latest position on mortality improvement | Data 2 months in arrears. Data show trust and hospital figures. Data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.). Data are presented in many RAG tables and bar graphs. The data are presented monthly, over 3–11 months. Qualitative text surrounds each image, highlighting any comments regarding the data; the comments are followed by any actions being taken in this area by the trust | Data 2 months in arrears. Data show trust and hospital figures. Visual images include tables and plotted line graphs. Indicator PE5 (pain management) – patient felt staff did everything to help control pain/improve comfort. Data are benchmarked against the trust’s target. Qualitative text accompanies the table, explaining the indicator and any comments regarding the data | Data 2 months in arrears. Data show trust and hospital figures. Visual images include tables and plotted line graphs. Indicator PS7/8 – whether or not the nutrition care pathway was followed. Data are benchmarked against the trust’s targets and other hospital data. Qualitative text accompanies each graph, highlighting any comments regarding the data; the comments are followed by any actions being taken in this area by the trust. October 2014: PLACE (Patient-Led Assessment of the Care Environment) report feedback; the report has a section on food, nutrition and hydration services | Data 2 months in arrears. Data show trust and hospital figures. Summary table and plotted line graphs. Data show whether or not a completed NEWS score has been recorded with each set of observations, and whether the appropriate actions have been taken with the NEWS score recorded. Qualitative text accompanies each image, highlighting any information about the indicator and any comments regarding the data. Data are benchmarked against other hospital data and the trust’s data threshold
April 2015–April 2016 | No data | Data 2 months in arrears. Data source is identified as Datix. Data show trust figures, trust targets and mean averages. Visual images are mainly plotted line graphs, SPC charts and tables. Data are presented over a long time period. The visual images are accompanied by text outlining trends in the data, comments on that month’s performance, analysis being done and actions being taken | Data show trust and hospital figures for HSMR, SHMI, provisional SHMI, RAMI and CMR; provisional SHMI shows data by diagnostic group. Data 4–5 months in arrears (HSMR); 3 months in arrears (RAMI); 2 months in arrears (CMR); 10 months in arrears (SHMI); 5–6 months in arrears (provisional SHMI). Vast number of plotted line graphs, bar charts, SPC charts, web graphs, pie charts and tables. Data are plotted monthly, over long periods of time. Data are benchmarked against other trusts’ data and national targets. Each visual image has a commentary paragraph below it, highlighting the overall trends in the data. Each month there is also the Mortality Improvement summary progress report, a summary of the trust’s latest position on mortality improvement | Data 2–4 months in arrears. Data show trust and hospital figures. Data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.). Data are presented in many RAG tables, plotted line graphs and bar graphs. The data are presented monthly, over 5–23 months. Data are benchmarked against national averages. Qualitative text surrounds each image, highlighting any comments regarding the data; the comments are followed by any actions being taken in this area by the trust | No data. January 2015: Inpatient Survey action plan, discussing how the trust will ensure that all patients feel staff have done everything to control their pain | Data 2 months in arrears. Data show trust and hospital figures. Visual images include tables and plotted line graphs. Indicator PS7/8 – whether or not the nutrition care pathway was followed. Data are benchmarked against the trust’s targets and other hospital data. Qualitative text accompanies each graph, highlighting any comments regarding the data; the comments are followed by any actions being taken in this area by the trust. April 2015 and October 2015: Highlight Report from National Nutrition and Hydration Week | Data 2 months in arrears. Data show trust and hospital figures. Summary table and plotted line graphs. Data show whether or not a completed NEWS score has been recorded with each set of observations, and whether or not the appropriate actions have been taken with the NEWS score recorded. Qualitative text accompanies each image, highlighting any information about the indicator and any comments regarding the data. Data are benchmarked against other hospital data and the trust’s data threshold
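The vital signs/NEWS cells above rest on a simple completeness rule: every set of recorded observations should have a completed NEWS score with it. The following is a minimal sketch of that audit logic; the field names and records are invented, since the report does not specify the trust’s data model:

```python
# Illustrative sketch only: the kind of completeness check behind the
# 'completed NEWS score recorded with each set of observations' indicator.
# Field names and example records are invented.

REQUIRED_FIELDS = {"resp_rate", "spo2", "temp", "systolic_bp", "pulse", "consciousness"}

observations = [  # one dict per recorded observation set (example data)
    {"resp_rate": 18, "spo2": 97, "temp": 36.8, "systolic_bp": 122, "pulse": 74,
     "consciousness": "A", "news_total": 0},
    {"resp_rate": 22, "spo2": 94, "temp": 38.1, "systolic_bp": 101, "pulse": 96,
     "consciousness": "A"},  # observations taken, but no NEWS total recorded
]

def is_complete(obs: dict) -> bool:
    """Complete = all core observations present and a NEWS total recorded."""
    return REQUIRED_FIELDS <= obs.keys() and "news_total" in obs

compliant = sum(is_complete(o) for o in observations)
print(f"{compliant}/{len(observations)} observation sets "
      f"({100 * compliant / len(observations):.0f}%) had a completed NEWS score")
```

The percentage this produces is the kind of figure the summary tables and plotted line graphs described above would track month by month, against the trust’s data threshold.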
Time period | Serious incidents | Complaints | Mortality | Safety Thermometer | Pain management | Nutrition data | Vital signs/NEWS
---|---|---|---|---|---|---|---
April 2013–April 2014 | Data 1–2 months in arrears. Bar graphs/tables showing SIs over several months (excluding January 2014). Data show trust figures and individual patient data. Little commentary accompanies the visual images; extra information describing SIs can be found in exception reports/appendices | Data 1–2 months in arrears. Data show trust, directorate and ward figures. Bar graphs and tables. Some images are accompanied by commentary, namely trends in the data, complaint themes and feedback from the departments. October 2013: Complaints Annual Report – an in-depth review of the past year’s complaints; the data are broken down into directorates and wards, and the report gives examples of complaints, the themes of the complaints, response times/performance and action plans | Data show trust figures for HSMR, SHMI and CMR. Data 5 months in arrears (HSMR/CMR); 9–10 months in arrears (SHMI). Plotted line graphs, bar charts, funnel plot graphs and tables. Data are benchmarked against other trusts’ data. Some qualitative commentary accompanies the images – action plans and learning. October 2013: the CHKS Review – summary and action plans based on the trust’s adjusted mortality statistics; images are a mixture of scatter graphs, line graphs, bar graphs and tables showing data such as activity and deaths by age group, emergency admission deaths, etc., with feedback from CHKS on the trust’s trends and clinical action plans | No data | No data | No data | No data
April 2014–April 2015 | Data 1–2 months in arrears. Very few visual images. Data show individual patient data. SI reports individually describe the SIs from the past 1–2 months | Data 1–2 months in arrears. Data show trust figures. Bar graphs and tables, with a qualitative section in the tables. October 2014: Closed Complaints Report – shows complaints by grade (RAG), which CBU the complaint was made in, a description of the complaint, lessons learned, outcomes and actions taken | Data show trust figures for HSMR, SHMI and CMR. Bar graphs, plotted line graphs, tables and SPC charts. Data 4–6 months in arrears (HSMR); 3–10 months in arrears (SHMI); 2 months in arrears (CMR). Data are benchmarked against national targets, the trust’s target trajectory and other NHS trusts. A small amount of commentary accompanies the images, explaining trends in the data. April 2014: deaths review – an independent review of deaths occurring at the trust in April 2013, conducted because of a higher than expected mortality rate observed in both HSMR and SHMI. July 2014: Advancing Quality Alliance Mortality Review | Data 1 month in arrears. Dashboard data show ST measures (harm-free care, all pressure ulcers, falls with harm, etc.). The data are presented monthly, over 3 months, with a RAG arrow showing whether the data this month are better or worse than the previous month. No commentary accompanies the dashboard | No data | No data. April 2014: CQC Essential Standards of Quality and Safety – dashboard/table showing whether the trust was meeting nutritional needs | No data. January 2015: NEWS audit – key themes of the NEWS audit and key recommendations
April 2015–April 2016 | Data 1 month in arrears. Bar charts, tables and heat maps showing SIs over 12–14 months. Data show trust, directorate and ward figures. Commentary accompanies the visual images, describing the nature of the SIs. Data are benchmarked against national targets. SI reports individually describe the SIs from the past 1–2 months | Data 1 month in arrears. Data show trust figures. Bar graphs, line graphs and tables, with a qualitative section in the tables | Data show trust figures for HSMR, SHMI and CMR. Bar graphs, plotted line graphs, tables and SPC charts (the control limits behind such charts are sketched after this table), plus a performance summary table showing SHMI and HSMR for a rolling 12-month period. Data 4 months in arrears (HSMR); 9–10 months in arrears (SHMI); 1–2 months in arrears (CMR). Data are benchmarked against national targets, the trust’s target trajectory and other NHS trusts. A small amount of commentary accompanies the images, explaining trends in the data | No data. April 2015: Safety Thermometer Performance Report | No data. July 2015: NHS National Children’s Inpatient Survey – data showed whether or not patients felt ‘staff did everything to ease the pain’ | No data | No data
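SPC charts recur throughout these board papers, including the mortality cells above. As a minimal sketch of how the control limits behind such a chart can be computed, the following uses an individuals (XmR) chart, one common choice for monthly counts; the counts themselves are invented:

```python
# Illustrative sketch only: XmR (individuals) control limits of the kind
# that sit behind the SPC charts in these papers. The monthly counts are
# invented example data, not figures from the study trusts.

from statistics import mean

counts = [34, 29, 41, 38, 30, 35, 44, 28, 33, 39, 31, 36]  # e.g. monthly complaints

centre = mean(counts)
# Mean moving range between consecutive months.
mr_bar = mean(abs(a - b) for a, b in zip(counts, counts[1:]))
# Standard XmR constant: sigma is estimated as mr_bar / 1.128 (d2 for n = 2).
sigma = mr_bar / 1.128
ucl, lcl = centre + 3 * sigma, max(0.0, centre - 3 * sigma)

for month, c in enumerate(counts, start=1):
    flag = " <- special-cause signal" if not (lcl <= c <= ucl) else ""
    print(f"month {month:2d}: {c:3d}{flag}")
print(f"centre = {centre:.1f}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")
```

The value of plotting limits like these, rather than a bare trend line, is that month-to-month noise is distinguished from points that genuinely warrant investigation, which is the purpose the commentary paragraphs in these papers serve.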
Feature | April 2015 | July 2016
---|---|---
Length | 17 pages (including front page). pp. 2–3: overview report (national, local, improvement goals, specialty-specific measures, CQUIN indicators); subsequent pages give detail on each | 15 pages. pp. 3–4: QOF and Quality Improvement Ambition; pp. 5–14: indicators by QOF domains 1, 4 and 5 (domains 2 and 3 remain to be addressed); p. 15 lists work to be done (QOF domains 2–3, further work in domains 1, 4 and 5, and CQUINs)
Time period | April 2014 to April 2015 in general; some indicators cover different periods | May 2015 to May 2016 in general; some indicators cover different periods
Types of charts | RAG rating, current and trend (a sketch of this logic follows the table); easy to read. Trend lines (bar or graph) against target, where appropriate, for example MRSA and patients waiting > 3 months for a follow-up outpatient appointment. No confidence intervals or SPC charts | Graphs and bar charts against target. Trend lines and SPC charts, with confidence intervals. Heat maps (RAG rated)
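The April 2015 format relied on RAG ratings showing both the current position and the trend. A minimal sketch of that logic, assuming a ‘higher is worse’ indicator (such as falls with harm) and invented thresholds:

```python
# Illustrative sketch only: a RAG (red/amber/green) rating with a trend
# indicator. The target and tolerance are invented; 'higher is worse'
# is assumed, as for falls or pressure-ulcer counts.

def rag(value: float, target: float, tolerance: float = 0.25) -> str:
    """Green at/below target, amber within tolerance above it, else red."""
    if value <= target:
        return "green"
    return "amber" if value <= target * (1 + tolerance) else "red"

def trend(current: float, previous: float) -> str:
    """Shows whether this month is better or worse than the previous month."""
    if current < previous:
        return "improving"
    return "deteriorating" if current > previous else "no change"

# Example: 7 falls with harm this month against a hypothetical target of 6,
# down from 9 the month before.
print(rag(7, target=6), trend(7, previous=9))  # -> amber improving
```

A rating like this is easy to read at board level, but, as the July 2016 column notes, it carries no confidence intervals; the later format added SPC charts to show whether an apparent deterioration was anything more than noise.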
List of abbreviations
- CCG: Clinical Commissioning Group
- CD: compact disc
- CQC: Care Quality Commission
- CQUIN: Commissioning for Quality and Innovation
- EHR: electronic health record
- FT: foundation trust
- HCA: health-care assistant
- HES: Hospital Episode Statistics
- HSCIC: Health and Social Care Information Centre
- HSDR: Health Services and Delivery Research
- HSMR: Hospital Standardised Mortality Ratio
- IT: information technology
- LoW: laptop on wheels
- NED: non-executive director
- NEWS: National Early Warning Score
- PAS: Patient Administration Systems
- PPI: patient and public involvement
- RAG: red, amber, green
- SHMI: Summary Hospital-level Mortality Indicator
- SUS: Secondary Uses Service
- TDA: Trust Development Authority
- VTE: venous thromboembolism