Notes
Article history
The research reported in this issue of the journal was funded by the HSDR programme or one of its preceding programmes as project number NIHR134314. The contractual start date was in January 2021. The final report began editorial review in April 2022 and was accepted for publication in October 2022. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HSDR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Permissions
Copyright statement
Copyright © 2023 Glasby et al. This work was produced by Glasby et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This is an Open Access publication distributed under the terms of the Creative Commons Attribution CC BY 4.0 licence, which permits unrestricted use, distribution, reproduction and adaption in any medium and for any purpose provided that it is properly attributed. See: https://creativecommons.org/licenses/by/4.0/. For attribution the title, original author(s), the publication source – NIHR Journals Library, and the DOI of the publication must be cited.
Chapter 1 Introduction and context
Box 1 provides a summary of Chapter 1 of this report.
Summary of key points
- The social care system in England is under significant pressure, and there are funding gaps and workforce challenges that make it difficult to keep up with rising demands. The exit of the UK from the EU, and the COVID-19 pandemic, have created additional pressures.
- Technology has been suggested as one way to improve care and address pressures in the health and social care system. It was specifically highlighted as a priority within recent social care reforms as a way to support people living independently within their homes.
- Sensor-based technology with artificial intelligence capabilities is one type of technology that may be useful in some contexts. There is evidence to suggest that this type of technology can potentially improve some aspects of care and care planning, although there are key gaps in evidence that need to be addressed.
- The uptake and sustainability of technology is influenced by wider factors, including implementation processes, support available for front-line staff, and engagement with people who draw on care and support and their carers.
- This study is based on scoping work identifying a key gap in evaluations of artificial intelligence in social care. It focuses on decision-making and implementation processes.
Social care in England
‘Social care’ refers to the practical assistance given to people with disabilities, people with mental health problems, people with learning difficulties, frail older people and unpaid carers to manage activities of daily living (getting washed and dressed, eating, going to the toilet, etc.), as well as support to have choice and control over their own lives. This can take the form of direct services (home care, day care, residential care, etc.) or a direct payment (where the person receives funding with which to design their own care and support, potentially employing their own personal assistants). Social care is funded and organised by local councils (known as ‘local authorities’ in England, and sometimes as ‘municipalities’ internationally), as well as by people who are funding and/or organising their own care, and direct services are delivered by a mixed economy of private, voluntary and public sector providers.
Even prior to COVID-19, the social care system in England was under significant pressure. Demand for support is increasing: there were over 1.9 million requests for adult social care in 2019–20, a figure that has risen by 120,000 since 2015–16. 1 Growing numbers of older people nationally, as well as of working-age adults with care and support needs, have contributed to this trend. Over the same period, the number of adults receiving long-term care has fallen,1 indicating growing unmet and undermet need. In recent years, it has been estimated that there are 1.2 million older people in England with unmet care needs. 2
Funding gaps are an important issue in social care. While spending increased in 2019–20 compared with previous years, this should be understood within a context of years of low funding/cuts followed by only a very modest increase from 2015 to 2020. Compared with 2010–11, spending in 2019–20 was higher in real terms, but lower on a per-person basis. Increasing demands and rising costs (even adjusting for inflation) have contributed to this funding gap, along with general financial pressures on local authorities that commission social care. 1 This highlights the need for innovation and renders the long-term sustainability of social care a pressing issue in policy and practice. 3
The social care workforce is also facing challenges. Adult social care is characterised by large numbers of unpaid carers, and by workforce issues among paid carers, including high staff turnover and poor conditions. 2 The vacancy rate in social care decreased slightly in 2019–20 compared with 2018–19, although it was still high compared with 2012–13 and with the overall unemployment rate, potentially made worse by low pay, low status and poor levels of public understanding. 1 To meet projected demand for social care, a 2021 report estimated that an extra 627,000 social care staff would be needed, representing 55% growth over the next decade. This far exceeds the growth in the social care workforce over the past 10 years. 4 The exit of the UK from the EU may also affect the social care workforce, which relies heavily on migrant workers from EU and non-EU countries: 16% of the workforce are non-British, including 7% from European Economic Area countries. 5
The fragility of the social care sector has been exacerbated by the COVID-19 pandemic. 6 Social care workers are at an increased risk of infection and death from COVID-19, and the pandemic has likely exacerbated issues in social care around underfunding, unmet social care needs, the burden on unpaid carers, and widespread social and economic inequalities. 7 Although there were fewer requests for social care during the COVID-19 pandemic, we do not yet know what impact there will be in the long term (e.g. in terms of a ‘backlog’ of need to be addressed, an exacerbation of need that was temporarily unmet during lockdown, and/or the long-term implications for public finances). 1 The impacts of ‘long COVID’ are also still unclear and may impact social care needs in the future.
Digital technology in health and social care
As social care in England faces growing pressures, there has been a turn towards new technologies as a potential solution that might ease pressures across a number of public services and promote independence. While this has often focused more on the health service, there has been increasing awareness of the potential of digital technology for social care in recent years. 8 In 2018, the Health Secretary pledged £500 million to transform care technology within both health and social care. The announcement of this money included references to a ‘tech transformation’, with funding set aside for hospital- and home-based technology, electronic systems within the NHS, finding new technology with applications in health and social care, and driving culture change. 9
For people not from a technology background, the language used in different policy announcements and in the broader literature can often be confusing, with lots of different terms used, sometimes interchangeably. Before we explore recent policy and the underpinning evidence in more detail, Box 2 sets out some key definitions.
Artificial intelligence (AI) | The intelligence demonstrated by a machine or software system which can learn and make decisions for itself.
Algorithm | A set of instructions, rules or procedures used by a software system to accomplish a set task.
Application programming interface | An interface that allows software programs/apps to communicate with one another regardless of how each application was originally designed.
Assisted living technology/assistive technology | Technology used as part of a range of services that help people maintain independence and improve a range of outcomes. It includes both telehealth (remote monitoring for clinical biomarkers) and telecare (e.g. alarms, sensors, reminders) designed to deliver health and social care services to the home.
Broadband | High-speed internet access in which a single cable can carry a large amount of data at once.
Clinical informatics | The application of data and information technology to improve patient and population health, care and well-being outcomes. It can be used to advance treatment and the delivery of personalised, coordinated support from health and social care.
Data | Any information stored on a computer that is not the computer code. It can be either structured or unstructured. Structured data (e.g. ‘patient records’) can be organised and used for multiple purposes, whereas unstructured data are normally used for a single purpose and include documents, pictures, videos or sound recordings.
Data protection impact assessment (DPIA) | A process that helps organisations identify and minimise risks that result from data processing. DPIAs are usually undertaken when introducing new data processing systems, processes or technologies.
Digital literacy | The ability to use digital technology and communication tools to seek, find, understand and evaluate information.
Electronic health record | A comprehensive medical record of the past and present physical and mental state of health of an individual in electronic form.
GP Connect | A service that allows GP practices and authorised clinical staff to share and view GP practice clinical information and data between IT systems.
Graphical user interface | A visual interface that enables a person to interact with electronic devices through graphical icons and pointing devices (e.g. a mouse).
Health and Social Care Network (HSCN) | A data network for health and care organisations which replaced the NHS network, N3.
Information governance | The overall strategy for securely and appropriately managing information/data. It provides a framework to bring together all the rules and guidance, whether legal or simply best practice, that apply to the handling of information.
Internet of things | The ability of everyday objects (rather than computers and devices) to connect to the internet. Examples include kettles, fridges, televisions and wearable technology such as smartwatches and fitness trackers.
Interoperability | The ability of computer systems or software to exchange and make use of information.
Machine learning | The development of computer modelling and algorithms that use data and learn from them to produce predictive models with minimal human intervention.
NHS Digital Social Care Programme | A framework developed to support front-line staff, people using services, and commissioners to maximise digital opportunities in health and social care.
NHSX | Unit within the UK government responsible for policy and best practice around technology, digital and data within the NHS. This was absorbed into the NHS Transformation Directorate in 2022.
(Digital) platform | Includes both hardware (the device you are using) and software (the operating system you are using) on which applications can run.
Predictive analytics | The process of using historical data, statistical algorithms and machine learning techniques to predict future activity, behaviour and trends.
Remote monitoring | The process of using technology to monitor individuals in non-clinical environments, such as in the home, assisted living, or care home settings. It includes sensors and wearable devices.
Surveillance technology | Technology used to observe people or places, including CCTV, cameras and microphones.
Telecare | Services offering remote care, often for frail older people and people with cognitive or physical impairments, providing the reassurance needed to allow them to remain living in their own homes. Typically, a monitoring service is provided which will escalate alarm activations to a named responder or, if appropriate, the emergency services.
Telehealth | The use of electronic sensors or equipment that monitor people’s health in their own home/community (e.g. equipment to monitor vital signs such as blood pressure, blood oxygen levels or weight). These measures are then automatically transmitted to a clinician who can observe health status without the person leaving their home environment.
Use case | A use case identifies, clarifies and organises system requirements. It is made up of a set of possible sequences of interactions between systems and users in a particular environment related to a specified goal.
Wearable technology (‘wearables’) | Electronic devices that can be incorporated into clothing or worn on the body as implants, or accessories that can send and receive data via the internet, often via smartphones.
Wi-Fi | A wireless network which enables computers and mobile devices to connect over a wireless signal to the internet without using wires or cables.
The NHS Long Term Plan, published in 2019, promoted digitally-enabled care as one of its central tenets. In particular, the Plan mentions that technology can help automate some aspects of care, improve the quality of care and free up staff to do other tasks. The Long Term Plan also highlights that home-based monitoring equipment can play a key part in preventing hospital admission, and identifies AI-based technology as a practical solution to drive digital transformation in the NHS. 10 In line with this, the Government Digital Service and the Office for Artificial Intelligence have published recommendations for public sector organisations to: assess whether AI has the potential to assist services to meet the needs of their users; describe how the public sector may best use AI; support the safe, fair and ethical adoption of AI; and guide organisations on how to plan and prepare for the implementation of AI. 11
In the same year, NHSX was created to take forward digital transformation within the NHS, signalling further commitment to digitally-enabled care within the health and social care system. It was recently announced that NHSX and NHS Digital (a non-departmental body created in 2013 to support information, data and information technology (IT) systems within the NHS) will merge into NHS England. According to representatives from the Department of Health and Social Care, this merger will help to accelerate digital transformation within the NHS. 12 Digital Social Care was also created in 2019 to support and advise the social care sector on technology. 8
In 2019, Eric Topol led an independent review on how to prepare the health-care workforce to deliver digitally-enabled care. This review recognised that the deployment of AI and other technologies can free up more time to deliver care. Recognising the direction of travel towards digital health and social care, the review identified a need for improved digital literacy of staff and targeted support to implement digital technology. While other technologies are also discussed in the report, AI was highlighted as one of the central approaches to meeting the financial challenges of delivering high-quality care. 13
The most recent reform of adult social care, as reflected in the White Paper People at the Heart of Care, also pledged at least £150 million of additional funding to drive the adoption of technology and achieve digitisation to support independent living and improve the quality of care. 14 The paper also explores how technology fits into a broader 10-year vision for adult social care. It sets out commitments to accelerate the adoption of technology, to ensure that individuals can adapt their homes and access practical tools and technology to live independently and well in their homes, and to ensure that staff working in social care have the confidence to use technology to support care needs and free up time. 14 The paper also highlights the potential for technology to be fully utilised to ‘enable proactive and preventative care and to support people’s independence’, including by using technology to identify risks, prevent incidents and ensure quick responses to care needs. 14 To achieve this, the White Paper identifies a need for practical guidance and training on technology and digital skills among the social care workforce, additional investment and funding models, and data infrastructure to support technology in care. 14
Although COVID-19 has caused additional pressures for the health and social care system in England, it has also accentuated awareness of the potential of technology to deliver or facilitate the delivery of care and support. 8,15 Since the pandemic, more people who draw on care and support have needed to receive care remotely, and more technology has been implemented across health and social care to allow care to continue as best it can, for both COVID and non-COVID patients.
AI-based technology for social care
One type of technology that may be particularly relevant for adult social care is sensors with AI capabilities. These can be thought of as a type of telecare, which includes technology and devices used to monitor people who draw on care and support, generate and analyse data about them, and connect them to or provide them with health and care services. 8
There are often said to be three generations of telecare, which increase in complexity. In the first, a button is pressed by the user during an emergency, and an alert is sent to a nominated individual via a telephone connection. Second-generation technologies do not rely on intervention by the user (i.e. they do not require the user to press a button); instead, an automatic alert is sent when the sensors detect a potential breach to health and safety (e.g. a fall). The third generation uses more innovative systems that focus on functionality, with the provision of remote support and lifestyle monitoring to pre-empt, detect and reduce problems relating to activities of daily living. This is the type of technology we focus on in this evaluation, and we refer to it as ‘AI-based technology for social care’ or ‘new and emerging technology’. The aim of these technologies is to improve quality of life for people who draw on care and support and their carers, rather than exclusively providing reassurance to the person or their carer or ensuring that help is available after a fall, as was often the case with previous generations of technology. 16–18 They aim to maintain or increase independence by generating alerts when problems occur and by spotting emerging issues early to prevent problems or crises before they arise. 19–21
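The increasing autonomy across the three generations can be sketched in code. The following Python sketch is purely illustrative: the function names, sensor fields and thresholds are invented for this example and do not come from the report or any real telecare product. Generation 1 waits for a button press, generation 2 reacts automatically to a detected event, and generation 3 compares today's activity against the person's own routine.

```python
# Illustrative sketch (hypothetical names and thresholds throughout).

def first_generation_alert(button_pressed: bool) -> bool:
    """Gen 1: an alert is raised only when the user presses a pendant button."""
    return button_pressed

def second_generation_alert(sensor_reading: dict) -> bool:
    """Gen 2: sensors raise an alert automatically, e.g. when a fall is detected,
    without requiring any action from the user."""
    return sensor_reading.get("fall_detected", False)

def third_generation_flags(daily_activity: list, today: int,
                           tolerance: float = 0.5) -> bool:
    """Gen 3: lifestyle monitoring flags a deviation from the person's own
    routine (here, daily activity falling below half their historical average),
    so emerging problems can be pre-empted rather than merely responded to."""
    baseline = sum(daily_activity) / len(daily_activity)
    return today < tolerance * baseline
```

The key difference the chapter describes is visible in the signatures: only the third-generation function needs a history of the individual's behaviour, because it monitors lifestyle patterns rather than single events.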
Other mechanisms by which AI-based technology for social care is intended to improve health and well-being can include better personalisation of care, greater control and choice for people who draw on care and support (whether at home or in care homes), and a reduction of costs for health and social care services. 22,23 There are also more novel applications of new and emerging technology in social care, such as interactive robots placed within people’s homes,24 although these are less well developed and less widely used in the UK.
While these technologies are not new in social care, technological advances are leading to the development of more sophisticated AI devices that use machine learning to identify patterns in people’s daily activity, recorded by multiple sensors around a person’s home. 3 Machine learning (or computational learning) is defined as ‘the set of techniques and tools that allow computers to “think” by creating mathematical algorithms [read as a set of instructions] based on accumulated data.’3, p. 6 The machine learning aspect of AI can allow such technologies to utilise data collected through sensors for offline learning with the aim of improving care.
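As a hypothetical illustration of the kind of offline learning described above (the data, function names and threshold below are invented for the example, not drawn from any deployed system), a simple model might ‘learn’ a person's typical nightly sensor-event count from accumulated data and then flag nights that deviate sharply, the sort of pattern change that could prompt an early check-in:

```python
# Hypothetical sketch: learn a baseline from accumulated sensor data, then
# flag sharp deviations. Not based on any specific commercial system.
from statistics import mean, stdev

def learn_baseline(history: list) -> tuple:
    """'Fit' a minimal model offline: the mean and spread of past counts."""
    return mean(history), stdev(history)

def is_anomalous(tonight: int, baseline: tuple, z_threshold: float = 2.0) -> bool:
    """Flag the night if it lies more than z_threshold standard deviations
    from the learned baseline - the trigger for an early-intervention alert."""
    mu, sigma = baseline
    if sigma == 0:
        return tonight != mu
    return abs(tonight - mu) / sigma > z_threshold

history = [2, 3, 2, 2, 3, 2, 3, 2]   # nightly sensor-event counts (illustrative)
baseline = learn_baseline(history)
print(is_anomalous(7, baseline))      # a sharp rise in events -> True
```

Real systems use far richer models over many sensors, but the division of labour is the same: patterns are learned offline from accumulated data, and live readings are then compared against the person's own learned routine.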
Evidence around technology in social care
While it cannot replace personal care, technology has the potential to help people with care and support needs to live independent and active lives, and to assist those providing support. However, in order for technology to be successfully embedded into care, it needs to be ‘based on a persuasive evidence base; useful to those who are caring; and – first and foremost – help achieve the outcomes users desire’. 25, pg. 1
Research supporting the effectiveness of these types of technology in the context of social care is, however, mixed. A 2008 randomised controlled trial by the then Department of Health [the Whole Systems Demonstrator (WSD) programme] aimed to explore the cost-effectiveness of technologies for health and social care. 8,20,26–29 The WSD evaluation did not look specifically at AI, which was less well developed at that time, yet findings from this study provide key learning relevant to other types of technology. The aim of the WSD evaluation was to provide local health and social care commissioners with knowledge on which to base investment decisions, and to provide suppliers of the technology with an understanding of potential business models. Results were mixed, and neither telehealth nor telecare was found to be cost-effective compared with usual care. 8,30 The trial identified barriers to the use and acceptance of telecare among people drawing on care and support, including concerns around a lack of technical competence to operate equipment, misunderstandings around what skills are needed to use technology, and fears associated with ageing and self-reliance with respect to using telehealth and telecare resources. 31 Despite early indications from the Department of Health that telehealth and telecare had resulted in reductions in emergency and elective admissions, bed-days, costs and mortality rates,32 it has been argued that the findings of the study were misinterpreted under political pressure and that the actual findings were less positive. 33
There are some indications that home sensors with AI technology may be effective in some contexts. A recent randomised controlled trial of an AI- and machine learning-based technology using home sensors for people with dementia (the Technology Integrated Health System study) found that the technology led to mental health and well-being benefits for people who draw on care and support, and that it prompted early intervention to avoid hospital admissions. Furthermore, the technology was well accepted by people drawing on care and support and carers. 34,35 However, some people raised concerns about the reliability and user-friendliness of the technology, highlighting internet connectivity issues and the need for more passive forms of data collection that do not require input from the person with dementia, who may become annoyed by having to use devices to collect data. 34 It is worth noting that this evaluation focused upon health rather than social care outcomes, specifically: urinary tract infection, agitation, irritation and aggression. 34,35
Despite these relatively positive results, other studies more focused on social care outcomes found that similar technology was not effective. A health technology assessment (the ATTILA randomised controlled trial) of another assistive technology for people with dementia who continue to live independently at home found that the technology did not allow people to live in the community for longer, nor did it decrease carer burden, depression or anxiety relative to a basic care programme. 36
What influences the success of technology in health and care?
The wider literature around the adoption and implementation of technology in health and social care points to the importance of considering wider contextual factors and how these influence the success of technology. For instance, Greenhalgh et al. (2017) argued that many previous studies have tended to focus on the short-term adoption of simple innovations by individual adopters and on individual barriers and facilitators, failing to theorise the adoption of new technology. 37 In contrast, they propose a new framework for theorising and evaluating the adoption/non-adoption, abandonment and challenges to the scale-up, spread and sustainability of health and social care technologies: the NASSS (adoption/non-adoption, abandonment, scale-up, spread, sustainability) framework. This framework includes elements relating to: the condition or illness of the person/population; the technology itself; the value proposition in terms of what the technology will accomplish; adopters of the technology (e.g. patients, staff and carers); organisational considerations; elements of the wider system (e.g. policy, regulation); and how the technology is embedded and how the system around it is adapted over time (see Box 3). 37 This framework (informed by theory and evidence) describes the barriers to successful uptake of assistive technology innovations, and provides a guide to the type of issues that should be considered by commissioners and providers when deciding which technologies to adopt, as well as by researchers interested in evaluating the implementation of assistive technologies in health and social care. 37 Chapter 10 of this report draws on this framework (alongside insights from the ‘rational model’ of policy implementation) to draw out key learning from our findings.
The framework proposed by Greenhalgh et al. (2017) incorporates the following influences:
- Condition (the nature of the condition or illness of users, comorbidities, sociocultural influences)
- Technology (material features; type of data generated; knowledge needed for use; technology supply model)
- Value proposition (supply-side value to the developer; demand-side value to the patient)
- Adopters (for staff: role, identity; for users: simple vs. complex input; for carers: availability, nature of input)
- Organisation (capacity to innovate, e.g. leadership; readiness for technology/change; nature of the adoption/funding decision; extent of change needed to routines; work needed to implement change)
- Wider system (political/policy, regulatory/legal, professional, sociocultural)
- Embedding and adaptation over time (scope for adaptation over time; organisational resilience).
Greenhalgh et al. recommend that evaluators of assistive technologies for health and social care consider each of these groups of influences on the success of the implementation effort.
Reproduced with permission from Greenhalgh T. 37 This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: https://creativecommons.org/licenses/by/4.0/. The text above includes minor additions and formatting changes to the original text.
The wider literature also points to some well-known problems associated with the adoption of new technologies. For example, there is often a mismatch between the value that developers of technology see in creating a business case and sales model for their products (the supply side), as opposed to the value that providers and decision-makers within the health and social care system (the demand side) expect from the technology in terms of benefits to people who draw on care, staff and the health and social care system. 38,39 Training needs have been highlighted as an issue in the increasing presence of AI-based technology in social care settings. For example, there is a need for training for staff in how to interpret the data collected by AI technologies. 3 There are also key considerations such as internet connectivity, particularly in certain parts of the UK, along with ethical questions and data privacy and security concerns, which need to be taken into account when commissioning technology for adult social care. 40
Lastly, the implementation process matters greatly in terms of how successful a digital technology may be in health and social care. Findings from the NHS England Test Bed programme suggest a number of key lessons in supporting implementation, adoption and spread of technologies in health and social care,41 as described in Box 4.
- Dedicate sufficient time and resource to engage with end-users.
- Co-design or co-production with end-users is an essential tool when implementing technology.
- Identify the need and its wider impact on the system, not a need for a technology.
- Explore the motivators and barriers that might influence user uptake of an innovation.
- Ignore information governance requirements at your peril.
- Don’t be afraid to tailor the innovation along the journey.
- Ensure adequate training is built in for services using the technology.
- Embedding the innovation is only half the journey; ongoing data collection and analysis is key.
- Ensure there is sufficient resource, capacity and project management support to facilitate roll-out.
- Recognise that variation across local areas exists and adapt the implementation accordingly.
Similar factors were identified in a report looking specifically at assistive technology in adult social care. This pointed to the importance of engaging people who draw on care and support and using language they can understand around assistive technology, engaging with staff to encourage culture shifts and greater acceptance of technology, ensuring that technology meets identified needs when commissioning it for social care, having clear protocols around the use of data, and ensuring that digital infrastructure will support technology. 42
Gaps in the evidence
It has been recognised that technology is not being used to its full potential in social care, with a range of stakeholders (from national voluntary organisations such as Carers UK to the government’s Industrial Strategy) identifying the need to increase investment in this area. 23,43 There are substantial gaps in understanding the use of emerging technologies to deliver public services, including for assistive technologies in health and social care. 3,19,20,44–46 Specifically, there is a lack of evidence on the expected or achieved impact of these technologies on people drawing on care and support and carers, hindering the wider use of technology in social care. 20 This lack of evidence has been acknowledged by commissioners of services, who have also expressed concern about this in terms of how to implement technology alongside existing systems, the potential impact on future service use, and cost-effectiveness. 20 In addition, although there is a wide range of devices and systems commercially available to support people with care and support needs, there is very little validated information to help select the most suitable technologies. 19
Research and evaluation of AI-based technologies for social care in the literature are underdeveloped and limited by methodological issues, and a call has been made for more robust research in this area. 3,20 The evidence base is characterised by gaps in knowledge across several themes, including the attitudes and perspectives of different stakeholders (people who draw on care and support, carers and care staff) and the impact on the broader workforce and services. Little is known about how the care workforce may be affected by the adoption of AI-based technology, or what is required from them to facilitate acceptance by people drawing on care and support and carers. 3 A lack of involvement of the social care workforce in the design and development of AI for social care has also been acknowledged as a barrier to its implementation. 3 The value of exploring the experience of people who draw on care and support and carers using qualitative methods has been acknowledged in the literature, for instance via input into the design and implementation of assistive technology. 47 However, research and evaluation specifically on the experiences and attitudes of people drawing on care and support and of carers towards AI-based technology is sparse,3 and there is a lack of evidence around the perspectives of technology providers and innovators and those commissioning care. Lastly, although there is a clear interaction and overlap between health and social care,28 most literature and empirical research on assistive technology focuses on health outcomes.
As a result, discourse on the promise of technological innovation (e.g. from policy-makers and technology companies) is contradicted by evidence showing a poor track record for assistive technologies: they are not currently widely used in social care, as they often fail to be scaled up at a local level or spread widely, and are not sustained in the long term at a system or organisational level. 37 This dissonance has been identified in the literature around remote care technologies for social care in the UK, pointing to the need to fill gaps in the evidence before promoting technology as a ‘silver bullet’ for existing challenges in social care. 48
Context for study
Interest from policy-makers, commissioners and care providers in developing and using technology for social care makes it a current priority for evaluation. 11,22,49 In July–November 2019, an NIHR-funded national prioritisation exercise was held, during which organisations and individuals with knowledge of adult social care and social work identified promising innovations. 50 As part of this, a prioritisation workshop using James Lind Alliance principles was held with 23 members, including people who draw on care and support, carers, care staff, researchers, commissioners and policy-makers. 51 This prioritisation exercise resulted in a shortlist of the top innovations which would benefit from a rapid evaluation, several of which related to new and emerging technology.
This study builds on the prioritisation exercise by evaluating how one example of new and emerging digital technology (a system of sensors with AI which we have anonymised using the name ‘IndependencePlus’) was implemented in case study sites across England. In particular, we focus on the decision-making process for and implementation of this technology, including consideration of the outcomes that local authorities/care providers are trying to achieve by adopting new and emerging technology, and the way in which this is experienced by people who draw on care and support, carers and care staff.
As a result of this scoping work, the resulting study seeks to answer the following core research questions:
-
RQ1. How do commissioners and providers decide to adopt new and emerging technology for adult social care? (Decision-making)
-
RQ2. When stakeholders (local authorities and care providers, staff, people who draw on care and support and carers) start to explore the potential of new and emerging technology, what do they hope it will achieve? (Expectations)
-
RQ3. What is the process for implementing technological innovation? (Implementation)
-
RQ4. How is new and emerging technology for adult social care experienced by people who draw on care and support, carers and care staff? (Early experiences)
-
RQ5. What are the broader barriers to and facilitators of the implementation of new and emerging technology in addressing adult social care challenges? (Barriers and facilitators)
-
RQ6. How has the COVID-19 pandemic influenced responses to the questions above? (Impact of COVID-19)
-
RQ7. How can the process of implementing new technology be improved? (Making improvements)
IndependencePlus
Below (see Box 5), we provide a brief overview of the features of the IndependencePlus technology piloted at the case study sites involved in this study. While our study is not an evaluation of this specific technology, the overview provides context that will be useful in understanding this report and the study findings.
-
Sensors: Sensors placed in key locations in individual homes collect information about an individual’s daily activities, without video or audio recording. Information that is collected includes: opening and closing of doors; getting out of bed; opening the refrigerator; using the kettle; and flushing the toilet.
-
AI capabilities: Once data have been collected on an individual for a sufficient period of time, a baseline can be established, allowing automated processing to flag whether a measure has increased or decreased relative to that baseline, signalling potential improvement or decline in the individual drawing on care and support. Establishing a baseline requires continuous data collection over a period of time, which may be disrupted if sensors become disconnected or run out of battery.
-
Individuals living alone: Since the technology cannot distinguish between individuals within the living space, it is best used for individuals living alone, rather than in group living situations.
-
Connectivity and Wi-Fi: Individuals using IndependencePlus must have Wi-Fi connection for their data to be collected, and sensors must be plugged in or have a charged battery in order to operate.
-
Data dashboard: Data from sensors for each individual are displayed on a data dashboard. In case study sites, only social care workers had access to this data dashboard, although it could theoretically be made available to individuals drawing on care and support or their carers. Social care staff then needed to interpret the data, and understand what increases or decreases in different parameters meant for the individual person.
Part way through the local pilots, IndependencePlus expanded the data it collected to include a range of health data (see Chapter 7 for further discussion of this development).
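The baseline-and-deviation approach described in the feature list above is not documented in detail by the provider, and IndependencePlus's actual algorithms are proprietary. As a purely illustrative sketch of how such logic might work in principle — where the function name, baseline window and threshold are entirely our own assumptions — the idea can be expressed as follows:

```python
# Illustrative sketch only: the real IndependencePlus algorithms are proprietary
# and are not described in this report. The window length (28 days) and the
# deviation threshold (2 standard deviations) are hypothetical assumptions.
from statistics import mean, stdev

def flag_deviation(daily_counts, baseline_days=28, threshold=2.0):
    """Compare the most recent day's activity count (e.g. kettle uses or
    door openings) against a baseline built from the preceding days.

    Returns 'increase', 'decrease', 'stable', or None if the baseline
    cannot yet be established (mirroring the need for continuous data
    collection noted above)."""
    if len(daily_counts) <= baseline_days:
        return None  # not enough continuous data to establish a baseline
    baseline = daily_counts[-(baseline_days + 1):-1]  # the preceding window
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        sigma = 1e-9  # avoid division by zero for a perfectly flat baseline
    z = (daily_counts[-1] - mu) / sigma  # standardised deviation of latest day
    if z > threshold:
        return 'increase'
    if z < -threshold:
        return 'decrease'
    return 'stable'

# Example: 28 stable days of roughly 6 kettle uses, then a day with almost none
history = [6, 5, 7, 6, 6, 5, 7] * 4 + [1]
print(flag_deviation(history))  # prints: decrease
```

A real system would need to handle gaps from disconnected or uncharged sensors (per the connectivity point above), seasonal and weekly patterns, and the interpretation step that, in the pilots, fell to social care staff reading the dashboard.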
Chapter 2 Methodology
Box 6 provides a summary of Chapter 2 of this report.
Summary of key points
-
This study was conducted across two stages. Stage 1 took the form of scoping work to better understand new and emerging digital technologies for social care and the challenges and lessons learnt from previous research and evaluation efforts. Stage 2 involved evaluation of the implementation of new and emerging digital technology, using the example of IndependencePlus, through qualitative data collection.
-
The study has generated a number of lessons about the key things to consider when implementing new and emerging technology in social care.
-
However, significant recruitment challenges were faced during stage 2 of the research, particularly related to the pressures of COVID-19 on social care, as well as a range of other factors. This had significant implications for our research, and is discussed in detail at the end of this chapter.
To address the research questions outlined in Chapters 1 and 3, we undertook two stages of data collection, each of which is outlined in more detail below:
-
Stage 1: Scoping work to better understand new and emerging digital technologies for social care, with a focus on home sensors with AI technology, and the challenges and lessons learnt from previous research and evaluation efforts. This consisted of a rapid review of the literature, nine key informant interviews, three online project design groups and selection of potential case study sites.
-
Stage 2: An evaluation of new and emerging digital technology, using the example of IndependencePlus, through qualitative data collection and analysis from three case study sites (20 interviews), supplemented by three interviews and focus groups with care technology providers/innovators and regulatory bodies. The work in case study sites consisted of selecting and recruiting local authorities and care providers who had been exploring the potential of new and emerging digital technology, interviewing key stakeholders (seeking to include decision-makers and operational leads, care staff, carers and people who draw on care and support), reviewing documents from case study sites and, finally, analysis and synthesis. This was supplemented by interviews with technology providers/developers, and interviews with national organisations involved in developing policy for and regulating AI in social care. We had originally intended to hold local workshops with case study sites to sense-check emerging findings, but due to recruitment and engagement challenges (described later), this was not possible during the initial life of the project. However, we will offer this to our sites later in 2022 in case circumstances have changed sufficiently for this to be a helpful contribution.
Each of these stages will be outlined in further detail below. However, we encountered significant challenges in recruiting case study sites and participants to interview (partly due to the impact of COVID, but also due to the nature of the pilots themselves, which were all deemed not to have worked and were subsequently abandoned). We reflect on this at the end of the chapter.
Although we were interested in new and emerging digital technology generally, we collected data from sites that had chosen to use a specific type of technology – home sensors with AI technology – provided by a single technology company. This form of AI technology had been identified via the NIHR-funded prioritisation exercise as worthy of rapid evaluation (see below for further discussion), with various examples of new and emerging technology cited. In this report, we have anonymised the technology provider and sites in order to maximise learning about the decision-making and implementation process more generally, rather than to explore the effectiveness of a single technology or product. We therefore refer to this technology and the company providing it using the pseudonym ‘IndependencePlus’. All case study sites were also anonymised to guard against reputational risk in situations where local authorities or care providers have experimented with new ways of working and not been able to deliver desired outcomes – but still generated important learning that could benefit others. During our scoping work, a range of stakeholders felt that this was a helpful safeguard which would maximise their ability to share genuine learning, whether outcomes were perceived to be ‘positive’ or ‘negative’.
Stage 1: Scoping work
To inform our study design, the research team undertook scoping work to better understand new and emerging digital technologies for social care, with a focus on home sensors with AI technology. We conducted a rapid scan of the literature, nine key informant interviews and three project design groups with stakeholders, and selected our potential case study sites. Scoping work included an exploration of which questions we should ask stakeholders, which themes to consider, and how best to collect the data from care providers and commissioners, people who draw on care and support, carers and care staff. It also enabled us to explore the challenges and lessons learnt from previous research and evaluation efforts and develop our research questions. This work guided our approach to stage 2 of the work. The approach to the scoping work is outlined below, and the results are described in the following chapter.
Rapid scan of relevant evidence
As part of our scoping work, we conducted a rapid scan of international literature relating to the implementation of new and emerging digital technology within social care, as well as evidence from research and evaluations of home sensors with AI technology specifically. Unlike systematic reviews, rapid evidence scans do not involve systematic methods of searching or selecting articles and do not aim to produce an exhaustive or reproducible summary of a topic. Instead, this was a pragmatic scan of articles for the purpose of informing possible research questions and identifying tools and analytical frameworks for the study (to be developed and tested with our key informants and project design groups). It was carried out through Google searches (including Google Scholar), recommendations of articles from experts, and ‘snowballing’, where sources emerged through looking at references and citations of relevant sources. 52
Although the search was not intended to be comprehensive, we used search terms encompassing the following areas:
-
‘social care’
-
‘AI’ OR ‘artificial intelligence’ OR ‘preventative sensor technology’ OR ‘machine learning’ OR ‘sensors’ OR ‘assistive systems’ OR ‘telecare’.
Due to the informal nature of this rapid scan of the evidence, we are unable to provide exact search strings, the number of articles reviewed or reflections on the quality of the evidence, as would be required in a more formal or systematic review.
Key informant interviews
In total, nine interviews were conducted over several months in 2020 on the phone or online with research experts in the field and with decision-makers and operational leads (i.e. staff involved in the implementation of AI sensor technologies) from local authorities and care providers. Interviewees from local authority commissioners and care providers were selected on the basis that they had used IndependencePlus. Informants with research expertise in this area were also interviewed and were identified from relevant literature and through networks established within the research team. Interviews were semistructured, allowing a flexible approach in the topics covered. The topic guides can be found in the project documentation.
Online project design groups
The research team held three online project design groups with stakeholders in October 2020. Design group 1 involved four local authority commissioners and care providers who were decision-makers and operational leads with experience of using IndependencePlus. Design group 2 involved four people who draw on care and support, and carers. Design group 3 involved three people who draw on care and support, and carers. Attendees of the second and third groups were from the BRACE Health and Care Panel and the University of Birmingham Social Work Service User and Carer Contributor group. To avoid exclusion on the basis of an individual’s access to and proficiency or confidence with digital technology, individuals were asked if they would prefer to speak over the phone with a member of the research team, rather than attend the design meeting. Furthermore, one carer who was interested in contributing but was unable to attend the dates of the online project design groups was offered an individual interview, which took place over the telephone in October 2020. In these online project design groups, we asked both groups of stakeholders which questions and themes they thought the evaluation should cover, and enquired about the practicalities of collecting data (particularly given the COVID-19 pandemic). This helped us develop the topic guides for data collection and select appropriate and feasible methods for our evaluation.
Identification of case study sites
As part of the scoping work for this study, the research team created a list of potential case study sites: all of the local authorities and care providers in the UK known to be using IndependencePlus. We developed this list by e-mailing and then speaking on the phone or online with the organisations listed as clients on the IndependencePlus website. We were aware of eight sites using IndependencePlus and had communication with five of these (the other three did not respond), all of which provisionally stated an interest and willingness to participate in this study. These were three local authorities (i.e. councils with responsibility for adult social services, which assess people’s needs/eligibility and commission care and support services) and two care providers (public, private or voluntary organisations commissioned to provide care). Our process of obtaining case study sites was therefore pragmatic, with no selection based on any characteristics other than their use of IndependencePlus and interest in participating. These case study sites nonetheless cover both local authorities and care providers, a mix of urban and rural areas, and different groups of people who draw on care and support (e.g. older and younger ages, and a range of mental and physical health conditions). While we did not assess how similar or different these providers or councils are from other organisations providing social care, there is no reason to believe they are atypical in any significant way.
Stage 2: Evaluation of new and emerging digital technology
After conducting the stage 1 scoping work, stage 2 of the study focused on evaluating the implementation of new and emerging digital technology, using the example of IndependencePlus, through qualitative data collection and analysis from three case study sites, supplemented by interviews and focus groups with care technology providers/innovators. This stage of work consisted of recruiting case study sites, interviews with key stakeholders, and review of documents, each of which will be discussed here.
Recruiting case study sites
We formally invited each of the five sites identified from the stage 1 scoping work (three local authorities and two service providers) to take part in the evaluation. However, two sites were subsequently unable to take part, for a combination of reasons described later in the limitations section of this chapter. Therefore, we worked with three case study sites (one care provider and two local authorities). Table 1 provides an overview of each of the three case study sites.
| | Case study site 1 | Case study site 2 | Case study site 3 |
|---|---|---|---|
| Organisation type | Social care provider (charity; lots of services across the country) | Local authority (rural) | Local authority (urban with some rural areas) |
| Care setting | Care home with nursing services | Care in the communityb | |
| Number of service users who used IndependencePlus | 23 | 9 | 20–30 |
| Demographics of people using IndependencePlus | People with complex physical and sensory impairments | People with physical impairments, people with learning disabilities, people with mental health problems, older people, people with dementia | People with learning disabilities, people with physical impairments, people with dementia |
| Length of time of the IndependencePlus pilot | <12 months | 12 months | 12–18 months |
| Stated reasons for ending the pilot | Issues with technology making it difficult to use; change within IndependencePlus towards focusing on health data in response to COVID-19 | | |
Coordination and communication with case study sites
The team had already begun building relationships with the five potential case study sites that had expressed an interest in participating during the scoping work, through e-mail and telephone contact with decision-makers and operational leads, and through a scoping workshop. For the three case study sites that agreed to participate, these relationships were strengthened by the allocation of a member of the research team to each case study site and the identification of a lead contact at each case study site. We had regular and clear communication with case study leads and other local contacts to ensure that the relationship remained strong throughout the study – especially in such a difficult external policy and practice context.
Interviews with key stakeholders
Recruitment
Decision-makers and operational leads were recruited through the links made with case study sites during the scoping work. Interviews were undertaken between April and December 2021 (a longer period than initially intended, due to the impact of COVID). Following each interview with decision-makers and operational leads, the local leads were asked to identify front-line staff, people who draw on care and support and carers who had experience of using the sensor technology (even if very limited) and who might be interested in taking part in a subsequent interview. This was not intended to enable local senior stakeholders to pick and choose whom we approached; rather, it was designed to ensure that we did not invite someone to take part in inappropriate circumstances (e.g. if an older person had recently died). Inclusion criteria for people who draw on care and support, carers and care staff were:
-
having had the sensor technology set up in their home and having attempted to use it (for people who draw on care and support or their carers)
-
having had experience of implementation and delivering care through the use of sensor technology (for care staff)
-
having had experience of using the sensor technology for a range of social care needs (i.e. no particular user groups were excluded).
There were no limitations as to the perceived level of ‘success’ of the sensor technology used. We were interested even if the technology had been abandoned, as we were keen to learn from a wide range of experiences of the use of technology (and suspected that there may be even greater learning for others in situations where the technology was not taken forward than in pilots that were felt to have had positive outcomes – although it can be difficult to get access to sites that perceive their innovation has not worked as they intended; see below for further discussion). In addition, there were no exclusion criteria related to the length of time for which the technology had been used, so that we could obtain varied experiences. The criteria for sampling people who draw on care and support were deliberately broad, including anyone needing support with their daily living: older people, people with dementia or mental health problems, people with physical impairments or learning disabilities, and family carers.
The local lead(s) passed information about the study to people who draw on care and support, carers and care staff, seeking permission for the research team to contact people interested in taking part or finding out more. Where permission was granted, a member of the team e-mailed or rang potential participants to describe the study and its aims in detail. Each participant returned a consent form prior to commencing the interview and was given 1–2 weeks to decide whether they would like to participate. Prior to commencing the interview, participants also had the opportunity to ask questions about the study and/or wider Birmingham, RAND and Cambridge Evaluation Centre (BRACE)-related work. It was made clear that participants could withdraw from the study at any time, without having to give a reason, and they were also given information about how to find out more about the study, or to raise concerns about its conduct.
At each of the case study sites, we aimed to conduct qualitative semistructured interviews with 2–3 decision-makers and operational leads, five staff with experience of working with IndependencePlus, and five people who draw on care and support or carers. However, we faced significant challenges in recruiting individuals to interview (for reasons described later in the limitations and caveats section). Partly due to this, the stakeholder groups were widened to include interviews with a series of technology providers/innovators supporting the development of care-related digital technology (this included an interview with IndependencePlus itself) as well as with respondents from national bodies which regulate care. The total number of interviews was 23 (with 24 individuals) and we were unable to secure any interviews with people who draw on care and support (see below for further discussion). One carer provided feedback in written form, which is included in the count of 23 interviews below. The breakdown of the interviews conducted is provided in Table 2.
| | Case study 1 | Case study 2 | Case study 3 | Technology providers/innovators | Regulatory organisations | Total |
|---|---|---|---|---|---|---|
| Decision-makers and operational leads | 3 | 9 | 3 | - | - | 15 |
| Care staff, care providers and unpaid carers | 0 | 2 | 3a | - | - | 5 |
| Technology provider | - | - | - | 2 (1 = IndependencePlus) | - | 2 |
| Regulatory organisations | - | - | - | - | 1b | 1 |
| Total | 3 | 11 | 6 | 2 | 1 | 23 |
Within this report, interviewees are referred to using a code encompassing their case study site or role (e.g. technology provider) and a unique identifier. For example, the first interviewee from case study site 1 is referred to using the code ‘CS1 P01’.
Interview conduct
Topic guides were developed for the interviews: one for decision-makers and operational leads, another for staff, and a third for people who draw on care and support and carers (see the project documentation for topic guides). The themes and questions covered in the topic guides were informed by the stage 1 scoping work.
The interviews invited decision-makers and operational leads to reflect on how decisions were made about the adoption of IndependencePlus; the process of implementation; early experiences of its use (including potential costs and savings); and key learning points that could help others considering using these types of digital technologies.
Care staff and carers were asked about their practice/lived experiences of the implementation process; practical realities of the technology use; the extent to which the technology had enabled, improved or hindered daily activities; the extent to which the technology had increased or compromised feelings of choice and control; and how this type of technology (or the implementation of it) may be improved.
Interviews with stakeholders lasted 30–60 minutes and were conducted online using Microsoft Teams® or Zoom, with the option of telephone interviews also offered. Interviews were audio-recorded (subject to consent being given) and detailed notes were taken by the interviewer.
Review of case study and technology provider documents
Once an interview with a decision-maker or operational lead was completed, we asked whether they could identify any documents (not containing sensitive information) that could be shared with the research team. This was to help build a picture of the decision-making process, the implementation process and any challenges/successes. The documents shared by case study sites are outlined in Table 3. We reviewed these as background and contextual information.
| Case study 1 | Case study 2 | Case study 3 | Technology provider |
|---|---|---|---|
| Feedback notes | IndependencePlus issues and solutions workshop notes | IndependencePlus COVID package | User survey |
| Staff survey | IndependencePlus risk register | Dashboard ideas | Matrix of different levels of intervention in different types of site |
| | Technology strategy | Info sheet | |
| | IndependencePlus alerts | | |
Data analysis
Once we had completed the interviews, the team held a data workshop to discuss our provisional findings and identify themes.
Given the limited number of interviews per case study, and the similarity of themes arising from the interviews, the data from each case study were analysed together. Our analysis explored the desired outcomes of new and emerging technologies for social care; how these outcomes are expected to be achieved; and what resources, approaches and activities are supporting the implementation in practice. Thematic analysis of interview transcripts and documents provided by interviewees was conducted using guidance from Braun and Clarke. 53 We iteratively developed a coding frame (see the project documentation) using early interviews and discussion among the study team. 54 NVivo 12 qualitative analysis software was used for interview coding. Pilot coding of a small number of interviews was conducted independently by two researchers (SP and IL) and the team then met to agree on refinements to the coding framework and to add any additional emerging codes. The remaining interviews were then coded independently by four members of the research team (SP, IL, LH and DT).
In seeking to place our subsequent findings in a broader context, we drew on the NASSS (nonadoption, abandonment, scale-up, spread and sustainability) socio-technical framework developed by Greenhalgh et al. (2017). This framework describes considerations such as the service’s capacity for innovation, the expected input/adaptation from care staff and the context for widespread use of the technology,37 and proved a helpful mechanism for making sense of our key findings. We also compared insights from our study with those of the ‘rational model’ of implementation set out in much of the literature (see Chapter 10). By considering both frameworks, we hoped to acknowledge the reality of complexity and uncertainty when implementing new and emerging technology, while also exploring some of the key headings or ‘stages’ that others might consider when planning future implementation.
Ethical approval and consent
The University of Birmingham research governance team identified this study as a form of service evaluation which did not require approval by the Health Research Authority (HRA) or an NHS Research Ethics Committee. Ethical approval was obtained from the University of Birmingham Humanities and Social Sciences Ethical Review Committee to conduct the stakeholder interviews (ERN_13-1085AP41, ERN_21-0541 and ERN_21-0541A). We sought and received an amendment to our initial ethical approval when later including interviews/focus groups with IndependencePlus and national technology providers/innovators.
Information sheets and consent forms were shared with potential participants via e-mail, setting out the study aims, design, risks and benefits, and who to contact with further questions. The information sheet also made clear that participants had the right to withdraw from the study at any point, without needing to give a reason. Of the 24 interviewees, 23 completed the consent form electronically and e-mailed it to the research team; the other interviewee preferred to provide verbal consent rather than complete the form. At the start of each interview, the researcher confirmed that the interviewee was happy to proceed and to be audio-recorded.
Strengths, limitations and caveats
There are a number of strengths, limitations and caveats to note for this study, which are described here. Most notable are the challenges of engaging and recruiting case study sites and interviewees, which are outlined in a later subsection.
The research team considered whether to include multiple types of digital technology for social care, provided by a number of companies. We decided to have just one ‘exemplar’ digital technology used consistently across our case studies because, through our scoping work, it was evident that by keeping this constant, we could better establish what factors influence local decision-making and early experiences of implementation of new and emerging digital technologies. This decision was taken in conjunction with potential case study sites on the basis that different companies are constantly developing new technology, and that a helpful focus for this study would be on the underlying decision-making and implementation process rather than a direct evaluation of a specific digital technology. The use of IndependencePlus was therefore our sampling frame, making sure that the type of new and emerging technology being implemented was consistent across all sites. Had we included different technologies in different case study sites, there would have been too many variables to draw broader conclusions.
Interviews were conducted online to reduce the spread of COVID-19 and in line with government public health guidance. Benefits of online or video data collection have been acknowledged, including reduced travel time for researchers and the ease of scheduling a time that is suitable for interviewees. 55 There are, however, some drawbacks to conducting interviews online. For instance, it can be more difficult to build rapport or monitor how a participant is feeling. 56 Since the start of the COVID-19 pandemic, however, the research team have been developing their skills in online and telephone interviewing with health and social care professionals, people who draw on care and support and carers. The team have in place some techniques and solutions for ensuring interviews are conducted appropriately and that participants feel comfortable, such as sending lay information and discussion topics ahead of time, spending more time on rapport-building prior to the interview, active listening, consideration of and planning for technical difficulties, and upholding good practice and regulations for security and confidentiality.
Some stakeholders may not have had a positive experience with the technology, and their dissatisfaction or frustration may have made them less keen to participate, or they may have thought we were only interested in ‘success stories’. Therefore, we ensured that invitations to participate clearly outlined the aims and potential benefits of the study (e.g. learning from past experiences and providing local authorities and care providers with better information for future decisions – irrespective of the ‘success’ or otherwise of local pilots). We also spent significant time in our scoping phase working with key leads across our case study sites, reminding them that we were interested in lessons that would help other local authorities and care providers exploring new and emerging digital technologies in future – not just in whether or not the technology ‘worked’. We feel that this is a key feature of the current study, but we found that we needed to give ongoing reminders and provide significant reassurance to sites throughout, as they were not used to sharing learning from innovations that had been abandoned or had been less successful than hoped. We believe that there is a significant methodological and ethical issue here. The sheer effort expended in helping sites to feel comfortable sharing lessons that did not feel positive to them (for the benefit of others) was substantial and needed to be sustained throughout the life of the project – and there are important practical lessons here, too.
Challenges in recruitment of sites and interviewees
As mentioned, negotiating access and subsequent data collection with the case study sites was very challenging during the pandemic. We attempted many different ways of supporting recruitment, such as multiple e-mail reminders/calls, flexibility in (re)scheduling interviews, and offering participants the opportunity to provide written feedback via e-mail (rather than taking time out for an interview). Throughout the study, we were in almost constant touch with sites, trying to form a judgement as to whether to persevere with data collection or to pause and recommence when service pressures reduced. During our scoping work, sites were adamant that they wanted to take part – both to share learning with others and to help them reflect on their own learning. However, the conduct of the study in the midst of a pandemic was extremely arduous for the research team and for sites, and we had to keep reflecting on whether we were persevering with the study in order to enable people to share their experiences or simply ‘making a nuisance of ourselves’ when people had more important priorities (this was a constant source of discussion and reflection in our team meetings, and in interaction with local leads). The reality of conducting very topical research in adult social care during the pandemic was that many months could elapse without proposed interviews having taken place (with multiple slots scheduled, cancelled, rescheduled, paused, not taking place on the day, rebooked – and then eventually going ahead when pressures temporarily eased). We believe passionately that there is helpful learning from this study, but extracting the learning took much longer than planned, and required significant perseverance, commitment and flexibility on the part of everyone involved. We remain incredibly grateful to participants for nonetheless making the time to share key lessons with others, at possibly the most difficult time in the modern history of adult social care.
In addition to those mentioned, other significant engagement challenges arose from a combination of factors:
- The five initial sites we approached adopted the technology on a pilot basis, but experienced significant difficulties, and all stopped using it. We believe that there is important learning for others from these experiences, but it meant there was less incentive for sites to engage, given that they had often moved on in practice. Relatedly, care staff perceived several of the pilots to have failed and, in some senses, to have ‘wasted’ their time. While this is a significant finding in its own right, it also made recruiting additional participants difficult.
- On occasion, there were politically contentious or difficult topics to work through with staff (e.g. negative attitudes towards the technology provider) – which again makes the learning valuable, but also made it difficult to recruit participants.
- The timing of the prioritisation exercise and the subsequent commissioning/scoping of the project meant that these pilots were run some time ago, and things had often moved on significantly at a local level.
- The technology was designed to be unobtrusive and to exist largely in the background, so people drawing on care and support (many of whom were older people, including some with dementia) did not recall the pilot.
- It can be difficult to engage technology providers in such research, given that a number of the pilots were perceived to be unsuccessful, and given a number of issues around commercial confidentiality.
- As discussed above, the research took place when adult social care was under unprecedented strain due to the COVID-19 pandemic – with significant additional pressures falling on staff whose role is technology-related (who often had to drop all other work and focus solely on adapting to the pandemic). Participating sites have been very generous with their time and very passionate about sharing their experiences – but we spent significant time building relationships and had to change plans instantly if more important priorities suddenly emerged locally. The team gave potential participants multiple opportunities to participate, made participation as quick and unobtrusive as possible, and informally chased and prompted as much as was appropriate in the context – but any more would have placed undue pressure on sites in a very challenging context, and would not have felt ethical.
Despite this, the study has generated significant learning about the key things to consider when making decisions about and implementing new and emerging digital technology, and has produced practical guidance that organisations could find helpful in future when considering the adoption of AI-based technology. The later inclusion of the perspectives of technology providers is an additional strength, often absent from the literature (see Chapter 1). As anticipated, the study fills key gaps in the literature, provides useful learning and works well within the context of the broader NASSS framework. It remains very timely given the potential benefits of new digital technology and the rapid changes brought about by the pandemic – but progress was often painstakingly slow and hard-fought, and the study was by no means easy to conduct.
Chapter 3 Results of the scoping exercise
Box 7 provides a summary of Chapter 3 of this report. The chapter presents the results of the scoping work conducted for this study, including the findings of the rapid scan of relevant literature.
- The scoping exercise encompassed the first stage of this research and consisted of a rapid scan of relevant literature, nine key informant interviews and three online project design groups.
- Key themes were fed into the design of stage 2 of the study. Given the challenges described in Chapter 2, stage 2 might not have been feasible to complete without the insights and relationships developed during stage 1.
To inform our study design, the research team undertook scoping work to better understand new and emerging digital technologies for social care, with a focus on home sensors with AI technology. We conducted a rapid scan of relevant evidence, undertook nine key informant interviews, ran three project design groups with 11 stakeholders, and selected our case study sites. Scoping work included an exploration of which questions we should ask stakeholders, which themes to consider, and how best to collect the data from care providers and commissioners, people drawing on care and support, carers and care staff. This also enabled us to explore the challenges and lessons learnt from previous research and evaluation efforts and develop our research questions.
Rapid scan of relevant evidence
As part of our scoping work, we conducted a rapid scan of literature relating to the implementation of new and emerging digital technology, as well as evidence from research and evaluations of home sensors with AI technology specifically. We did this to clarify the research questions, develop research tools and identify appropriate analytical frameworks for the study. This was an informal process, using Google and Google Scholar searches, as well as ‘snowballing’ techniques, whereby sources emerged through looking at the references and citations of relevant sources.52 As mentioned in Chapter 2, given the rapid and pragmatic nature of this scan, we are not able to provide exact search strings, the number of articles included or a reflection on the quality of the evidence. The scan was a pragmatic step to inform the design of stage 2, and should not be understood as a systematic review of the literature.
From this informal/rapid scan we identified the studies and literature discussed in Chapter 1 of this report – many of which were categorised into three main themes (see Table 4). First, there was a widespread sense that technology in general was not being used to maximum effect in social care. In particular, there seemed to be a lack of evidence around the implications of new and emerging digital technologies – which is why this topic emerged as such a priority for rapid evaluation in the national prioritisation exercise that led to the current study. Second, there can be significant difficulties in selecting which technologies might be most appropriate within social care. There appears to be little in the way of guidance for those procuring digital technology, and a lack of awareness of the needs and opinions of the people who are ultimately expected to benefit – those who draw on care and support. These challenges around choosing the most appropriate technology are exacerbated by a rapidly changing marketplace populated by numerous smaller start-up companies providing new and sometimes optimistically described technology. The final theme relates to the management of what can sometimes seem unrealistic expectations of the technology compared with the actual impact it can have on social (as opposed to health) care. These expectations are not helped by the lack of demonstrable evidence of the efficacy of much of this new technology.
| Key themes | Subthemes |
|---|---|
| Understanding the functionality of digital technologies | Technology not being used to its full potential in social care |
| | Gaps in understanding of new and emerging technology in particular |
| Selecting appropriate digital technology | Little information to support stakeholders in selecting technology |
| | Importance of the voices of people who draw on care and support and carers |
| | Fast pace of change of the technology market and challenges this poses to the use of technology for social care |
| Managing expectations | A lack of evidence concerning the outcomes expected by key stakeholders (including commissioners and providers, people who draw on care and support and carers) |
| | Predominance of health over social care outcomes |
Key informant interviews
We spoke to strategy and technology leads, commissioners and programme managers from across our case study sites. The interviews covered three key topics. First, we discussed the key areas that might be explored in our evaluation and the nature of the questions that should be asked of senior managers and care staff. Second, we discussed potential challenges and solutions when undertaking research of this kind. Third, we explored the practicalities of trying to collect data from a diverse workforce and a range of people who draw on care and support and carers, especially in a challenging policy context. This helped to ensure that stage 2 of our study was designed in such a way as to ask the right questions, generate helpful learning and ensure our methods of recruitment and data collection were practical and appropriate.
Emerging themes are summarised in Table 5. In relation to the content of the evaluation, our stakeholders felt it was important to understand not only the potential benefits of using new and emerging digital technologies to provide predictive care, but also the circumstances in which they might be used, and to surface any ethical concerns about collecting and using individual-level data in this way. Our key informants also felt it was important to understand the implications for the workforce, and whether the technology met the needs and expectations of senior staff, as well as capturing the experiences of front-line workers who had to use it in practice. In considering the key challenges and potential solutions when conducting such research, participants felt that we should focus on a single piece of technology, rather than including multiple interacting technologies and systems across different sites. They were also keen to capture outcomes relevant to social care (rather than to health care, as in most of the literature), and felt that something like the NASSS framework (which we shared with them) would be a helpful way of making sense of what we were likely to find. Finally, in considering the practical considerations of data collection, it was felt important to capture the experiences of people who draw on care and support and their carers, to include a range of different perspectives from different stakeholders, and to consider and accommodate the impact of COVID-19.
| Theme | Subthemes |
|---|---|
| Questions to ask in relation to AI technology | Potential benefits of technology for social care in terms of planning care |
| | Ethical and political consequences of using data to make decisions |
| | Importance of living situation (e.g. whether or not a person is living alone) |
| | A need to better understand what service managers want from technology |
| | Implications for front-line staff and the ways in which they interact with technology |
| The key challenges/solutions for conducting research in this area | Limitations of having multiple technologies from different providers in a single evaluation – focus on a single piece of technology across sites |
| | The need for more research with social rather than health outcomes |
| | The appropriateness of an underpinning framework such as NASSS |
| Practical considerations of data collection | Importance of collecting data from people who draw on care and support and carers |
| | Incorporating different stakeholder views in the study design and in terms of their experience of the technology |
| | The impact of COVID-19 on recruitment and data collection, and the importance of being sensitive to the pressures sites may be facing |
Online project design groups
These online project design groups (for local authorities/care providers, and for people drawing on care and support and carers) helped us reflect on the type of questions and themes that group members thought we should explore in the evaluation and the practicalities of collecting data (particularly given the COVID-19 pandemic). This was to help us develop the topic guides for data collection and select appropriate and feasible methods in our evaluation.
A number of key themes emerged in relation to areas of importance and concern to both groups, as summarised in Tables 6 and 7. For local authorities and care providers, key themes included understanding what the technology can offer and which specific groups might benefit, and the risk of unrealistic expectations. This might be a particular issue if sites were exploring new and emerging technology with a view to generating savings, which might not materialise if the pilot was not successful. They also felt that it was important to understand the broader context in which technology was being implemented, with a risk that new ways of working might not be aligned with key pressures and drivers in the broader context. Other key themes were related to the digital literacy of the social care workforce, the best way of engaging a range of different stakeholders in implementation, and the difficulty of evaluating outcomes if what counts as success is often unclear.
| Key themes | Subthemes |
|---|---|
| Understanding the technology and the problem being addressed | Importance of a clear understanding of what technology can offer at the outset of adoption |
| | Identifying care and support needs before selecting which technology to use |
| | Unrealistic expectations may drive decision-making and negatively impact resource planning |
| | The importance of the environment/context in which the technology works |
| Digital literacy of those involved | The skills required of staff and users are a primary consideration and influence on success |
| | Challenges around understanding what new and emerging technologies can do in a fast-changing market |
| Participant engagement | Strategies for how best to approach and engage staff, people who draw on care and support and carers in the research |
| Measuring outcomes | What constitutes success and how to measure it is often unclear |
| Key themes | Subthemes |
|---|---|
| Impact on care and shared decision-making | Will people still have choice and control over their care, and can they choose whether or not to engage with new technology? |
| | Impact on social contact with care workers |
| Data protection and information governance | What are the implications for privacy? |
| | How data are gathered and used |
| | Accountability/responsibility |
| Digital literacy | Technical skill and requirements of people who draw on care and support and carers |
| | Strategies for supporting the participation of people who draw on care and support and carers |
In contrast, people who draw on care and support and their carers expressed interest in how the technology might impact on their care and the shared decision-making associated with it. This included concerns that pilots might reduce social contact with care staff, and might erode choice and control (e.g. a feeling that analysis by the technology might drive what care is provided, rather than the person being able to exercise a degree of choice and control). They also raised questions about privacy, accountability and the use of data, as well as around the technological skills of people who draw on care and support and carers (see Table 7).
Research questions
As a result of this scoping work, the study seeks to answer the following core research questions:
- RQ1. How do commissioners and providers decide to adopt new and emerging technology for adult social care? (Decision-making)
- RQ2. When stakeholders (local authorities and care providers, staff, people who draw on care and support and carers) start to explore the potential of new and emerging technology, what do they hope it will achieve? (Expectations)
- RQ3. What is the process for implementing technological innovation? (Implementation)
- RQ4. How is new and emerging technology for adult social care experienced by people who draw on care and support, carers and care staff? (Early experiences)
- RQ5. What are the broader barriers to and facilitators of the implementation of new and emerging technology in addressing adult social care challenges? (Barriers and facilitators)
- RQ6. How has the COVID-19 pandemic influenced responses to the questions above? (Impact of COVID-19)
- RQ7. How can the process of implementing new technology be improved? (Making improvements)
We answer each of these questions in turn in Chapters 4–9, with Chapter 10 (Discussion and conclusion) exploring the final research question and putting findings in a broader context.
Chapter 4 Findings: Expectations of those adopting AI in social care
Box 8 provides a summary of Chapter 4 of this report.
Summary of key points
- Perhaps unsurprisingly with new and emerging technology, there was a lack of understanding of exactly what AI-based technology can provide, which led to a broad range of anticipated benefits.
- The outcomes anticipated by multiple stakeholders can be summarised within five domains: increasing preventative care; improving assessments and diagnoses; supporting independent living; providing reassurance for those who draw on care and support and their carers; and conserving resources and reducing costs.
- While some participants could cite anecdotal examples of ways in which some of their aspirations had been met for some people, others provided caveats to these observations or offered alternative perspectives. Overall, participants felt that the sensors were not sufficiently stable or effective to collect reliable data over the necessary period of time.
This chapter explores the first six of our seven research questions, with the final chapter (our Discussion and conclusion) placing our findings in the context of Greenhalgh et al.’s (2017) ‘NASSS framework’37 and the ‘rational model’ of implementation, and addressing our final question: how can the process of implementing new technology be improved?
Desired outcomes
Adult social care faces a series of financial, demographic and service pressures, and sites recognised that new and emerging technology could have a significant role to play. However, there was a lack of clarity as to exactly what such technology might deliver (or how), and different people often had different assumptions about what might be possible (even in the same site). In several cases (and with the benefit of hindsight), the ‘shininess’ of new technology seemed very seductive for sites looking for solutions to a series of different problems and hoping that technology could help. Others reflected that it was hard to know what new and emerging technology might achieve in advance of giving it a go.
In particular, different stakeholders seemed to be trying to achieve five different outcomes, sometimes all at the same time:
- Preventative care: Some people hoped that the digital technology (home-based sensors with AI capabilities) would help them understand patterns of behaviour, so that changes could be identified and interventions implemented earlier to prevent a crisis or a deterioration in someone’s condition:

Part of the selling point of IndependencePlus was that, you know, the machine learning would pick up when somebody’s daily routine had changed and would alert you to that fact. So, you know, the kit would send a text message to a carer saying ‘Usually your mum has five cups of tea by this point and today she’s only had one’, you know. ‘Do you want to check this out?’ or, you know, ‘Your mum’s usually out of bed by this time; she hasn’t got up yet, might be worth going round’.

(CS2 P05)

- Improve the accuracy of assessments and diagnoses: Other people suggested that the increased detail captured by a broader range of more sensitive sensors might enable better informed and more accurate assessments, leading to care packages that more fully met people’s needs:

We were hoping it would help them with the assessment process and help them to genuinely understand how people use their homes and therefore what their needs were, so if that person wasn’t really showing, for example, that they were making or seeming to be making themselves regular drinks, that’s something that we would be able to factor in, because sometimes when you speak to someone and they say ‘oh yes I eat very regularly and oh yes I’ve had no trouble at all making a cup of tea’, but they’re either telling you what they think you want to hear or forgetting or fibbing or something, so we felt it would be a useful kind of tool to help a professional really understand the holistic needs of someone in not too intrusive a way.

(CS2 P04)

- Promote independence: The more nuanced data produced by the digital technology might enable a more accurate determination of the potential risks faced by those who draw on care and support, and better inform mitigating interventions:

[Our] ethos is to help people … to live and work as independently as they choose and that’s very much what we’re trying to do in the assistive technology project.

(CS1 P02)

- Increase reassurance for those who draw on care and support and their carers: The flow of real-time data could be monitored by family members (in situations where people had family to take on this role), thereby providing reassurance as to their relative’s safety. Some people also hoped that people drawing on care and support would feel reassured and more confident going about their daily lives if they knew that the digital technology was monitoring them and looking out for potential problems:

We had a few young people… moving on from living at home but parents were very concerned but wanted to have paid support in the environment, they wanted people in the house all the time, whereas we were convinced, I mean, part of the enablement service is to set people up in their homes, give them daily living skills, set them up into routines so they can manage on their own – we know what they could do – we knew that they would be safe – so this was just a reassuring piece of kit that would hopefully support the parents a bit more so they weren’t so anxious.

(CS2 P06)

- Conserve resources and reduce costs: It was hoped that the use of a unified system (containing sensors and an incorporated AI element to support data analysis) would reduce costs. If support could be better tailored to people’s actual needs, then there might also be scope to reduce the amount of staff time spent on providing direct care:

We wanted to change the way we did things in order that we can make our money go further basically, and our way of doing that is to take a strength-based approach and to promote independence and we feel that the technology influence and potential is something that really aligns with that vision.

(CS2 P03)
Sometimes, different people in the same site identified several of these different outcomes, raising the question of whether any one new way of working could ever deliver all of these aspirations at once. If different stakeholders wanted the technology to achieve different things, it was highly likely that many people – perhaps everyone – would end up disappointed. Table 8 sets out these different outcomes in more detail, alongside often anecdotal examples of where individual participants felt that some of these benefits had (at least partially or for a few individuals) been realised. However, other participants would often offer caveats or alternative perspectives, and perceived responses by people drawing on care and support and carers were largely those reported by senior and operational leads, rather than by these groups themselves.
| Outcomes | Anticipated benefits | Were anticipated benefits met? (Perceptions of different/individual participants) |
|---|---|---|
| 1. Preventative care | Observe changes over time in behaviour patterns | There was enough data captured for one individual (including time spent active, hours in bed asleep, the number of occasions they left the house, etc.) to confirm there was no deterioration over time in their levels of activity. |
| | Observe changes over time in physiological functions | |
| 2. Improve the accuracy of assessments and diagnoses | Increased understanding of daily routines | Sensors demonstrated that the kettle was being used regularly and that the fridge was being accessed, leading the care provider to infer that the individual was eating and drinking frequently. |
| | Short-term changes in the data could be used in diagnosis | A number of restless nights recorded by the bed sensor were considered an indication of the onset of vertigo for one person. For another person, the data captured by the bed sensor identified petit-mal seizures at night. |
| | As a source of additional information for health-care professionals | A range of data including heart rate was usefully provided for paramedics called out to attend an individual. |
| 3. Promote independence | Determine appropriate level of care support | The data demonstrated that one person was capable of preparing their own breakfast, so staff did not need to do this for them. |
| | Inform calls to response service (alerts) | The sensors showed that an individual was leaving the house in the early hours, which necessitated a call to response services. Such alerts might mean that the person could continue living in their current accommodation (which might not ultimately be possible if the person went out late at night with no alarm raised). However, as mentioned, in most cases the sensors were not stable enough to collect the reliable data needed to inform calls to response services. As the technology evolved to include more health data, there were also challenges associated with care staff not having the necessary medical training to interpret whether the individual needed medical attention (if data seemed to suggest risk of a deterioration in someone’s health). |
| 4. Increase reassurance for those who draw on care and support and their carers | Reassure families as to the safety and well-being of older relatives | There were several examples where staff felt that sensor data were providing reassurance to families – these data were accessed directly by families or used by care providers as evidence of the safety of the individual. However, as mentioned, in most cases the sensors were not stable enough to collect the reliable data needed to reassure families. This also raised issues of what happens in situations where a person does not have family available to take on this role. |
| | Reassure people drawing on care and support | Several individuals were felt to be drawing reassurance from the fact they were being monitored, albeit remotely. |
| 5. Conserve resources and costs | Financial savings from using a unified system | Purchasing the combined AI and sensor technology was felt to be cheaper than purchasing two systems. |
| | Involve families in monitoring | A family was successfully engaged in monitoring the data, though baulked at paying for the technology when the pilot had concluded. |
| | Safely reduce staff contact | One site identified a reduction in hours of direct care needed for those with the technology installed. |
Reflecting back on this with the benefit of hindsight, some participants wondered whether this lack of clarity about desired outcomes was connected to the myriad of different possibilities that new and emerging digital technology seemed to offer.
For one person:
What we should be doing is going on an individual level – what is it that this person can do, what is it that they could do with a little bit more support, maybe through technology… – and then going out to look at a range of different options and technology being a part of that option. So, it might be that there’s people out there that have care needs, so you have to have in-person support, but they might want to control their environment a little bit more, so you might look at environmental control devices or commercially based devices like [Alexa] and smart homes and generating things like that. There might be a cohort of people that you are identifying where there’s a specific need, so then you might want to go ‘OK, so we have this cohort, because they keep telling us that this is a problem that we need to fix, here’s now a technology that can suit that cohort’, so it might be a remote monitoring device or something similar and then you’d go out to procure a lot of those different devices for that cohort.
(CS4 P01)
For another participant:
The problem is people probably want all of it and they can’t have all of it, so you need to decide what’s your highest priority. Is our highest priority to know when somebody’s fallen over so we can go and pick them up and maybe get them to hospital, is that our highest priority? I don’t know… Or is our highest priority to have lots of data about people so when we come to review them or assess them we make better decisions?... Or is our priority something as simple and practical as I want a very good automated meds dispenser for people who are able to take their own medication because overnight that saves me about 500 hours of care a week and, what’s that, a lot of money?
(CS4 P02)
Framing this in a slightly different way, others felt that there was a risk of being seduced by the apparent lure of new technology (especially given the quality of the IndependencePlus ‘pitch’ discussed in Chapter 5):
Where we’d got it wrong was we’d got this shiny bit of kit and we were trying to fit it in somewhere and make it work somewhere. So the big lesson we learnt was actually don’t start off with the kit or the technology, you start off with the people and the need. And so the question that we now ask ourselves when we look at technology in social care is: right, what is the need this person might have; where our traditional social care package is not meeting that need, and in between that gap, is there a piece of technology that could fit that gap? So that’s the big lesson that we learnt in our approach to technology in adult social care which you know, it sounds obvious now, but yes, we’ve learnt not to be taken in by the shininess of a kit or how good a sales pitch might be.
(CS4 P02)
Responses also seemed to suggest that sites were struggling with whether to experiment with new and emerging technology to see what it could do, or whether to start with people’s needs and reflect on whether new technology might have a role to play. Thus, one site was motivated in part by a curiosity to see what new and emerging technology was capable of, rather than having a pre-planned sense of what might be achieved:
We were all kind of, like, ‘I don’t know!’, like, ‘we don’t know what things will happen, we don’t know what the possibilities are until we have it’. So I think that was part of our interest, was actually like we’re not sure what we’re looking for but we think we’ll know if we find it!
(CS1 P01)
However, another site expressed the opposite view:
I often explain it that we used to, when we were going out for like IndependencePlus and other technology projects in the past, we used to think what technology’s out there and let’s go and buy it and now let’s look at people that we provide care for and fit them to that technology. That is probably the biggest learning, that that’s a mistake. We shouldn’t be doing that.
(CS4 P01)
For several participants, the lack of clarity around outcomes was made worse when the focus of the digital technology changed during lockdown. During the early stages of the COVID-19 pandemic, installation was paused and IndependencePlus made a number of changes to the system before it was reintroduced, so that its focus became the monitoring and recording of health-related data:
They took away the behavioural notifications that we had started to receive, so we’re no longer getting prompting on this person’s not hydrating as much as they usually do or that they’re not going into the fridge as much as they usually do, all those kind of things. Instead we were getting notifications around heart rate spikes or heart rate drops, with notifications set up for blood SpO2 and temperature.
(CS4 P01)
Unsurprisingly, this evolution or pivot in the focus of the technology made it difficult to achieve the desired outcomes, as the nature of the technology was different at the end of some of the pilots from what it had been at the start.
Chapter 5 Findings: Decision-making to adopt AI in social care
Box 9 provides a summary of Chapter 5 of this report.
Summary of key points
-
There appeared to be no systematic decision-making process; in its absence, a number of contextual factors influenced procurement decisions, including perceived pressure from central bodies to invest in technology-enabled care solutions and a more general belief in the capabilities of technology.
-
The identification and exploration of options and alternatives often appeared ad hoc in nature and frequently relied on word of mouth and/or a relatively superficial appraisal of the suitability of the technology, informed by the quality of the sales pitch and the aesthetic characteristics of the hardware.
-
There may be scope for the decision-making process to be more structured and, where broad-ranging changes are envisaged, more strategic in nature.
The social care sector is under growing pressure as financial constraints and workforce shortages are compounded by the increasing demands of an ageing population. Against this backdrop, sites described an expectation from central government and national bodies that they would maximise the use of technology to help manage the mismatch between demand and resource:
Politicians having a belief that technology was the magic bullet for social care and it would save millions and, you know, make all the needy people go away, kind of thing… We were under a lot of pressure to innovate and be seen to be forward looking and, you know, all of that stuff.
(CS2 P03)
Perhaps unsurprisingly with approaches that are ‘new and emerging’, sites did not seem to have a systematic process for identifying and appraising the range of technological options available. As the technology provider observed, the typical approach is often to adopt the technology first and find a use for it afterwards:
I would say probably the most common approach is that we’re aware that technology is moving on and we feel we should adopt some of it, but we don’t know what and we don’t know why, but we just want to see what it is and how it might help.
(CSTP1 P01)
In practice, sites tended to rely on trusted intermediaries such as a local provider or their peers in other organisations, as well as on ‘showcase’ events where a number of technology providers gathered to present their products and provide practical demonstrations to representatives from across health and social care. Arguably the most structured process was found at site 2, where a group of ‘Tech-Champions’ was created to explore technology-based care solutions. While this seemed a helpful way of ensuring two-way dialogue between different stakeholders, with scope to test new ways of working in a very flexible manner, the approach seemed primarily pragmatic and opportunistic rather than a systematic attempt to identify and weigh all the different options. More commonly, sites recognised that they often lacked the necessary technical and IT expertise to ask the right questions of technology providers during the procurement process:
What we’re experts in is providing social care to people. So having somebody who understands these things and can kind of be a sense-check and go ‘sounds good but can it actually…? Where’s that evidence base for this? Has that company tried this out somewhere? Do they have really good data to back up their claims?’
(CS3 P01)
From another perspective, the technology provider felt that a lack of technical expertise across the sector led to unrealistic assumptions about what the technology might accomplish:
They have completely unrealistic expectations of what it can do, like they imagine that it will phone an ambulance for them or, you know, stuff like that. And so they don’t have typically a really kind of realistic view of what is and is not possible in terms of technology. So we spent an awful lot of time working with them to educate them really, I guess, in terms of what the technology does and doesn’t do, what its limitations are, and I would say that they are surprised at what it can do more than they’re disappointed at what it can’t do.
(CSTP1 P01)
The pressure to engage with technology-enabled care solutions and the lack of technical expertise were compounded by the need for the public services involved in the pilots to work closely with a commercial company, with mixed reports from our interviews as to the degree of collaboration and trust between social care organisations and IndependencePlus. Some felt the company may have been withholding information about other sites’ previous experiences of using IndependencePlus, or may otherwise have been trying to ‘run before it could walk’ and expanding too quickly. More broadly, these suspicions surfaced as a general mistrust, voiced by public-sector workers, of the motives of their counterparts in private-sector, profit-driven companies:
IndependencePlus are a private company who have a product that they are trying to sell, you know, so… maybe sometimes it felt like their drivers weren’t about preventing a decline in independence but may be more about, you know, you know – trying to sell their products.
(CS2 P05)
Within this broader context, three factors appeared central to the final decision to procure IndependencePlus, namely: the quality of the pitch; the company’s ability to operate at scale; and a prior familiarity with the potential of sensor-based remote monitoring. These are further explained in Box 10.
-
The quality of the presentation/pitch: At all three of our sites, the persuasiveness of the initial and subsequent presentations by IndependencePlus encouraged its selection ahead of other options in the field.
-
Perceived ability to operate at scale: For one large local authority, it was important that the technology had the ability to be used at scale. Other options in the marketplace seemed to be smaller and from less mature organisations lacking sufficient capacity.
-
Familiarity with sensor-based technology: Two sites recognised the potential advantages of connecting sensor-based technology to an AI system, having previously used a more limited alert-based technology involving a less sensitive set of sensors without the ability to infer specific activities or record baseline data over a period of months.
Consultation with stakeholders
Having seen or heard of new technology that might be promising, sites adopted a range of approaches to consulting with local stakeholders. A local lead at site 3 described a series of meetings and workshops that included their reablement service, their ‘care alarm service’ that monitored alerts, and a number of social care practitioners. Site 2 adopted a different approach, convening a smaller core group who worked closely with the provider, effectively becoming ‘ambassadors’ for the technology and installing the system in a ‘show home’ to demonstrate its capabilities:
The [senior commissioner] gathered a small team around [them] who focused on that piece of work. And the people who worked quite closely with it at that point were very, very enthusiastic and big ambassadors for it and really wanted to try this. And we had things like a kind of demonstration ‘show home’ type of thing that we put in and we showed it to elected members and, you know, a bit of a roadshow around and tah dah! A kind of big fuss!
(CS2 P03)
However, a key issue was the perceived gap between senior decision-making and front-line care delivery (which is presumably a risk in all big organisations). Despite the various workshops and demonstrations, there was little evidence of detailed and two-way consultation with front-line care staff to understand their perspectives, aspirations, anxieties and likely responses before taking the decision to proceed with a local pilot. As one senior manager described:
I think sometimes we have a danger of working in siloes so like the... team who are looking at different types of technologies might go out but they might not necessarily understand what is definitely needed on the front by a practitioner in order to benefit their assessments, and what was really appealing and really exciting and innovative from their perspective isn’t necessarily the same from my perspective.
(CS2 P07)
Only one site described presenting the technology to those drawing on care and support in order to seek their upfront feedback on its acceptability. This lack of early engagement could lead to significant issues further down the line – as in one site where it emerged that wheelchair users had been asked to wear a device which recorded ‘steps’:
A subtle but relevant point was that the wristwatches on the IndependencePlus system presented data as ‘steps’ rather than movement and some of the users of wheelchairs here were quite affronted, understandably, that ‘this isn’t measuring steps because I’ve not taken a step in my life!’
(CS1 P02)
Decision to stop using IndependencePlus
Interviewees did not provide detailed information regarding the decision to stop using IndependencePlus, although some referred to the disruption caused by the COVID-19 pandemic, and the changes that the company had implemented as a result of the pandemic (see Chapter 9 for details). However, as we discuss throughout this report, the technology had in many cases not lived up to expectations, and there were challenges around its technological capabilities and the inability to reuse the equipment across multiple users. Because of these difficulties, sites may have chosen to discontinue their use of the technology regardless of the pandemic.
Chapter 6 Findings: Implementation of AI in social care
Box 11 provides a summary of Chapter 6 of this report.
Summary of key points
-
There was no apparent protocol at any of the sites that described the process of implementing and evaluating new and emerging technologies.
-
Training was provided for staff, though, when delivered directly by the technology provider, it was reported to have failed to engage care staff appropriately.
-
Because of the perceived lack of consultation with care staff during procurement, issues arose with finding a suitable cohort to pilot IndependencePlus.
-
Careful consideration is needed in gathering informed consent from people who draw on care and support, especially where people have cognitive impairments (who, ironically, might be the very group that were intended to benefit the most from the technology).
Sites made no mention of a systematic approach to implementation or of any standard operating procedure (SOP) they would ordinarily use for implementing and evaluating service innovations. As a result, there sometimes appeared to be a lack of structure to the way the implementation was managed. While the technology provider was involved in setting up the sensors, there seemed to be little clarity around the respective roles and responsibilities of the provider and sites, either with regard to the ongoing maintenance of the sensors or in terms of sharing risks/benefits. While some sites had engaged senior staff in designing the implementation of IndependencePlus, the lack of a broader consultation process meant that some front-line staff only became aware of the new technology when the implementation started. In one site, the initial stages of implementation entailed installing the kit in two staff members’ homes to demonstrate how it worked and the nature of the outputs it generated. Though this usefully highlighted some of the key issues around digital connectivity and information governance (see below for further discussion), the extent of these issues also seemed to negatively impact on staff buy-in. The early stages of implementation also revealed significant issues with care practitioners’ digital literacy and varying levels of experience of technology within the workforce (e.g. a survey conducted at site 3 found that 57% of the workforce were uncomfortable using technology). As a result, it was perhaps inevitable that some staff were reluctant to engage with technology for fear their lack of technical expertise would be exposed or were otherwise instinctively suspicious of AI-based technologies:
[Some staff would say]… ‘I don’t really understand technology and this is just going to expose me as somebody [who doesn’t understand technology]…’. Maybe they’re worried about what that says about their intelligence and that’s clearly a fear more than a reality. And then I think there was others were thinking ‘this will never work!’ and then, you know, almost – the detractors sometimes are more difficult because it almost becomes a self-fulfilling prophecy in that I don’t think is going to work and so I’m not going to engage with it and therefore it won’t work and therefore I was right all along.
(CS1 P01)
These attitudes highlighted the importance of consulting and engaging with staff destined to use the technology earlier in the decision-making and implementation process. In this way, their concerns could be surfaced and addressed. As a senior decision-maker in one site reflected:
I think first and foremost with any tech, regardless of what it is, I think the first step forward is always talking to the staff, training the staff and giving them that information before you go ahead and start doing anything. Because I think what’s happened here is we’ve put stuff in and then done a little bit of a ‘Look! We’re doing this!’… and it’s proven to be problematic.
(CS1 P03)
Training
Training focused on the interpretation of the dashboard, to enable staff to understand the implications of the data for each individual. The majority of this was delivered directly by IndependencePlus, with some delivered in-house. Perhaps indicative of a broader antipathy towards new technology, but also offering a lesson for any external IT training, was the need to engage more effectively with care staff. In one site, the training conducted by IndependencePlus seemed to antagonise participants:
The social workers absolutely hated it every single time the IndependencePlus people came down to talk to them because they were too slick, they were too techy, they didn’t understand. And equally on the other side, I’m sure they kept looking at us as if to say ‘What is the matter with these people? Just embrace it!’ So, it was just totally different languages and so I think one of the lessons is you need to have that kind of intermediary, that kind of translator, because it just got their backs up so badly… This one poor guy kept turning up and every single time he’d speak to social workers he’d tell them about this slightly fictionalised case where someone had had a urinary tract infection and how that had prevented them from needing a hospital admission and blah, blah, blah, and of course it drove the social workers nuts because they did not care, quite honestly!
(CS2 P04)
Identifying a suitable cohort
Due in part to the top-down nature of the procurement, issues arose as to exactly where IndependencePlus would sit and who it might benefit. As described in Chapter 4, the technology was expected to benefit a range of users and situations. However, in one site it was discovered very late on that the technology was not appropriate for all the intended cohorts. In some cases this was because the sensors were not specific to individuals (making them unsuited to shared environments, e.g. a care home kitchen or supported living facility). In others, people with dementia who were confused by the technology sometimes tampered with or removed the sensors, while those in the over-85 age group would frequently turn the hub off at bedtime, being accustomed to unplugging electrical equipment overnight (this issue is also discussed in Chapter 7):
Well, the original thinking was that we were going to deploy IndependencePlus kits around various cohorts… and it soon became apparent – and again we’ve learned from our mistakes which is always good – that we had actually bought a sizeable amount of tech and were then trying to find people to fit the tech… and very quickly the cohorts and the amounts that we could potentially deploy to, the numbers started reducing and reducing.
(CS2 P03)
Only in one site was there a more concerted effort involving multiple staff groups and the technology provider to identify residents who might be appropriate for the pilot. This site used both data impact assessments and diversity impact assessments and engaged with individuals to further inform the acceptability and suitability of the technology in each case.
Seeking informed consent
Processes for seeking informed consent often involved very complex judgements and trade-offs, as those who might benefit the most from IndependencePlus were often those less able to provide fully informed consent. This created a series of moral dilemmas for sites:
I think there’s a lot of complexity to unpick with safeguarding in terms of having people who can’t provide consent for themselves and putting these sensors in their living space and then using that information to determine kinds of care decisions. I think there’s a lot of complexity that no one in the organisation was very comfortable with… is it the ethically right thing to do at the expense, which might be kind of discomfort from that person or not ever really getting to a place where they truly understand what all of it is for – is the value there – does the value for us keeping them safe outweigh them not feeling very safe or not understanding kind of the whole context of the supports that they’re getting?
(CS1 P01)
As part of the consent process, a number of queries were raised by potential users. The use of the phrase ‘artificial intelligence’ caused concerns because of potential misconceptions in the public domain as to its safety (e.g. media stories about the risks of AI-driven cars). As a result, some staff talked about becoming more selective in the language they used, referring to the digital technology instead as, for example, ‘enablers’:
‘Enablers’ is giving people the information to understand what it’s doing. Labelling anything ‘artificial intelligence’ at this moment in time conjures up all sorts of prejudices, cars driving down the motorway with people asleep at the wheel or reading the newspaper.
(CS2 P10)
Similarly, the technology provider reflected how they had begun working more closely with their customers to support these conversations:
We’re increasing the amount of work we do on how do you have the conversation with a prospective subscriber – so it’s one thing for us to train up social workers in what the data looks like and how to use it, but ultimately they’ve got to go out and have a conversation with someone and say, you know, convince them to have it installed in their home, so how do they have that conversation? So we’re doing a lot of work with them on having that at the moment. And one of the ways that we don’t do that is reflected by one of our dear customers, who I shan’t name, who said, ‘Look, we’ve been going out to people, we’ve been asking if they want an AI installed in their home and they say “no”!’ I’m like, well, you know, would you?!
(CSTP1 P01)
Staff were also called upon to allay concerns over the aesthetic impact of the sensors in people’s homes, using images of the sensors and hub to reassure potential users. Perhaps most significant were the repeated concerns, raised not only by people drawing on care and support but also by their families, that the presence of monitors in bedrooms and bathrooms risked compromising privacy – something staff acknowledged might be considered invasive:
The challenge with this, even as a concept, is this idea of it’s all a bit Big Brother-like, it’s all a bit you know, sort of a bit ‘spying’… The challenge would be to break down some of the stigma that might come with that. I’m not necessarily saying that it’s true, that it’s like a bit Big Brother-like, but I think that is the perception amongst some people who might be resistant to using the technology, you know. If it’s just there forever, recording how many times I use the toilet, you know, it’s uncomfortable.
(CS2 P03)
Chapter 7 Findings: Early experiences of those using AI in social care
Box 12 provides a summary of Chapter 7 of this report.
Summary of key points
-
The instability of the sensors (which did not produce data on a continuous basis for long enough to be consistently meaningful) meant there was no evidence of the expected cost benefits from a reduction in direct staff contact. The fact that the sensors were single-use further undermined any potential savings.
-
Though the data gathered by the sensors were potentially useful, question marks remained over their reliability and precision (making it difficult to rely solely on the system for the safety of those who draw on care and support).
-
Several technological aspects of the system reduced its flexibility. For example, it could only be installed by the provider and was perceived as difficult to maintain and update independently. The complexity of the user interface and volume of the data it produced were also felt to overwhelm staff.
-
Given challenges in engaging people drawing on care and support and their carers, we were not able to collect information regarding the experience of these users of IndependencePlus.
The experiences of staff using the digital technology tended to fall into three main areas: cost, utility and usability. As noted in Chapter 2, we were not able to include the experiences of people who draw on care and support or carers in the way we had hoped, so the themes reported here are based on staff perceptions – both of their own experiences and their interpretations of the way in which the technology was experienced by people drawing on care and support/carers.
Cost
Some sites had hoped that IndependencePlus would be cost-saving in the short term, in particular due to its perceived ability to support independent living with a less intensive care package and less staff time. However, participants felt that the instability of the technology (see Utility, below) meant that there could be no reliable evidence of it conserving resources. Another cost implication was the misplaced expectation that the sensors could be reused across multiple locations. However, it transpired during the implementation that the sensors were single-use only:
I think we thought it would be used more like a rolling piece of kit that you would take in and out of people’s homes. I think the original vision was you could just deploy this kit, gather the information you needed, take the kit away and deploy it somewhere else. And like I say, it became more about you know, like, giving somebody a piece of furniture that they have to keep forever and a day and it just continuously records data on them, transmits that data.
(CS2 P05)
Utility
Several participants across two case study sites questioned the usefulness of the data they were presented with, as – in common with other remote sensor-based technology – the system could not provide precise information on the activity of the individual. One example quoted was that a sensor indicating the kettle had been switched on did not necessarily mean that a hot drink had been made and hydration levels were being maintained. Another issue was that there seemed to be no way of distinguishing whether the hub had been turned off or whether there was a lack of data on an individual because, for example, someone had fallen and was not moving. Undoubtedly, though, the overriding issue was the reported instability of the sensors (i.e. they were perceived as not working properly or providing continuous data for long enough to be consistently meaningful):
In terms of useful output of the kit, we were never able to get enough stability from the system to engage staff to look at the data because it was always quite patchy or always empty or things weren’t charged… It has to be 30 to 60 days of all of the sensors stable and staying online and feeding data into the system. And we spent almost a year and were never even able to get behaviour patterns established because we couldn’t keep the sensors online enough.
(CS1 P01)
Usability
The sensors were perceived by staff to be relatively unobtrusive and well designed (although, as mentioned above, we do not know if this was also the view of people drawing on care and support or their carers). However, it was a source of frustration for staff that they could not install the new equipment independently as they had done with previous sensor-based technologies. Instead, IndependencePlus technicians were required for installation. Because the technicians were based at a distance, they were reluctant to travel until a number of ‘subscribers’ had been identified, leading to delays in installation. Sites also experienced practical difficulties in ensuring ‘compliance’ from intended beneficiaries, who might remove sensors or disrupt their functioning by adjusting their position (e.g. the sensors on the bed recording hours of sleep):
They have to wear this and they have to not touch this and they have to put the seatbelt on and this, that and the other – and it sounds very, very complicated, the technology… But for us, the kind of people that we could guarantee that compliance with are most likely not the people that we would want to use it for.
(CS2 P02)
In order to use the data effectively, social care workers described how they needed the data interface to be straightforward to navigate and intuitive to comprehend. Although IndependencePlus adjusted the dashboard at one site in response to feedback, other interviewees reported that the quantity of data being pushed to the dashboard tended to overload staff and hinder comprehension. One example quoted was the way in which time was presented to the nearest second, instead of being rounded up into 15-minute slots. There was also a broader issue related to the technological expertise required to maintain and update the IndependencePlus software:
Technical maintenance was a very technically complex – well it was more technically complex and fiddly than we would have liked – to delete a sensor and add another sensor in meant going into, logging into that user’s portal as an admin person on an Android device. I wasn’t too familiar with Android devices myself – I’m an Apple user – so even that kind of navigation around the devices was a bit of a challenge. And I guess we’ve got to remember as well that within the care home there is nobody technical, there are pockets of accidental technical expertise, but this is really just people like the chef who have an interest, but his job is cooking the food, not supporting the technology!
(CS1 P01)
Faced with these various technical complications, sites concluded that the pilots had not ‘worked’, and by the time of our involvement all had abandoned the experiment. However, as described in Chapter 4, at least some of this may also be linked to a lack of consensus around desired outcomes. If different people expect the technology to achieve different things, then it might prove impossible to design and deliver a new way of working which meets everyone’s aspirations, thereby leading to an inevitable conclusion that the technology has not been successful.
Chapter 8 Findings: Barriers and facilitators to the use of AI in social care
Box 13 provides a summary of Chapter 8 of this report.
Summary of key points
-
Incorporating AI-based technology into existing models of social care provision requires alterations to existing funding models and care pathways, as well as concerted training to increase the digital literacy of the workforce.
-
New and emerging technology-enabled care solutions require a robust digital infrastructure, which is lacking for many of those who draw most on care and support.
-
Short-term service pressures and a sense of crisis management are not conducive to the kind of culture and approach that might be needed in order to reap the potential longer-term benefits promised by AI-enabled technology.
Beyond the specific issues related to the reliability and stability of the digital technology described in Chapter 7, sites experienced a series of dilemmas as to how to make the technology work in practice.
Funding, training and resources to incorporate new workstreams
There is an assumption that AI-based technology may, in the long term, reduce the need for human analysis (i.e. the technology would identify key patterns in people’s routines and raise an alert if a significant deviation was detected, without a member of staff needing to monitor the data in detail and make this judgement). However, the reality for a number of sites was that IndependencePlus generated a large amount of data which needed significant staff time to monitor and interpret. Current funding models mean that care providers are paid by the hour for time spent providing in-person care, not for monitoring additional data such as those produced by the IndependencePlus sensors. This also raises questions about who is responsible for monitoring the data from the sensors, and some sites seemed uncertain about exactly who was responsible for what in this regard. This was particularly the case when the technology evolved to include more personal health data, leaving some care staff feeling that they might receive medical data which they did not feel qualified to interpret or comfortable receiving:
Our lead domiciliary care provider, they’re not geared up to looking at health data and making health judgements based on that, so quite rightly they were saying ‘we’ve got this thing that says heart rate spike, what does that mean, do we have to contact a GP, what’s going on?’ So there was a lot of confusion around that, and we actually stopped using the system through COVID because of that.
(CS4 P01)
The whole point around these sensors and these monitors is it’s just continuously feeding on data that a client produces, so the challenge to us in social care is, well, is some of this health-related data? In which case, do we have a real remit to be capturing and processing that data?
(CS2 P03)
Because so much of it is health metrics or what would indicate medical problems or something that needs medical attention and support, I think that it needs to be the team that knows the most about that information, it needs to be people who can interpret what normal heart rate data needs to look like and what normal sleep patterns might look like… so it needs people who know what they’re looking at, know how to interpret it and are skilled and already knowledgeable about how to take that medical data and turn it into actions. This is when we need to call the GP. This is when we need to call an ambulance. This is when we need to change these meds. This is when we need to – you know, it needs to be the people who will make those medical decisions who are interpreting those data.
(CS1 P01)
Incorporating AI-based technology into service delivery also requires that workstreams are reconfigured to accommodate sustained data collection. However, the current model of care management in many local authorities requires a social worker to assess need, determine eligibility for care and support services and design an initial care package, with a separate reviewing team (or even the provider itself) carrying out what can be a fairly minimal review at infrequent intervals. This series of relatively short-term contacts with care practitioners means that there may be few, if any, professionals in the person’s life who are well placed to understand a baseline of someone’s usual routines and respond to sudden variations. Of course, there is no reason in principle why care agencies contracted to provide care could not monitor AI data, if this formed part of an agreed care package, though this would need a fundamental shift in the skill set and expectations of some members of the workforce:
We’re changing the way [we provide] care and that needs some different mindsets and skills or additional to what the care staff have. So the care staff generally have very – they want to be supportive and help people – they’re that kind of character generally – some more than others are comfortable around technology.
(CS1 P02)
Interviewees from regulatory organisations noted that social care, in comparison to health care, has fewer professional regulatory bodies. This may create a risk that training on the use of these AI technologies is not standardised. Fewer professional regulatory bodies might also make it more difficult to collate an evidence base on the efficacy (and cost-effectiveness) of a new technology used in a social care setting.
Digital infrastructure
People who draw on care and support do not always have the digital connectivity needed to enable or maximise the use of these technologies, especially in rural areas. Organisations providing social care also need appropriate and robust IT systems in place; in reality, some reported a lack of Wi-Fi or the absence of a work computer on which to view the dashboard:
In areas that had poor signal or Wi-Fi connections, you could be thinking that everything’s going alright and then they might be in need of help and you wouldn’t sort of know because the Wi-Fi’s dipped or something like that, you know, so they were the doubts going through my mind at the time.
(CS2 P08)
We didn’t know that, you know, that there would be places that staff don’t know their e-mail addresses and have worked there for 20 years and there’s no staff computer. So we’re telling them ‘you just log into this database with your e-mail address and you can see all this data from this person’ and they’re like ‘we don’t have a computer here’. That is a problem!
(CS1 P01)
This creates challenges as the effective use of AI-based technology requires large amounts of good-quality data to be both collected and shared with the relevant individuals.
Cultural shift towards preventative care
Perhaps the biggest barrier to successful implementation, however, was the need for a broader cultural shift, away from a culture that rewards short-term gains and towards one that recognises the longer-term benefits of a more preventative approach (which might take years before any investment is recouped):
Social care doesn’t have a predictive, preventive culture. It will be great if someday we do, but right now all of the incentives and all of the monitoring and the way sites are graded for quality is not based on preventing issues, it’s based on handling the ones in front of you. So there’s a lot of work, kind of, at a more systems level on how social care is monitored and how quality is assessed.
(CS1 P01)
This is about getting big data and going ‘well actually in 10 years’ time Mr Smith is likely to develop these conditions because people with these sorts of characteristics develop these conditions’. Now I completely and 100% support that as we move towards a place-based system, but councils and CCGs are just not there – and that’s a real challenge for them. They can’t fund stuff that’s going to save money in 10 years’ time. They need to save money tomorrow, and that’s a real challenge for actually doing anything that’s genuinely going to prevent increase in need over time.
(CS3 P02)
Chapter 9 Findings: Impact of COVID-19 on AI in social care
Box 14 provides a summary of Chapter 9 of this report.
Summary of key points
- The onset of COVID-19 part way through the implementation led to a shift in priorities, with some sites abandoning the pilot as the focus became managing the effects of the pandemic.
- For those sites that persevered with the pilot, observing the regulations around social distancing delayed the roll-out, predominantly because installation of IndependencePlus was interrupted.
- The pandemic did prompt some sites to reflect on the potential benefit of such remote monitoring technologies in future pandemics.
The onset of the COVID-19 pandemic affected the pilot implementation in a number of ways. The most significant was the shift in focus and priorities of care organisations to meet the needs of the local population, both in terms of managing those already within their care and in supporting those discharged from hospital:
People’s focus was elsewhere, you know, people had different drivers at that point. You know, a lot of our social work focus was on supporting hospital discharge because at that point it would get, you know, free up beds in the hospital so that COVID patients could go to the hospitals.
(CS2 P05)
More specifically, the change in working practices brought about by the need to socially distance also directly impacted on the installation of the sensor-based technology. One site had to cancel its plans to send staff to IndependencePlus headquarters for training in installation. It also meant that IndependencePlus staff could not visit the sites to install the sensors.
During the pandemic, IndependencePlus also took the opportunity to make changes to the technology, shifting their focus from behavioural monitoring towards physiological measures relevant to monitoring the symptoms of COVID. This led directly to one site ending the pilot, feeling that its staff were not equipped to monitor and respond to health data. However, moving forward, the experience of COVID led some to recognise the potential advantages of this technology in future waves of the pandemic, where its ability to remotely monitor those forced to isolate would offer reassurance to family members and care providers:
It would have been brilliant during COVID, when you think of all the family members who were having to isolate in their own homes, whose family members were hundreds of miles away, just to have that extra reassurance without constantly nagging somebody to check on them for you. So it’s a shame, because I can see it being of much more use during COVID, when you’re trying to limit those human-to-human contacts.
(CS3 P04)
Chapter 10 Discussion and conclusion
Box 15 provides a summary of Chapter 10 of this report.
Summary of key points
- In analysing and interpreting our findings, we draw on two implementation frameworks: the ‘rational model’ of policy implementation and the NASSS framework (non-adoption, abandonment, scale-up, spread, sustainability). We use these frameworks to set out practical recommendations for implementing new and emerging technology in social care.
- We identified a number of gaps in the implementation of new and emerging technology in social care, relating to: identifying the problem, establishing/weighing decision criteria, generating/evaluating/choosing the best alternatives, implementing the decision and evaluating the decision. We outline questions to raise when exploring the potential for implementing new and emerging technology in relation to each of these aspects.
- While both the rational model and the NASSS framework are helpful for structuring and summarising the findings of this evaluation, this should perhaps not be at the expense of room for sites to try out something that might make a difference (without necessarily knowing the expected outcome in advance) and to learn by doing.
Chapters 4–9 set out findings with regard to our first six research questions. Our fieldwork found that the decision-making processes within social care organisations adopting IndependencePlus tended to be ad hoc rather than strategic, with sites often selecting a technology before identifying the exact need it would fulfil within their organisations. Decision-makers had expected the technology to achieve a number of different goals: improving care planning and reducing costs for the social care system; aiding in providing preventative care and responding to care needs; and supporting independent living and providing reassurance for those who draw on care and support and their carers. However, these expectations were, on the whole, not met within case study sites, partly because of the technological capability of the equipment and partly because of factors such as a lack of capacity among social care staff to analyse data and use it to improve care at scale, and a culture of responding to needs and crises rather than providing preventative care.
Building on these data, the current chapter seeks to draw these insights together in order to answer our final research question: how can the process of implementing new and emerging digital technology be improved? In the process, we draw on two frameworks from the broader literature around implementation in general, and around the implementation of technology in particular:
- the ‘rational model’ of policy implementation
- the NASSS framework.
Using both models, the chapter sets out a series of practical recommendations for local authorities and care providers seeking to implement new and emerging technology in future, and for technology providers/innovators seeking to work with local authorities and care providers. These messages will be fed back to participating sites to contribute to local discussions about current and future strategy. They will also form the basis of more practice-orientated outputs from the study (e.g. a checklist or toolkit that enables local authorities, care providers and technology providers to plan and structure future approaches to implementing new and emerging technology). Finally, we conclude with reflections on the strengths and limitations of this study, and on implications for future research.
To what extent is new and emerging technology implemented in a ‘rational’ way?
A common framework within the implementation literature is the ‘rational model’, usually associated with Simon (1945)57 (see Figure 1). While different versions have emerged over time, the model is usually presented as a series of stages to work through: a clear statement of the problem to be solved, a weighing up of the options available, implementation of the chosen approach and evaluation of the impact/lessons learnt, with insights usually feeding back to inform future implementation. Ever since, this model has been subjected to a number of critiques, not least that it seems not to describe what actually tends to happen in practice (see Powell, 2011,58 for further discussion). As a result, other writers have set out more of an ‘incremental model’, usually associated with Lindblom (1959).59 Here, decision-makers tend to take a small number of incremental steps but often remain broadly within the status quo, and implementation is more a matter of ‘muddling through’ than a neat, linear, transparent and rational process.
However, as Powell (2011, p.14)58 observes, criticisms of the rational model which focus on it not being a good description of what happens in practice perhaps miss the point. In his view, ‘much of this misses the difference between description and prescription … The approach essentially prescribes how policy ought to be made rather than describing how it is actually made in the real world. [Critics of the rational model] conclude that [it] is too idealistic, while the [incremental model] is more realistic but too conservative … In one sense, then, many critiques of the “rational” model often fire blanks at a straw person as they conflate descriptive and prescriptive perspectives.’ Thus, despite its not describing how implementation actually tends to occur, we have still found this a helpful model for comparing the findings of our study with an ‘ideal type’ of how new ways of working might, could, and even perhaps should be implemented. To emphasise, this is not to criticise sites for not being ‘rational’, but rather an attempt to see if there are stages of the model that might help to reflect on local approaches to implementing IndependencePlus and identify improvements that could be made in future. Moreover, in the case of the present study, an alternative focus on ‘muddling through’ is hardly helpful either, as sites were seeking to explore the possibilities offered by new and emerging technology precisely because it may hold the potential to be transformative (in one sense, the very opposite of ‘muddling through’).
Using the steps in Figure 1, a number of potential gaps are evident in our study:
- Identifying the problem: as set out in Chapter 4, different stakeholders were seeking to resolve a number of potentially different problems/seeking a number of different outcomes, with an apparent lack of a shared understanding of what success would look like.
- Establish/weigh decision criteria: this does not seem to have happened in our case studies, with decisions made on a more ad hoc basis.
- Generate/evaluate/choose the best alternatives: in our study there does not seem to have been a process for systematically considering different options, with decisions taken on a more pragmatic basis.
- Implement the decision: this is only one of the stages in the rational model, but in reality it contains a range of complexities and a number of different issues to consider: from the engagement of staff, people drawing on care and support and carers (in the initial decision to implement and in the implementation itself) to subsequent training; from ethics and consent to the practical/technical functioning of the product; and from responding to the evolution of the technology to being clear about who was responsible for what in terms of the subsequent data.
- Evaluate the decision: sites sought to gain a number of different insights into how the technology was performing at local level, and decided to take part in this national study in order to reflect on their experiences and share key lessons with others. However, a lack of clarity around desired outcomes makes evaluation difficult (if not impossible), and any formal evaluation might have found it difficult to respond to the rapidly evolving nature of the product and the implications of COVID. Ultimately, however, sites decided that their experiments had not worked in practice, and have all moved on to consider other types of technology and different forms of transformation.
Insights from the NASSS framework
Drawing on insights from a series of apparent ‘failures’ of technology products, and a seeming tendency to overpromise and underdeliver, Greenhalgh et al.’s (2017) NASSS framework37 draws attention to the factors that can contribute to the adoption/non-adoption or abandonment of health and social care technologies, and to the challenges of scale-up, spread and sustainability (see Figure 2).
Reflecting on our findings in relation to the NASSS framework, there are a number of ways in which future processes of implementing new and emerging technologies might be improved:
- With the benefit of hindsight, there seems to have been a lack of clarity about the condition (or service setting) for which IndependencePlus might be most appropriate, and the change of tack part way through to focus more on monitoring of personal health data can only have compounded this issue.
- Sites felt that the technology itself simply did not meet their needs and expectations in terms of cost, utility or usability, and several people felt that the potential benefits of IndependencePlus had been ‘oversold’. In contrast, IndependencePlus feels that local authorities and care providers can have unrealistic expectations of what is possible from any one product, and has reflected on the need for the company to do further work with future sites to clarify potential benefits and manage expectations.
- With a lack of clarity around desired outcomes, a degree of distrust between some participants and the technology company, changes in the nature of the product, practical problems on the ground and the unexpected and unprecedented challenges posed by COVID, the value proposition in our study was problematic and contested. It was also not always clear to us what was the responsibility of the developer and what was the responsibility of the case study sites, or how potential risks and benefits might be shared.
- With any technological innovation, more work was arguably needed with front-line staff, people who draw on care and support, and carers (adopters), both when deciding whether new and emerging digital technology might offer a promising way forward and when implementing the subsequent product.
- The capacity of the organisation to innovate and its readiness for this form of digital, sensor-based technology was an important issue in our study. While all sites had taken a risk to experiment with new and emerging forms of technology (and were both generous and brave to share their learning with others via this evaluation), it seems clear that more work was required in terms of consultation and engagement, practical installation and maintenance of the product, and securing buy-in from key stakeholders.
- Whatever sites had done with regard to the above domains, the wider system was arguably unfavourable to the delivery of some of their initial aspirations, particularly in situations where statutory input is relatively brief (preventing the ability to build up detailed knowledge of people’s routines over time) and/or crisis-focused (hindering more preventative approaches). Key fault lines between health and social care systems meant that care staff were unsure how to proceed when the nature of the product shifted to focus more fully on personal health data, leaving some workers feeling anxious and ill-equipped to respond to what the data might be telling them.
- Embedding and adapting over time was difficult given the practical difficulties encountered with the product itself, the impact of COVID and the frustrations experienced by a number of front-line staff. While all sites are committed to reflecting on their experiences (hence taking part in this study) and to exploring other forms of new and emerging technology in future, this particular experiment was deemed unsuccessful in all three sites, and the pilots were abandoned.
While the NASSS framework was developed from secondary research and a series of empirical case studies concerned with technology implementation, we have reflected on the extent to which our focus in this study (the implementation of new and emerging technology) may potentially be subtly different. Although the framework provides a useful structure for drawing together our data and highlighting scope for improvements in the implementation process, we are also struck by one site’s observation (see Chapter 4) that:
We were all kind of, like, ‘I don’t know!’, like, ‘we don’t know what things will happen, we don’t know what the possibilities are until we have it’. So I think that was part of our interest was actually, like, we’re not sure what we’re looking for but we think we’ll know if we find it!
(CS1 P01)
Although our attempts to utilise the rational model and the NASSS framework imply that sites should have a preconceived notion of what they are trying to achieve and a rationale as to why this is the best way of going about it, it is possible that we are presenting the nature of their experiments the wrong way round. In the quote above, the technology seemed promising and people gave it a go to see what it might achieve on their behalf, rather than knowing (or thinking they knew) what might be possible. Thus, both the rational model and the NASSS framework are helpful for structuring and summarising what we found, and for anticipating potential issues to tackle in advance, but this should perhaps not be at the expense of room for sites to (sometimes) experiment with something that might make a difference and to learn by doing.
Implications for future policy and practice
In many ways, our research identifies similar themes to many of the previous studies summarised in Chapter 1 of this report. For example, our findings echo several of the key messages that emerged from the Care City test bed evaluation, which explored the implementation of digital innovations and the development of the workforce in health and social care in London.60 These included the importance of pursuing greater and continuous engagement with the workforce and those intended to benefit from the innovation, careful consideration of information governance, and increased awareness of the consequences that incorporating an innovation has for existing workstreams. With this in mind, our priority is not just to set out our findings via this report, but to use them to help others avoid similar dilemmas and potential pitfalls in future.
Building on insights from the two analytical frameworks above, Table 9 sets out a series of implications for future practice. These offer no guarantees of successful implementation – not least because of the severe pressures which social care is facing. Instead, they are intended as questions which local authorities and providers might use as a checklist to reflect on their readiness when exploring future technology-based pilots. We will seek to share these with case study sites in the first instance, as part of feeding back our findings to those who took part. We will also work to develop these initial questions into a toolkit, explore collaborations with national policy and practice partners, and seek to promote this as a practical resource for others wishing to implement new and emerging technology in future.
| Elements of the rational model | Elements of the NASSS framework | Theme from our research questions | Possible questions to ask before proceeding |
| --- | --- | --- | --- |
| Identifying the problem | Condition/service setting | Expectations | |
| Establish/weigh decision criteria – generating, evaluating and choosing the best alternative | The technology and the value proposition | Decision-making | |
| Implement the decision | Adopters, capacity of the organisation, wider system | Implementation | |
| Evaluate the decision | Embed and adapt over time | Early experiences, barriers/facilitators | |
Given that our findings tend to mirror much of the previous literature in terms of common implementation challenges and a tendency for some new technology to ‘overpromise and underdeliver’, it is clear that even more work is needed to embed lessons learnt in policy and practice, hopefully preventing similar issues from arising in future. Crucially, this is not just about the actions of individual social care stakeholders. At a national level, for example, there are geographical disparities in the digital readiness of local authorities and care providers, relating to the availability of reliable digital infrastructure and to shortages of the necessary digital skills within the workforce.3 The government’s recent White Paper on social care reform has made commitments to address variations in digital infrastructure, also committing £500M for the necessary training of care staff, with a further £1M earmarked for a new Centre for Assistive and Accessible Technology.14 While such measures are welcome, some of our participants felt a degree of pressure to adopt technological solutions, which might lead some organisations to commit to piloting interventions they are not yet fully equipped to support.10
Implications for future research
Our study looked at the implementation of one technology (IndependencePlus) at three case study sites in England. Although we endeavoured to include a broad range of stakeholders, there were limitations (as noted throughout this report) around the inclusion of front-line workers, carers and people who draw on care and support, all of whom should be included in future studies. Further research might usefully explore a number of different AI-based technologies and their introduction across sites representing a broader range of organisations and settings. Complementing the experiences of our case study sites, where the pilots were not felt to have been successful, there may also be scope to learn from implementations that are felt to have been positive, which could be used to expand the practical insights provided in Table 9. The fact that our study largely confirms already known implementation challenges perhaps raises an important question about the capacity of social care organisations to design pilots. Rather than conducting new research, perhaps a priority should be to make implementation support available on a more routine basis, helping social care organisations with the design and implementation of new interventions so that pilots are not ‘set up to fail’ because of known implementation problems left unaddressed at the outset. Of course, evaluation can be particularly challenging when, as in our study, there are competing notions of what success might look like, so practical support to clarify and measure desired outcomes might also be useful.
Strengths and limitations
As set out in Chapter 1, the previous literature recognises that technology is not being used to its full potential in social care and identifies a series of gaps in our current knowledge. These include a lack of evidence on the expected or achieved impact of new technologies on people who draw on care and support and on carers, as well as a lack of evidence to help commissioners in implementing new and emerging technologies alongside existing services and understanding the potential impact on future service use and cost-effectiveness. There is also a wide range of technologies available, and little clarity as to how people wanting to explore the potential of these should go about it. We also know relatively little about the attitudes and perspectives of people who draw on care and support, carers and care staff, and about the impact on the wider workforce and services. There has also been more research in health than in social care. As a result of all this, there has often been positive policy rhetoric about the potential of technology, which tends to sit uncomfortably with a more mixed (perhaps even poor) track record in practice.
In seeking to fill some of these gaps, the current study took the decision to focus on the process of making decisions about, and implementing, new and emerging technologies, feeling that this way of framing the question might be more useful to local authorities and care providers than simply focusing on a piece of equipment or an innovation and asking ‘did it work?’ We believe that this decision was justified by our subsequent findings, which, despite a broadly negative experience in our case study sites, have generated a series of practical lessons to help others in future. We have also been able to gain insights from case study sites implementing a new and emerging technology (home-based sensors with AI capabilities), as well as from technology providers/innovators seeking to develop new ways of working. While there were some differences in interpretation (e.g. some sites feeling that the technology had been ‘oversold’, and IndependencePlus feeling that some sites had unrealistic expectations), there was also significant agreement (e.g. that a number of the pilots had neither a fully shared sense of what they were trying to achieve nor full and shared clarity around desired outcomes).
At the same time, the challenges that we faced in conducting this research – in terms of the impact of COVID, the time which elapsed between the national prioritisation exercise and the subsequent research, and the myriad of practical difficulties which sites had faced – meant that the study is smaller than we had hoped (in terms of both the depth and breadth of case study sites). This has also meant that our reflections on early experiences of the pilots are based on the perceptions of senior decision-makers and operational staff, who fed back the messages they heard from people who draw on care and support, carers and front-line care staff (rather than us being able to hear these voices directly). While this is a key limitation, we still believe that the insights presented here are helpful to inform future practice, and the research conducted was all that was possible in an incredibly difficult external environment. Had we pushed any harder with already very stressed and inundated case study sites, we would have become just another burden for them at a time of unimaginable service pressure. In our view, this would not have been ethical – and would also have proved completely counterproductive (as sites would simply have withdrawn).
Summary
New and emerging technology, such as home sensors that use AI, has been identified as potentially promising in addressing a series of challenges facing adult social care. However, there is a lack of evidence to support its widespread use, and to support local authorities and care providers making decisions around the implementation of these technologies. Overall, the evidence base is sparse, methodological limitations of available studies are common, and there is a focus on health rather than social care outcomes. In response, this rapid evaluation sought to understand what senior decision-makers are hoping to achieve by adopting new and emerging technology, how they go about deciding which technology to adopt and which partners to engage, how these technologies are implemented, and the early experiences of people drawing on care and support, carers and front-line care staff.
After significant scoping work (a national prioritisation exercise, a review of the literature, engagement with key stakeholders and detailed design discussions with sites piloting this technology), the research went ahead in three case study sites. Due to a series of practical constraints, it drew mainly on the perspectives of senior decision-makers and operational leads, commenting on their own experiences and on those of people who draw on care and support, carers and care staff. This was supplemented by insights from the technology provider and from other key stakeholders involved in the development and delivery of technological innovation in adult social care. Overall, the research confirmed a number of common implementation challenges, but also added early insights into newer themes, such as the volume/complexity of data and the subsequent analytical burden on untrained staff, or the challenges of implementing technology that tries to establish a long-term picture of someone’s routine in a system where interventions can often be short-term and crisis-focused.
Despite the difficulty of conducting research in such a challenging context, a series of very clear and significant themes emerged, the study remains highly topical, and the approach adopted has helped to produce a series of tangible findings. This has helped to fill some of the key gaps in the literature and to share practical lessons learnt with commissioners, providers, technology providers and policy-makers – especially at a time when technology has been so prominent during the pandemic and in recent government policy.14 Given that this study confirms so many common implementation barriers, this focus on sharing and embedding lessons learnt in order to help future implementation feels crucial – otherwise we run the risk that future studies of new and emerging technology will simply report similar challenges once again.
Acknowledgements
We are very grateful to everyone who gave their time to participate in this study, and to our contacts at each case study site who helped to coordinate our interactions with interviewees. We would also like to thank our peer-reviewers of the report, Dr Jo Ellins (BRACE Deputy Director, University of Birmingham) and Prof Iestyn Williams (Head of Research within the School of Social Policy, University of Birmingham).
Patient and public involvement
The impetus for this study was an NIHR-instigated/funded national prioritisation exercise held in October 2019, which included people who draw on care and support and their carers. In the scoping stage of this study, we also worked with people who draw on care and support and carers to confirm the relevance of our research questions and to design the detail of our approach. This engagement occurred through online project design groups, including members of the BRACE Health and Care Panel and the University of Birmingham Social Work Service User and Carer Contributor group.
The initial protocol for this study included further patient and public involvement in the form of interviews and/or focus groups with individuals who draw on care and support and their carers. To facilitate this, a member of the study team with experience in involving people with dementia and their carers (DT) was available to provide support. However, due to challenges in recruitment and engagement, this was not possible.
Ethics
Ethical approval was obtained from the University of Birmingham Research Ethics Committee (ERN_13-1085AP41, approved 31 May 2021; ERN_21-0541, approved 5 July 2021; and ERN_21-0541A, approved 8 October 2021).
Funding
This project was funded by the NIHR Health Services and Delivery Research (HS&DR) Programme (HSDR16/138/31). Further information is available at: https://www.birmingham.ac.uk/research/brace/projects/new-and-emerging-technology-for-adult-social-care.aspx
Contributions of authors
Jon Glasby (https://orcid.org/0000-0003-3960-7988) (Professor of Health and Social Care, University of Birmingham) was the Principal Investigator for the study. He contributed to the conception and design of the study, the theoretical framework, and overall data analysis and interpretation, and is a co-author of the final report.
Ian Litchfield (https://orcid.org/0000-0002-1169-5392) (Senior Research Fellow, University of Birmingham) contributed to the theoretical framework, data collection and overall data analysis and interpretation, and is a co-author of the final report.
Sarah Parkinson (https://orcid.org/0000-0002-2858-1842) (Senior Analyst, RAND Europe) was the Project Manager for this study and contributed to the theoretical framework, data collection, and overall data analysis and interpretation, and is a co-author of the final report.
Lucy Hocking (https://orcid.org/0000-0002-8319-962X) (Senior Analyst, RAND Europe) contributed to the theoretical framework, data collection, and overall data analysis and interpretation, and is a co-author of the final report.
Denise Tanner (https://orcid.org/0000-0002-6944-7367) (Associate Professor of Social Work, University of Birmingham) contributed to the conception and design of the study, the theoretical framework, and overall data analysis and interpretation, and is a co-author of the final report. She brought particular expertise in terms of the involvement of people with dementia and their carers.
Bridget Roe (https://orcid.org/0000-0002-2625-9315) (Research Fellow, University of Birmingham) contributed to the theoretical framework, data collection and overall data analysis and interpretation.
Jennifer Bousfield (https://orcid.org/0000-0002-5671-2336) (Analyst, RAND Europe) contributed to the conception and design of the study, and data analysis and interpretation of the scoping exercise.
All authors contributed to integrating the findings of the study. JG made additional critical revisions to the final report and approved the final manuscript. All authors agree to be accountable for all aspects of the work in ensuring that any questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved.
Data-sharing statement
All data requests should be sent to the corresponding author in the first instance. Due to the consent process for data collection at case study sites within this evaluation, not all data can be shared.
Equality, diversity and inclusion
The study has endeavoured to incorporate the views, perspectives and contributions of people from diverse backgrounds and with protected characteristics at different points during the study. At some stages this has been limited by more general difficulties in recruitment (see Chapter 2).
Identification of topic: Prioritisation workshop
The research topic itself – new and emerging technology – was identified as being among a number of priorities for rapid evaluation during an NIHR-instigated/funded national prioritisation exercise held in October 2019. The 23 members of this workshop included people who draw on care and support and their carers, care staff, academics/researchers, commissioners and policy-makers. Care was taken to ensure that membership of the workshop reflected diversity in terms of protected characteristics of ethnicity, age and gender.
Scoping study
The research team undertook scoping work to inform our research design. This comprised key informant interviews and project design groups with stakeholders to explore: the questions we should ask stakeholders; the themes to consider; and methods for collecting data from care providers and commissioners, people who draw on care and support, carers and care staff.
In total, nine key informant interviews were conducted with research experts in the field and with decision-makers and operational leads from local authorities and care providers over several months in 2020. Our approach aimed to be inclusive by offering a choice of phone or online interviews.
The research team also held three online project design groups with stakeholders in October 2020. Design group 1 involved four local authority commissioners and care providers who were decision-makers and operational leads with experience of using IndependencePlus. Design group 2 involved four people who draw on care and support, and carers. Design group 3 involved three people who draw on care and support, and carers. Attendees of the second and third groups were from the BRACE Health and Care Panel and from the University of Birmingham Social Work Service User and Carer Contributor group. To avoid exclusion based on an individual’s access to, and proficiency or confidence with, technology, individuals were asked if they would prefer to speak over the phone with a member of the research team rather than attend the design meeting. Furthermore, one carer who was interested in contributing but was unable to attend the dates of the online project design groups was offered an individual interview. For both the key informant interviews and project design groups, it was not possible to extend involvement to face-to-face participation because of the COVID-19 restrictions in place at the time.
Research design
Given that people with cognitive impairment may be among those most likely to benefit from the use of technology to promote independence, we felt it was important that their experiences were included in the study. We aimed to include people who lacked the capacity to consent to participate via the involvement of consultees, who could advise about their likely willingness to take part and be interviewed on their behalf if appropriate.
Our research design aimed to facilitate and encourage the participation of other people who might face barriers to involvement, such as people with learning difficulties and people with dementia. Researchers with specific training, experience and skills in interviewing people with different communication needs were recruited to the study. We produced accessible information sheets and offered a choice of interview methods, including face-to-face or online meetings and telephone interviews. Our intention was to seek advice from care providers, carers and, where possible, the individuals using the technology themselves, to agree the most suitable methods of communication for them, including the use of visual communication aids if appropriate. Before each interview, our aim was to make sure that interviewees were comfortable and in a suitable private location, and we planned to spend time building a rapport to put people at ease. In line with ethical considerations, the research team agreed a process of: active listening; attending and responding to participants’ body language and facial expressions; and continual ‘checking in’ with participants to monitor and assure their well-being.
As mentioned earlier, online project design groups helped us to develop topic guides that were clear, relevant and mindful of accessibility and inclusion.
Research participant characteristics
Decision-makers and operational leads within the case study sites were asked to nominate potentially suitable staff, people drawing on care and support and carers for interviews. As discussed in the Methodology section (see Chapter 2), a number of significant challenges were faced in the recruitment of participants, especially people who draw on care and support and carers. This left little scope for us to be proactive in seeking out and selecting participants with a view to ensuring the inclusion of people from diverse backgrounds and more marginalised communities.
Research findings
Our findings consider the impact of different dimensions of inequality in terms of access to and use of home sensors with AI. For example, our research has highlighted: barriers to technology faced by people with poor internet access (e.g. those living in remote rural locations); and issues of ethics and consent relating to people with cognitive impairments. These are important equality, diversity and inclusion issues for commissioners and providers to consider when planning the use of technology in adult social care.
Outputs and dissemination
Project findings will be shared and discussed at a meeting of the full BRACE Health and Care Panel, which includes eight people drawing on care and support and public members. We will also seek the advice of these panel members in terms of the best ways to communicate findings to people who draw on care and support and public audiences, helping to ensure that dissemination activities have a wide reach and impact.
Research team
The research team consisted of researchers from BRACE rapid evaluation teams and the University of Birmingham, incorporating disciplines such as applied health research, social policy and social work, and with research interests in topics such as the use of technology in care settings, commissioning, prevention, dementia and user/carer involvement. The research team had a mix of backgrounds in relation to gender, age and research experience, though was less diverse in terms of ethnicity.
The team held weekly project meetings and weekly data collection and analysis meetings throughout the project to ensure that all members were well supported. All researchers were supported by the Principal Investigator, the Project Manager and senior members of the team.
Disclaimers
This report presents independent research funded by the National Institute for Health and Care Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, the HSDR programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, the HSDR programme or the Department of Health and Social Care.
References
List of abbreviations
- AI: artificial intelligence
- BRACE: Birmingham, RAND and Cambridge Evaluation Centre
- IT: information technology
- NASSS: non-adoption, abandonment, scale-up, spread and sustainability
- NIHR: National Institute for Health and Care Research
- WSD: Whole Systems Demonstrator