Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 15/144/51. The contractual start date was in July 2017. The final report began editorial review in August 2020 and was accepted for publication in March 2021. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Disclaimer
This report contains transcripts of interviews conducted in the course of the research, or similar, and contains language which may offend some readers.
Permissions
Copyright statement
Copyright © 2021 Towers et al. This work was produced by Towers et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This is an Open Access publication distributed under the terms of the Creative Commons Attribution CC BY 4.0 licence, which permits unrestricted use, distribution, reproduction and adaption in any medium and for any purpose provided that it is properly attributed. See: https://creativecommons.org/licenses/by/4.0/. For attribution the title, original author(s), the publication source – NIHR Journals Library, and the DOI of the publication must be cited.
Chapter 1 Introduction
Background and rationale: the importance of measuring the contribution of care homes to the quality-of-life outcomes of older residents
Over 425,000 older people in England live in care homes because they have significant long-term health problems. 1 Many have reduced cognitive functioning, frailty and difficulties with communication (e.g. as a result of dementia) and live with multiple long-term health conditions. 2 Over one-third of those living in care homes will pay for their own care either completely or in part. 3 Otherwise, residents are supported by public funding: half of all care home residents are fully supported through their local authority, and the rest are supported either by NHS continuing care or through some combination of local authority, charity and NHS support. 3 In England, care homes operate in a quasi-market,4 with around 90% of provision delivered under contract by private and voluntary sector providers. Although care homes collect and use data about the health and care needs of their residents for their own records and regulatory processes, there is no single, agreed, minimum data set in the UK. 5 Furthermore, owing partly to the distinction between health and social care systems in the UK, data about care home residents held by primary and secondary care are not easily linked with data held by social care providers. Therefore, in such a large and fragmented system, measuring and improving care quality is a challenge.
How well residents’ personal and health-related needs are met is affected by the quality of care provided. A general definition of quality of social care services is that it consists of both quality of care and quality of life aspects. 6,7 Quality of care relates to the technical aspects of caring provided by the service itself, and is largely based on the staff working in the service, such as their level of training and skills, their responsiveness and the continuity of care provided. 7,8 Quality of life is subjective and is often related to residents’ satisfaction with life, including their level of control, privacy, interactions, safety and ability to go about their daily lives. 7
Service users will be involved in the co-production of many services (e.g. washing and dressing), and as a result the definition and measurement of social care quality have been the subject of academic debate for some time. 7 According to the Donabedian model of care,9 to assess quality, we require indicators that are sensitive to variations in quality relating to the structure, process and outcomes of care. Structural quality indicators are organisational characteristics of the care provider, for example the care home environment/building and the staff-to-resident ratios on site. Process quality indicators relate to the way the care is delivered by care workers, for example whether or not staff are caring, timely and skilled. Outcome indicators relate to the results of the care for the service user, for example whether or not the person is clean, dressed and fed, feels in control of their daily life and is able to spend their time doing things they value and enjoy.
The Care Act 201410 emphasises the importance of measuring and improving the well-being of users and their family carers, and previous research has shown that this is highly valued by older people and their families when considering a care home. 11 However, measuring person-centred outcomes is challenging, particularly when trying to assess the quality of life of people with cognitive impairment and communication difficulties. 2 The high prevalence of cognitive impairment in this population2 means that self-report is unlikely to capture the views of residents with both the greatest need and the highest capacity to benefit. Proxy report, for example by staff, rarely agrees strongly with residents’ own views12 and is best considered a different perspective rather than a replacement for a resident’s own voice. This places those with the greatest need at risk of living with under-reported and under-managed health and social care-related quality-of-life outcomes.
The Adult Social Care Outcomes Toolkit: a method for measuring the social care-related quality of life of service users
The Adult Social Care Outcomes Toolkit (ASCOT) was developed to measure the ‘social care-related quality of life’ (SCRQoL) of service users. 13 ASCOT is a preference-weighted utility measure with eight conceptually distinct domains of SCRQoL, which are described in Table 1. Each domain has four response options, reflecting four outcome states (ideal state, no unmet needs, some unmet needs and high unmet needs). Different tools are available, including self-completion questionnaires, interview schedules and a mixed-methods tool for use in care homes. The self-completion questionnaires and interviews are also available in an easy-read format for adults who have developmental and intellectual disabilities (www.pssru.ac.uk/ascot/).
Domain | Definition |
---|---|
Basic domains | |
Food and drink | The service user feels that he/she has a nutritious, varied and culturally appropriate diet with enough food and drink that he/she enjoys at regular and timely intervals |
Accommodation cleanliness and comfort | The service user feels that their home environment, including all the rooms, is clean and comfortable |
Personal cleanliness and comfort | The service user feels that he/she is personally clean and comfortable and looks presentable or, at best, is dressed and groomed in a way that reflects his/her personal preferences |
Personal safety | The service user feels safe and secure. This means being free from fear of abuse, falling or other physical harm |
Higher-order domains | |
Control over daily life | The service user can choose what to do and when to do it, having control over his/her daily life and activities |
Social participation and involvement | The service user is content with their social situation, where social situation is taken to mean the sustenance of meaningful relationships with friends, family and feeling involved or part of a community, should this be important to the service user |
Occupation | The service user is sufficiently occupied in a range of meaningful activities, whether this be formal employment, unpaid work, caring for others or leisure activities |
Dignity | The negative and positive psychological impact of support and care on the service user’s personal sense of significance |
Adult Social Care Outcomes Toolkit scoring
Three scores can be derived from the ASCOT tools: current SCRQoL, expected SCRQoL and SCRQoL gain. All ASCOT tools measure current SCRQoL, which asks about the person’s situation now. The interview and mixed-methods tools also ask what the situation would be if services were not in place to support the person and nobody else stepped in. We call this expected SCRQoL. By subtracting the expected SCRQoL score from the current SCRQoL score, we can estimate the impact that the service(s) are having on the person’s quality of life; we call this difference SCRQoL gain.
As ASCOT is a preference-weighted measure, responses to each domain for current and expected SCRQoL are weighted to reflect English population preferences and then entered into an algorithm (Equation 2) to calculate a score ranging from −0.17 to 1. 13
Scores of 1 represent optimum or ‘ideal’ SCRQoL and scores of 0 indicate a state that, according to the preferences exhibited by the general population, is equivalent to being dead. Negative scores indicate a state worse than being dead. 13
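The scoring just described can be written out as follows. The gain equation follows directly from the text; the weighted-sum form of the score is only a schematic sketch of the published ASCOT approach, as the preference weights themselves are not reproduced here:

\[ \text{SCRQoL gain} = \text{SCRQoL}_{\text{current}} - \text{SCRQoL}_{\text{expected}} \]

\[ \text{SCRQoL} = \sum_{d=1}^{8} w_{d,\,r(d)} \]

where \( r(d) \) is the response level chosen in domain \( d \) and \( w_{d,r(d)} \) is the corresponding population preference weight, scaled so that the ‘ideal’ state in every domain gives a score of 1 and the worst state gives −0.17.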
Psychometric properties of the Adult Social Care Outcomes Toolkit
Psychometric testing has consistently revealed that ASCOT has acceptable internal reliability. 13,15 Research with a wide range of social care user populations, including older adults, younger adults and adults with physical and sensory impairments, has established its validity, feasibility and reliability. 13,15,16
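Internal reliability of a multi-item measure such as ASCOT is conventionally summarised with Cronbach’s alpha. The sketch below shows the calculation on synthetic data; the sample size, factor structure and resulting values are invented for illustration and are not ASCOT’s published results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic example: eight 'domain' items driven by one common factor,
# so the items are internally consistent and alpha should be high.
rng = np.random.default_rng(42)
factor = rng.normal(size=(500, 1))
scores = factor + 0.5 * rng.normal(size=(500, 8))
alpha = cronbach_alpha(scores)
```

With items sharing a strong common factor, as here, alpha is close to 1; items that are pure noise give an alpha near 0.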
Expected SCRQoL, which is still relatively uncommon in quality-of-life research, has been shown to be a reliable indicator of social care need, as it is highly correlated with functional ability. 13,17 Recent work exploring the feasibility and validity of the expected SCRQoL questions found the approach both feasible and valid with a sample of social care users, with many people saying that they found the questions easy or very easy to answer. 17 Nonetheless, it is not without its limitations. In particular, the expected SCRQoL questions are not suitable for people with limited or low cognition. 17 Given that up to 80% of care home residents18 are believed to be living with dementia, these questions are unsuitable for use in interviews with most care home residents.
ASCOT-CH4 and the social care-related quality of life of care home residents
To measure the impact of services in this population, we needed to employ a mixed-methods approach. The mixed-methods version for use in care homes is called ASCOT-CH4 (Adult Social Care Outcomes Toolkit Care Homes-four levels; herein referred to as CH4) and was designed for residents who cannot self-report. Methodologically, CH4 involves structured observations of residents, alongside a staff proxy interview (for each resident; see Report Supplementary Material 12), a resident interview (to the extent that they are able; see Report Supplementary Material 11, 13 and 14) and a family proxy interview (where possible; see Report Supplementary Material 12). (See Appendix 1 for a detailed account of this method.) The mixed-methods approach has been found to have excellent face validity and inter-rater reliability. 19
Previous studies taking this approach19–22 have found that residents’ needs are mostly met in the ‘basic’ domains (safety, accommodation, food and drink and personal cleanliness). However, there is evidence of unmet needs in the higher-order domains (Control over daily life, Social participation and engagement in meaningful Occupation), with some residents spending long periods of time disengaged. Staff recognise the importance of being occupied and staying socially active, but note that many residents decline to take part in organised activities or appear too tired and withdrawn. 21 The impact of skilled care and support to help and encourage people to remain active and engaged should not be underestimated. 23,24 However, at the same time, we should not ignore the possible contribution of unmet needs to other areas, including aspects of residents’ health-related quality of life (HRQoL). For example, unrecognised depression, anxiety or pain may lead to unmet needs in these domains.
Health-related quality of life: pain, anxiety and depression in care homes
Pain
Studies estimating the prevalence of pain among the care home population suggest that it is common, as it is for all groups of older people, who experience higher levels of chronic disease than younger age groups. 25 Actual pain prevalence estimates for resident populations vary considerably, although most fall between 30% and 60%. 26–39 A recent large-scale study using validated pain scales in England suggested that the prevalence of pain in this population is around 35%. 33
Although at least one-third of residents live with and experience pain, the consensus is that pain is often under-recognised and undertreated in care homes. 40–43 This is likely to be attributable to a variety of causal factors, including resource considerations and the attitudes of health-care professionals. 44 However, difficulty in detecting and assessing pain is one of the most widely cited reasons for this under-treatment. 44–46 Pain is a subjective experience47,48 and cannot be measured adequately using an objective diagnostic test, which is why mainstream approaches rely on self-reporting. As McCaffery48 notes:
Pain is whatever the experiencing person says it is, existing whenever he says it does.
McCaffery48
There are, however, a number of challenges to enabling care home residents to self-report and share their subjective experiences of pain. Although impairments more common in later life, such as hearing and sight loss, do present challenges,49 the key challenge to measuring these experiences is that a high proportion of residents live with dementia. 18 The characteristics of the group of conditions classified as dementia mean that the ability to communicate experience may be limited, particularly for those living with more severe dementia, and, thus, either research or diagnostic methods that rely on this may not be particularly appropriate for this group. 46,50–54 It is not, however, just the ability to communicate clearly that is required for accurate self-report. The resident must understand the request, be able to recall pain events – often in a given time frame – and interpret that internal experience with reference to an external framework. 55 All of these can be challenging for people living with dementia.
There is evidence that pain is under-recognised among residents living with dementia in studies comparing the numbers of older adults in care home settings who receive pain medication. Morrison and Siu56 found that, following hip fractures, cognitively intact older people received three times the level of medication received by older people living with dementia. This finding was echoed in studies conducted in Canada57 and the USA,58 both of which concluded that residents living with dementia in nursing homes were less likely to receive pain medication than residents who did not have dementia. This has also been found in other contexts, such as in the treatment of scabies. 59
Pain is associated with decreased functional ability, lower levels of activity and socialising, higher levels of falls, reduced appetite, poorer sleep, greater levels of irritability, aggression, anxiety and depression, and resistance to care. 25,60–66 Unsurprisingly, untreated pain has been found to be associated with poorer overall quality of life. 30,33,64,67,68 Specifically, for older adults living with dementia, untreated pain has been shown to exacerbate the symptoms of dementia, such as impaired cognition. 30,64,69,70
Depression and anxiety
Studies of the prevalence of mental health conditions suggest that large numbers of residents live with one; the most common of these conditions is depression. 71–73 However, as with pain, estimates vary, complicated by differences in how studies define and measure depression: international prevalence estimates range from just under 10% to just over 50%. 74–81 Considering specifically residents with a diagnosis of dementia, recent work in the UK has estimated that the prevalence of depression among this group is at the middle and lower end of this range. One study estimated that 26.5% of older people living in residential care and 29.6% of older people residing in nursing homes experience depression,82 and another places this figure at around 10% of the care home population. 77
Anxiety in later life has historically received less attention than depression, despite being quite common. 83–85 A 2010 systematic review75 found only three studies that estimated anxiety among older adult care home residents. A review in 2016 found 18 studies,86 a number that, although still small, suggests that studies focusing on anxiety in care homes are becoming more commonplace. The reviews reported the prevalence of anxiety disorders among residents as somewhere between 3.2% and 20%. However, clinically significant anxiety symptoms were found to be more widespread, with prevalence across studies ranging from 6.5% to 58.4%.
Living with anxiety and/or depression can have a range of negative impacts on an older person’s life, including lower quality of life,68,87–89 lower sociability and greater loneliness,90 and poorer health, including increased pain. 91–95 Anxiety and depression have also been linked to sleep problems and smoking,84,96,97 loss of independence and functional ability,87,98–103 cognitive decline,104–107 behavioural problems,94,108 suicide109 and higher mortality levels. 110–112
Even though the impacts of living with depression and anxiety in later life are, as outlined above, well documented, and the conditions themselves are often treatable,113,114 there is a large body of evidence suggesting that depression, in particular, is both under-recognised and undertreated or poorly managed, particularly in residents who live with dementia. 107,115–122 Less evidence exists around the under-recognition and undertreatment of anxiety in care home residents, but, given the lack of attention that anxiety has received in this setting,86 it is likely that both of these are very common.
As with pain, the assessment of depression and anxiety in care homes is made more challenging by the characteristics and symptoms of dementia. 114,123 However, there are other challenges, including a lack of suitable tools124 (as many of the tools available have been developed for younger populations), lack of training and knowledge on depression and anxiety among care home staff115,125 and the difficulty in isolating symptoms of depression and anxiety from other conditions and somatic complaints. 126
A new approach to measuring pain, anxiety and depression in care homes
Although measures of pain, anxiety or depression have been used with older people and those living in care homes, unlike the ASCOT for SCRQoL, none of them aims to measure the impact of social care services. Existing measures focus on the person’s current situation or diagnosing a condition without any reference to input from services or support and without a mechanism for attributing improvements or change in outcomes to those services. There is also no evidence to suggest that tools have attempted to use innovative methods or adapted forms of communication to support the inclusion of people with cognitive or communication difficulties. This matters in care homes, where many residents live with dementia and have physical and sensory impairments that make self-completion or self-report challenging or impossible,19,127 placing residents at risk of living with unmet needs in both their health and social care-related quality of life. 87,128–130
Measuring the quality of care homes
In England, the quality of adult social care provision is regulated by the Care Quality Commission (CQC), which conducts inspections and awards publicly available ratings to drive up quality and inform service user choice. Quality ratings are based on an assessment of evidence gathered using five key lines of enquiry (KLOEs): ‘safe’, ‘effective’, ‘caring’, ‘responsive’ and ‘well led’ (Table 2). CQC inspectors draw evidence from four sources of information: CQC’s ongoing relationship with the provider, ongoing local feedback and concerns, pre-inspection planning and evidence-gathering, and the inspection visit. During site visits, the inspector:
. . . speaks with people using the service and their visitors, staff, volunteers and visiting professionals to assess all of the key questions. They also review relevant records and inspect the layout, safety, cleanliness and suitability of the premises, facilities and equipment . . .
Care Quality Commission. 132
An overall rating is aggregated from ratings for each of the five KLOEs, with ratings awarded on a four-point scale: ‘outstanding’, ‘good’, ‘requires improvement’ or ‘inadequate’. 132
KLOE | Definition |
---|---|
Safe | People are protected from abuse and avoidable harm |
Effective | People’s care, treatment and support achieves good outcomes, promotes a good quality of life and is based on the best-available evidence |
Caring | The service involves and treats people with compassion, kindness, dignity and respect |
Responsive | Services meet people’s needs |
Well led | The leadership, management and governance of the organisation assures the delivery of high-quality and person-centred care, supports learning and innovation, and promotes an open and fair culture |
Social care establishments are inspected by the CQC between 6 and 12 months after they start (or resume service) and regularly thereafter. The frequency of inspection depends on the establishment’s rating. Establishments rated ‘good’ and ‘outstanding’ are inspected within 30 months of the last comprehensive inspection report and those rated ‘requires improvement’ are inspected within 12 months of the last inspection report, with establishments rated ‘inadequate’ (or rated overall as ‘requires improvement’, but with at least one KLOE rated ‘inadequate’) inspected within 6 months of the last inspection report. These timescales are maximum time periods during which the CQC will return to inspect, but establishments may be inspected at any time. 132
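The maximum re-inspection intervals above amount to a simple rule set. As an illustrative sketch (the function name and interface are this sketch’s own, not any CQC system), they can be encoded as:

```python
def max_months_until_inspection(overall_rating: str,
                                any_kloe_inadequate: bool = False) -> int:
    """Maximum months before the CQC will re-inspect, per the rules above."""
    rating = overall_rating.lower()
    if rating == "inadequate":
        return 6
    if rating == "requires improvement":
        # A 'requires improvement' home with any KLOE rated 'inadequate'
        # is re-inspected within 6 months; otherwise within 12.
        return 6 if any_kloe_inadequate else 12
    if rating in ("good", "outstanding"):
        return 30
    raise ValueError(f"unknown rating: {overall_rating!r}")
```

These are maximum periods only; as the text notes, an establishment may be inspected at any time within them.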
Staff and quality of care
Quality of social care services varies for many reasons, but the nature and characteristics of the workforce, and their approaches to care, are likely to be major determinants. As noted above, quality of social care services is formed of quality of care, that is, technical aspects of care delivery, and quality of life, that is, individual resident aspects relating to their satisfaction with life and health and social care outcomes. The competency and quality of care home staff will directly influence the quality of care, given their duties and role. Care home staff are also likely to, at least indirectly, influence quality of life. For example, staff and staffing characteristics have been found to have an impact on satisfaction133,134 and the perceived quality of service in social care. 135
Staff in social care tend to be low paid, often at minimum wage, and staff turnover in social care in England is high. 136 Despite this low pay, staffing accounts for a large proportion of a care home’s costs, typically 50–60% of revenue in England. 1 Working in care homes, particularly as a care worker, has often been perceived negatively: the role is seen as low paid and low skilled (i.e. requiring little or no formal education), with little in the way of career progression. 137 However, there is growing recognition of the need to reframe the skills and identity of the social care workforce. 138 Staff working in social care tend to do so for intrinsic reasons and tend to have high levels of informal skills,139 and may accept a low wage because of their caring motive. 140
These workforce employment conditions could have a negative impact on care outcomes, given that issues of pay, training, status, terms and conditions are likely to influence quality. 141–143 More generally, economic theory provides a direct link between training, wages and the productivity of workers. 144,145
A small body of literature exists on the impact of workforce characteristics (e.g. staff turnover) on care home quality. 141,146,147 Much of this literature is US based and focuses on clinical markers of quality or other process measures, not on final outcomes or quality of life. Previous analyses of care homes in England found that quality had a significant positive relationship with staff retention and a significant negative relationship with job vacancies148 and that a lack of staff could lead to closure. 149 However, there is very little statistical evidence in England linking wages and training levels to care quality outcomes. Furthermore, only a very few analyses use appropriate statistical methods to address the potentially complex inter-relationship between quality and staffing. 150,151
Aims and objectives of the study
The overarching aim of the Measuring and Improving Care Home Quality (MiCareHQ) study was to assess how care home quality is affected by the way the care home workforce is organised, supported and managed. We sought to understand the relationship between workforce employment conditions and training, CQC quality ratings and the health- and care-related quality of life of care home residents.
The objectives were to:
-
develop and test measures of pain, anxiety and depression for residents unable to self-report [work packages (WPs) 1 and 2]
-
assess the extent to which CQC quality ratings of a home are consistent with indicators of residents’ quality of life (WPs 2 and 3)
-
assess the relationship between aspects of the staffing of care homes and the quality of care homes (WP3).
Overview of methods
MiCareHQ was a mixed-methods study, involving qualitative fieldwork and quantitative analysis in three interlinked WPs.
Work package 1: measuring health and social care-related quality of life
Aims
This WP aimed to develop and cognitively test new measures of pain, anxiety and depression, which could be used alongside CH4 in care homes with residents who cannot self-report.
Methods
A scoping review of existing measures, with a particular focus on tools that already incorporate observational methods, was undertaken to inform the development of draft tools (see Chapter 2). The draft tools were shared with stakeholders (including care home staff) in focus groups in order to explore the face and construct validity of the items. Following revision, the new measures were cognitively tested with a sample of staff and family members of care home residents using a combination of verbal probing techniques and thinking aloud (see Chapter 3). 152 Lay co-researchers were involved in the focus groups and subsequent revisions of the new measures (see Chapter 7).
Work package 2: psychometric testing using the mixed-methods ASCOT with new health-related measures
Aims
This WP aimed to pilot and psychometrically test the new measures of pain, anxiety and low mood. It also collected information about care home residents’ care and HRQoL outcomes that fed into analysis in WP3.
Methods
Primary data collection was undertaken using a cross-sectional design, in which researchers spent time in each care home carrying out observations and interviews with staff, and (where possible) residents and family members using CH4. Additional data about the residents were collected from staff using questionnaires, including demographic information, health status and ability to complete activities of daily living (ADLs).
In total, 182 residents from 20 care homes for older adults (10 nursing and 10 residential) were recruited to the study from four local authorities in South East England. To explore the construct validity of the new health items developed in WP1, questionnaires also contained staff-rated, validated scales relating to these concepts so that we could explore hypothesised relationships with the new attributes in the analysis.
The results of the psychometric testing of the new CH4 items developed in WP1 are in Chapter 4.
Work package 3: care home quality, resident outcomes and care workforce
Aims
This WP aimed to assess the relationship between care home quality and residents’ quality of life, and between care home quality and workforce characteristics and deployment.
Methods
Work package 3 used two approaches. First, resident outcome data were collected from two studies and used to model the relationship between CQC quality ratings and residents’ quality of life (see Chapter 5). Study 1 is the Measuring Outcomes Of Care Homes (MOOCH) project, funded by the National Institute for Health Research (NIHR) School for Social Care (2015–18). 20,153,154 This study recruited 294 older care home residents from 34 care homes (20 nursing and 14 residential) in two local authorities in England. Study 2 collected primary data as part of this research (see Chapter 4). Both studies used a cross-sectional design, in which researchers spent time in each care home carrying out observations and interviews with staff, and (where possible) residents and family members using CH4. Additional data about residents were collected from staff using questionnaires, including demographic information, health status and ability to complete ADLs.
Second, we conducted an analysis of existing data to model the relationship between CQC quality ratings and workforce characteristics, controlling for confounding factors (see Chapter 6). We focused on the association between training provided to staff and staff terms and conditions (e.g. wages and turnover/vacancy rates) and the quality of care homes. All of these variables, which can be affected by policy, were expected to be important determinants of quality. Using data on around 5500 care homes, we used the following econometric methods to assess the relationship between quality and workforce characteristics: longitudinal models using data from 2016 to 2018, multiple imputation to address missing data, and an instrumental variable approach to control for potential endogeneity between quality and workforce characteristics, in particular staff wages.
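The instrumental variable step can be illustrated with a minimal two-stage least squares sketch. Everything below is synthetic and hypothetical — the variables merely stand in for a wage instrument, an endogenous wage measure and a quality outcome — and is not the study’s actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(size=n)                       # instrument (e.g. local labour-market wage)
u = rng.normal(size=n)                       # unobserved quality shock
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor (wages)
y = 2.0 * x + u + rng.normal(size=n)         # outcome (quality), true effect = 2.0

def two_stage_least_squares(y, x, z):
    # Stage 1: regress the endogenous x on the instrument z (with intercept)
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress y on the fitted values from stage 1
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

intercept, beta_iv = two_stage_least_squares(y, x, z)

# Naive OLS for comparison: biased because x is correlated with the shock u
X = np.column_stack([np.ones_like(x), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]
```

Because wages and quality are both driven by the unobserved shock, the naive OLS coefficient is biased upwards, whereas the two-stage estimate recovers the true effect; this is the motivation for instrumenting staff wages in the analysis above.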
Public involvement and engagement strategy
Public involvement and engagement was an integral part of the study from the design stage, when the application was reviewed by five members of the public from the Personal Social Services Research Unit’s (PSSRU) Research Advisors Panel. During the study, patient and public involvement (PPI) was delivered in the following ways:
-
Study Steering Committee – two members of the public with experience of social care were recruited as lay representatives to the Study Steering Committee. One was a partly retired care home manager and the other was an informal carer whose relative had lived in a care home. Their role on the Study Steering Committee was to attend meetings, give a public/patient perspective on issues that arose and comment on emerging findings.
-
Co-researchers in WP1 – we recruited three lay co-researchers from the PSSRU’s Research Advisors Panel to assist with the focus groups with staff and contribute to the development of the new measures of pain, anxiety and depression. This is described in detail in Chapter 3.
-
Contributions to study outputs – lay advisors from the Study Steering Group contributed to the drafting of the Plain English summary of this report and reviewed it for written quality and clarity. The co-researchers from WP1 co-authored Chapter 7.
Ethics approvals
Ethics approval was required for the primary data collection undertaken in WP1 and WP2. WP3 drew on both the primary data collection undertaken in WP2 and publicly available data sets. The analysis of publicly available data received favourable ethics approval from the University of Kent (SRCEA 201).
The primary data collection received a favourable opinion for ethics approval from the Health Research Authority, which governs research ethics in the UK (reference 18/LO/0657). As this was a social care study, approval was also sought and received from the Association of Directors of Adult Social Services (ADASS) and from each local authority involved in the study.
The main ethical issues were around participant consent and the participant observations being conducted in communal areas of the care home.
Participant consent
The study consent procedures complied with the Mental Capacity Act155 and Code of Practice. 156 Where residents lacked the capacity to consent, personal consultees were asked for their advice about including the person in the research. Thereafter, consent was treated as a continuous process, with researchers being sensitive to both verbal and non-verbal indications of residents’ unwillingness to participate in the research.
Participant observations
Observations were carried out as unobtrusively as possible in communal areas of the homes (e.g. lounge, dining area, activity areas, gardens and corridors). Informed written consent was sought only from study participants who were the subject of the observations and interviews. As care homes are communal environments, non-participatory residents, staff and visitors were present during the observations. To ensure that observations were conducted with the full knowledge of those living and working in the home, and to confirm that people were happy with our presence, we asked homes to display posters raising awareness of the research, with the names and photographs of the research team, 2–3 weeks before data collection. On the day of fieldwork we were also given a tour of the home and introduced to those present. At the end of the observational period, we said goodbye to those present and notified the staff and manager that we had stopped observing.
Summary
Care homes provide fundamental long-term care and support to around 425,000 older people in England, many of whom have multiple complex conditions, including dementia. They aim to provide high-quality care that improves the health and care-related quality of life of residents. However, in the absence of a minimum data set for care homes, measuring the impact of care is challenging. Regulator quality ratings have an important role to play, but the extent to which these reflect residents’ health and social care-related quality of life is unclear. Furthermore, reliable, robust methods of measuring patient-reported outcomes traditionally rely on self-report, which is unsuitable for many care home residents.
This study aimed to explore the health and care-related quality of life of care home residents and regulator quality ratings, and their relationship to workforce and job characteristics. To this end, we undertook three interlinked packages of work to (1) develop new measures of pain, anxiety and depression suitable for residents who cannot self-report, (2) validate these measures’ use and psychometric properties alongside the ASCOT and (3) understand how care home quality relates to workforce employment conditions and training and to residents’ care-related quality of life.
Chapter 2 Rapid reviews of tools measuring pain, anxiety and depression
Chapter 1 described how, despite being distinct conditions, a common picture emerges when exploring pain, anxiety and depression in care home residents. First, each is experienced by a substantial proportion of care home residents. Second, each is associated with poorer health and quality of life. Finally, it is generally agreed that all three are under-recognised and undertreated. One barrier is the difficulty of collecting this health-related information from residents using traditional methods of self-report. This is particularly salient for residents living with dementia or other cognitive impairments that affect a person’s ability to communicate their views and experiences.
Aims and objectives
The overarching aim of WP1 was to develop new items for measuring pain, anxiety and depression, which could be used alongside the ASCOT-CH4 with care home residents who cannot self-report.
This chapter addresses the first objective of WP1: to undertake a review of tools measuring pain, anxiety or depression that were developed, tested or validated for use with older people, people living with dementia or older people living in care homes.
For ease, throughout this chapter we use the terms ‘care home’ and ‘resident’ to refer specifically to care homes for older adults (residential and nursing) and the people who live in them.
Methods
Two rapid reviews were conducted. The first focused on pain tools, aiming to:
-
identify pain measures designed for or tested with older adults including those who may have difficulty with self-report, such as people living with dementia and/or
-
identify pain measures designed for or tested with older adults that contain an observational/behavioural element.
The second focused on tools that measure anxiety and depression, again aiming to:
-
identify anxiety and depression measures designed for or tested with older adults including those who may have difficulty with self-report, such as people living with dementia and/or
-
identify anxiety and depression measures designed for or tested with older adults that contain an observational/behavioural element.
The rapid review method is a form of knowledge synthesis. It is a streamlined version of the full systematic review,157 and usually omits or simplifies elements of the systematic review process. In this study, the rapid review approach was adopted because it enables reviews to be completed in a shorter period than full systematic reviews, which often take at least 12 months. 158,159
Search strategy
We searched five databases {MEDLINE, CINAHL (Cumulative Index to Nursing and Allied Health Literature), the Cochrane Library, PsycInfo® [American Psychological Association, Washington, DC, USA; via EBSCOhost (EBSCO Information Services, Ipswich, MA, USA)] and Abstracts in Social Gerontology}. The searches for the review focusing on pain tools were conducted between mid-November 2017 and mid-December 2017. The searches for the review looking at anxiety and depression tools were carried out in January 2018. We used a combination of search terms related to our population and constructs of interest (see Appendix 2 for the full search strategy). Owing to the large number of papers on pain, anxiety and depression, we focused on original and review articles published in the previous 10 years (from 2007), which tested or reviewed more than one tool with older adults. Only studies published in English were included.
In both reviews, following the literature search and tool identification outlined below, data were extracted on each tool and used to populate a spreadsheet. The data extracted were organised around the following headings, which related to characteristics of the tool:
-
review article identifying the tool
-
name of tool
-
original article outlining tool
-
availability of the tool
-
main population designed/tested with
-
other populations used with
-
aspect(s)/dimensions of pain measured
-
method(s) of data collection
-
details of measurement scale/rating/scoring
-
details of development and testing
-
other (methodological issues, reflections on the tool).
The data were then summarised in chart form (see Appendix 2, Tables 21 and 23).
Findings
Pain tools rapid review
The search strategy (see Appendix 2) yielded a total of 2167 papers. Following a review of the titles, 1865 of these papers were rejected, leaving 302; once duplicates were removed, 196 unique papers remained.
The title and abstract of each of the 196 papers were reviewed for relevance and adherence to the inclusion criteria (see Appendix 2). Twenty-six papers were included in the final rapid review, the majority of papers being rejected as they focused on a single tool (Figure 1). A total of 22 tools for measuring pain in older people, people living in care homes and people living with dementia were identified from these final 26 papers. The identified pain tools are presented in Table 3. Further details of the key characteristics of these tools can be found in Appendix 2, Table 21.
Self-report | Proxy | Observation |
---|---|---|
21-Point Box Scale; Color Analog Scale; Faces Pain Scale (FPS); Functional Pain Scale; Iowa Pain Thermometer; McGill Pain Questionnaire; Numeric Pain Rating Scale; Verbal Descriptor Scale; Visual Analogue Scale; interRAI Long-Term Care Facilities | Pain Assessment for the Dementing Elderly; Pain Assessment Instrument in Non-Communicative Elderly; interRAI Long-Term Care Facilities | APS; CNPI; Doloplus-2; DS-DAT; MOBID-2; NOPPAIN; NPAT; Observational Pain Behaviour Assessment Instrument; PACSLAC; PAINAD; interRAI Long-Term Care Facilities |
The tools identified in the review can be categorised by mode of data collection. Three different modes were identified among the tools: self-report, proxy report and observation. Most tools tend to use just one mode of data collection, although the interRAI Long-Term Care Facilities tool161 uses all three. However, it should be noted that many of the observational tools include verbal manifestations of pain, such as saying ‘it hurts’, in their evidence for rating the presence or intensity of pain, for example negative vocalisations in the Pain Assessment in Advanced Dementia (PAINAD)162 or ‘vocal complaints: verbal’ in the Checklist of Nonverbal Pain Indicators (CNPI). 163
Ten tools identified in the review use self-report methods and have been tested or validated for use with older people, people living in care homes or people living with dementia. These tools were usually not developed specifically for older people or people living with dementia and other cognitive impairments. Instead they tend to be tools developed for use either with the general population or with children, such as the Faces Pain Scale (FPS). 164 The self-report tools usually, although not exclusively, focus on the measurement of pain intensity and use a range of approaches to facilitate self-report, including words, numbers and visual representations of pain. Often these different methods are combined in a single tool. For example, the 21-Point Box Scale uses a number scale anchored by words at either end165 and the Iowa Pain Thermometer166 uses a visual representation for pain alongside words to add further clarity to its representation of pain intensity.
The pain tools use a variety of words to describe different pain states. For example, the McGill Pain Questionnaire167 begins with ‘no pain’ and follows with words that represent increasing intensity of pain (mild, discomforting, distressing, horrible . . . excruciating). Similar approaches are used in other scales, such as the various versions of the Verbal Descriptor Scale, including the pain thermometer version,166 on which the six-point scale ranges from ‘no pain’ through increasing severity (mild, moderate, severe) up to ‘as bad as it could be’. Words are also used in tools, and sections of tools, that focus on aspects of pain beyond intensity, such as the experience of pain (e.g. the McGill Pain Questionnaire167). Numbers are used by some tools to represent pain intensity, with different scales using different sets of numbers. Different versions of the Numeric Pain Rating Scale168 use different ranges; the most common are 0–10 and 0–100, but other numerical scales are also used (e.g. the 21-Point Box Scale165). On all of these scales, greater intensity of pain is represented by a higher number.
As noted above, visual methods are also employed to represent and capture the intensity of a person’s pain. For example, many of the numerical scales are presented visually on a scale resembling a ruler. Other visual methods include the Visual Analogue Scale and the aforementioned pain thermometer or the Colour Analog Scale,169 on which visually greater levels of pain are represented by the top of the thermometer, where the colouring is more intense. Another form of visual representation used in pain tools is faces. The various versions of the FPS164,170,171 use faces with different expressions to represent different pain states.
Because of the subjective nature of pain, self-report is, quite understandably, viewed as the ‘gold standard’ of pain measurement and diagnosis. 65,172 However, the high levels of cognitive impairment among residents of older adult care homes,18 and the associated challenges of recognising and reporting pain, mean that self-report pain scales, and self-report scales more generally, are often inappropriate for use in long-term care settings. 55,173–178 Thus, the focus of pain measurement tools designed specifically for older people, and especially for older people living with dementia, has moved to observational methods.
Most reviews of observational-based pain tools agree that, although self-report is preferable, where people are not able to reliably self-report, observational methods are the most appropriate55 approach to the measurement or diagnosis of pain. 46,55,70,179 Moreover, specific observational pain tools are recommended by professional bodies. For example, the Abbey Pain Scale (APS)180 is recommended in the UK by the Royal College of Physicians, the British Pain Society and the British Geriatrics Society. 45 We found 11 pain tools that use observation to investigate, measure or diagnose pain among older adults, people living in older adult care homes and people living with dementia.
Unlike the majority of the self-report tools, these tools were almost exclusively developed for people living with dementia. The most common approach employed by the observational pain tools in this review is the identification of pain behaviours, which in turn populate a pain intensity score, such as in the PAINAD,162 where pain is scored on a range of 0–10. In this tool, 1–3 reflects mild pain, 4–6 reflects moderate pain and 7–10 reflects severe pain. Some tools measure other aspects of pain. For example, the CNPI163 focuses on the presence of pain, and one of the earliest observational tools, the Discomfort Scale in Dementia of the Alzheimer’s Type (DS-DAT),181 measures the frequency and duration of pain. The Mobilisation–Observation–Behaviour–Intensity–Dementia-2 Pain Scale (MOBID-2)182 looks at the location of pain in addition to its intensity. In some tools, such as the APS180 and the Doloplus-2,183 the severity of the behaviours themselves is also rated. The pain behaviour items in the observational tools found in this review are presented in Appendix 2, Table 22.
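As a concrete illustration of this behaviour-to-score approach, the sketch below follows the PAINAD structure: five observed behaviours (breathing, negative vocalisation, facial expression, body language, consolability) each rated 0–2, summed to a 0–10 total that is then banded into mild, moderate or severe pain. The item names, the treatment of a zero total as ‘no pain’, and the validation logic are illustrative only, not an official implementation of the instrument.

```python
# Illustrative sketch of PAINAD-style scoring (not an official implementation).
# Five behaviours are rated 0-2 each; the 0-10 total is banded as described in
# the text (1-3 mild, 4-6 moderate, 7-10 severe); 0 is treated here as no pain.

PAINAD_ITEMS = ("breathing", "negative_vocalisation", "facial_expression",
                "body_language", "consolability")

def painad_total(ratings: dict) -> int:
    """Sum the five item ratings; each must be 0, 1 or 2."""
    for item in PAINAD_ITEMS:
        if ratings[item] not in (0, 1, 2):
            raise ValueError(f"{item} must be rated 0-2")
    return sum(ratings[item] for item in PAINAD_ITEMS)

def painad_band(total: int) -> str:
    """Map a 0-10 total to the pain bands quoted in the text."""
    if total == 0:
        return "no pain"
    if total <= 3:
        return "mild"
    if total <= 6:
        return "moderate"
    return "severe"

ratings = {"breathing": 1, "negative_vocalisation": 2, "facial_expression": 1,
           "body_language": 0, "consolability": 1}
print(painad_band(painad_total(ratings)))  # total 5 -> "moderate"
```

The key design point the sketch captures is that the rater scores observable behaviours rather than asking the person to rate their own pain; the intensity label is derived entirely from the summed behaviour ratings.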
Many of the observational-based tools in the review were designed to be administered by care workers, nursing assistants, nurses and health professionals rather than by researchers. This is often reflected in the way the tools were designed to be used. Nearly all are short tools – with the exception of the 60-item Pain Assessment Checklist for Seniors with Limited Ability to Communicate (PACSLAC)184 – and developers claim that they can be administered in under 5 minutes, and, in the case of the APS, about 1 minute. 180 However, many of these tools were not designed to provide a one-off snapshot of the pain experienced by a resident. For example, the APS and the Doloplus-2 were designed to be used on multiple occasions to understand a person’s pain. Other tools were also clearly designed for use in practice, as they stipulate that the gathering and rating of pain behaviours should occur during activities. The MOBID-2, for example, sets out five different movements that should be carried out by the person being assessed. These include turning to both sides in bed and opening both hands, one at a time. A different pain assessment is made for each movement. Similarly, the Non-Communicative Patient’s Pain Assessment Instrument (NOPPAIN)185 was designed to be used to assess pain during a range of activities, including dressing and bathing.
A number of the tools listed above have been widely used and are well tested and validated. In particular, the APS, the Doloplus-2, the PACSLAC and the PAINAD tend to perform best in validity, reliability and psychometric testing. 70,172,178
There are, however, a number of caveats and concerns around the use of observational pain measures. One of the most salient concerns about using observational means to assess pain is that they focus inherently on behaviours. Although the behaviours used as indicators of pain are clearly actions and behaviours that people might engage in when in pain, it has been pointed out that many can also be indicative of other states, such as hunger, thirst, delirium, depression or anxiety. 46,55,186 There are also concerns that an observational-based pain tool cannot assess the intensity of a person’s pain, only its presence. 185,187 A number of studies have suggested, however, that this concern has been overstated and note the high level of agreement on pain intensity between self-report and observer-rated pain scores. 70,179,188 In the light of these concerns, many reviews of observational-based pain tools have concluded that best practice should involve attempts to elicit self-report, supplemented, where needed, by observation and other methods to assess pain in older people with cognitive impairments, such as physical examination, medical history and reports from family members. 46,65
Anxiety and depression tools rapid review
The search strategy (see Appendix 2) yielded a total of 805 papers. A screening of the article titles removed 451 papers. When duplicates were removed from the remaining 354, there were 166 unique papers. The title and abstract of each were reviewed for relevance and adherence to the inclusion criteria. This resulted in 20 papers being assessed as eligible for the review. The majority of the excluded items looked at only one tool. During the review itself, two other papers were identified from citations in the review papers. Twenty-two papers were included in the final rapid review (Figure 2).
A total of 20 tools for measuring anxiety and depression in older people, people living in care homes and people living with dementia were identified from these 22 papers. A number of the tools had multiple versions. Some were revisions of a tool over time (e.g. the Beck Depression Inventory), whereas others were versions of a tool using different methods (e.g. the Patient Health Questionnaire, which is available in self-report and observational versions). Two further tools were included in the review. The Minimum Data Set Depression Rating Scale (MDS-DRS)189 was identified in the pain review, and the Dementia Mood Assessment Scale (DMAS)190,191 was identified while exploring the properties of the Hamilton Depression Rating Scale (HDRS),192,193 the tool on which the DMAS is based.
The identified anxiety and depression tools are presented in Table 4. Further details of their key characteristics can be found in Appendix 2, Table 23.
Name of tool | Symptom measured | Mode |
---|---|---|
Anxiety Disorders Interview Schedule for DSM-IV (ADIS-IV) | Anxiety | Clinical interview |
Beck Anxiety Inventory | Anxiety | Self-report |
Beck Depression Inventory/Beck Depression Inventory II | Depression | Self-report |
Brief Measure of Worry Severity | Anxiety | Self-report |
Centre for Epidemiologic Studies Depression Scale (CES-D) | Depression | Self-report |
Cornell Scale for Depression in Dementia (CSDD) | Depression | Clinician interviews (proxy and patient) |
DMAS | Depression | Clinician interview (input from proxies) |
Evans Liverpool Depression Rating Scale | Depression | Self- and proxy report |
Generalised Anxiety Disorder Questionnaire for DSM-IV | Anxiety | Self-report |
Generalised Anxiety Disorder Severity Scale | Anxiety | Clinician interview |
Geriatric Anxiety Inventory (GAI) | Anxiety | Self-report |
Geriatric Anxiety Scale (GAS) | Anxiety | Self-report |
Geriatric Depression Scale (GDS) | Depression | Self-report (plus proxy) |
Hamilton Anxiety Rating Scale (HAM-A) | Anxiety | Clinician interview |
HDRS | Depression | Clinician interview |
Hospital Anxiety and Depression Scale (HADS) | Depression and anxiety | Self-report |
MDS-DRS | Depression | Observation (staff) |
Montgomery–Åsberg Depression Rating Scale (MADRS) | Depression | Observation and interview by clinical staff |
Patient Health Questionnaire-9 (PHQ-9)/Patient Health Questionnaire-9 Observation Version (PHQ-9 OV) | Depression | Self-report/observation (staff) |
Penn State Worry Questionnaire (PSWQ) | Anxiety | Self-report |
Rating Anxiety and Dementia Scale (RAID) | Anxiety | Clinician interview (patient and proxy) |
State–Trait Anxiety Inventory | Anxiety | Self-report |
Of the tools identified, 11 focus solely on anxiety, 10 focus on depression, and one, the Hospital Anxiety and Depression Scale (HADS),194,195 measures both. Among the tools, four modes of data collection were identified: self-report, proxy report, clinician interview/rating and observation. Although most tools adopt a single approach to gathering information, some do combine methods. For example, the Montgomery–Åsberg Depression Rating Scale (MADRS)196,197 combines observation with clinical interview, and the Patient Health Questionnaire-9 (PHQ-9)198–200 has both a self-completion version and a version based on observations.
By far the most common mode of information collection was self-report. Indeed, even where tools use clinician-based ratings, these are often based on a clinical interview in which the person reports their own experiences. In total, 13 tools use self-report: six measure only anxiety, six only depression and one both conditions. Of these, two, the Evans Liverpool Depression Rating Scale201 and the Geriatric Depression Scale (GDS),202–204 supplement self-reports with proxy reports. Most of the tools were designed to be administered using pencil and paper, but a number have been validated for administration in an interview.
Four anxiety and four depression tools use the clinical interview as their source of data. In a number of these tools, the views of a proxy are also sought either as a supplement or, in cases where the patient could not self-report, as a replacement. Depending on the tool, the proxy may be a family member or a caregiver who knows the patient well. Some of the clinical interview tools are, however, closer to self-report tools, in that, although the interview is administered by a clinician, its format is highly structured and the data recorded are essentially self-reported. Tools in this mode include the Cornell Scale for Depression in Dementia (CSDD). 205 In other cases, the clinical interview diverges from direct self-report as the rating of items is completed by clinicians themselves, albeit based on evidence collected in interviews. Examples of this type of tool include the Anxiety Disorders Interview Schedule for DSM-IV (ADIS-IV),206 a semistructured interview tool with prompts to inform the interview, following which the clinician rates the patient’s anxiety disorder, or the HDRS192 and the Hamilton Anxiety Rating Scale (HAM-A),207 which have traditionally used unstructured clinician interviews to provide the evidence for their ratings.
Only three tools explicitly use observation as a method of data collection. All of these tools focus on measuring or diagnosing depression. The review found no tools that took an observational approach to measuring or screening for anxiety. Each was designed for either clinical or care staff to use rather than researchers. For example, the MADRS is a tool designed for use only by clinicians, whereas the MDS-DRS189 rates depression by drawing on the routine, daily observations of care staff. Not surprisingly, the tools based on observational methods were often developed for populations who may struggle with self-completion. The MDS-DRS was developed specifically for use in long-term care settings, as was the observational version of the PHQ-9 (PHQ-9 OV). Although the MADRS was developed for and widely used with the general population, it has been well used and tested in older adult residential and nursing care settings. 115,125 The key observational markers of depression found in these tools can be seen in Appendix 2, Table 24.
Across both the self-report and clinician interview tools, there are those that have been developed for the general population and those developed more specifically for older people, such as the Geriatric Anxiety Scale (GAS)208,209 and the Geriatric Anxiety Inventory (GAI). 210 However, tools using either proxy report or observation are much more likely to have been developed for use with older people living with dementia or other impairments that may make self-report more challenging. The DMAS,190,191 the Rating Anxiety and Dementia Scale (RAID), the CSDD and the Evans Liverpool Depression Rating Scale all combine data collection from the person themselves and from a proxy to understand, whether by screening or by measurement, the mental health of a person living with dementia. Three depression tools, the MADRS, the MDS-DRS and the PHQ-9 OV, use observation, at least in part, to inform their ratings of depression. Of these, only the MDS-DRS was designed primarily for use with older adults in long-term care settings. The other two were designed for use in the general population, but their ability to utilise observational methods, either alongside self-report and clinical interview (the MADRS) or in place of them (the PHQ-9 OV), means that they have been widely used in older adult care homes. 107,115,120,125,211
The focus of both the anxiety and the depression tools is primarily the symptoms of these conditions. Unlike pain, for which self-report tools reflect the subjectivity of the experience by asking the person to rate their pain on a given scale (e.g. 1–100 or ‘none, mild, moderate, severe’), tools looking at depression and anxiety on the whole do not treat these conditions as something that the person experiencing them can rate directly. Instead, the conditions are broken down into common symptoms that are self-rated, rated by proxy or rated by the investigator (usually a clinician) based on interview or observation. Some tools, especially those used for screening, such as the GAI or the GDS, ask about the presence, usually dichotomously, of each symptom to screen or diagnose. However, many of the tools look to establish a level of severity of either depression or anxiety based on the data collected. Most commonly, this is achieved by one of two – usually, but not always, mutually exclusive – approaches: (1) items that investigate the severity of a symptom or impact of the given condition or (2) items that investigate the frequency of a symptom or impact of the given condition.
The first approach is operationalised either by statements that reflect different states within a symptom, such as appetite (e.g. in the Beck Depression Inventory I/II212,213), or by a scale, as in the HAM-A and the HDRS, on which each included symptom of anxiety or depression is rated on a five-point scale ranging from ‘not present’ to ‘very severe’. Similarly, the second approach, focusing on frequency, uses both statements, such as in the HADS, to reflect different frequencies within an item representing a symptom of anxiety or depression, and scales, such as the four-point scale used in the State–Trait Anxiety Inventory,214 which runs from ‘almost never’ to ‘almost always’.
In terms of overall scoring, screening tools (e.g. the GDS and the GAI) are often based on clinically significant scores at which a person might be considered to have anxiety or depression. Tools measuring severity of depression or anxiety usually have several cut-off points on a total score range to indicate various severities of the condition. Some simply suggest that the higher the overall score, the greater the severity of the condition, for example the Centre for Epidemiologic Studies Depression Scale (CES-D). 215,216
Most tools use variations or extensions of mild, moderate and severe to label the severity of the condition. For example, the GAI is scored using the categories low anxiety (0–21), moderate anxiety (22–35) and potentially concerning levels of anxiety (> 36), and the MADRS uses the categories normal (0–6), mild depression (7–19), moderate depression (20–34) and severe depression (> 34). As demonstrated in these two examples, some tools assume the presence of either condition, whereas others, including all those designed to screen, allow for a score that reflects ‘no’ or ‘normal’ levels of anxiety or depression.
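The cut-off approach described above amounts to a simple lookup from a total score to a severity label. The sketch below encodes the MADRS bands quoted in this paragraph (0–6 normal, 7–19 mild, 20–34 moderate, above 34 severe); the 0–60 range check reflects the usual MADRS total score range and, like the function itself, is included purely for illustration.

```python
# Illustrative sketch of cut-off-based severity banding, using the MADRS
# ranges quoted in the text; not an official implementation of the scale.

MADRS_BANDS = [
    (6, "normal"),
    (19, "mild depression"),
    (34, "moderate depression"),
]

def madrs_band(total: int) -> str:
    """Return the severity label for a MADRS total score (assumed 0-60)."""
    if not 0 <= total <= 60:
        raise ValueError("MADRS totals are assumed to range from 0 to 60")
    for upper, label in MADRS_BANDS:
        if total <= upper:
            return label
    return "severe depression"

print(madrs_band(15))  # -> "mild depression"
```

Screening tools such as the GDS and GAI work the same way but with a single clinically significant threshold rather than several bands, so the lookup collapses to one comparison.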
Conclusion
This chapter outlines two rapid reviews of tools used to measure pain, anxiety and depression in older people, people living with dementia and people living in older adult care homes. Using our search criteria, we found 22 pain tools and 22 tools that measure either anxiety or depression. These tools utilise a range of methods, including self-report, proxy report, clinician rating and observation. Because of the cognitive and communication difficulties that often characterise dementia, tools developed specifically for people living with dementia tend to avoid relying solely on self-report. In the field of pain, this challenge has been more thoroughly addressed, as evidenced by the range of dementia-specific tools using observational methods. By contrast, in anxiety and depression, tools often rely on the clinical interview to rate, or in many cases diagnose, anxiety or depression.
The fact that the reviews identified a large number of existing tools invites the question of whether or not there was a need for this project to develop new tools. Indeed, some researchers who work on the measurement of pain experienced by older people have called on research to focus its attention on testing and validating existing tools rather than developing new ones. 30 However, this view was not expressed without caveats. The development of new tools is justifiable if, according to Lichtner et al. ,30 the conceptual framework underpinning them is different from those underpinning existing tools.
The new items for measuring pain, anxiety and depression in this study differ from the tools reviewed in a number of important ways, including their conceptual underpinning. Foremost, the differences relate to purpose. The tools reviewed here broadly focus on measuring the severity of, or screening for, the condition (in the case of depression and anxiety) and measuring the intensity of the state (in the case of pain). They were also often designed for use in diagnostic or clinical practice; indeed, many stipulate that they should be administered only by a clinician. Often they are used to inform treatment: pain is assessed so that it can be managed, and anxiety or depression is diagnosed in order to facilitate treatment. Treatments often include a pharmacological element,65 the strength of which is often determined by the intensity or severity of the condition. Although these tools could be used, for example over time, to infer the impact of a service or intervention on quality of life, that is not their core focus.
The purpose of ASCOT is very different. Although it can inform practice,217 it is not a diagnostic or clinical tool. Instead, it aims to measure the impact of social care services and interventions on aspects of quality of life. The items for the three new domains should also reflect this focus on quality of life and not replicate screening and clinical tools, of which there are many.
Often, social care services will be required to manage clinical states, sometimes to the point of eliminating them. However, situational anxiety, episodes of pain and depressed mood following various kinds of loss are likely to occur, as in everyday life for most people, and complete elimination may therefore not always be an appropriate, or achievable, goal in social care. The new measurement items should therefore focus on the frequency of the condition, be it pain, anxiety or feelings of depression, rather than on its intensity.
The review also broadened the conceptualisation of the new measurement items. Our initial proposal suggested creating only two domains: one focused on pain and the other on anxiety and depression. However, most of the tools identified in the second rapid review focused on either anxiety or depression rather than attempting to measure, screen for or diagnose both. The review clearly demonstrated that, although these are related states, they are conceptually different and, for measurement purposes, are best kept separate, particularly when each domain is captured by a single question.
Limitations
This study employed a pragmatic, rapid review methodology so that the reviews could be completed in a timely manner and feed into the development of the three new items reported in Chapter 3. A limitation of this approach is that we focused only on tools designed to measure the constructs of interest and did not include quality-of-life measures containing items relating to those constructs. This meant that some well-known scales, such as the EuroQol-5 Dimensions (EQ-5D),218 which has a proxy version for care staff to use, were not included in the review itself. However, perhaps more appropriately, we did include the EQ-5D in the analysis reported in Chapter 4 to assess the construct validity of the new items.
Chapter 3 Developing new measures of pain, anxiety and depression for older adult care home residents
Introduction
Chapter 2 outlined two rapid reviews, one focused on pain tools and the other on tools measuring anxiety or depression. The reviews, which were the first stage in the process of developing new items to measure pain, anxiety and depression, clarified the focus and purpose of the proposed new ASCOT-CH4 items. Drawing on the reviews, the new items were to be:
- divided into three separate constructs (pain, anxiety and depression)
- focused on frequency rather than intensity of symptoms
- conceptually distinct from existing measures in that they follow the ASCOT approach of measuring the impact of social care on each attribute
- designed for a mixed-methods approach to data collection, using observation, interviews with residents (where possible) and interviews with staff and family proxies.
Aims and objectives
The work reported here builds on the reviews reported in Chapter 2. Using the findings as a starting point, this stage of the study aimed to develop beta versions of the new ASCOT-CH4 items of pain, anxiety and depression ready to be piloted with care home residents (see Chapter 4). There were three consecutive stages of activity, each with its own aims.
Stage 1: desk-based drafting of tools for each item
Specifically, the aim of this stage was to develop the following draft tools for each item:
- resident self-report questions and response options focused on the resident’s current frequency of pain, anxiety and depression
- proxy questions and response options asking staff and family members about the resident’s ‘current’ and ‘expected’ (see Chapter 1) frequency of pain, anxiety and depression
- initial observational guidance for each domain, drawing on the observational markers outlined in the tools.
Stage 2: focus groups with care home staff
The aims of the focus groups were to:
- understand how care home staff recognise when residents are in pain or are feeling anxious or depressed
- find out which words care home staff use to describe residents’ anxiety, pain and depression.
In addition, the groups were used to provide an initial review of the proxy questions.
Stage 3: cognitive interviews with staff and family members
The aims of the cognitive interviews were to improve, revise and finalise the item questions and response options for the interviews to be used in the data collection outlined in Chapter 4.
This chapter presents each stage of work separately, with its own methods and results.
Stage 1: desk-based drafting of tools for each item
Methods
Informed by the reviews, the research team created draft items for each of the three domains. For the observational guidance, the observable indicators of pain, anxiety and depression identified in the two reviews were collated. The initial draft was just a list of these markers (see Appendix 2, Tables 22 and 24).
For the self-report and proxy questions, a number of different question options were developed (see Appendix 3, Box 1, for examples). The process involved drafting frequency statements that reflected the four ASCOT outcome states: ideal state, no (unmet) needs, some (unmet) needs and high (unmet) needs (see Chapter 1). At this stage of the development, the focus was on formulating questions to understand what the resident’s experience was now, while they were living in the care home (with help and support in place). In ASCOT terminology, these are the questions relating to ‘current’ frequency of pain, anxiety or depression (see Chapter 1). The drafts were reviewed at a research team meeting, and then revised and reviewed by the team again.
Results
The first team review considered a number of possible question options (see Appendix 3, Box 1, for examples). This led to the further refinement of the question options, with some discarded and those that remained revised. Decisions following the team review are outlined below.
Question stems should focus explicitly on frequency
Although all the answer options related to the frequency of the state, not all of the question stems did. Some were vaguer, for example ‘which statement best describes how much anxiety you usually feel’ and ‘which of the following statements best describes how you feel?’. The team review concluded that, for clarity, questions intended to measure frequency should state this explicitly in the question stem.
Questions should use the same answer option scale across the three domains
Across all the draft questions a number of different scales were used. All questions and response options needed to fit within the ASCOT outcome states framework (see Chapter 1), so the team decided that there should be a common frequency scale across the three items. It was also felt that this would support respondents during completion.
Answer options should not refer to a daily basis
Although none of the draft scales was entirely prescriptive about frequency, some answer options used terms such as ‘on a daily basis’ or ‘daily’. Others were more subjective or open to interpretation, using terms like ‘generally free from’ and ‘sometimes’. The team felt that the four-point scales would have more coherence if they applied a consistent approach across all three items. Moreover, terms like ‘sometimes’ and ‘often’ allow the respondent (the care home resident or their proxy) some ability to rate the domains subjectively, which is central to the aim of quality-of-life measurement and the ASCOT tools.
The pain question should ask about being ‘in pain’
The pain questions used a number of different descriptors of pain states, including ‘experiencing pain’, ‘feeling pain’, being ‘free of pain’ and ‘pain free’. The team felt that the term ‘in pain’ most closely aligned with how people generally speak about this. One of the question stem options used the term ‘discomfort’ alongside pain. The team felt that this was a broader term that could describe a number of states not necessarily linked to pain, for example feeling too hot or too cold. Not only did this have the potential to make the question too vague, but it also overlapped with existing ASCOT domains, such as personal cleanliness and comfort, and accommodation cleanliness and comfort.
The pain domain requires additional information to clarify pain management
During the initial review, the research team felt that the draft pain questions needed some additional clarification around pain management. In ASCOT, ‘current’ ratings are what the person’s situation is now, which in care homes means ‘with all the care and support of the home and its staff’ in place. Thus, if medication is given by staff to manage pain, anxiety or depression, then the ‘current’ ratings should be based on what the person’s experience is with that in place. The ‘expected’ rating, conversely, is where we measure what the person’s symptoms or experiences would be without that care and support (e.g. administration of pain medication by care staff).
The anxiety question should ask about feeling ‘worried or anxious’
The draft anxiety questions used a number of different terms to describe the focus of the question, including ‘feel anxious’, ‘worried and concerned’ and ‘free from worry and anxiety’. The term ‘anxious’ on its own was felt to be too close to the clinical category of anxiety. The team concluded that changes were needed to reinforce the question’s quality-of-life focus by placing ‘anxious’ alongside a clearly non-medical term. The aim was to prevent the question being interpreted as requiring a diagnosis or being perceived as a screening tool. After some discussion, the team settled on ‘worried’, as this reflected the way in which many people, including older care home residents, would talk about feeling anxious.
The appropriateness of the ‘never’ or ‘free from’ answer option in the anxiety question
The ideal outcome state in the draft anxiety questions was usually defined by the complete absence of anxiety. A range of terms were used to denote this, including ‘never’ and ‘completely free from’. In the team review, it was suggested that ‘never’ experiencing anxiety was a problematic answer option: anxiety, it was argued, is a normal state to experience sometimes, and, clinically, never feeling anxious is often a marker of other psychological issues. Conceptually, however, the team felt that ‘never’ was a closer reflection of the ASCOT ideal state. The team agreed that revisions to the draft questions would explore an alternative scale, not anchored at ‘never’ or an equivalent term, to be tested in the next stage.
The depression question should ask about feeling ‘down or depressed’
Most of the draft depression questions used the term ‘depressed’. The review concluded that this should be revised to help emphasise that the aim of the question was not to diagnose or ask about clinical depression, but that it was part of a quality-of-life instrument. To achieve this, it was felt that the term ‘feels depressed’ should be replaced with the more colloquial ‘feels down’.
Drawing on the discussion and conclusions of the research team review, two sets of draft questions were developed for testing in the next stage of development. These can be found in Appendix 3, Box 2. In all cases, the question stems incorporated the conclusions from the first team review. Where the two sets of questions differed was in the response option scale. One was anchored to the term ‘never’ and the other was anchored to the term ‘hardly ever’. In both scales these terms reflected the ideal state in ASCOT terminology.
Stage 2: focus groups with care home staff
Methods
Focus groups were conducted with care home staff to elicit a range of perspectives and uncover collective views and meanings. They have been used previously in the development of ASCOT and other research tools and questionnaires. 219–221 Four older adult care homes were recruited to the study, the aim being to hold one focus group in each home. Located in South East England, all of the homes were run by a large national chain and were registered for older adult residential and nursing care. Recruitment to the individual groups was administered by the care home managers, who received guidance from the research team. Managers were asked to recruit between five and eight people to each group222 and to choose a range of staff who interacted directly with residents, such as care, nursing and activity staff. Some homes invited other employees, such as maintenance staff and receptionists, to the groups because they felt that all staff in the home helped, supported and knew the residents.
The focus groups themselves were dual-moderator groups,223 with administration shared between an academic researcher and a PPI co-researcher. As well as co-moderating, the three project PPI co-researchers were involved in planning the focus groups and analysing the data from the sessions. More information about the PPI co-research can be found in Chapter 7.
The groups consisted of an introduction and three substantive sections reflecting the aims of the group (the focus group topic guide can be found in Appendix 3, Box 3):
- how staff recognise when residents are in pain, or are feeling anxious or depressed
- the words staff use to describe residents’ anxiety, pain and depression
- a review of two draft proxy questions on pain, anxiety and depression.
The academic researcher led the introduction and the final section testing the draft questions, and the PPI co-researcher led the first two substantive sections of the groups. In the findings below, a PPI co-researcher is identified as mod1 and the academic researcher as mod2. With the agreement of those present, all groups were audio-recorded and fully transcribed. The focus group data were organised and analysed using NVivo 12 (QSR International, Warrington, UK). Coding was carried out by a single researcher using a primarily deductive thematic analysis224 focused on the need to develop research tools from this work. However, all PPI co-researchers and another member of the research team familiarised themselves with the focus group transcripts and met to discuss the findings and how these might feed into the development of the pain, anxiety and depression domains. They also helped to generate and review themes and introduced ideas that emerged more inductively from the data. The whole research team met to further review the themes and to discuss how the findings of this section of the study fed into the development of the new ASCOT domains.
The sample
During November and December 2018, focus groups were carried out in three of the four care homes. The fourth home withdrew from the study owing to issues beyond the control of the project. Across the three groups, 22 staff participated. Table 5 outlines some basic demographic and role information about the focus group participants. Although the majority of participants provided direct social or health care, the groups also included some home managers and others who supported the residents of the home.
Table 5 Characteristics of the focus group participants

| Characteristic | Group 1 | Group 2 | Group 3 | All (n) |
|---|---|---|---|---|
| Gender | | | | |
| Male | 0 | 3 | 0 | 3 |
| Female | 4 | 6 | 9 | 19 |
| Ethnicity | | | | |
| White | 3 | 7 | 8 | 18 |
| Ethnic minority | 1 | 2 | 1 | 4 |
| Role | | | | |
| Care worker | 2 | 5 | 4 | 11 |
| Nurse | 2 | 2 | 2 | 6 |
| Manager/deputy | 0 | 1 | 1 | 2 |
| Maintenance | 0 | 1 | 0 | 1 |
| Activities co-ordinator | 0 | 0 | 1 | 1 |
| Receptionist | 0 | 0 | 1 | 1 |
| Total | 4 | 9 | 9 | 22 |
Findings
Signs of pain, anxiety or depression
In the first part of the focus groups, staff were asked to share and discuss how they recognised when the older people they supported were in pain or feeling anxious or depressed. Each group took the three different states in turn. Staff were able to share a range of ways in which they would know that their residents were experiencing pain, anxiety or depression. Not surprisingly, a starting point was often verbal utterances and statements from residents:
How do you recognise if someone is in pain?
. . . sometimes they say verbally.
Female 01, care worker, group 1
Staff also noted that residents may verbalise these states in less direct ways:
Oh, some residents will, resident might say, you know, ‘I’m not happy in here, I want to go home’, you know, then you’ll get, you know, how they’re feeling as well . . .
You also get, rare, but you sometimes also get the residents actually saying, ‘I just want to die, that’s enough now, I just want to die’, and they’ve had enough, that’s when you sort of get the extreme opposite end of it.
Male 02, manager, and male 03, care worker, group 2
However, the majority of the conversation in this section of the focus groups explored non-verbal ways or behaviours that suggested that a person they supported was in pain or feeling anxious or depressed. All of the groups were able to come up with ways in which they recognised each of the states when a person either did not or could not clearly verbalise their experience. For example, in one group the talk around recognising anxiety initially focused on residents missing their families and displaying anxiety around that:
What about anxiety, so someone said agitation, but anxiety, does that look different, could you tell whether someone’s anxious?
Want to see their family and then they become very anxious, they want to get out.
They walk up and down and they can’t remember why they’re here.
Yeah, they look for their children as well, even though they’re grown-up, they still see them as babies, and young children.
And that looks a bit like anxiety to you, does it?
They’re there looking, physically looking for somebody that’s not there, that’s not in the building, that isn’t actually a living person any more.
Female 01, care worker, and female 07, nurse, group 3
In another group, the discussion on recognising depression also moved from residents verbalising their feelings to more behavioural signs:
So how about in the feeling of depression then, how would you observe that?
One lady crying, very much crying, she all the time say she want to die, but you see she don’t want to listen anything what we suggest, yeah, she just all the time crying and . . .
Yeah, but then you will see the changes in your behaviour, like she might not talk to people, or she might not like to participate in activities, or she might not like to eat foods, or you know like you can see the loss of appetite, and yeah, so the changes.
Physical change, like just wanting to stay in bed all day, whereas before she like, she’s very keen on eating lunch in the dining room and suddenly she just wanted to stay in bed, or there’s also like losing appetite, to eat, and lost interest in the things they like to do before, like before she likes to watch [all talking at once]–, . . . examples are 6 o’clock they like to watch news, and suddenly just not interest, just covering her face in the duvet in her bed, or yeah, or not taking any supper at all, lost 4 kilograms, yes.
Or even like washing or dressing, gives us clues.
Declining personal hygiene, yeah.
Female 04, nurse, and female 02, nurse, group 1
The staff in the focus groups offered several different behavioural signs for each condition, and there was often overlap between the signs each group put forward. For example, when talking about how staff in care homes might recognise depression among residents, every group made reference to crying or suggested that crying may indicate that a resident was depressed. A summary of the behavioural signs staff use to recognise pain, anxiety or depression in their residents can be found in Table 6. Indeed, these signs overlap considerably with the ways in which the tools outlined in Chapter 2 suggest that pain, anxiety or depression may be recognised. This suggests that staff are, at least in terms of knowledge, aware of how these conditions may present in the people they support. Some are clearly aware of, and trained in, the use of tools to systematise the recognition and even diagnosis of these conditions. This will be expanded on later.
Table 6 Behavioural signs of pain, anxiety and depression identified by care home staff

| Domain | Signs |
|---|---|
| Pain | Aggression, agitation, anger, challenging behaviour, changes in mood, crying, difficulties with mobility, facial expressions (e.g. grimacing), guarding painful area, movements (e.g. twitching or tremors), non-verbal sounds (e.g. screaming, shouting, groaning, moaning), physical changes (e.g. high blood pressure and pulse), reaction to touch, refusal to participate in care or leisure activities, verbal utterances |
| Anxiety | Agitation, asking and searching for people (such as family or children), being unsocial, changes in appetite, changes in mood, confusion, crying, distress or being fearful, fidgeting, looking for lost things, low attention span, not wanting to be alone, refusing to do things, repetitive speech, restlessness and not settling, shouting, wanting to get out of the home to see family |
| Depression | Aggression, agitation, anger, becoming unsocial, declining personal hygiene, frustration, isolating, loss of appetite, loss of interest in things, missing family members or home, not drinking, refusal to take part in activities and care, restlessness, self-harm, sleep problems, staying in bed, tiredness, verbal utterances (e.g. ‘I feel low’), weight loss, withdrawing |
The other striking feature of the lists of signs is their similarity across the three conditions. Signs such as crying and refusal to do activities are linked to each of the conditions, as are variations on agitation, frustration, aggression and anger. Occasionally, participants in the groups noted some overlap between these behavioural signs. For example:
People that are in pain can also feel very low in mood as well, not necessarily depressed, and you need to be able to sort of like tell the difference, ‘cause it can actually be quite subtle.
Female 02, care worker, group 2
Another participant suggested that the signs of anxiety were remarkably similar to those of depression. However, other participants saw the indications of the two conditions as quite separate, with anxiety characterised by agitation and depression by withdrawal. A couple of participants also noted that the behavioural signs of pain, anxiety or depression could also be behaviours associated with dementia. When discussing how to make distinctions when symptoms or behaviours may indicate a number of states, care home staff tended to refer to knowing the person and their usual behaviour, diagnosis and history. When people are new to the home, care workers and nurses said that understanding and interpreting their behaviour can be difficult and that one solution is to consult the resident’s family:
If there’s a new person coming in you can speak to the families, if they think they’re not the same person, they’re acting different then you can communicate with them and speak to them.
Female 01, care worker, group 3
Words used to describe pain, anxiety and depression
In the second section of the focus group, participants were asked to share the words they used to describe residents’ pain, anxiety and depression. Their responses tended to fall into a couple of categories. To help participants, the focus group moderator often suggested that they think of different aspects of talking about or describing pain, anxiety or depression. When asked about severity or intensity, one of the most common ways that focus group members responded was to refer to categories drawn from measurement, screening or diagnosis tools used either by them or by others in the care home. This was most prevalent with regard to pain but, especially in the first focus group, also applied to anxiety and depression. Several participants talked of using the terms ‘mild, moderate and severe’ to describe residents’ pain, as well as their anxiety and depression:
Do you have a special word, you know, specific words that you would use to describe the pain and the levels of pain . . .
Mild, moderate and severe.
And for anxiety, do you have something for?
The same, mild, moderate and severe [all laugh].
Female 04, nurse, group 1
The home where this focus group was conducted, like all care homes in this section of the study, used the APS,180 albeit not exclusively, to diagnose and inform the treatment of residents’ pain. On the APS, both individual items and overall scores are rated as no pain, mild pain, moderate pain or severe pain.
Focus group participants also mentioned numerical approaches to understanding and discussing the intensity of pain their residents experienced. Again, this reflected the tools that care home staff encountered in their working environment:
One thing it might be interesting to think about is when you’re thinking about describing someone in pain, is do you have different ways of describing how much pain someone is in?
‘Cause we have two tools that we use, the first one is the numerical pain tool, where the person can express if they’re in pain, so we ask them how painful is it from a scale of 1 to 10.
Female 03, deputy manager, group 3
Although mentioning measures of pain was common, other participants reflected the subjective nature of pain in the way they usually spoke about the pain experienced by those they support. In the following case, the participant is reluctant to label the severity of pain a resident might be experiencing:
If they [the resident] can’t speak, you can’t exactly say, oh he’s in agony, because you don’t know that.
Probably repeat it if that’s what the resident had said, ‘cause it’s in their own words, but if they’re not verbal generally, we just say pain, ‘cause we don’t really know the degree of it.
People have different pain thresholds, I mean my man’s agony may be another man’s paper cut sort of thing, so it’s just, it’s pain until either we can find out or they can tell us how much pain they are in.
Female 02, care worker, and female 03, deputy manager, focus group 3
This example also introduces another approach to talking about pain: using words the residents themselves use. In this discussion, terms such as ‘pain’, ‘discomfort’ and ‘uncomfortable’ were suggested as the sorts of words that residents may use. This was not restricted to talking about pain. Indeed, staff described using this approach in relation to both anxiety and depression. In the following example, a care worker talks about using the resident’s own words to describe anxiety:
When I am trying to discuss something with a colleague about somebody’s anxiety, I try and use the same words that they used, so that I can go to somebody that I work with, and I can say, ‘This person is this today’, and then like my colleague can respond, ‘OK, she’s always like that, that’s normal’, or, ‘Ooh, that’s a level above’, or below, or anything at all, and if we use their words it comes out better.
Female 04, care worker, group 2
However, it was suggested that often residents did not explicitly verbalise how they felt and that staff usually had to rely on behavioural prompts such as those outlined earlier:
It’s the symptoms more than rather than saying I’m not feeling too happy today, or I’m feeling a bit low, or I’m worried about something, so it’s about us being able to pick on their behaviour and how they are, you know, to know exactly if something is happening.
Male 02, manager, group 2
This idea was also expressed indirectly in other groups when they mentioned using tools and relying more on observing behaviour or sometimes talking to relatives. For example:
. . . generally we have to work it [anxiety] out, we have to work things out from them signs.
Male 03, care worker, group 2
With further probing by the focus group moderator, some participants shared that they did not always use either the terms reflecting the measurement schema of tools or the words that residents themselves used. This was especially apparent in the case of depression but was also, to a much lesser extent, expressed with regard to anxiety. In one group (3), staff tended to use terms such as ‘low mood’, ‘sad’, ‘feel like crap’ and ‘feeling down’, while another group (2) added ‘got out of bed on the wrong side’ to this list.
As the discussion moved to more colloquial terms for depression, concerns about using the term ‘depression’ began to emerge in two groups. In the following quotation, a participant suggests that the older people who live in the home have not been used to using the term depression, or indeed anxiety, during their lifetimes:
I also think ‘cause they’re [the care home residents] at that age where years ago there weren’t such thing as anxiety and depression, it was get on with it, you know what I mean, it was like just shrug it off and just get on with your life sort of thing, so I think it’s different kind of thing you know isn’t it, where you’ve got to sort of realising, they need to realise well that it’s OK to talk about things.
Male 01, maintenance worker, group 2
However, the focus groups suggested that some staff were also uncomfortable using the term depression, albeit for a different reason from that suggested for residents. Several participants saw depression as a label or term that should be used only once a medical professional had diagnosed the condition. Signs such as those outlined in the last section may indicate depression, but several staff felt uncomfortable making what they felt to be a diagnosis:
I think when we’re trying to discuss depression, when we’re communicating it to others we don’t, we would actually use more than the word depression, so it’s actually noting that someone’s behaviour is changing, and just explaining why we feel that that behaviour has changed.
So you’d say something like, I’m a bit concerned so and so is depressed because I’ve seen them not eating, or–,
Yeah, you just kind of say well concerns with resident A because, you know, lack of appetite, isolating themselves in their bedroom. We wouldn’t necessarily diagnose them.
We’d only use that word if it’s actually been confirmed by the doctor, we’d only use like you said the symptoms, that’s with anything, so even with an infection we would never say, although we suspect it’s a urine infection, we would never say it was a urine infection until the doctor confirms and stuff like that.
Female 06, nurse, and female 07, nurse, group 3
Testing of draft proxy questions
The final section of the focus groups involved asking participants to reflect on the draft pain, anxiety and depression questions developed from the rapid review. Although we also developed questions for use with care home residents, those tested here were the proxy questions developed for care home staff and relatives to answer on the resident’s behalf. Two different scales were tested for each domain, giving a total of six questions. Given the limitations of time and participant patience, we split the six questions between the three focus groups. Each group looked at two questions (see Appendix 3, Boxes 2 and 3).
In the focus groups, participants were asked to take each of the two questions assigned to that group and answer each with reference to a person they knew: someone they currently supported or used to provide support for. As this exercise was purely to test our questions, we did not need to know anything about the person they had chosen to answer about.
Each participant in the groups was able to answer both questions, sometimes remarkably quickly:
Was it [the question] easy to answer or did you have to think quite a bit?
No, I didn’t need to really think, you know, it’s not that–, straight away.
Yeah, straight away.
Straight away.
It’s straight away, yeah.
Female 01, care worker, female 02, nurse, and female 04, nurse, group 1
When groups were asked to think about what the questions meant, they tended to answer in ways that suggested they understood the question. Importantly, for ‘current frequency’ of each item, they grasped that the questions were not asking about underlying level of need, but rather were asking what the person’s life was like now, with the help and support provided by the care home in place:
I think it’s [the question] asking us to consider the management, and how does the person react to that management.
Female 03, nurse, group 1
Further evidence that focus group participants understood the questions (for all items and scales) came from discussions of why an individual participant had chosen a particular answer. In the following example, a participant explains why they thought that ‘never feels down or depressed’ was the answer that best reflected the experience of the person about whom they were answering:
. . . so anyone who went for never, happy to share the reasons why?
Well I never see the lady sad, she’s quite chipper, whenever she’s out she’s quite sociable, never see any problems with any mood swings at all, pretty upbeat, so I don’t really think she’s–, I’ve never seen her sad or upset, then again she’s not been with us for a very, very long time.
Female 08, receptionist, group 3
This was also demonstrated in another of the focus groups looking at one of the questions on anxiety:
Anyone else got a reason for picking the ‘sometimes’ that they want to share?
I picked sometimes because it can depend on the time of year, it could be like coming up to an anniversary of a loved one’s death or something, and that can make people feel a bit anxious, and like if it’s Christmas and like they have some kind of stressful event happen around then, that can make them also feel like that.
So was this a person who–,
Yeah, it was in my previous home, there was somebody whose husband died around Christmas time so she would always get a bit anxious and nervous around Christmas time thinking she was going to lose other members of her family.
And how was she at other times of the year?
Other times of the year she was absolutely fine, and she did have the odd anxious moment, especially if like somebody, like if there was a new member of staff coming into the home that didn’t know her routine and things like that, that could make her a little bit anxious and nervous, but once we’d shown the other person what to do she was absolutely fine, if there was somebody else in the room with her that knew her routine and could explain it really well to the new girl, that was fine [laughs].
Female 02, care worker, group 2
Despite these positive responses to the draft questions shared in the focus groups, discussion around the questions revealed several issues pertinent either to the draft questions themselves or to the use of staff reports to understand residents’ experience of pain, anxiety or depression. The answers given by focus group participants often tended to avoid the extremes of the scales. In particular, very few participants chose the answer options reflecting ‘never . . .’ or ‘constantly . . .’. This was followed up in the focus groups. Some participants’ accounts suggested that staff felt that a situation in which a resident experienced pain constantly, or felt constantly depressed or anxious, was unlikely to occur, because that level of severity of any of the conditions would necessitate either intervention by the home or outside support to resolve the problem. This made it unlikely that this answer option would be chosen by a member of staff. This is demonstrated in the following quotation, which refers to anxiety:
So, am I right in thinking no one put constantly? Or I can be wrong?
No.
No, we didn’t put that.
Can anyone imagine putting constantly down, can they think of anyone they’ve ever supported, other companies or perhaps in the past?
No.
No, there’d be something wrong with them, that we’ve had to–, that would be an alarm for us to contact the specialists to–,
Yeah.
Female 01, care worker, female 02, care worker, and female 05, activity co-ordinator, group 3
The necessity of intervention was also expressed with reference to residents experiencing a condition often; the following quotation refers to depression:
Have we got anyone that went for often?
There’s someone in your unit, often crying and–,
We did have to review her, because we can’t leave somebody often in that kind of state, that as I said when we see somebody like that when we have to do something.
Yeah, and it becomes less.
Yes, we have to like reduce the symptoms, because that will be very important, we can’t just ignore that.
Yeah, sometimes you can never get rid of all the symptoms and, you know, you can’t change people’s lives can you and things that have happened.
But reducing it, so contact would obviously be in for example, first of all the GP, involving the family, then the specialist people, yeah, to try to reduce the symptoms.
So, am I right in thinking that you wouldn’t–, it wouldn’t be that common to come across someone who you feel often?
No
No.
Because then constantly in that state, so you have to look for solutions, like for example trying to get involved with activities all the time, or if they don’t want to do that then maybe as a last resort the medications.
Female 01, care worker, female 02, care worker, and female 07, nurse, group 3
When thinking about the answer option ‘never’, the fact that this option was rarely chosen seemed, at least in part, to be an artefact of the testing process. When questions such as these are used in research or practice, they are about a specific person. In the focus groups, participants were asked to answer about a person of their own choosing. As a couple of participants pointed out, they usually, with the odd exception, thought about a person whom they felt experienced the condition to some extent rather than someone whom they felt did not. So, when asked a question regarding pain, and asked to answer about a chosen person, one typically thinks of a person who experiences pain. However, another reason for not choosing the answer option ‘never’ reflected concerns raised in the team review at stage 1. Some participants felt that everybody feels anxiety sometimes. Indeed, one participant referred to it as ‘normal anxious’ (male 01, maintenance worker, group 2). In the following example from the third focus group, the manager of the care home expresses the view, with which others in the group agreed, that anxiety is something everyone experiences:
I believe everyone has anxiety, no matter who you are, or what your abilities are, everybody has anxieties, so no one will ever never have anxiety, I guarantee you coming here this afternoon you had a bit of anxiety, or a bit of [all laugh], an anxiety, so it’s never, there’s never never, the answer there is not a good comparison.
Male 03, manager, group 2
Across the groups, some participants suggested that there were other challenges to answering the draft questions. One participant felt that there was an inherent challenge in answering as a proxy because it was difficult to be sure how someone else feels:
I think it’s quite a difficult question [anxiety], because you’re asking our perception of how somebody else is feeling, and that’s very difficult to us to judge, because even if they’re here for 3 weeks or 4 years we will never actually–, ‘cause nobody is actually 100% honest with anybody all of the time, so I think that question is quite a difficult one to answer.
Female 02, care worker, group 2
In another group, it was felt that these questions would be difficult without specialist professional knowledge. This was first raised with regard to one of the draft depression questions. It was suggested that although the participant – a nurse – had sufficient professional knowledge to make a judgement on the frequency with which the resident felt down or depressed, other staff members, such as care workers, might lack the knowledge to make such a judgement. This point was raised again when discussing the draft pain question. The same participant was concerned that residents and relatives may struggle, or take a long time, to answer the question:
It’s [the pain question] not a straightforward question, if you’re going to ask this to ordinary people, meaning if you’re going to ask a common population, they will take 2 minutes, or 5 minutes to answer it, if you’re going to give it to multidisciplinary team they might answer it more quicker than anyone, so like residents and relatives, yeah.
Female 04, nurse, group 1
Stage 3: cognitive interviews with staff and relatives of residents
Initial testing of the structured proxy questions in the focus groups suggested that nursing, care and other staff in the home were able to answer the questions without too much difficulty. Their experience of working with residents gave them the insight needed to talk about their residents’ experiences of pain, anxiety and depression. The testing also indicated that, at least in a broad sense, the questions were interpreted and understood as had been intended. However, this very brief initial testing highlighted some aspects of the questions that might require further thought and possible revision. This included concerns about the scale used for the response options and concerns about the term depression. Rather than revise the questions solely on evidence from the focus groups, the research team decided that further testing was required in the cognitive interviews.
Methods
Cognitive interviewing, drawing on experimental psychology, is increasingly seen as a way of improving the quality and accuracy of survey tools. 225–227 It has also been used to help inform the development of other ASCOT tools. 219,228–231 Cognitive interviews are an attempt to use qualitative interview methods and evidence to critically evaluate whether or not a survey question ‘fulfils its intended purpose’. 226 This is achieved by conducting an interview in which the focus is on the thought processes involved in answering a survey question rather than, as is usual, on the answer provided. More specifically, the cognitive interview may explore people’s comprehension, recall, judgement and estimation, and response processes for a given question. 232 The interviewee is presented with a survey question and, through the use of two techniques, probing and thinking aloud,226,227,233 the interviewer aims to establish the thought processes that occur while the question is being answered. Thinking aloud is a technique whereby the person is asked to verbalise their thoughts while attempting to answer a question. The other technique, probing, involves the interviewer actively asking specific questions about the thought processes behind answering a question. Examples of probing in a cognitive interview include asking the participant why they chose a specific answer, or how they interpreted certain phrases or words in a question. Both techniques can be employed in the same interview; for example, thinking aloud can be followed up by probing to explore aspects of the thought process that the participant did not verbalise.
During January and February 2019, 37 cognitive interviews were conducted with both care home staff and relatives/friends of care home residents. They were organised into three phases, with reflection on the previous phase informing revisions to the draft questions subsequently tested in the next phase. Although the final tool also included structured questions that could be used with care home residents themselves, we did not include care home residents in this phase of testing. It was felt that the cognitive burden associated with participating in a cognitive interview, particularly thinking aloud,226 would be too great for many older adult care home residents. Staff and relatives were recruited from four care homes in South East England and given a gift voucher to thank them for their time. One-to-one interviews were carried out in a quiet and private room by one of three researchers.
There were three iterations, or phases, of cognitive testing. The first two focused on testing the ‘current’ questions (see Appendix 3, Boxes 2 and 4) and the final phase included the ‘expected’ equivalent for each item (see Appendix 3, Box 5). Each cognitive interview tested six questions. Phase 1 interviews tested two versions of the four-point response option scale for each item developed from the desk-based review. In phase 2, one of the four-point scales was replaced with a three-point scale, described in detail below. In the third and final phase, we tested one four-point ‘current’ scale and the equivalent ‘expected’ scale for each item and finalised the wording for piloting.
Question ordering was varied to reduce bias in feedback due to ordering effects. Staff were asked to think of a person they supported when answering the test questions, whereas relatives were asked to think about their family member while answering the questions. An example of the phase 2 staff interview schedule can be found in Appendix 3, Box 6.
Each of the interviews was audio-recorded and transcribed verbatim. NVivo 12 was used to organise and analyse the data. Although cognitive testing is now one of the most common ways to pre-test survey questions, there is no agreement on how data created in the cognitive interview should be analysed. 227,234,235 There are some formal methods that aim to systemise the analysis of cognitive interviews, but analysis in these modes often requires a great deal of time and resource. 235 They also often take a very theoretical approach, drawing directly on the thought processes that lie behind answering a question. Willis’227 review of these more formal methods suggests that adopting them is not justified given the time and resources they require. Our analysis also rejected these formal methods, not only because of resource and time implications, but because our interest in the cognitive interviewing data lay in how they could inform the development of our new HRQoL questions, rather than in exploring the cognitive processes in depth. The analysis and subsequent inductive coding took a practical approach and focused on identifying issues and problems with the survey questions and how interview participants understood and interpreted the questions. This was carried out by a single researcher, but the analysis and findings were shared with the project team in a series of meetings.
The sample
In total, 21 staff and 16 relatives of residents participated in the cognitive interviews. All but one of the staff members were white, and most were female (with one male staff member and four male family members). Participants were spread across the adult age range, but, overall, relatives tended to be > 55 years old, whereas staff were often younger.
Findings
To understand both how participants responded to and interpreted questions and how this informed the development of the questions across the three stages of interviews, this section will focus on two issues: problems with and changes to question stems, and problems with and changes to answer response scales.
Problems with and changes to the question stems
Although the interviews in phase 1 tested two different questions for each HRQoL domain, the two questions shared the same question stem:
- Which of the following statements best describes how often you think [the resident/your relative] is in pain?
- Which of the following statements best describes how often you think [the resident/your relative] feels worried or anxious?
- Which of the following statements best describes how often you think [the resident/your relative] feels down or depressed?
The main finding was that, rather like in the focus groups, most participants interpreted the questions in a way that was consistent with our intention. This was demonstrated in two different ways: first, when participants explained, by thinking aloud as they answered a question or when probed by the interviewer, why they had chosen an answer option, and, second, by responding to probes from the interviewer about their understanding of the question and the concepts it contained.
An example of the first way in which participants demonstrated an understanding of the question is a staff member thinking aloud while attempting to answer one of the anxiety questions. In the following quotation, they think through some anxiety behaviours they have seen in the person they are answering the question about. This demonstrates their understanding of our anxiety question and how it matches what we intended when developing it:
So, if you can tell me what’s going through your head right now.
I’m trying to think, would that be anxious? Yes, it would be anxious ‘cause they’re constantly–, they’re worried about the time of breakfast, and lunch, and if they’ve got to go out, and what time bed is. Is it time for them to get up? So, they’re constantly asking the time because they’re worried in case they might miss breakfast.
Care home staff 02
The following example reflects the second way of asking questions in a cognitive interview: responding to interviewer probes about the meaning of the question or terms used in the question. This example relates to a pain question, and the interviewer asks the participant how they interpreted the term ‘pain’ in the question:
What does the term pain mean to you?
A physical pain. When you’ve hurt yourself or a headache or a stomach pain, something like that.
Relative 09
For pain in particular, testing revealed the importance of helping respondents focus on rating ‘current’ frequency of symptoms with symptom ‘management’ in place (e.g. staff giving medication). Participants' responses suggested that their understanding of the term ‘well managed’ reflected our intended meaning:
Well managed, erm, whether the correct amount of medication has been given. That if there’s exercises that’s going to help it that they’re being dealt with, you know. Well, that’s what I would call being well managed.
Relative 05
Moreover, some participants demonstrated an awareness of the interaction between management of and experience of pain:
Yes. I’m kind of assuming well managed, if it was well managed she wouldn’t be in a lot of pain most of the time. So, because she’s sometimes in pain it’s not terribly well managed. Sorry, that doesn’t make sense does it?
That’s OK.
If it’s well managed, she shouldn’t be in pain very much at all. If it’s not well managed at all then she’d be in a lot of pain, but because it’s kind of well managed sometimes she’s sometimes in pain. So, I think that’s quite a difficult one. But it also depends on the intervention of somebody else.
Relative 02
Whereas participants’ understanding of the anxiety and pain questions broadly reflected our intentions when developing them, participants’ understanding of the depression question did not always align with our intended meaning. As outlined in Chapter 2, the new items developed in this study were designed to sit alongside the ASCOT-CH4, a quality-of-life measure, rather than a clinical or diagnostic tool. The intention was to develop an item that reflected generally feeling down, feeling depressed or having a low mood, as opposed to diagnosing or screening for clinical depression. The question used in the focus groups and the first phase of the cognitive testing avoided directly mentioning depression and instead talked about feeling down or depressed (see Appendix 3, Box 2). Although all participants were able to provide an answer and understood the meaning of individual terms used in the question, a number struggled with the use of both ‘feeling down’ and ‘feeling depressed’ in the same question. They found the use of the two terms together contradictory, as these were perceived as different points on a spectrum of feelings:
And what do you think the question means by the terms feeling down or depressed?
Well down, just having a low day, got out of the wrong side of the bed, don’t know why you feel down, you just, you know, it might be the weather, could be a host of different things that make you feel down.
Uh-huh. What about depressed?
Depressed? That’s a different kettle of fish, isn’t it, depression?
Relative 07
Where this was the case, participants usually gave an answer based on ‘feeling down’, as they felt that this was most appropriate to the experience of the resident on whose behalf they were answering. They tended to take a little more time to work through the question, and reported feeling less confident in their answers. In one case, ‘feeling depressed’ caused the participant to struggle to provide an answer:
For me to judge somebody, even my mother, she’s depressed.
OK.
I don’t think it’s, you know it’s down to me to judge that really.
And if it was just about depression how would you answer?
I couldn’t answer it really.
OK.
To be honest with you, I don’t think I’m . . . capable of judging that you know.
Relative 03
This response to the question mirrors our findings in the staff focus groups, where at least one person felt that they could not answer the question because they felt that specific medical knowledge was required to be able to judge if a person had depression.
Consequently, our review of the first stage of the cognitive testing concluded that the wording of this question needed to be revised to ensure that it reflected a non-medical, non-diagnostic approach to understanding how care home residents felt. To reflect quality of life, the term ‘feeling depressed’ was replaced by ‘low mood’ and the domain was renamed low mood. The new question, which was tested in phases 2 and 3 of cognitive testing, was ‘Which of the following statements best describes how often you think [the resident/your relative] feels down or has a low mood?’.
The use of the new question stem in the second and third phases of the cognitive testing seemed to address the problem. Participants viewed ‘feeling down’ and ‘low mood’ as related states, reflecting broadly the same thing:
So, thinking about the terms feeling down or low mood, what do you think we mean by those terms, feeling down, shall we do that one first.
Yeah, feeling down is we – we can’t feel their feelings, but you can – we can see their expression, like face and like frustration, and sometimes they pull hairs, like and put their hands on their head and yeah, and we can see there is some sort of low mood.
OK, and what do you think we mean by low mood?
Low mood means sad probably, yeah, sad, and missing something, kind
Care home staff 20
A similar point was made by a relative of a care home resident. They also talked about how the terms used in our question had not made her think that it was about whether or not her relative had a diagnosis of depression:
OK, what I’m going to ask you is what do you think the question meant by feeling down or has a low mood? What kind of things do you think–, well what kind of things did you think about when you saw those?
I thought about feeling a little bit depressed about your surroundings and about where you are and feeling down in terms of being in the care home, her situation. Not whether she had clinical depression or anything like that, just whether does she feel–, feeling down to me is a bit–, it’s very different than feeling depressed but I–, I focused on the feeling down bit. And I don’t think she was ever depressed but I do think she felt very down that her life had come to that conclusion that she needed to be in a care home and she was very aware that she was a single lady all her life and that now she was reliant on being in a care home and reliant on other people, so I saw that as did she feel down in her situation and her surroundings?
And what about the bit that says and–, or has a low mood? Did you–, did you think about that while answering the question?
Yeah. I feel down and low mood are the same thing to me.
Relative 12
In the above example, the term ‘depressed’ is used, but the respondent makes clear that this is not the same as a diagnosis of depression, which here is referred to as ‘clinical depression’. This term came up in other responses during the second and third phases of cognitive testing, but again it was not used in the medical sense to denote a diagnosis:
I’m going to ask you to kind of explain to me why you went for that one [answer option] as opposed to any of the others?
Occasionally because if she doesn’t see her niece and that, she will feel a bit low and a bit depressed, a bit looking depressed because she hasn’t seen her niece.
And this question uses the term and asks you to think about feels down or has a low mood, and what did those terms mean to you, when you were looking at this question?
Just feeling a bit depressed and lonely because she hasn’t seen her relative and that and she feels that she’s been left here and forgotten about.
Care home staff 13
The explanations of the (now) low mood questions given by participants in the second and third stages of cognitive testing clearly suggested that their understanding of the question and the terms used matched our intended meaning. This is not to say that every participant found it easy to answer the low mood question. Indeed, a couple of participants noted that it could be hard to know what another person, be it a relative or a person they support, was thinking and feeling, which made answering with confidence challenging. Moreover, some found the scale used for the response options difficult, as we discuss in the next section.
Problems with and changes to answer option scales
Some staff in the focus groups felt that the ‘never’ anxiety rating option was inappropriate for that domain as anxiety was something that everybody experienced at one time or another. Therefore, in their view, the ‘never feels worried or anxious’ rating could never be chosen. The first phase of cognitive interviews tested two different scales for the response option. For each item, the first response scale was ‘never’, ‘rarely’, ‘sometimes’ and ‘often’, and the second was ‘hardly ever’, ‘occasionally’, ‘often’ and ‘constantly’. See Appendix 3, Box 2, for full phase 1 response options.
In the first stage of testing, one participant (a relative) chose the ‘never’ option for the anxiety question. When probed about their answer, they explained that their relative never ‘feels worried or anxious’ (relative 02). However, in the other interviews no participant picked ‘never’ for the anxiety domain. As this may have been an artefact of the testing process (e.g. nobody felt that this outcome state applied to the resident they were thinking of), interviewers asked participants to reflect specifically on the idea of rating ‘never’ for anxiety. The response was almost entirely negative; participants felt that ‘never’ was not a rating that could apply to anyone in the anxiety domain. In most cases, this reflected what we found in the focus groups:
Do you think there could be a situation with a resident feeling never worried or anxious?
No.
Any reason why?
Yeah. No, I don’t think I’ve ever met one resident that at some point has never felt a little bit worried or a little bit anxious, or a little bit unsure . . . It’s a scary place in here sometimes . . . It’s a lot to ta–, especially when you’re coming in from somewhere else or from home. Yeah, so, I don’t think so.
Care home staff 01
Most people have like even if it’s once a year they might feel worried or anxious, so I feel that incorporates a normal amount of worried or anxiousness that people might have in everyday life without having to go up to the next level.
Relative 01
Although the initial concern was about including the ‘never’ rating in the anxiety domain, we also tested it in the other two domains. Probes were used to ask participants to consider this rating in the pain and depression domains. Again, some participants suggested that the ‘never’ option was not relevant, as they felt that most people experienced these symptoms sometimes.
One participant offered a further reason for not choosing the ‘never’ option. This relative, answering the anxiety question, chose ‘rarely feels worried or anxious’ as the option that best reflected the life of the person they were answering about. During further probing about why that option had been chosen rather than ‘never’, the participant suggested that there was slight uncertainty about how her relative felt. She had never really seen any signs of anxiety or worry, but felt that:
. . . there is always that possibility that she might be [worried or anxious].
Relative 07
In this example, answering on behalf of someone else meant that there was some uncertainty around the answer.
Following the phase 1 cognitive interviews, the research team reviewed the evidence on the response option scales, and decided to drop the response option scale anchored by ‘never’ in all of the domains. In the second phase, two response option scales were tested: the one anchored to ‘hardly ever’, used in phase 1, and a new scale aimed at addressing findings to date. This was a three-point scale combining different states into a single answer option at each end of the scale to capture the range of feelings and symptoms that people talk about: ‘never or hardly ever’, ‘sometimes’ and ‘often or constantly’.
In phase 2, all participants were able to answer all of the questions being tested. As in the first stage, the four-point scale anchored at ‘hardly ever’ performed well, and was broadly interpreted as intended. During the interviews, only a couple of minor issues were raised. For example, one participant, a friend of a care home resident, suggested that they would like an additional answer option in the low mood question, as they felt that there was space between ‘often’ and ‘constantly’:
Often feels a bit like it’s, kind of, half the time. So, I would tend to put in very often, if I had a chance.
Relative 13
In only one other interview, with a member of staff (care home staff 07), was the four-point scale referred to as problematic. The participant remarked that the three-point pain question was preferable to the four-point version because, in the latter, two of the options, ‘occasionally’ and ‘often’, essentially meant the same thing. However, this did not cause the participant problems when answering the question, and they chose ‘hardly ever’. Probing clarified that both the question and the answer option chosen had been interpreted in a manner consistent with the intended meaning.
The three-point scale elicited a number of different responses. A couple of participants liked the combined answer options as these fitted with their uncertainty around the answer they wanted to give. In the following quotation, a participant talks about rating pain on behalf of their relative:
How often she was in pain. I think for this one, just for her as a person, as I said, I could never fully pinpoint how often she was in pain and how–, she was a loner, she loved people to feel sorry for her, she loved attention. So, you never knew how much to buy into her pain levels or was she genuinely, constantly in pain? So, for this one, it’s easy for me because I would say often or constantly ‘cause I don’t actually know if there was often or con–, no, I definitely know it was often, I don’t know if it was true that it was constant. I find for this one [the three-point scale], just for her, it’s easier ‘cause I could definitely tick number three [often or constantly in pain] knowing that’s absolutely correct whereas the previous question [the four-point scale] I would tick number three [often] but be thinking she would say constantly, is that true? So, the condensed categories works very well for her for this particular question.
Relative 12
However, others found it difficult to understand. Some participants noted that the two terms were not synonymous, which caused them to struggle to choose an option and to spend quite a lot of time working through their answer:
I think to say often or constantly, they’re different things. And if someone’s constantly in pain it may be that pain is very, very difficult to manage. Often implies, well, what makes the difference to not being in pain to being in pain? They’re different. I would separate them.
Relative 15
Interestingly, one participant’s response to the three-point scale varied across the three items. In the following quotation, the participant, who liked the breadth that the combined ‘often or constantly’ response provided for pain (see above), found the two terms in the answer option problematic with regard to low mood:
I don’t like often and constantly in the same question. I would say often is different than constantly, so I would say–, I could answer it as she often had a low mood and she often feels down . . . for the reasons that I discussed earlier really but I wouldn’t say she constantly–, if you had to say often or constantly then I guess I would still answer that question because it was more than sometimes.
Uh-huh, OK. So, what did you think that bottom level was getting at then?
All the time and I think often is not quite all the time, it’s–, it’s probably every–, I would say often is every day but there are moments in those days, there are hours in those days when you are fine and that’s what she was like. Constantly, in my head, just means it’s just a constant–, you’re immersed in feeling down, depressed, you can’t get out of that. But, as I said, if I had to pick one–, one of those three I would still pick three because it was more often than not.
Relative 12
Following phase 2, the research team again reviewed the two sets of draft questions. Drawing on the findings reported here, for the final phase of cognitive testing the team opted to focus on the four-point scale anchored at ‘hardly ever’. Evidence from this testing suggested that this was the easiest response option for participants to use and was interpreted as the research team intended.
In the final phase of cognitive testing, the ‘current’ questions for each item were supplemented with matching ‘expected’ questions for each item, using the same four-point scale (see Appendix 3, Box 5). The interviews confirmed the findings of the previous stages and no significant problems with interpreting or answering the questions were uncovered.
Discussion
This chapter presents the development of the new items of pain, anxiety and low mood that were to be used alongside the existing care home version of ASCOT. In stage 1 we developed draft items, following the two reviews outlined in Chapter 2. For each new item the following were developed:
- self-report questions and response options that were focused on the resident’s current frequency of pain, anxiety and depression
- proxy questions and response options asking staff and family members about the resident’s current and expected frequency of pain, anxiety and depression
- initial observational guidance for each domain drawing on the observational markers outlined in the tools.
The final structured resident and proxy questions are shown in Figures 3 and 4. Because of their size, the remaining elements of the final tool (the ratings document with observational guidance and the resident interview prompts) can be found in Appendix 3 (see Figures 16 and 17).
In stage 2, we conducted the focus groups with care home staff. The groups began with a discussion of how staff recognise that the older people they support are in pain, anxious or depressed. Unsurprisingly, staff noted that some residents will simply tell staff directly that they are, for example, experiencing pain. However, they also made clear that many residents struggle or are unable to verbalise clearly how they feel, and that a large part of understanding residents’ feelings and experience is recognising behavioural signs of pain, anxiety and depression. Staff were able to identify a large number of these behaviours, which mirrored and reinforced the behavioural signs of pain, anxiety and depression identified in the literature reviews in Chapter 2. Together, these two sets of behavioural signs informed our observational/rating guidance: the literature review findings fed into the first draft, which was then added to and revised slightly in response to the focus group findings. The guidance was also amended so that it was no longer just a list of observational markers but reflected its purpose as guidance for rating evidence. The resulting observational/rating guidance can be found in Appendix 3, Figure 16. During piloting, researchers used this to help them recognise and rate pain, anxiety and low mood (see Chapter 4).
One striking feature of the observational indicators identified in both the focus groups and the reviews was the degree of similarity between the signs of the different conditions. In other words, a single behavioural marker, such as anger, could be indicative of any of the three domains. This naturally has consequences for how we interpret observations and challenges the appropriateness of relying solely on observation to understand physical and psychological health outcomes. In the focus groups, some staff suggested that the key to understanding behavioural signs is knowing the resident, a point reinforced in work looking at the challenges of identifying pain in people living with dementia. 54 This point emphasises the importance of supplementing observational data with insights from those who know the resident when making ratings. In other words, it supports the use of a multimethods approach to understanding and measuring the quality of life of people who may not be able to easily share their experiences directly. This is why ASCOT-CH4 recommends always including a staff proxy interview.
The remainder of the focus groups and the cognitive testing informed the structured questions and response options for the interviews conducted alongside the observational element. These findings were reported in detail above, so here we give only a brief summary.
The last section of the focus groups suggested that, broadly, staff understood and were able to answer the questions about the new items. However, there were some issues with the draft wording, centred on two points. First, the term ‘feeling depressed’ was often interpreted to mean clinical depression, whereas the question was intended to reflect a wider sense of ‘feeling down’. Second, the use of the word ‘never’ was seen as unrealistic and, therefore, not something people would ‘choose’, because everyone has times when they experience feelings of anxiety, pain or low mood. The first phase of the cognitive testing supported these concerns and prompted changes to the response scale, with the term ‘never’ being dropped. Moreover, for greater clarity, the depression item was renamed ‘low mood’. These were the last changes to the items, bar the testing of an alternative three-point scale that was ultimately rejected.
In the final phase of cognitive testing, the expected versions of the proxy questions were successfully tested. Following this testing, the structured proxy questions were reworded to create a version of the measure that could be used with residents. A set of qualitative prompts was also developed. These items, along with the observational guidance mentioned above, provided the mechanism for collecting the data that researchers used to rate the new ASCOT-CH4 items of pain, anxiety and low mood. These items can be found in Appendix 3, Figures 16 and 17.
Limitations
A limitation of our sampling procedure was that the majority of our sample in both the focus groups and the cognitive testing were white British. How other ethnicities and cultures describe and conceptualise pain, anxiety and low mood might differ, and this would need to be explored in future research.
Chapter 4 Exploring the psychometric properties of the ASCOT-CH4 and the new items of pain, anxiety and low mood
The overall aim of care services is to improve or maintain individual quality of life. It is, therefore, important to understand the impact of services on individual quality of life if we are to establish the quality and effectiveness of services. Intermediate indicators of the quality of care delivery (i.e. how the service provides care) may provide useful insights. As quality of care depends largely on the staff working in the service (e.g. their level of training and skill, their responsiveness and the continuity of care provision),7,8 it may be possible to use such staff-related indicators to determine the quality of service delivery.
However, to evaluate the overall quality and effectiveness of care services, we need to have insight into the outcomes of care, that is, the impact of care services on aspects of individual quality of life, such as safety, control over daily life, privacy and respect. To measure the impact of services on outcomes, we need rigorously tested, valid tools and methods of data collection. Chapters 2 and 3 described the development of the new measures of pain, anxiety and low mood. This chapter describes how we tested and validated these measures with older care home residents who cannot self-report.
Aims and objectives
Work package 2 aimed to establish the psychometric properties of the ASCOT-CH4 and the new domains of pain, anxiety and low mood in a population of care home residents who could not self-report.
Specifically, we sought to:
- evaluate the construct validity of the new items, as well as the internal reliability and structural validity of the ASCOT-CH4 with the new domains of pain, anxiety and low mood
- explore the feasibility of and justification for the mixed-methods approach by examining missing data from the different sources and exploring divergence between the final ratings and the perspectives of those interviewed.
Data and methods
The study had a cross-sectional design.
Recruitment and sampling
In February 2018, an initial scoping questionnaire was sent to all independent sector care homes on the CQC register in six local authorities in South East England that offered care to those aged > 65 years and/or living with dementia (n = 1242). Within the questionnaire, care homes could express an interest in being contacted about participating in future research.
Of the 112 homes that expressed an interest in future research, we removed those that had been involved in a previous study (n = 10), those with 10 or fewer beds (n = 7) and those that offered care primarily for people with physical disabilities (n = 1). A further six homes from two local authorities were later removed for efficiency reasons (i.e. the time taken to obtain additional research governance approvals).
Owing to high staff turnover within the sector, we also checked the contact details of the managers of these homes and removed any homes that had since closed or changed both ownership and registered manager (n = 3). Three homes subsequently expressed interest in the study by word of mouth. This resulted in a sample of 88 homes. We sent letters to each care home (including an information sheet for the care home manager; see Report Supplementary Material 1), and followed up by telephone and/or face-to-face meetings to explain in detail what participation involved. We also used the meeting to provide all documentation for recruitment, including a research flow chart of the process (see Report Supplementary Material 5). Of the 88 homes invited to take part, 20 agreed to participate in the study (10 residential care homes and 10 nursing care homes).
Care home managers oversaw resident recruitment. Managers of homes with ≤ 40 residents were asked to invite all residents to participate, including those lacking capacity (see Data collection for full details of the resident consent process). We asked homes with > 40 residents to randomly select 20. The research team gave the following instructions: ‘Please list the permanent residents (long-stay) in your home in alphabetical order. Now, find the nth person on the list and select them as person number one. From that person, please select every x person, and continue this while you move down the list until you have reached 20 residents. These are the residents who have been randomly selected to take part’. The nth starting number was randomly assigned by the research team. The researchers calculated the x number in proportion to the size of the home. The team repeated this process until a minimum of five residents had consented in each home. Posters were provided that managers could place in their care home to make residents and visitors aware of the research (see Report Supplementary Material 6).
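As a sketch, the systematic selection procedure described above might look as follows. This is illustrative only: the function name, the interval calculation and the seeded random start are our assumptions; the report deliberately leaves the actual start point (n) and interval (x) unspecified, as these were assigned per home by the research team.

```python
import random

def systematic_sample(residents, target=20, seed=None):
    """Select up to `target` residents by taking every `step`-th name
    from an alphabetically ordered list, starting from a random point
    within the first interval. A sketch of the procedure described in
    the text; the actual start point and interval are illustrative."""
    ordered = sorted(residents)
    n = len(ordered)
    step = max(1, n // target)                   # interval proportional to home size
    start = random.Random(seed).randrange(step)  # random start point
    return ordered[start::step][:target]
```

Because the list is sorted alphabetically before sampling, every resident has a known chance of selection and the selected names remain in list order, matching the instruction to ‘move down the list’.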
Data collection
The study received a favourable opinion for ethics approval by the Health Research Authority (reference 18/LO/0657) and approval from ADASS. Research governance was sought from and approved by the four local authorities involved in the study.
We sought advice from staff and managers regarding each resident’s capacity to consent. Those who could consent for themselves received the information to do so (see Report Supplementary Material 3 and 7). In accordance with the Mental Capacity Act,155 residents lacking the capacity to consent were included in the study only if a personal consultee (e.g. unpaid carer, friend or relative) advised us that they thought the resident would like to participate (see Report Supplementary Material 10). Family carer information sheets were provided to inform residents’ family members of the research (see Report Supplementary Material 4), along with consent forms for those taking part in an interview about the resident (see Report Supplementary Material 8). During data collection in each home, consent was viewed as a continuous process, and researchers monitored this, taking into consideration the advice from staff and consultees. Before beginning observations or interviews, researchers explained the study and checked the resident’s willingness to participate. Information sheets were provided for staff (see Report Supplementary Material 2), as were consent forms for staff who took part in interviews about a resident (see Report Supplementary Material 9).
Between June and December 2019, researchers spent between 1 and 3 days collecting data in each care home using the mixed-methods ASCOT-CH4 approach described briefly in Chapter 1 and in detail in Appendix 1.
The number of residents recruited determined the exact period of time spent in each home. For example, with five residents, we completed 2 hours of observation, with the remaining time used to conduct interviews and allow staff to complete questionnaires on residents’ demographics, dependency and health.
Questionnaires
Social care-related quality of life
The ASCOT-CH4 was used to collect data on each resident’s SCRQoL. The ASCOT-CH4 focuses on eight domains of quality of life that are sensitive to the impact of social care (see Table 1). 13 A detailed account of the mixed-methods approach and what it involves appears in Appendix 1.
Care home data
We collected information about each care home, including registration type (nursing or residential) and the CQC quality rating closest to the date of fieldwork in that home.
User characteristic questionnaire
We asked staff to complete a user characteristic questionnaire to gather demographic information about each resident (e.g. age, gender). We also collected data on weekly fees for each resident, specifying whether these included nursing fees and how they were funded (e.g. local authority-funded, self-funded, or with the resident ‘topping up’ what the local authority was willing to contribute).
We assessed residents’ ability to carry out the following eight activities of daily living (ADLs): feeding, transfers, mobility, getting up or down stairs, getting dressed and undressed, bathing, grooming and using the toilet. Each item was scored 0 for ‘independent’ and 1 if the resident could complete the task only ‘with help’ or ‘with significant help’. The overall score was the sum of the item scores, ranging from 0 to 8, with a lower score indicating greater ability to complete daily tasks independently.
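The ADL scoring described above reduces to a simple sum. The sketch below illustrates it; the task keys are our shorthand, not the questionnaire’s exact wording.

```python
# Eight ADL items described in the text (keys are illustrative shorthand).
ADL_TASKS = ["feeding", "transfers", "mobility", "stairs",
             "dressing", "bathing", "grooming", "toilet"]

def adl_score(responses):
    """Sum the eight ADL items: 0 for 'independent', 1 for 'with help'
    or 'with significant help'. Range 0-8; lower means more independent."""
    return sum(0 if responses[task] == "independent" else 1
               for task in ADL_TASKS)
```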
Residents’ level of cognitive impairment was measured using the Minimum Data Set Cognitive Performance Scale (MDSCPS). The MDSCPS comprises five items: dementia diagnosis, short-term memory problems, cognitive skills, ability to communicate and whether or not the person can eat and drink independently. 236 Each MDSCPS item has different response options. Scoring follows a set of rules based on each item response to form an overall score from 0 to 6; the lower the score, the more intact the level of cognition.
Pain, anxiety and depression/low mood
To evaluate the construct validity of three new health-related items to be used alongside the ASCOT-CH4 (pain, anxiety and low mood), we included a number of additional measures in the user characteristic questionnaire. We used the interRAI Pain Scale237 to collect information on items that capture the intensity and frequency of pain. The pain intensity item has four response levels: no pain, mild pain, moderate pain and severe pain. The pain frequency item has four response levels: no pain, pain present but not exhibited in last 3 days, exhibited on 1 or 2 days in the last 3 days and exhibited daily in last 3 days. The response options to these items convert into a value from 0 to 3; the higher the score, the more intense or frequent the pain.
The questionnaire also included the Generalised Anxiety Disorder 2-item (GAD-2). 238 The GAD-2 is a brief screening instrument for generalised anxiety disorder. It includes two items asking if the person has been bothered by the following problems over the previous 2 weeks: (1) feeling nervous, anxious or on edge and (2) not being able to stop worrying. Both items use the same rating scale: not at all, several days, more than half the days and nearly every day. The response items are scored from 0 to 3. These scores are summed to create an overall score from 0 to 6. A higher score represents higher levels of anxiety, with a cut-off score of ≥ 3 indicating clinically significant symptoms of generalised anxiety disorder.
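The GAD-2 scoring rule above can be sketched as follows (a minimal illustration; the function name is ours).

```python
# Response levels for each GAD-2 item, scored 0-3 as described in the text.
GAD2_LEVELS = {
    "not at all": 0,
    "several days": 1,
    "more than half the days": 2,
    "nearly every day": 3,
}

def gad2_score(item1, item2):
    """Return the 0-6 GAD-2 total and whether it meets the >= 3
    cut-off for clinically significant generalised anxiety symptoms."""
    total = GAD2_LEVELS[item1] + GAD2_LEVELS[item2]
    return total, total >= 3
```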
Clinical symptoms of depression were measured using the interRAI Depression Rating Scale. 189 This is a seven-item instrument developed for use in nursing homes as a clinical screen for depression. It asks the respondent to rate these seven depressive symptoms: a sad, pained, worried facial expression; tearfulness; expression of negative statements; unrealistic fears; repetitive health complaints; repetitive anxious non-health-related complaints; and persistent anger. The items are measured on the same four-point scale described for the interRAI Pain Scale, but the first option is ‘not present’ rather than ‘no pain’. Each item is recoded from level 2 to level 1 and from level 3 to level 2. These ratings are then summed to create a score out of 14. The higher the score, the more symptoms were present in the last 3 days. A score of > 3 is an indicator of major or minor depressive disorders.
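The recode-and-sum rule for the interRAI Depression Rating Scale can be sketched as below. The recoding table follows the description in the text literally (level 2 recoded to 1, level 3 to 2); the function name is ours.

```python
def drs_score(item_ratings):
    """Score the seven interRAI Depression Rating Scale items.
    Each item is rated 0-3; following the recoding described in the
    text (level 2 -> 1, level 3 -> 2), the recoded values are summed
    to a total out of 14. A total above 3 indicates possible major or
    minor depressive disorder."""
    recode = {0: 0, 1: 1, 2: 1, 3: 2}
    total = sum(recode[r] for r in item_ratings)
    return total, total > 3
```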
In the User Characteristic Questionnaire (UCQ), staff respondents were asked to complete the EuroQol-5 Dimensions, five-level version (EQ-5D-5L), proxy version 1,239 for each resident. Proxy version 1 asks the caregiver to rate the patient’s HRQoL based on the proxy’s own opinion of the person’s HRQoL. The EQ-5D-5L is a five-item instrument measuring HRQoL across five health attributes: mobility, self-care, usual activities, pain/discomfort and anxiety/depression. Each item has five response levels, ranging from no problems to extreme problems. For this study, the ratings of two of the items (pain/discomfort and anxiety/depression) were considered in the construct validity analysis.
Methods: statistical analysis
The Rasch analysis was conducted in WINSTEPS® version 3.92.1 (www.winsteps.com). All other statistical analyses were conducted in Stata version 16.
Feasibility
Feasibility was assessed using the frequency of missing values for the overall rating by domain. Missing data for each source of evidence (resident, family and staff interviews) were also considered.
In coming to a final rating, the rater drew on different sources of evidence. The rating of each ASCOT-CH4 domain required the rater to judge the person’s likely view of an aspect of quality of life based on a combination of their own observation-based judgement, the ratings of staff and family (where available), and evidence from the resident interview.
A particular challenge with the three new domains (pain, anxiety and low mood) is that it is difficult to infer the person’s experience of pain or mood states by external observation, especially by researchers who have not developed in-depth interpersonal knowledge of and rapport with residents (see Chapter 3). We expected the ratings for these three new domains to rely more on staff ratings, and, therefore, be more strongly related to the staff ratings than the ASCOT-CH4 domains. We tested this hypothesis using chi-squared analysis and Cramér’s V for effect sizes. We also calculated the frequency of absolute correspondence of staff and the overall ratings for each item.
Construct validity
Construct validity assesses the extent to which an instrument measures what it is supposed to measure. This is usually done by testing hypothesised relationships between the measure and other measures that assess a similar construct (convergent construct validity). Constructs may also be evaluated by testing hypotheses that the measure is not correlated with another measure of a conceptually different concept (divergent construct validity). Here, we are interested in the relationships between the constructs of study and other measures of similar or different constructs.
To evaluate construct validity, the hypothesised relationships between the new domains (current quality of life, unless otherwise stated) and other scales or items were assessed using chi-squared tests for categorical variables and one-way analysis of variance (ANOVA) for continuous variables (Table 7). A p-value of ≤ 0.05 was taken to indicate a statistically significant association between the new item and the other scale or item. The effect sizes for ANOVA were evaluated using omega-squared (ω2), with the interpretation heuristics of < 0.06 (small), 0.06–0.14 (moderate) and > 0.14 (large). The effect sizes for chi-squared analysis were evaluated using Cramér’s V, with the following interpretation: for 3 degrees of freedom (df), 0.06 (small), 0.17 (medium), 0.29 (large); and for 9 df, 0.03 (small), 0.1 (medium), 0.17 (large).
Variables | Expected associations
---|---|
Health and disability |
ADL count | There is evidence that greater levels of ADL dependency are related to low mood,101,102 anxiety98,100 and pain.25,60,61,63 As current quality of life represents the person’s outcome state with care, support or treatment, we would not expect a significant association between the current quality-of-life rating for the three new domains and ADL dependency. Instead, we would expect a significant small to moderate relationship between the level of ADL dependency and the expected quality-of-life rating for the three domains, which represents the quality-of-life outcome state without treatment, care or support (i.e. underlying ‘need’ in that domain without treatment).
EQ-5D-5L anxiety and depression item | We expected to find a strong association between the anxiety/depression attribute in the EQ-5D-5L and the new domains of anxiety and low mood.
Pain |
interRAI pain items | We expected a strong association between the interRAI pain frequency item and the new pain domain. We expected a moderate to strong association between the pain intensity item and the new pain domain, as these are related but distinct concepts.
EQ-5D-5L pain item | We expected to find a strong association between the EQ-5D-5L pain item and the new pain item.
Anxiety |
GAD-2 | We expected a strong positive association between the GAD-2 score and the anxiety domain, as both measure frequency of feelings of anxiety. We expected a weak to moderate positive relationship between the GAD-2 score and the low mood domain, as depression and anxiety are related but distinct concepts. We expected a weak positive relationship with the pain domain, because pain has been found in other studies to be associated with anxiety/depression.240
Low mood |
interRAI Depression Rating Scale | We expected a significant moderate positive association between this score and low mood, because both measure similar constructs, recognising that not everyone who displays symptoms of low mood will be depressed. We expected a significant weak positive relationship between the interRAI Depression Rating Scale and the new pain domain, because there is evidence that pain is associated with anxiety/depression.240
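Both effect-size measures used in the construct validity analysis follow directly from their standard definitions. The pure-Python sketch below illustrates them (function names are ours; the study’s analyses were run in Stata): Cramér’s V = √(χ²/(N·min(r−1, c−1))) for a contingency table, and ω² = (SSB − (k−1)·MSW)/(SST + MSW) for a one-way ANOVA.

```python
import math

def cramers_v(table):
    """Cramer's V effect size for a two-way contingency table,
    given as a list of rows of observed counts."""
    rows, cols = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(rows)) for j in range(cols)]
    # Pearson chi-squared statistic from observed vs expected counts.
    chi2 = sum((table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i in range(rows) for j in range(cols))
    return math.sqrt(chi2 / (n * min(rows - 1, cols - 1)))

def omega_squared(groups):
    """Omega-squared effect size for a one-way ANOVA, given a list
    of groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    msw = ssw / (n - k)
    return (ssb - (k - 1) * msw) / (ssb + ssw + msw)
```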
Exploratory factor analysis
To explore the dimensionality of the ASCOT-CH4 and the three new items, exploratory factor analysis was used with principal axis factoring. Suitability for factor analysis was assessed using the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy (> 0.5) and Bartlett’s test of sphericity (p < 0.001); Bartlett’s test of sphericity indicates whether the items are sufficiently correlated for factor analysis to be conducted. 241 Retention of factors was informed by the Jolliffe criterion (eigenvalue of ≥ 0.70), with visual inspection of the scree plot for the point of inflection.
Rasch analysis
Some argue that item response theory techniques, such as Rasch analysis, ought to supersede classical test theory methods, such as exploratory factor analysis (EFA), in the psychometric evaluation of health and care-related outcome measures. 242 However, the use of a dual item response theory/classical test theory approach is accepted in the development of self-report outcome instruments. 243 An advantage of Rasch analysis is that it is less affected by sample size than EFA. Indeed, a minimum sample size of 300 cases is recommended for EFA,244 which is larger than the sample achieved in this study. By contrast, the sample size is adequate for Rasch analysis. 245 Therefore, Rasch analysis was also conducted here to verify the results of the EFA.
Rasch analysis is a logit modelling technique that converts the ordinal scores for scale items into a continuous latent scale (log-odd units), with individual responses positioned on that scale. 243,246 This logarithmic transformation estimates each person’s level on the scale, as well as the difficulty of each item, on a calibrated continuum. When observed item responses fit the expected values in the Rasch model, they exhibit four properties referred to as criterion-related construct validity: unidimensionality (i.e. all of the items relate to a single underlying construct), monotonicity, local independence, and lack of differential item functioning (DIF) among the subgroups in the sample. 247 On this basis, Rasch analysis is an accepted method of evaluating the construct and structural validity of quality-of-life instruments. 242,248
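For orientation, the partial credit model applied in this analysis is conventionally written as follows (standard Masters formulation, supplied here for reference rather than reproduced from this report):

```latex
P(X_{ni} = x) = \frac{\exp \sum_{k=0}^{x} (\theta_n - \delta_{ik})}
                     {\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (\theta_n - \delta_{ik})},
\qquad x = 0, 1, \ldots, m_i
```

where θ_n is the person’s location on the latent scale, δ_ik is the kth threshold of item i, m_i is the number of thresholds (three here, for the four response options), and the k = 0 term is defined as zero. Because the δ_ik are estimated separately for each item, threshold spacing may vary across items; this is the property that distinguishes the partial credit model from the Andrich rating scale model.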
As the EFA results indicated a two-factor solution (see Table 13), the Rasch analysis was conducted separately for (1) the ASCOT-CH4 items and (2) the three new HRQoL items. All of these items have four response options: ideal state, no needs, some needs and high-level needs. Therefore, we selected a polytomous version of the Rasch model for the analysis. As the distance between thresholds is not assumed to be equal across items,242 we applied the partial credit model249 rather than the Andrich rating scale model. 250
Item goodness-of-fit was assessed using Infit statistics, as these are considered more informative than Outfit statistics. 251,252 (Infit statistics consider the difference between observed and expected responses for only those items with a difficulty level close to the person’s ability, rather than for all items, as with Outfit statistics.) Both mean-square and z-statistics were reported; however, the mean-square statistic is the preferred criterion for goodness-of-fit for polytomous data because it is less sensitive to sample size. 253 Item goodness-of-fit was evaluated against the criterion of a critical range (i.e. fit) of Infit mean-square values from 0.7 to 1.3. 253
Unidimensionality was examined through a principal components analysis (PCA) of the residuals, with the criteria of at least 50% of the variance explained by the first dimension and any additional factors accounting for less than 5% of the remaining variance. 254 Once the Rasch factor was taken into account, any eigenvalues of ≤ 2.0 for other contrasts were applied as an indicator of no secondary dimensions beyond random associations. 255,256 The category structure of the polytomous scale was evaluated by visual inspection of the category probability curves, with an ordered set of response thresholds for each item taken to indicate that item responses were consistent with the metric estimate of the underlying construct.
Finally, we explored whether or not there was evidence of DIF. DIF indicates bias in responses by different groups in the sample. There are two types of DIF: (1) uniform, which is consistent across the whole range of measurement, and (2) non-uniform, where the difference varies across levels of the attribute. 242 Here, we considered uniform DIF by the key demographic characteristics of age, gender and dementia diagnosis. The magnitude of uniform DIF was evaluated by the Mantel–Haenszel statistic for polytomous scales using log-odd estimators. 257,258
Internal reliability
Cronbach’s alpha coefficient was used to assess internal reliability. A criterion of ≥ 0.7 was applied to indicate an acceptable level of internal consistency. 259
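As a reminder of the computation (a generic sketch, not the study’s Stata code; the function name is ours), Cronbach’s alpha combines the individual item variances with the variance of the total score: α = k/(k−1) · (1 − Σ var(item)/var(total)).

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list of
    scores per item, with respondents aligned by position). Uses
    sample variance (n - 1 denominator)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Against the criterion used here, a value of 0.7 or above would indicate acceptable internal consistency.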
Findings
The characteristics of participants in the sample are shown in Table 8. We aimed to recruit 250 residents from 25–30 care homes and achieved a final sample of 182 residents from 20 care homes. The main challenge was the time needed to recruit and support homes to participate in the study. Given that we had adequate sample sizes to conduct our analysis, we agreed with our Study Steering Committee and funder to stop recruitment at 182 participants, rather than extend the fieldwork timeline.
Variable | Category | Frequency, n (%) |
---|---|---|
Sex | Male | 58 (31.9) |
Female | 121 (66.5) | |
Missing data | 3 (1.6) | |
Age group (years) | 50–59 | 4 (2.2) |
60–69 | 10 (5.5) | |
70–79 | 34 (18.7) | |
80–89 | 78 (42.9) | |
≥ 90 | 52 (28.5) | |
Missing data | 4 (2.2) | |
Had a diagnosis of dementia? | No | 89 (48.9) |
Yes | 87 (47.8) | |
Missing data | 6 (3.3) | |
Capacity to consent to research? | No | 61 (33.5) |
Yes | 117 (64.3) | |
Missing data | 4 (2.2) | |
Is the resident publicly funded? | Yes, fully | 49 (26.9) |
Yes, partially | 22 (12.1) | |
No, self-funded | 88 (48.4) | |
Missing data | 23 (12.6) | |
Residential home CQC rating | Requires improvement | 25 (13.7) |
Good | 132 (72.5) | |
Outstanding | 25 (13.7) | |
Nursing or residential | Nursing | 98 (53.8) |
Residential | 84 (46.2) | |
Pain frequency score | No pain | 95 (52.2) |
Less than daily pain | 42 (23.1) | |
Daily pain but not severe | 16 (8.8) | |
Daily severe pain | 28 (15.4) | |
Missing data | 1 (0.5) | |
Pain intensity score | No pain | 84 (46.2) |
Mild pain | 58 (31.9) | |
Moderate pain | 31 (17.0) | |
Severe pain | 9 (4.9) | |
Missing data | 0 (0.0) | |
EQ-5D-5L pain/discomfort | No pain/discomfort | 90 (49.5) |
Slight pain/discomfort | 58 (31.9) | |
Moderate pain/discomfort | 19 (10.4) | |
Severe pain/discomfort | 11 (6.0) | |
Extreme pain/discomfort | 0 (0.0) | |
Missing data | 4 (2.2) | |
EQ-5D-5L anxiety/depression | Not anxious or depressed | 95 (52.2) |
Slightly anxious or depressed | 56 (30.8) | |
Moderately anxious or depressed | 17 (9.3) | |
Severely anxious or depressed | 7 (3.9) | |
Extremely anxious or depressed | 5 (2.7) | |
Missing data | 2 (1.1) | |
ASCOT-CH4 index score, mean (SD, n) | Current SCRQoL | 0.736 (0.176, 182) |
Expected SCRQoL | 0.110 (0.198, 182) | |
SCRQoL gain | 0.626 (0.194, 182) | |
ADL count, mean (SD, n) | 5.316 (2.600, 171) | |
MDSCPS, mean (SD, n) | 2.09 (1.92, 175) | |
interRAI depression score, mean (SD, n) | 1.94 (2.93, 179) | |
GAD-2 score, mean (SD, n) | 1.23 (1.90, 179) | |
Weekly fee (£), mean (SD, n) | 899.64 (281.39, 172) |
The majority of residents in the sample were female. Over 70% of participants were aged 80 years or over and approximately half had a diagnosis of dementia (47.8%). This is broadly comparable with what is known about the care home population in England as a whole from past resident censuses. 1 The majority of residents either fully (48.4%) or partially (12.1%) self-funded their stay in the care home. This is a much larger proportion of self-funders than in the South East region or in England as a whole. 3 The sample included participants living in nursing homes (53.8%) and residential homes (46.2%). The majority of participants resided in a home rated by the CQC as ‘good’ (72.5%), although the sample also included residents of homes rated ‘outstanding’ or ‘requires improvement’ (13.7% each) (see Chapter 1 for further information on care home quality ratings).
We summarise the current SCRQoL ratings in Figure 5. For the three new items, the most frequently rated state was ‘no needs’. Only approximately 25% of respondents were rated as being in the ideal state for anxiety and low mood; 40% were rated as being in the ideal state for pain.
The frequency of rating high-level needs for the three new domains was higher (4–5%) than for the basic ASCOT-CH4 domains (food and drink, accommodation, personal cleanliness and safety) and dignity (≤ 1%). The rating of high-level needs for the three new items was similar to the ratings of high-level needs for control over daily life and social participation (5.5%) and lower than for occupation (12.6%).
Figure 6 shows the rating of expected SCRQoL for each item. These ratings are estimates of the person’s quality of life without care and support. The ratings may be understood to indicate the person’s underlying social care outcome need. 13
In considering high-level and some needs together for the eight ASCOT-CH4 items, 84.1% (personal comfort and cleanliness) to 95.1% (food and drink) of the sample had some level of social care need by domain. This reflects the high level of need of the care home population in England.
The ratings of social care needs (high-level or some needs) were lower for the three new items than for the eight ASCOT-CH4 items. The lowest percentage of residents rated with needs was found for pain (52.2%), followed by low mood (77.5%) and anxiety (80.2%).
The ‘gain’ is a measure of the impact of care and support on SCRQoL. The gain score is estimated as the difference between current and expected ratings of SCRQoL. These gain scores are shown for the three new items in Table 9.
Item | ‘Gain’a | Frequency (%)
---|---|---
Anxiety | Negative | 3 (1.6)
 | 0 | 38 (20.9)
 | 1 | 82 (45.1)
 | 2 | 48 (26.4)
 | 3 | 11 (6.0)
Low mood | Negative | 3 (1.6)
 | 0 | 44 (24.2)
 | 1 | 87 (47.8)
 | 2 | 42 (23.1)
 | 3 | 6 (3.3)
Pain | Negative | 0 (0)
 | 0 | 93 (51.2)
 | 1 | 55 (30.2)
 | 2 | 31 (17.0)
 | 3 | 3 (1.6)
There were three cases each of negative gain for anxiety and for low mood. A negative gain indicates that the residential care provided by the home had a negative impact on the person’s quality of life in that quality-of-life domain. These cases were recoded as zero (0) in the analyses presented in this report.
In terms of anxiety and low mood, care had no effect, or a negative effect, for approximately one-quarter of the sample. By contrast, over half of the sample (51.2%) experienced no impact of care and support on pain. Of these cases of no or negative impact, half of the sample for pain, and 24% and 19% of the sample for anxiety and low mood, respectively, were due to no capacity to benefit (i.e. expected quality of life, without services, was rated at the ideal state). There was, however, unmet need at either the some-needs or high-level-needs level for 42–43% of the sample for low mood and anxiety, and for 16% of the sample for pain.
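The gain calculation and the recoding of negative values described above can be sketched as follows (hypothetical item ratings, not study data; 0 denotes high-level needs and 3 the ideal state):

```python
# Sketch of the item-level gain calculation; the ratings are hypothetical,
# not study data (0 = high-level needs ... 3 = ideal state).
current = {'anxiety': 2, 'low mood': 1, 'pain': 3}
expected = {'anxiety': 1, 'low mood': 2, 'pain': 1}  # rating without care and support

# Gain = current rating minus expected rating
gain = {item: current[item] - expected[item] for item in current}

# Negative gains are recoded to zero, as in the analyses reported here
gain = {item: max(0, g) for item, g in gain.items()}
```

In this hypothetical example, the low mood gain of −1 is recoded to 0 before analysis.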
Feasibility
There were no missing data for the overall rating for each item. These final scores are a combined rating made by researchers based on a range of evidence sources.
Observations by researchers were completed for all residents. The evidence gathered by observation was supplemented by that from the qualitative and/or quantitative interviews with the residents, their family and care staff (Table 10). In the majority of cases (≥ 87%), data were missing for family interviews. There were also missing data for resident interviews in 39–57% of cases; the majority of complete cases came from the qualitative interview only, without a quantitative rating of the quality-of-life state for each domain. The missing values for staff interviews were low (≤ 1%) for all items.
Item | Overall rating (% missing) | Observational (% missing) | Resident interview: qualitative and quantitative (% missing) | Resident interview: quantitative (% missing) | Family interview (% missing) | Staff interview (% missing) |
---|---|---|---|---|---|---|
ASCOT-CH4 | ||||||
Food and drink | 0 | 0 | 39.0 | 76.4 | 87.4 | 0.5 |
Accommodation | 0 | 0 | 39.6 | 77.5 | 87.4 | 0.5 |
Personal comfort/cleanliness | 0 | 0 | 40.1 | 80.8 | 87.4 | 0 |
Social participation | 0 | 0 | 40.1 | 81.9 | 87.4 | 0 |
Occupation | 0 | 0 | 40.1 | 83.0 | 87.9 | 0 |
Control over daily life | 0 | 0 | 50.0 | 83.0 | 87.9 | 0.5 |
Personal safety | 0 | 0 | 40.1 | 78.6 | 87.4 | 0.5 |
Dignitya | 0 | 0 | 57.1 | 84.6 | 90.7 | N/A |
New items | ||||||
Anxiety | 0 | 0 | 40.7 | 80.2 | 87.4 | 0 |
Low mood | 0 | 0 | 42.9 | 76.9 | 87.4 | 0 |
Pain | 0 | 0 | 42.3 | 78.0 | 87.9 | 0 |
For resident, family and staff interviews, the proportions of missing values for the three new items (anxiety, low mood and pain) were within the same range as for the ASCOT-CH4 items.
The absence of missing data for the overall ratings indicates that the ASCOT-CH4 method is feasible in terms of the researcher judging and rating an overall score for the eight ASCOT items and also for the three new items. However, it is important to note that the majority of ratings were based on quantitative evidence from staff interviews and observations only. This is because of the challenges and barriers associated with conducting interviews with residents or family members, for example when the resident does not have close family.
Table 11 shows the association between staff and overall ratings (quantitative interviews only). There were moderate to strong associations between staff and overall ratings for all ASCOT-CH4 items (Cramér’s V of 0.25–0.5). As hypothesised, the associations were stronger for the three new items (Cramér’s V of ≥ 0.5).
Item | Association between staff and overall rating | Exact correspondence of staff and overall rating, frequency (%) | ||||
---|---|---|---|---|---|---|
n | df | χ2 | p-value | Cramér’s V | ||
ASCOT-CH4 | ||||||
Food and drink | 179 | 6 | 51.7 | < 0.001 | 0.38 | 97 (54.2) |
Accommodation | 178 | 4 | 33.2 | < 0.001 | 0.31 | 116 (65.2) |
Personal comfort and cleanliness | 180 | 6 | 88.9 | < 0.001 | 0.50 | 126 (70.0) |
Social participation | 176 | 9 | 69.3 | < 0.001 | 0.36 | 82 (46.6) |
Occupation | 176 | 9 | 86.6 | < 0.001 | 0.41 | 78 (44.3) |
Control over daily life | 176 | 9 | 85.4 | < 0.001 | 0.40 | 80 (45.5) |
Personal safety | 180 | 6 | 26.7 | < 0.001 | 0.27 | 131 (72.8) |
Dignitya | N/A | N/A | N/A | N/A | N/A | N/A |
New items | ||||||
Anxiety | 179 | 9 | 192.7 | < 0.001 | 0.60 | 124 (68.1) |
Low mood | 177 | 9 | 142.4 | < 0.001 | 0.52 | 122 (67.0) |
Pain | 179 | 9 | 206.1 | < 0.001 | 0.62 | 138 (75.8) |
There was exact correspondence of staff and overall rating of quality of life for 44–76% of the ratings. Where there was not exact correspondence, the majority of discrepant responses resulted from researchers rating the quality-of-life state as one lower than the staff rating. Owing to researchers’ reliance on staff report to judge the appropriate rating for items related to internal health states, the highest correspondence was for the rating of pain (75.8%), with a high degree of correspondence also observed for low mood and anxiety. The level of correspondence was, however, also high for the ASCOT domains of personal comfort and cleanliness and personal safety. These are quality-of-life domains for which care staff would be expected to make care-related judgements on a routine basis; the high correspondence is evidence of reliance on staff ratings for these items and/or that researchers and staff used similar external indicators and formed the same judgement. The level of correspondence of staff to overall ratings was lowest (44–47%) for the higher-order ASCOT domains of social participation, occupation and control over daily life.
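The Cramér’s V statistic reported in Table 11 can be computed from a contingency table of staff-by-overall ratings. A minimal sketch (the example tables below are illustrative, not study data):

```python
import numpy as np

def cramers_v(table):
    """Cramér's V for an r x c contingency table of observed counts."""
    obs = np.asarray(table, dtype=float)
    n = obs.sum()
    # Expected counts under independence of rows and columns
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n
    chi2 = ((obs - expected) ** 2 / expected).sum()
    r, c = obs.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))
```

Perfect association gives V = 1 (e.g. `cramers_v([[10, 0], [0, 10]])`), independence gives V = 0; values of ≥ 0.5, as found for the three new items, indicate a strong association.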
Construct validity
Table 12 shows the results of the construct validity analysis by hypothesis testing. There is good overall evidence of the construct validity of the new items, which performed as hypothesised in terms of their relationship to measures of similar or overlapping constructs.
One-way ANOVA | Anxiety | Low mood | Pain | Hypothesis accepted? | ||||
---|---|---|---|---|---|---|---|---|
p-value | ω2 | p-value | ω2 | p-value | ω2 | |||
ADL dependency (current quality of life) | 0.45 | < –0.01 | 0.86 | –0.01 | 0.29 | < 0.01 | Yes | |
ADL dependency (expected quality of life) | < 0.01 | 0.05 | < 0.01 | 0.06 | < 0.01 | 0.06 | Yes | |
GAD-2 | < 0.01 | 0.15 | < 0.01 | 0.06 | 0.02 | 0.04 | Yes | |
interRAI Depression Rating Scale | < 0.01 | 0.12 | < 0.01 | 0.12 | < 0.01 | 0.05 | Yes | |
Chi-squared test | df | Anxiety | Low mood | Pain | Hypothesis accepted? | |||
p-value | Cramér’s V | p-value | Cramér’s V | p-value | Cramér’s V | |||
EQ-5D-5L anxiety/depression | 12 | < 0.01 | 0.26 | < 0.01 | 0.23 | 0.05 | 0.20 | Yes |
EQ-5D-5L pain | 9 | 0.15 | 0.16 | 0.40 | 0.13 | < 0.01 | 0.37 | Yes |
interRAI pain frequency | 9 | 0.20 | 0.15 | 0.19 | 0.15 | < 0.01 | 0.36 | Yes |
interRAI pain intensity | 9 | 0.26 | 0.14 | 0.20 | 0.15 | < 0.01 | 0.34 | Yes |
All three items’ expected ratings of quality of life (i.e. the underlying need, without care), but not current quality of life (i.e. with the compensatory action of care), are associated with ADL dependency. The hypothesised significant positive relationship was found between the new pain item and another item designed to capture pain frequency, as well as with two items designed to capture the related construct of pain intensity (i.e. interRAI pain intensity, EQ-5D-5L pain item). As hypothesised, the new anxiety and low mood items, which capture the frequency of anxiety and low mood, were associated with measures designed to capture the similar constructs of clinical diagnosis of generalised anxiety disorder, clinical depression and/or the intensity of anxiety and depression.
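The ω² effect size used in the one-way ANOVAs above can be computed from the ANOVA sums of squares; the sketch below uses the standard formula with illustrative group data, not the study data:

```python
import numpy as np

def omega_squared(groups):
    """Effect size omega-squared for a one-way ANOVA over a list of samples."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    ms_within = ss_within / df_within
    ss_total = ss_between + ss_within
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)
```

Note that ω² can fall slightly below zero when group means are almost identical, which is why Table 12 contains small negative values.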
Exploratory factor analysis
The EFA of the ASCOT-CH4 items and the three new items is shown in Table 13. The KMO measure of sampling adequacy (KMO = 0.79) and Bartlett’s test of sphericity (p < 0.001) indicated that the variables were adequately related for factor analysis. The items were analysed using principal factor analysis. The eigenvalues for the first four factors extracted were 3.46, 0.98, 0.37 and 0.20, respectively, which indicates a two-factor solution if Jolliffe’s criterion (≥ 0.70) is applied. This was verified by visual inspection of the scree plot. Oblique oblimin rotation of the factors was applied to facilitate interpretation of the two-factor model. 13
Item | Factor 1 loadings | Factor 2 loadings | Uniquenessa |
---|---|---|---|
ASCOT-CH4 | |||
Food and drink | 0.37 | 0.84 | |
Accommodation | 0.49 | 0.72 | |
Personal comfort and cleanliness | 0.62 | 0.53 | |
Social participation | 0.61 | 0.61 | |
Occupation | 0.79 | 0.41 | |
Control over daily life | 0.78 | 0.42 | |
Personal safety | 0.34 | 0.40 | 0.60 |
Dignity | 0.53 | 0.73 | |
New items | |||
Anxiety | 0.75 | 0.43 | |
Low mood | 0.74 | 0.45 | |
Pain | 0.45 | 0.83 | |
Eigenvalue | 3.46 | 0.98 | |
% of total variance | 79.8 | 22.6 |
All the items loaded on their factor above the threshold of ≥ 0.40 for reliability of interpretation, 260 except food and drink and personal safety on factor 1. The first factor included the eight ASCOT-CH4 items, although personal safety loaded onto factor 2 with a slightly higher value than onto factor 1. This is consistent with other studies of the structural characteristics of ASCOT. 13,16 The second factor included the three new items, as well as, potentially, personal safety. The two factors accounted for 79.8% and 22.6% of the variance, respectively.
The uniqueness is the proportion of variance not explained by the common factor. Six of the 11 items had a uniqueness of ≥ 0.60, which indicates that these variables may not be well explained by the two factors and that the variables are also only weakly related to each other. This is consistent with the evidence for the eight items in the ASCOT. 13,15,16
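For an orthogonal factor solution, the uniqueness reported in Table 13 would be one minus the communality (the sum of squared loadings across factors). The sketch below uses illustrative loadings; with the oblique rotation applied here the exact calculation also involves the factor correlations, so this simplified version is illustrative only:

```python
import numpy as np

# Uniqueness under an orthogonal solution: 1 minus the communality
# (sum of squared loadings across factors). These loadings are
# illustrative, not the study's rotated pattern matrix.
loadings = np.array([
    [0.79, 0.00],  # an item loading mainly on factor 1
    [0.34, 0.40],  # an item split weakly across both factors
])
uniqueness = 1 - (loadings ** 2).sum(axis=1)
```

An item with only weak loadings (second row) retains a high uniqueness, meaning little of its variance is explained by the common factors.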
Rasch analysis
Table 14 shows the infit statistics for the two Rasch analyses: (1) the ASCOT items and (2) the three new items. The infit mean square statistics were in the acceptable range of 0.7–1.3 for the eight ASCOT items and the three new items, as well as overall for each scale. The z-standardised probability statistics were within the acceptable range of ± 2.0 for all items except the new pain item, which indicates that the data for this item do not fit the Rasch model.
Item | Infit mean square | Infit z-standardised probability |
---|---|---|
ASCOT-CH4 | 1.00 | –0.1 |
Food and drink | 1.23 | 2.0 |
Accommodation | 1.04 | 0.5 |
Personal comfort and cleanliness | 0.88 | –1.2 |
Social participation | 1.01 | 0.2 |
Occupation | 0.85 | –1.5 |
Control over daily life | 0.81 | –2.0 |
Personal safety | 1.05 | 0.4 |
Dignity | 1.11 | 1.1 |
New items | 0.99 | –0.3 |
Pain | 1.3 | 2.3 |
Anxiety | 0.83 | –1.7 |
Low mood | 0.86 | –1.3 |
The category response curves shown in Figures 7 and 8 indicate that there were no disordered category thresholds for the new items of pain, anxiety and low mood. There were no ratings in the study sample of high-level needs for three of the eight ASCOT items (personal comfort and cleanliness, accommodation comfort and cleanliness, and dignity), so it was not possible to evaluate the full range of category thresholds using the sample data. There was, however, evidence of a disordered category threshold for personal safety, which could be addressed by collapsing the ratings for some needs and high-level needs into a single category to improve model fit.
In principal components analysis of the residuals, the Rasch models explained 53.7% (ASCOT-CH4) and 49.9% (new items) of the variance. In the Rasch model with the ASCOT-CH4 items, the unexplained residual variance in the second, third and fourth contrasts was 8.1%, 7.0% and 6.2%, respectively. In the Rasch model with the new items, the second contrast had 20.6% unexplained residual variance, with < 0.5% for the other contrasts. Although the criterion of ≤ 5% unexplained residual variance for additional factors was not met for either Rasch model, the eigenvalues for the subsequent contrasts were ≤ 2.0. This indicates that there were likely no secondary dimensions beyond random noise.
The Mantel–Haenszel statistic for all comparisons did not reach significance when adjusted for multiple comparisons. Therefore, there was no evidence of uniform DIF by gender, age or dementia diagnosis.
Internal consistency
As the structural validity analysis indicated that the eight ASCOT-CH4 items and the three new items load onto two separate factors, Cronbach’s alpha was calculated separately for each.
Cronbach’s alpha was 0.76 (eight items) and 0.61 (three items), respectively. Unlike the eight ASCOT-CH4 items, the three new items do not meet the criterion value of ≥ 0.70 for acceptable internal consistency.
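Cronbach’s alpha is computed from the item variances and the variance of the total score; a minimal sketch with illustrative scores (not study data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha; scores has rows = respondents, columns = items."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    # Sum of individual item variances and variance of the summed scale
    sum_item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)
```

Perfectly correlated items give α = 1; the criterion of ≥ 0.70 for acceptable internal consistency is applied to the value computed this way.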
Discussion
Feasibility of using the CH4 method
Based on the combined rating made by the researchers, there were no missing scores (current and expected) for SCRQoL or the new items. However, as expected, there were high levels of missing information from resident self-report interviews. Of the 197 residents included in the analysis, around 60% could tell us something about their SCRQoL in a semistructured, conversational way, but only 15% (dignity) to 24% (food and drink) could give an answer to the conventional structured interview, in which a response option must be chosen. The easiest SCRQoL domains for residents to rate were food and drink, accommodation and personal safety, which are all very tangible concepts for residents to grasp. The most difficult SCRQoL domain was dignity (nearly 85% of resident interview data were missing), which is more cognitively challenging, asking residents to reflect on the impact of the way they are treated by staff. The pattern was similar for the new items, with around 58–60% being able to tell us something about their pain, anxiety and low mood in a conversational or semistructured way but fewer than one-quarter responding to the structured interviews.
Thus, the majority of residents in our sample were not able to self-report and, of those who could, many were able to give only partial responses and not consistently across all domains. This is despite the fact that under half had a diagnosis of dementia and only around one-third lacked the capacity to consent. Inability to self-report is therefore likely to be due to the multiple and complex needs in this population, including cognitive impairment, physical frailty and sensory impairments.
Staff interviews were conducted for all residents and staff mostly felt able to respond to every domain. No data were missing for the new items, indicating that it is certainly feasible to gather this information from care workers. Family interviews were much harder to obtain, which explains the large amounts of missing data; the figures reflect low response rates rather than item non-response, as family members who were interviewed answered most questions. It is currently recommended that family interviews be completed face to face, which is one of the main barriers to including family members. Future research may want to explore the feasibility of conducting telephone or video call interviews to increase response rates.
Therefore, with the exception of the family interviews, which had low recruitment rates, the ASCOT-CH4 was a feasible method of data collection. Self-report alone, however, was not. The majority of residents were unable to complete a full, structured interview. Although staff proxy interviews were helpful in informing final ratings, agreement between staff and final ratings ranged from 44% (occupation) to 75% (pain). Where ratings diverged, researchers’ ratings were usually one outcome state lower than staff’s (e.g. from ‘ideal’ to ‘no needs’). Thus, the mixed-methods approach added value to a purely proxy approach by giving residents a voice in the final ratings of their own SCRQoL, even if only partially or through the evidence collected in observations. Without this, residents’ outcomes in these domains would sometimes have been overestimated by staff.
Psychometric properties of the new items
There is good evidence of the construct validity of the three new items, both current and expected ratings. EFA indicates that the three new items do not load onto a single factor with the eight ASCOT-CH4 items. This is expected, as the concepts captured by the new domains (i.e. pain, low mood and anxiety) relate to aspects of HRQoL, which is distinct from the ASCOT construct of SCRQoL (i.e. aspects of quality of life that may be improved by social care support).
The structural validity analysis using classical test theory (EFA) provides very tentative support for using the three new items as a subscale alongside ASCOT-CH4, employing the same methodology and approach to allow consistency of comparison. However, the item response theory findings, combined with the high uniqueness of the pain item in the EFA, indicate that pain may not fit into a measurement scale alongside low mood and anxiety. This is also supported by the inadequate internal consistency of the three items as measured by Cronbach’s alpha. As such, the evidence that the three items form a separate measurement scale is tentative. It may be better to conceptualise these items as separate ‘modules’ relating to the concepts of psychological health and pain, which may be added flexibly alongside ASCOT-CH4 (as a separate scale), with low mood and anxiety combined and pain standing alone.
Conclusions
The ASCOT-CH4 offers a robust methodology for measuring the social care outcomes of care home residents who cannot self-report. Previous work had already established excellent inter-rater reliability of the domains of SCRQoL. 19,20 We applied this methodology to the new domains of pain, anxiety and low mood and found that the mixed-methods approach could be applied to these domains. The three additional domains could be used alongside the ASCOT in the future to capture the impact of care homes on these important health-related constructs.
Limitations
The following limitations are noted:
-
We recruited fewer care homes and residents than planned. The sample size was adequate for item response theory analyses, but classical test theory analyses require larger sample sizes.
-
Family member proxy responses on behalf of residents had low response rates. An alternative methodology to face-to-face interview should be piloted in the future to see if it enables the inclusion of more family members.
Chapter 5 Care home quality and residents’ social care-related quality of life
The CQC regulates the quality of social care services in England. Quality ratings are based on an assessment of evidence gathered using five KLOEs: ‘safe’, ‘effective’, ‘caring’, ‘responsive’ and ‘well led’ (see Chapter 1). An overall rating, ranging from ‘outstanding’ to ‘inadequate’, is then awarded based on the evidence collected. Services are required to display their ratings on their premises. When choosing a home, CQC quality ratings and reports are a key source of information for members of the public, prospective residents and their families. However, there has been very little research to establish how well quality ratings reflect residents’ care-related quality of life.
Aims and objectives
The aim of this study was to extend the existing literature and assess the extent to which CQC quality ratings are consistent with indicators of residents’ SCRQoL using a large sample of care homes (see Chapter 1, Objective 2).
The research questions are as follows:
-
Are overall CQC care home quality ratings or particular KLOEs reflected in residents’ care outcomes as measured using ASCOT SCRQoL?
-
Which residents (by level of care needs) benefit more from services of higher-rated homes?
Data and methods
This section of the report draws on data collected from two studies. The first is the MOOCH project, funded by the NIHR School for Social Care (2015–19). 20,153,154 The second is this study, MiCareHQ, using data collected in WP2 (2017–20). Both studies used a cross-sectional design, in which researchers spent time in each care home, using the ASCOT-CH4 toolkit (herein referred to as CH4) to carry out observations and interviews with staff and (where possible) residents and family members.
The aim of the MOOCH study was to explore the relationship between care home residents’ SCRQoL and CQC ratings. The study received ethics approval from the National Social Care Research Ethics Committee (15-IEC08_0061). Research governance was sought and granted in two local authorities in South East England. Data collection took place between June and December 2017. Two hundred and ninety-three residents from 34 care homes (of which 20 were nursing homes) were recruited to the study. Following the approach described in Chapter 4, data were collected on residents’ SCRQoL using the CH4, with additional data about the residents collected from staff by questionnaire, including demographic information, health status and ability to complete ADLs. The study found that residents living in ‘good/outstanding’ homes have significantly better quality of life than those living in homes ‘requiring improvement’, after controlling for individual and home-level characteristics. 20 Unlike earlier research conducted under the former regulator,22 the result held for nursing and residential care homes. However, the sample size was moderate and all 34 participating care homes were based in only two local authorities in South East England.
The data collection for the MiCareHQ study was undertaken in WP2 and is described in detail in Chapter 4. Data from this study were merged with MOOCH data to create a larger sample of residents and homes.
Dependent variable
For the purpose of this analysis, care home residents’ outcomes are measured with the ASCOT SCRQoL using the mixed-methods approach (CH4). We assessed the impact of CQC quality rating on both the current (overall) SCRQoL and the eight ASCOT domains (see Table 1).
Independent variables: Care Quality Commission quality ratings
Our main variable of interest was CQC care home quality ratings (see Chapter 1), either overall or on one of the five KLOEs. The ratings considered were from the inspection report closest to the date of fieldwork; see Chapter 1 for a summary of frequency of inspections by CQC.
Independent variables: confounding factors
People’s quality of life is not affected only by the quality of care services; it is also highly correlated with individual and other characteristics. 261 Probably the most important confounders, as shown by previous studies, are the levels of physical and cognitive impairment, which are negatively correlated with quality of life. 22,262,263 If we did not control for impairment in the analysis, and if those with higher care need were more likely to reside in care homes of lower quality, then the association between CQC quality ratings and SCRQoL would be biased upwards. We included the following measures of impairment in the analysis: the count of ADLs that a resident manages independently and the level of cognitive impairment as measured by the MDSCPS. Previous research has shown that the expected SCRQoL score (i.e. the resident’s hypothetical SCRQoL if services were not in place to support the person and nobody else stepped in; see Chapter 1) is sometimes better at capturing impairment and social care needs than ADLs. 261 We therefore included the expected SCRQoL score instead of ADLs in different specifications. Other individual characteristics used as controls were gender, ethnicity and age group. We also included the source of funding (i.e. private or public) as a control in one specification to capture whether or not care homes are providing better care to their private-paying residents.
At the care home level, we included only contextual factors (i.e. factors not related to care quality improvement through the care homes) as independent variables: type of care home registration (i.e. residential or nursing), sector (i.e. private or voluntary) and care home capacity (i.e. number of beds). The quality improving aspects (e.g. staffing, training, pay, management style) should be captured by the quality ratings.
At the local area level, the main factors that could affect both care home quality and residents’ outcomes are the local authority social care policy and commissioning practices (which can affect care home revenue) and the income and wealth of the local population with care needs (which may determine the proportion of self-funding residents and, ultimately, care home revenue). Further local area factors could be the level of competition between social care providers and the availability and quality of the local workforce. To capture the potential effects of all these local area-level factors, we included dummy variables of the local authority district where the care home was located.
To ensure consistency of fieldworker ratings, anyone using the ASCOT mixed-methods toolkit is required to attend training by the ASCOT team. A hierarchy of evidence is used to resolve any conflicts in evidence and establish inter-rater reliability (see Appendix 1). For this study, the fieldworkers collecting the data were the researchers involved in the original ASCOT tool development, with excellent inter-rater reliability having already been established between them in previous studies. 19,20 Nonetheless, controls for the study (i.e. MOOCH or MiCareHQ) and fieldworker were included to capture potential subjective differences in which the residents were rated with respect to SCRQoL.
Statistical methods
The model used to assess the relationship between the SCRQoL of resident i in care home j (Qij) and the CQC quality rating of care home j (Rj) is:

Qij = β0 + β1Rj + β2Xij + ε1ij (3)
where Xij are individual, care home and local area characteristics of resident i in care home j, and ε1ij is the error term. Everything else being equal, the coefficient β1 captures the effect on SCRQoL from a change from a lower to a higher care home rating for a resident with characteristics at the mean of the sample.
Residents with different (initial) care needs might have a different capacity to benefit from higher-quality care services. To capture the differential benefit in SCRQoL by level of care need from higher- compared with lower-rated homes, we also estimated a model in which we interacted the resident’s care need level (Nij) with the quality rating of the care home (Rj):

Qij = β20 + β21Nij + β22Rj + β23(Rj × Nij) + β24Xij + ε2ij (4)
where Nij is a dummy variable equal to ‘0’ if the resident has a high care need (i.e. has an expected SCRQoL score in the bottom half of the sample) and equal to ‘1’ if the resident has a low care need (i.e. has an expected SCRQoL score in the top half of the sample). The coefficient β22 captures the effect on SCRQoL for a high-needs resident from an increase from a lower to a higher care home rating, and β23 captures the difference in the effect on SCRQoL between low-need and high-need residents from an increase from a lower to a higher care home rating. The effect on the SCRQoL of a low-needs resident from a change from a lower to a higher care home rating is given by β22 + β23.
Multilevel estimators can be advantageous in studies in which the observed individuals are ‘nested’ into groups. However, for this analysis there was a rather small number of residents per care home (an average of 8.8 and a minimum of 4). Moreover, a likelihood ratio test comparing the multilevel estimation results with ordinary least squares (OLS) results for Equation 4 was non-significant (p = 0.2701), showing that the OLS estimation results are not statistically different from those of the multilevel estimation. We therefore used OLS for the regression analysis.
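The OLS estimation of the interaction model (Equation 4) can be sketched as follows. The data here are simulated and the variable names hypothetical, so this illustrates the specification rather than reproducing the study’s estimates:

```python
import numpy as np

# Simulated, hypothetical data (not the study data set) to illustrate the
# OLS estimation of the rating x need-level interaction model (Equation 4).
rng = np.random.default_rng(0)
n = 5000
rating = rng.integers(0, 2, n)    # Rj: 0 = 'requires improvement', 1 = 'good/outstanding'
low_need = rng.integers(0, 2, n)  # Nij: 0 = high care need, 1 = low care need
adl = rng.normal(5, 2, n)         # an illustrative confounder (e.g. ADL count)

# Known simulated coefficients: rating effect 0.08 for high-need residents,
# interaction -0.04 (so 0.04 for low-need residents)
scrqol = (0.55 + 0.05 * low_need + 0.08 * rating - 0.04 * rating * low_need
          + 0.01 * adl + rng.normal(0, 0.1, n))

# Design matrix: constant, Nij, Rj, Rj x Nij, controls (Xij)
X = np.column_stack([np.ones(n), low_need, rating, rating * low_need, adl])
beta, *_ = np.linalg.lstsq(X, scrqol, rcond=None)
# beta[2] estimates beta22 (rating effect for high-need residents);
# beta[2] + beta[3] estimates beta22 + beta23 (effect for low-need residents)
```

With the dummy coding above, the coefficient on the rating term gives the effect for high-need residents (Nij = 0), and the sum with the interaction coefficient gives the effect for low-need residents, mirroring the interpretation in the text.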
Empirical findings
The study employed a cross-sectional analysis of older care home residents (i.e. those aged ≥ 50 years), with pooled data from the two data collections (i.e. MOOCH and MiCareHQ).
Descriptive statistics
The distribution of the current SCRQoL score for the 475 care home residents in the sample is negatively skewed (mean 0.75, standard deviation 0.17, range 0.24–1.00, skewness –0.53 and kurtosis 2.53), but it is similar to that of other ASCOT measures in samples of older and younger adult social care service users16,264 and is not greatly different from the normal distribution (Figure 9). The current (and individual domain) SCRQoL scores were therefore used as dependent variables in their untransformed form.
Of the 54 care homes included in the sample, 13 (24%) had a CQC rating of ‘requires improvement’, 36 (67%) were rated ‘good’ and five (9%) were rated ‘outstanding’. No care homes were rated ‘inadequate’ in our sample and only a small number of homes had an ‘outstanding’ rating (Figure 10). We therefore grouped ‘good’ and ‘outstanding’ homes together in the analysis.
The national distribution of CQC ratings among care homes for older people as of September 2018 (i.e. halfway between the two data collection points) was 2% ‘inadequate’, 21.5% ‘requires improvement’, 74% ‘good’ and 2.5% ‘outstanding’ (proportions were calculated from the CQC care directory data). Therefore, the proportion of ‘good/outstanding’ homes in our sample (74%) was very close to the national proportion (74.5%). For the quality ratings distribution in our sample of care homes to be even closer to the national distribution of care home quality ratings, we would have needed one home in our sample to be rated ‘inadequate’. However, homes rated ‘inadequate’ are very difficult to recruit because (1) there are very few of them and (2) they are usually in this category for only a short period (i.e. a care home rated ‘inadequate’ for one of the five key questions has to improve quickly and is reinspected within 6 months). 132 A home rated ‘inadequate’ was, in fact, recruited in the MOOCH study. However, by the time of data collection, that home had improved its rating to ‘requires improvement’ and was reported accordingly.
Figure 11 illustrates the equally weighted expected SCRQoL score, the SCRQoL score gain (i.e. the difference between the current and the expected SCRQoL score) and unmet need (i.e. the difference between the ideal state and the current SCRQoL score) in each ASCOT domain, by the level of residents’ care needs and the overall CQC care home rating. By definition, residents with high care needs (i.e. those in the bottom half of the expected SCRQoL scores) have a relatively lower expected SCRQoL score. The dignity domain captures ‘the negative and positive psychological impact of support and care on the service user’s personal sense of significance’13 (see Table 1) and, therefore, it does not make sense to ask the expected question (i.e. a hypothetical rating in the absence of support) for this domain. Instead, ASCOT uses a dummy code for the expected SCRQoL score for dignity, which assumes that all residents would have no (unmet) needs (i.e. a score of 0.67).
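The two derived quantities plotted in Figure 11 are simple differences of scores. A minimal sketch (illustrative only, using hypothetical score values; this is not the study's code):

```python
# Illustrative sketch of the two derived quantities in Figure 11.
# 'expected' is the counterfactual score in the absence of services;
# 'ideal' is the maximum attainable score for the domain or scale.

def scrqol_gain(current: float, expected: float) -> float:
    """Gain attributable to care: current score minus expected score."""
    return current - expected

def unmet_need(current: float, ideal: float) -> float:
    """Shortfall from the ideal state: ideal score minus current score."""
    return ideal - current

# Hypothetical resident: scores 0.75 with services, 0.20 without,
# against an ideal state of 1.00.
gain = scrqol_gain(0.75, 0.20)
need = unmet_need(0.75, 1.00)
```

The same arithmetic applies per domain or to the overall SCRQoL scale.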
We observed that high-need residents generally have a larger gain than low-need residents in all ASCOT quality-of-life domains, confirming findings from previous studies that service users with higher-level impairment have a greater capacity to benefit from social care services. 13,265 We also observed, mainly for high-need residents, a higher SCRQoL score gain (and a lower unmet need) in ‘good/outstanding’ than in ‘requires improvement’ homes. With the exception of personal cleanliness, high-need residents have a higher SCRQoL score gain of about 0.07–0.10 (or 10–24%) in ‘good/outstanding’ homes in every domain.
For low-need residents, we also find a relatively lower unmet need with respect to most ASCOT quality-of-life domains in ‘good/outstanding’ homes.
Table 15 presents a comparison of resident and care home characteristics by CQC care home quality rating. Most individual characteristics (e.g. age, gender and ethnicity, as well as impairment as measured by the number of ADLs that the resident manages independently) did not differ significantly between homes with different CQC quality ratings. However, residents in ‘good/outstanding’ homes had a significantly higher mean current SCRQoL score, a significantly higher proportion were self-funders (i.e. they paid for their care wholly from their own funds), they had significantly lower care needs on average (i.e. a higher mean expected SCRQoL score) and a significantly higher proportion were cognitively intact. In terms of care home characteristics, compared with ‘requires improvement’ homes, a significantly smaller proportion of ‘good/outstanding’ homes offered residential care services with nursing, and ‘good/outstanding’ homes had, on average, a significantly smaller capacity.
Variable | Requires improvement: n | Requires improvement: mean | Requires improvement: SD | Good/outstanding: n | Good/outstanding: mean | Good/outstanding: SD | Difference
---|---|---|---|---|---|---|---
Current SCRQoL score | 118 | 0.709 | 0.183 | 357 | 0.770 | 0.162 | 0.060*** |
Age group: 50–79 years | 115 | 0.252 | 0.436 | 337 | 0.234 | 0.424 | –0.018 |
Age group: 80–89 years | 115 | 0.496 | 0.502 | 337 | 0.448 | 0.498 | –0.048 |
Age group: ≥ 90 years | 115 | 0.252 | 0.436 | 337 | 0.318 | 0.466 | 0.065 |
Gender: female | 118 | 0.686 | 0.466 | 354 | 0.669 | 0.471 | –0.017 |
Gender: male | 118 | 0.314 | 0.466 | 354 | 0.331 | 0.471 | 0.017 |
Ethnicity: white | 116 | 0.983 | 0.131 | 341 | 0.979 | 0.142 | –0.003 |
Ethnicity: non-white | 116 | 0.017 | 0.131 | 341 | 0.021 | 0.142 | 0.003 |
Funding: wholly public | 96 | 0.479 | 0.502 | 307 | 0.371 | 0.484 | –0.108* |
Funding: part private, part public | 96 | 0.146 | 0.355 | 307 | 0.098 | 0.297 | –0.048 |
Funding: wholly private | 96 | 0.375 | 0.487 | 307 | 0.531 | 0.500 | 0.156*** |
Count of independent ADLs | 112 | 3.009 | 2.874 | 330 | 3.161 | 2.630 | 0.152 |
Expected SCRQoL score | 118 | 0.061 | 0.158 | 357 | 0.116 | 0.213 | 0.055*** |
MDSCPS: intact | 110 | 0.209 | 0.409 | 329 | 0.334 | 0.472 | 0.125** |
MDSCPS: borderline/mild impairment | 110 | 0.364 | 0.427 | 329 | 0.340 | 0.408 | –0.023
MDSCPS: moderate/moderately severe impairment | 110 | 0.218 | 0.363 | 329 | 0.185 | 0.338 | –0.033
MDSCPS: severe/very severe impairment | 110 | 0.209 | 0.335 | 329 | 0.140 | 0.255 | –0.069*
Registration: residential | 118 | 0.203 | 0.404 | 357 | 0.457 | 0.499 | 0.253*** |
Registration: nursing | 118 | 0.797 | 0.404 | 357 | 0.543 | 0.499 | –0.253*** |
Sector: private | 118 | 0.881 | 0.030 | 375 | 0.882 | 0.017 | 0.001 |
Sector: voluntary | 118 | 0.119 | 0.030 | 375 | 0.118 | 0.017 | –0.001 |
Capacity (i.e. beds) | 118 | 53.15 | 17.26 | 357 | 47.34 | 29.20 | –5.808** |
Multivariate regression analysis
The statistical analysis was conducted using Stata® version 16 (StataCorp LLC, College Station, TX, USA). To account for potentially correlated errors due to unobserved care home characteristics, we estimated fully robust standard errors by clustering at the care home level.
Table 16 includes OLS estimation results for five models of the current SCRQoL score. Model (a) includes the number of ADLs a resident manages independently as a measure of impairment, as in a previous analysis using the MOOCH data. 20 Consistent with that study, we found a positive relationship between residents’ quality of life and a ‘good/outstanding’ (vs. ‘requires improvement’) CQC rating, positive relationships with being female and with being able to perform more ADLs independently, and a negative relationship with cognitive impairment.
Variable | (a) | (b) | (c) | (d) | (e)
---|---|---|---|---|---
Age group: 80–89 years | 0.006 (0.018) | 0.005 (0.018) | 0.006 (0.019) | 0.000 (0.021) | 0.004 (0.019) |
Age group: ≥ 90 years | –0.025 (0.018) | –0.015 (0.018) | –0.020 (0.019) | –0.022 (0.020) | –0.020 (0.019) |
Gender: female | 0.058*** (0.017) | 0.050*** (0.017) | 0.055*** (0.017) | 0.053*** (0.018) | 0.056*** (0.017) |
Ethnicity: white | 0.025 (0.076) | 0.007 (0.076) | –0.000 (0.073) | 0.012 (0.089) | 0.006 (0.073) |
Funding: part private, part public | –0.046 (0.039) | ||||
Funding: wholly private | –0.019 (0.024) | ||||
Count of independent ADLs | 0.010*** (0.003) | ||||
Expected SCRQoL score | 0.235*** (0.045) | 0.585*** (0.105) | 0.548*** (0.118) | 0.521*** (0.148) | |
Expected SCRQoL score (squared) | –1.365*** (0.390) | –1.297*** (0.472) | –1.240*** (0.436) | ||
Expected SCRQoL score (cubed) | 1.058*** (0.357) | 1.033** (0.443) | 0.997** (0.379) | ||
MDSCPS: borderline/mild impairment | –0.058*** (0.018) | –0.032* (0.018) | –0.029 (0.018) | –0.036** (0.018) | –0.029* (0.017) |
MDSCPS: moderate/moderately severe impairment | –0.075*** (0.022) | –0.039* (0.023) | –0.024 (0.022) | –0.031 (0.024) | –0.024 (0.022)
MDSCPS: severe/very severe impairment | –0.138*** (0.032) | –0.104*** (0.034) | –0.077** (0.033) | –0.094** (0.036) | –0.075** (0.033) |
Registration: nursing | 0.028 (0.034) | 0.031 (0.030) | 0.032 (0.030) | 0.022 (0.031) | 0.031 (0.030) |
Sector: voluntary | 0.041 (0.041) | 0.046 (0.031) | 0.052 (0.033) | 0.049 (0.037) | 0.043 (0.034) |
Capacity (i.e. beds) (log) | –0.021 (0.027) | –0.016 (0.028) | –0.023 (0.027) | –0.022 (0.029) | –0.020 (0.027) |
Low care needs (i.e. top half of expected SCRQoL score) | 0.059* (0.033) | ||||
Overall CQC rating: good/outstanding | 0.090** (0.035) | 0.080** (0.036) | 0.064* (0.035) | 0.062* (0.034) | 0.091** (0.040) |
Overall CQC rating: good/outstanding × low care needs | –0.057 (0.034) | ||||
Local authority district FE | Yes | Yes | Yes | Yes | Yes |
Study FE | Yes | Yes | Yes | Yes | Yes |
Fieldworker FE | Yes | Yes | Yes | Yes | Yes |
Constant | 0.911*** (0.168) | 0.825*** (0.166) | 0.845*** (0.159) | 0.890*** (0.168) | 0.820*** (0.157) |
Observations | 419 | 431 | 431 | 386 | 431 |
R² | 0.343 | 0.360 | 0.383 | 0.375 | 0.389
Previous research has shown that the expected SCRQoL score is sometimes better at capturing impairment and social care needs than other measures. 261 Our estimation results confirm this. When the ADL count was replaced with the expected SCRQoL score [models (b) and (c)], the coefficients for the cognitive impairment measure (i.e. the MDSCPS) became smaller (and some became statistically non-significant), showing that the expected SCRQoL score captures aspects of cognitive impairment relatively well. Moreover, the coefficient of the overall CQC rating also became smaller (and significant only at the 10% level), which may be because the expected SCRQoL score covers aspects of care need (e.g. lack of occupation or social participation) that are captured by neither the ADL count nor the MDSCPS.
In a further model [model (d)], we tested whether the funding source for social care services has an effect on residents’ SCRQoL; in other words, whether self-funders (through higher fees) are buying better services and, therefore, have better care outcomes. We found no evidence of a significant relationship between a resident’s funding source and SCRQoL, suggesting that residents receive similar levels of care and attention regardless of the funding source and fees paid.
Finally, in model (e) we tested whether residents with different (initial) care needs might have a different capacity to benefit from higher-quality care services (see also Equation 4) and found that high-need residents had more capacity to benefit. A high-need resident with characteristics at the sample average would have a current SCRQoL score 0.091 (p = 0.028) higher (equivalent to 12% of the average quality of life in the sample) if their care home is rated ‘good’ or ‘outstanding’ than if it is rated ‘requires improvement’. On the other hand, a low-need resident would have only a 0.034 (p = 0.362, i.e. not statistically different from zero) higher current SCRQoL score if her or his care home is rated ‘good/outstanding’ compared with if it is rated ‘requires improvement’.
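The two marginal effects quoted above follow directly from the model (e) coefficients in Table 16 (‘good/outstanding’ main effect 0.091; rating × low-need interaction –0.057):

```python
# Arithmetic behind the two marginal effects in model (e), using the
# coefficients reported in Table 16. The effect for low-need residents is
# the 'good/outstanding' main effect plus the rating-by-low-need interaction.

beta_rating = 0.091        # overall CQC rating: good/outstanding
beta_interaction = -0.057  # rating x low care needs

effect_high_need = beta_rating                    # high-need residents
effect_low_need = beta_rating + beta_interaction  # low-need residents

print(round(effect_high_need, 3), round(effect_low_need, 3))
```

The low-need effect of 0.034 is the one reported as not statistically different from zero.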
Table 17 presents the summary of marginal effects of CQC quality ratings (both overall and for the five KLOEs) on current SCRQoL and (weighted) ASCOT domain scores by level of residents’ care needs for models based on Equation 4; the coefficient in row 1, column 1, is from Table 16, model (e).
Variable | Current SCRQoL | Basic: personal cleanliness | Basic: food and drink | Basic: accommodation | Basic: safe | Higher order: control | Higher order: social | Higher order: occupation | Higher order: dignity
---|---|---|---|---|---|---|---|---|---
Overall (G/O vs. RI) × high need | 0.091** (0.040) | –0.015 (0.034) | 0.029 (0.023) | 0.071 (0.045) | 0.072 (0.047) | 0.119*** (0.043) | 0.094*** (0.035) | 0.096* (0.053) | –0.019 (0.047) |
Overall (G/O vs. RI) × low need | 0.034 (0.036) | –0.033 (0.026) | 0.000 (0.027) | 0.055* (0.029) | –0.013 (0.051) | 0.085* (0.050) | 0.063** (0.031) | 0.052 (0.053) | –0.043 (0.046) |
Safe (G/O vs. RI) × high need | 0.020 (0.045) | 0.005 (0.027) | 0.025 (0.021) | 0.037 (0.041) | –0.023 (0.055) | 0.004 (0.056) | 0.046 (0.037) | 0.045 (0.054) | –0.042 (0.040) |
Safe (G/O vs. RI) × low need | 0.004 (0.040) | 0.005 (0.026) | –0.011 (0.030) | 0.036 (0.026) | –0.071 (0.058) | 0.024 (0.057) | 0.048 (0.039) | –0.016 (0.060) | 0.005 (0.038) |
Effective (G/O vs. RI) × high need | 0.049 (0.046) | 0.003 (0.033) | –0.001 (0.030) | –0.022 (0.022) | 0.002 (0.057) | 0.087* (0.050) | 0.086** (0.035) | 0.064 (0.059) | 0.023 (0.034) |
Effective (G/O vs. RI) × low need | –0.009 (0.042) | –0.033 (0.025) | –0.042 (0.030) | –0.017 (0.023) | –0.100 (0.060) | 0.094* (0.048) | 0.008 (0.035) | 0.059 (0.051) | –0.012 (0.036) |
Caring (G/O vs. RI) × high need | 0.087 (0.055) | 0.062 (0.042) | 0.016 (0.035) | –0.002 (0.039) | –0.009 (0.080) | 0.134*** (0.045) | 0.094* (0.048) | 0.055 (0.094) | 0.077** (0.033) |
Caring (G/O vs. RI) × low need | 0.105** (0.044) | –0.019 (0.034) | 0.039 (0.037) | 0.004 (0.036) | –0.020 (0.066) | 0.174*** (0.057) | 0.081** (0.038) | 0.177** (0.071) | 0.082*** (0.028) |
Responsive (G/O vs. RI) × high need | 0.068* (0.036) | –0.004 (0.036) | 0.007 (0.024) | 0.019 (0.028) | 0.055 (0.044) | 0.100** (0.045) | 0.081** (0.031) | 0.029 (0.058) | 0.047 (0.030)
Responsive (G/O vs. RI) × low need | 0.006 (0.039) | –0.051 (0.031) | –0.006 (0.023) | 0.000 (0.019) | –0.023 (0.042) | 0.038 (0.044) | 0.007 (0.033) | 0.037 (0.051) | 0.025 (0.027) |
Well led (G/O vs. RI) × high need | 0.097*** (0.033) | 0.008 (0.030) | 0.041** (0.020) | 0.052 (0.033) | 0.104** (0.045) | 0.114*** (0.038) | 0.090*** (0.029) | 0.052 (0.048) | 0.013 (0.031) |
Well led (G/O vs. RI) × low need | 0.044 (0.034) | 0.013 (0.030) | 0.045* (0.026) | 0.034* (0.020) | 0.013 (0.042) | 0.052 (0.041) | 0.071** (0.027) | 0.022 (0.054) | –0.034 (0.030) |
We found no statistically significant relationship between the overall CQC rating and the ASCOT basic quality-of-life domains (i.e. personal cleanliness and comfort, food and drink, accommodation cleanliness and comfort, and personal safety), indicating that all care homes managed to meet residents’ basic quality-of-life needs well. However, differences between ‘good/outstanding’ and ‘requires improvement’ homes were evident in outcomes on the ASCOT higher-order quality-of-life domains, in particular control over daily life and social participation and involvement. Both high- and low-need residents had higher quality-of-life scores with respect to control over daily life and social participation and involvement in better-rated homes but, once again, high-need residents benefited more.
In terms of individual CQC KLOEs, ‘caring’ and ‘well led’ were most closely related to ASCOT quality-of-life scores. A ‘good/outstanding’ rating for ‘caring’ (i.e. for care homes in which staff involve and treat residents with compassion, kindness, dignity and respect) was most strongly related to ASCOT higher-order domain scores. Compared with high-need residents, the relationship was relatively stronger for low-need residents on all four higher-order domains (i.e. control over daily life, social participation and involvement, occupation, and dignity). This is likely to be because residents with lower needs are more physically and cognitively able to benefit from increased choice (and to have those choices respected), meaningful occupation and social activities. Conversely, high-need residents benefited significantly more from care homes with better leadership, management and governance (i.e. a ‘good/outstanding’ rating for ‘well led’). Good management is important for generating better care outcomes in those with the highest care needs. This may be because well-managed homes have more effective working environments and better skill development opportunities for care staff, which in turn are associated with better-quality care (see Chapter 6). ‘Well led’ was the only KLOE related not only to the higher-order ASCOT domains, but also to basic domains, in particular food and drink and personal safety.
Sensitivity analysis
As there were no ‘inadequate’-rated care homes in the sample, an important question is how this might have affected the research findings. As noted above, the national distribution of care home quality ratings implies that roughly one of the 54 sample care homes should have been rated ‘inadequate’ rather than ‘requires improvement’; the potential selection bias is therefore small. Moreover, we performed a sensitivity analysis suggesting that the positive association between CQC quality ratings and residents’ quality of life found in this study may be underestimated. Specifically, we excluded the homes with the highest quality rating (i.e. ‘outstanding’) from the estimation of model (e) in Table 16, whereupon the coefficient for CQC quality rating became smaller (0.081; p = 0.066). This means that residents of the ‘outstanding’ homes in the sample had, on average and all else being equal, higher SCRQoL than residents of ‘good’ homes. Assuming that the opposite holds for residents of ‘inadequate’ homes (i.e. all else being equal, they have lower SCRQoL than residents of ‘requires improvement’ homes), the true relationship between CQC quality rating and residents’ SCRQoL would be stronger (i.e. a greater coefficient).
Discussion
In line with previous research,20 the analysis presented here found that care home residents’ SCRQoL was significantly associated with CQC quality ratings, even after controlling for other characteristics. By combining the data from the two studies (i.e. MOOCH and MiCareHQ), we were also able to extend the previous research by examining whether this relationship varied according to residents’ funding source and needs, and by each KLOE.
The main individual factors significantly associated with higher SCRQoL scores were being female and having lower levels of physical or cognitive impairment. There was, however, no evidence that self-funders (who pay higher fees) had significantly better SCRQoL. Consistent with this, there was clearly an ‘ethical wall’ between funding arrangements and front-line care in homes: when we collected data about residents’ funding arrangements, this information had to be provided by office or managerial staff, as front-line care workers did not know how each resident was funded or, indeed, how much they were paying. Although this is reassuring, and we would not want to see differences in social care outcomes related to care home residents’ wealth, it raises the question of value for money for self-funders and the justification for them paying more than their publicly funded neighbours for the same quality of ‘service’. 266
A key finding from the multivariate analysis was that the quality rating of the home had a significant impact only on the overall SCRQoL of high-need residents, that is, those with the greatest capacity to benefit from the service. Unpicking this further, we found that the relationship between CQC rating and ASCOT scores varied by SCRQoL domain. All homes were meeting residents’ needs in the basic domains (safety, accommodation, personal cleanliness, and food and drink), but residents in ‘good/outstanding’ homes had better quality of life with respect to the higher-order ASCOT domains, particularly control over daily life and social participation and involvement. Although this was true of all residents, high-need residents benefited the most.
Previous research has found that the most dependent care home residents require skilled staff support to be able to make choices, participate in activities and engage in social interactions. 21 It is therefore likely that the mechanism through which better-quality homes improve the SCRQoL of high-need residents is a care culture that gives staff the skills and time to provide this support and that prioritises residents’ experiences and outcomes. An examination of individual KLOEs revealed that a home being ‘caring’ and ‘well led’ had the biggest impact on residents’ SCRQoL, particularly in the higher-order domains. Closer examination of the prompts for the KLOEs, and reflection on which aspects of quality they measure from a Donabedian perspective,9 can help us unpick why this might be.
Compared with ‘caring’, the KLOEs ‘safe’ and ‘effective’ have a relatively greater focus on adherence to care processes and standards, legislation and monitoring. 131 For example, many prompts relate to what Donabedian would call structural quality (e.g. environment, equipment, technology and staff ratios) or process quality (e.g. staff training, care processes and records, and adherence to best practice guidelines and legislation). Where prompts do refer to residents’ experiences or outcomes, they tend to focus on the basic ASCOT domains, such as food and drink (KLOE E3) and accommodation (KLOE E6).
The KLOE for ‘responsive’ is shorter than those for ‘effective’ and ‘safe’ (only three sections for ‘responsive’, compared with six for ‘safe’ and seven for ‘effective’) and only one of its three sections closely relates to the ASCOT (R1: How do people receive personalised care that is responsive to their needs?). Even here, many of the prompts relate to the care planning process, with only two (R1.3 and R1.4) focused on aspects of social care-related quality of life, such as occupation and social relationships. The ‘caring’ KLOE, however, refers specifically to users’ feelings, experiences and outcomes and puts users at the centre of their care:
By caring, we mean that the service involves and treats people with compassion, kindness, dignity and respect.
CQC. 131
Prompts relate directly to the ASCOT domains of dignity, control over daily life, social participation and occupation, with a clear link to resident experience or outcomes:
Can people be as independent as they want to be?
C3.5, CQC131
The ‘well led’ KLOE defines well led as:
the leadership, management and governance of the organisation assures the delivery of high-quality and person-centred care, supports learning and innovation, and promotes an open and fair culture.
CQC. 131
This overlaps considerably with the principles of the ASCOT and in particular the ‘ideal’ outcome state and higher-order domains. Closer reading of the individual prompts indicates that well-led services can evidence continuous quality improvement, and a focus on outstanding practice, transparency and involvement. Systems are in place to support this at the organisational level.
Conclusion
Better-quality care homes, and particularly those highly rated as ‘well led’, significantly improve the quality of life of high-need residents. On the other hand, the quality of life of low-need residents is mostly improved by homes highly rated with respect to ‘caring’. Although care homes generally meet the needs of residents in basic aspects of care, ‘outstanding’ and ‘good’ homes make a meaningful difference to those aspects of residents’ lives that add quality to their days: feeling in control, being engaged in activities, being socially fulfilled and being treated with dignity. Examination of the KLOEs revealed that a possible mechanism for this is strong leadership, a focus on continued quality improvement and a culture of care that supports staff to be innovative, giving them time to listen to residents and meet their needs with compassion. This is revisited in Chapter 6, which looks specifically at the relationship between workforce and job characteristics and care home quality.
Limitations
Although this study increased the number of local authorities included in the analysis, it remained restricted to South East England, meaning that the findings may not be nationally representative. This is discussed further in Chapter 8.
Chapter 6 Care home quality and the care workforce
Staff play a vital role in the delivery of social care, acting with the service user in the co-production of care. Although the quality of a care home will vary for many reasons, staff are likely to be an important factor, influencing both the quality of care (i.e. through the technical aspects of care delivery) and the quality of life of those they are supporting. Front-line care workers are low paid, often at minimum wage. In addition, the social care sector has consistently high levels of staff turnover and job vacancies. However, there is little evidence of the effect that staffing conditions, such as pay and training, have on the quality of care provided in care homes in England.
Aims and objectives
The aim of this study in WP3 was to assess the relationship between aspects of the staffing of care homes and their quality (see Chapter 1, Aims and objectives of the study, objective 3).
The research questions were:
-
To what extent do the wages and training of staff influence the quality of English care homes?
-
To what extent do the level of staff turnover and job vacancies influence the quality of English care homes?
Data and methods
The research questions were addressed using quantitative analysis of a large, national data set of English care homes and their staffing for the years 2016 to 2018. In particular, for information on care home staffing, we used the Adult Social Care Workforce Data Set (ASC-WDS) provided by Skills for Care, a database of staffing at employee and provider level, and the main source of social care workforce intelligence for England. The ASC-WDS included data on more than half (54.3%) of social care providers registered with the CQC in 2018. 136 We used the provider-level database matched to quality indicators (see below), for October 2016–18, to construct a panel of care home observations over time. For each wave, we included only care homes that had updated their information in the previous 6 months.
Sample
We created an unbalanced panel of 5555 independent sector care homes observed over the 3 years, with a total of 12,052 observations. In total, 2540 of the homes were observed across all 3 years. Each year contributed approximately 4000 care home observations, representing more than one-third of the more than 11,000 care homes for older people registered with the CQC at the time. The representativeness of the sample was assessed by comparing it with the national database of care home providers across the 3 years. Nursing homes represented 38.3% of the sample, the voluntary sector represented 11.8% and the average size of a care home was about 41 beds. These figures were in line with national ones, and we proceeded on the basis that the sample was representative of all care homes in England.
Dependent variable: quality
We measured quality using the CQC’s quality rating system, which assesses people’s experiences of care and is centred on an inspection of the care home (see Chapter 1). Because of the small number of homes that had either the lowest (i.e. ‘inadequate’) or the highest rating (i.e. ‘outstanding’), we used a binary variable indicating high or low quality (i.e. 0 if a home was rated ‘inadequate’ or ‘requires improvement’ and 1 if a home was rated ‘good’ or ‘outstanding’).
Independent variables: staffing measures
Wage
Average hourly wages of care workers at provider level were calculated using wage data from the employee-level database of the ASC-WDS. When hourly wages were not in the data, they were calculated for full-time staff from annual or weekly salaries, where available, assuming 52 working weeks per year and 37 hours of work per week. We treated the provider-level average hourly wage data as missing if the average wage reported was below the National Minimum Wage, if the average wage was in the top 1% of wages for the given year or if a care home reported wages for < 90% of staff. The average hourly care worker wage for each care home was deflated by the Consumer Price Index to October 2018 prices, and we used the natural logarithm of this value in the analysis to reduce skew in the wage data.
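The derivation and cleaning rules described above can be sketched as follows (illustrative only, not the study's code; the £7.83 wage floor is an assumed example value, the April 2018 National Living Wage, rather than a figure taken from the study):

```python
# Illustrative sketch of the wage derivation and cleaning rules described
# above. The default wage floor (7.83) is an ASSUMED example value.

WEEKS_PER_YEAR = 52
HOURS_PER_WEEK = 37

def hourly_wage(hourly=None, weekly=None, annual=None):
    """Return an hourly wage, deriving it from weekly or annual pay if needed."""
    if hourly is not None:
        return hourly
    if weekly is not None:
        return weekly / HOURS_PER_WEEK
    if annual is not None:
        return annual / (WEEKS_PER_YEAR * HOURS_PER_WEEK)
    return None  # no usable wage information

def clean_average_wage(avg_wage, top_1pct_cutoff, coverage, wage_floor=7.83):
    """Apply the exclusion rules: below the wage floor, in the top 1% of
    wages for the year, or wages reported for < 90% of staff -> missing."""
    if avg_wage is None or avg_wage < wage_floor:
        return None
    if avg_wage >= top_1pct_cutoff:
        return None
    if coverage < 0.90:
        return None
    return avg_wage
```

Homes failing any rule simply contribute a missing wage, to be handled later by the multiple imputation step.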
Training
We included two measures of training calculated at care home level from data in the employee-level database: the proportion of staff who received training in person-centred care and/or dignity and the proportion of staff who received dementia training.
Job vacancies/staff turnover
We measured job vacancies as the ratio of the number of current vacancies to the number of current staff. Staff turnover was measured as the proportion of staff who had left during the previous 12 months.
Independent variables: covariates
We included in the model a number of confounding variables at care home and local area level (postcode district, i.e. the first half of a UK postcode) that are likely to influence quality and wages. We list these measures and their data sources in Table 18. We further included training incidence, job vacancies and staff turnover as covariates in the first-stage wage equation.
Variable | Data source |
---|---|
Care home registration (residential or nursing) | ASC-WDS267 |
Sector (private or voluntary) | ASC-WDS267 |
Size (number of beds, log) | ASC-WDS267 |
Supports clients living with dementia (yes/no) | ASC-WDS267 |
Staff to service user ratio | ASC-WDS267 |
Proportion of female staff (%) | ASC-WDS267 |
Jobseeker’s Allowance: female claimants (%) | Office for National Statistics268 |
Attendance Allowance: older people claimants (%) | Office for National Statistics268 |
Pension Credit: older people claimants (%) | Office for National Statistics268 |
Average house price (£, log) | Her Majesty’s Land Registry269 |
Care home competition (distance-weighted Herfindahl–Hirschman Index) | Own calculations from CQC national database of registered health and social care providers270 |
Region | ASC-WDS267 |
Year | ASC-WDS267 |
Statistical methods: multiple imputation
The staffing data contain missing information (Table 19). Given the richness of the ASC-WDS, we employed multiple imputation techniques, assuming that the data are missing at random (i.e. that missingness depends only on observed values, not on the missing values themselves). We generated 50 imputations using chained imputation methods, with an ordered logit specification for the overall quality rating and predictive mean matching specifications for the staffing measures. We assessed the size of the Monte Carlo error generated by the multiple imputation process and found it small enough that the estimated effects of the staffing measures on quality ratings would be unlikely to change if the process were repeated. 271
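As a much-simplified illustration of predictive mean matching (the study used Stata's chained multiple imputation with 50 imputations; this one-predictor sketch and its function name are ours, not the study's code):

```python
import random

# Greatly simplified, single-variable sketch of predictive mean matching:
# fit a regression on complete cases, then replace each missing value with
# the OBSERVED value of a donor whose predicted value is among the k closest
# to the predicted value for the missing case.

def pmm_impute(y, x, k=5, seed=1):
    random.seed(seed)
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    # Simple OLS of y on x using complete cases only
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    sxx = sum((xi - mx) ** 2 for xi, _ in obs)
    b = sum((xi - mx) * (yi - my) for xi, yi in obs) / sxx
    a = my - b * mx
    preds = [(a + b * xi, yi) for xi, yi in obs]  # donor predictions
    out = []
    for xi, yi in zip(x, y):
        if yi is not None:
            out.append(yi)
        else:
            target = a + b * xi
            donors = sorted(preds, key=lambda p: abs(p[0] - target))[:k]
            out.append(random.choice(donors)[1])  # draw one donor's value
    return out
```

Because imputed values are drawn from observed donors, they stay within the observed range, which suits bounded measures such as proportions of trained staff.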
Variable | Observations, n (%) | Care homes, n (%) | Observations per care home | Mean | Range | SD: overall | SD: between | SD: within
---|---|---|---|---|---|---|---|---
Staffing measures | ||||||||
Mean hourly wage (£) | 7705 (63.9) | 3835 (69.0) | 2.009 | 7.853 | 5.512–14.234 | 0.647 | 0.618 | 0.198 |
Dementia-trained staff proportion | 8810 (73.1) | 4381 (78.9) | 2.011 | 0.281 | 0–1 | 0.332 | 0.315 | 0.091 |
Dignity-/person-centred care trained staff proportion | 8810 (73.1) | 4381 (78.9) | 2.011 | 0.127 | 0–1 | 0.235 | 0.220 | 0.073 |
Staff turnover rate | 9025 (74.9) | 4348 (78.3) | 2.076 | 0.289 | 0–1.47 | 0.281 | 0.246 | 0.127 |
Job vacancy rate | 8480 (70.4) | 4129 (74.3) | 2.054 | 0.041 | 0–0.324 | 0.065 | 0.060 | 0.029 |
Care home characteristics | ||||||||
Quality | 11,003 (91.3) | 5298 (95.4) | 2.077 | 0.755 | 0–1 | 0.430 | 0.382 | 0.243 |
Sector: voluntary | 12,053 (100) | 5555 (100) | 2.170 | 0.117 | 0–1 | 0.322 | 0.318 | 0.019 |
Registration: nursing | 12,052 (100) | 5555 (100) | 2.170 | 0.383 | 0–1 | 0.486 | 0.484 | 0.039 |
Clients: living with dementia | 12,052 (100) | 5555 (100) | 2.170 | 0.701 | 0–1 | 0.458 | 0.450 | 0.066 |
Size (beds) | 12,052 (100) | 5555 (100) | 2.170 | 40.62 | 1–236 | 24.18 | 23.80 | 2.947 |
Competition (HHI) | 12,041 (99.9) | 5550 (99.9) | 2.170 | 0.056 | 0.007–1 | 0.073 | 0.072 | 0.004 |
Direct care worker to service user ratio | 10,901 (90.4) | 5097 (91.8) | 2.139 | 0.861 | 0.421–2 | 0.287 | 0.286 | 0.088 |
Female employee proportion | 8790 (72.9) | 4372 (78.7) | 2.011 | 0.866 | 0–1 | 0.099 | 0.102 | 0.025 |
Local area characteristics (postcode district) | ||||||||
Female Jobseekers’ Allowance claimant percentage | 12,052 (100) | 5555 (100) | 2.170 | 0.757 | 0–4.273 | 0.567 | 0.545 | 0.177 |
Average house price (£) | 12,052 (100) | 5555 (100) | 2.170 | 238,865 | 50,576–1,613,813 | 124,403 | 122,244 | 10,823 |
Attendance Allowance uptake percentage | 12,052 (100) | 5555 (100) | 2.170 | 12.35 | 6.496–22.31 | 2.140 | 2.122 | 0.317 |
Pension Credit uptake percentage | 12,052 (100) | 5555 (100) | 2.170 | 15.21 | 3.178–64.47 | 7.277 | 7.286 | 0.961 |
Instrument | ||||||||
Proportion of employees below future minimum wage | 6878 (57.1) | 3720 (67.0) | 1.849 | 0.508 | 0–1 | 0.246 | 0.238 | 0.079 |
Statistical methods: instrument for wage
It is likely that the staffing measures are endogenous in a model of care home quality. 150 To assess the true effect of an endogenous variable, we required an instrument that is correlated with the endogenous variable (the staffing measures) but is not directly related to the dependent variable (quality). Following previous studies, we used exogenous increases in the minimum wage floor as an instrument for wage, specifically the proportion of workers employed by provider j who were paid less than the future National Living Wage rate. 272
We assessed the endogeneity of wage and the exogeneity of the instrument using relevant variable addition tests. 273 We also assessed the strength of the instrument using weak identification and overidentification tests, the latter when including a second instrument, a spatial lag of Pension Credit uptake, in a complete-case regression of quality ratings.
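As a minimal, hypothetical sketch of how such an instrument is constructed, the provider-level variable is simply the share of workers paid below the announced future wage floor. The wage-floor value, function name and example data below are illustrative and are not taken from the study's data set:

```python
# Hypothetical construction of the wage-floor instrument: the share of a
# provider's care workers currently paid below the *future* National Living
# Wage rate. The rate and data here are illustrative only.
FUTURE_NLW = 7.83  # assumed future hourly wage floor (illustrative value)

def instrument_share(hourly_wages):
    """Proportion of a provider's workers paid below the future wage floor."""
    if not hourly_wages:
        return None  # no wage records: treat as missing
    below = sum(1 for w in hourly_wages if w < FUTURE_NLW)
    return below / len(hourly_wages)

# A provider with four workers, two of whom sit below the future floor
print(instrument_share([7.50, 7.60, 7.90, 8.20]))  # 0.5
```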
Statistical methods: model
Given the panel nature of the data and the use of instrumental variables, our analysis used a linear probability model of quality ratings:273

R_jt = βS_jt + γX_jt + ε_jt (Equation 5)

where the observed level of quality (i.e. the rating), R, for care home j in time t depends on the vector of staffing measures of interest, S, other exogenous care home characteristics, local demand and supply factors, X, and a random error, ε. We first estimated this model using pooled observations over time and assuming that the staffing measures (i.e. training incidence, turnover, vacancies and wages) are exogenous. We then used the panel nature of the data to estimate linear probability models of quality using both random-effects and fixed-effects specifications, assuming that the random-error term ε_jt is composed of both a home-specific error term, h_j, that does not vary over time, and a time-variant error term, u_jt. Stata version 16 was used for the analysis, specifically using the reg, ivreg2 and xtivreg2 commands, and we clustered standard errors by care home to account for heteroscedasticity.
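The logic of the two-stage approach can be sketched on synthetic data. The study itself used Stata's ivreg2/xtivreg2; all variable names and coefficients below are invented for illustration. When an unobserved factor drives both the wage and the quality rating, naive OLS is biased, while instrumenting recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)  # instrument (e.g. share paid below the future wage floor)
u = rng.normal(size=n)  # unobserved factor affecting both wage and quality
s = 0.8 * z + 0.5 * u + rng.normal(size=n)            # endogenous regressor (log wage)
x = rng.normal(size=n)                                # exogenous control
r = 0.7 * s + 0.3 * x - 0.5 * u + rng.normal(size=n)  # quality outcome

def ols(y, X):
    """OLS point estimates via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_exog = np.column_stack([np.ones(n), x])
Z_mat = np.column_stack([X_exog, z])
# First stage: project the endogenous regressor on instrument + exogenous controls
s_hat = Z_mat @ ols(s, Z_mat)
# Second stage: replace the endogenous regressor with its fitted values
beta_iv = ols(r, np.column_stack([s_hat, X_exog]))[0]
beta_naive = ols(r, np.column_stack([s, X_exog]))[0]
print(beta_iv, beta_naive)  # IV is close to the true 0.7; naive OLS is biased downward
```

This sketch reproduces only the point estimates; the study additionally clustered standard errors by care home, which requires the full two-stage variance correction that ivreg2 applies.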
Statistical methods: robustness checks
We further assessed the results by estimating random-effects probit specifications, given the binary nature of the dependent variable, and by running the regression models on the quality ratings for the five KLOEs.
Empirical findings
The study employed a longitudinal analysis of a large data set of English care homes and data on their staff and employment conditions, including wages, to analyse the effect of staffing on quality.
Descriptive statistics
Descriptive statistics for the complete cases are presented in Table 19, along with information on the extent of missing data. About three-quarters of care homes in our sample had a quality rating of ‘good’ or ‘outstanding’. Voluntary sector care homes represented 12% of care home observations, whereas nursing homes represented 38%. Seventy per cent of the observations were for homes providing services to those living with dementia, and the average care home size was around 41 beds.
Figure 12 illustrates that voluntary (i.e. not-for-profit) care homes have better quality ratings than private (i.e. for-profit) ones and that care homes without nursing were also slightly better rated than care homes with nursing.
In terms of staffing measures, the average hourly wage for care workers in the sample of care homes was £7.85; 28.1% of staff had received dementia training; 12.7% had received training for person-centred care/dignity; and the average annual staff turnover and current job vacancy rates were 28.9% and 4.1%, respectively (see Table 19).
Figure 13 shows that care worker wages are slightly higher in care homes rated ‘good’ or ‘outstanding’ than in homes rated ‘inadequate’ or ‘requires improvement’, and that they are also higher in voluntary-sector care homes than in private-sector care homes. Figure 14 shows that the care worker average hourly wage (left side) had a positive skew and was leptokurtic. Taking the natural logarithm of wage reduced both the skewness and the influence of outliers in the distribution.
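The effect of the log transform can be illustrated on synthetic, log-normally distributed wages (the distribution here is invented, centred near the sample mean of £7.85, and is not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic, positively skewed wage distribution centred near £7.85
wages = np.exp(rng.normal(np.log(7.85), 0.15, size=5000))

def skewness(x):
    """Sample skewness: the third standardised moment."""
    x = np.asarray(x)
    return np.mean(((x - x.mean()) / x.std()) ** 3)

# The raw wages show clear positive skew; the log transform removes most of it
print(skewness(wages), skewness(np.log(wages)))
```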
Multivariate regression analysis
Table 20 presents the results from estimating the random-effects linear probability models of care home quality ratings based on Equation 5. A Mundlak test, whereby the means of the time-varying controls are included in a random-effects model, showed that the time-invariant unobservables (h_j) are not related to the regressors (joint F-test statistic of the means of the time-varying controls of 15.84; ρ = 0.359) and that the random-effects model with instrumental variables (REIV) estimation is most appropriate. 274
Variables | (1) Quality rating (RE) | (2) Quality rating (REIV) | (3) Quality rating (RE) | (4) Quality rating (RE) | (5) Quality rating (RE) | (6) Quality rating (RE) | (7) Quality rating (REIV) |
---|---|---|---|---|---|---|---|
Staffing measures | |||||||
Mean wage (log) | 0.163** (0.079) | 0.719*** (0.126) | – | – | – | – | 0.709*** (0.125) |
Dementia trained (%) | – | – | 0.090*** (0.016) | – | – | – | 0.098*** (0.018) |
Person-centred care or dignity trained (%) | – | – | – | 0.064*** (0.022) | – | – | 0.005 (0.025) |
Staff turnover rate | – | – | – | – | –0.039** (0.018) | – | –0.022 (0.020) |
Job vacancy rate | – | – | – | – | – | –0.315*** (0.083) | –0.317*** (0.089) |
Care home controls | |||||||
Registration: nursing home | –0.049*** (0.011) | –0.046*** (0.011) | –0.042*** (0.011) | –0.048*** (0.011) | –0.048*** (0.011) | –0.048*** (0.011) | –0.036*** (0.011) |
Sector: voluntary | 0.074*** (0.015) | 0.038** (0.016) | 0.091*** (0.014) | 0.088*** (0.014) | 0.085*** (0.014) | 0.087*** (0.014) | 0.048*** (0.016) |
Clients: living with dementia | –0.050*** (0.010) | –0.044*** (0.010) | –0.059*** (0.010) | –0.053*** (0.010) | –0.050*** (0.010) | –0.053*** (0.010) | –0.054*** (0.010) |
Care home competition (HHI) | 0.230*** (0.063) | 0.220*** (0.063) | 0.243*** (0.063) | 0.238*** (0.063) | 0.235*** (0.063) | 0.231*** (0.063) | 0.229*** (0.063) |
Size (beds, log) | –0.036*** (0.008) | –0.036*** (0.008) | –0.035*** (0.008) | –0.035*** (0.008) | –0.036*** (0.008) | –0.034*** (0.008) | –0.032*** (0.008) |
Staff-to-resident ratio | 0.037 (0.068) | 0.040 (0.069) | 0.051 (0.068) | 0.039 (0.068) | 0.031 (0.068) | 0.025 (0.069) | 0.041 (0.068) |
Staff-to-resident ratio (squared) | –0.023 (0.031) | –0.030 (0.031) | –0.022 (0.030) | –0.021 (0.030) | –0.019 (0.031) | –0.017 (0.031) | –0.026 (0.031) |
Female staff (%) | 0.098 (0.061) | 0.109* (0.062) | 0.078 (0.061) | 0.094 (0.061) | 0.091 (0.061) | 0.081 (0.062) | 0.075 (0.062) |
Local area controls | |||||||
Attendance Allowance (%) | 0.003 (0.004) | 0.004 (0.004) | 0.003 (0.004) | 0.003 (0.004) | 0.003 (0.004) | 0.002 (0.004) | 0.003 (0.004) |
Pension Credit (%) | –0.002 (0.001) | –0.002* (0.001) | –0.001 (0.001) | –0.002 (0.001) | –0.002 (0.001) | –0.001 (0.001) | –0.002 (0.001) |
Jobseeker’s Allowance (%) | 0.004 (0.013) | 0.007 (0.013) | 0.004 (0.013) | 0.004 (0.013) | 0.003 (0.013) | 0.002 (0.013) | 0.007 (0.013) |
Average house price (log) | 0.042* (0.021) | 0.017 (0.022) | 0.050** (0.021) | 0.051** (0.021) | 0.049** (0.021) | 0.054** (0.021) | 0.023 (0.022) |
Year controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Region controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Constant | –0.071 (0.318) | –0.907*** (0.352) | 0.132 (0.295) | 0.133 (0.297) | 0.184 (0.296) | 0.149 (0.296) | –0.963*** (0.349) |
Observations | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 |
Care homes | 5555 | 5555 | 5555 | 5555 | 5555 | 5555 | 5555 |
Imputations | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
Average RVI | 0.165 | 0.164 | 0.156 | 0.152 | 0.150 | 0.156 | 0.226 |
Largest FMI | 0.467 | 0.448 | 0.470 | 0.467 | 0.469 | 0.471 | 0.451 |
Wage
Columns 1 and 2 of Table 20 present the random-effects estimation results without (RE) and with (REIV) instrumenting for the average hourly wage. Weak identification tests showed that the instrument strongly explains the average hourly wage, and there is no evidence of overidentification when including a second instrument (i.e. the spatial lag of Pension Credit uptake) in a complete-case OLS model. Furthermore, the 1-year lead of the excluded instrument is not statistically significant at the 5% level in either the REIV or the fixed-effects instrumental variables estimation. This suggests that the instrument is strictly exogenous (i.e. not correlated over time with the error term in Equation 5).
The impact of hourly wage on quality ratings is significant and positive. Based on the preferred REIV estimation, a 10% rise in the average hourly wage would increase the likelihood of a care home being rated ‘good’ or ‘outstanding’ by 7.2%, all other things being equal. The effect size when including all staffing measures in the regression, with training, staff turnover and job vacancies as covariates in the first-stage wage equation, is very similar (see Table 20, column 7). Overall, there is significant evidence of the endogeneity of the average hourly wage, and the non-instrumented results are therefore subject to a substantial downward bias in the wage effect on quality.
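The reported figure follows from the REIV coefficient on log wage (0.719, Table 20, column 2): in a linear probability model the change in the rating probability is the coefficient multiplied by the change in log wage, and a 10% wage rise corresponds to a change in log wage of roughly 0.10:

```python
import math

beta = 0.719  # REIV coefficient on log(mean wage), Table 20, column 2

# Approximation used in the report: d(log wage) ~= 0.10 for a 10% rise
approx_effect = beta * 0.10
# Exact change in log wage for a 10% rise is ln(1.10) ~= 0.0953
exact_effect = beta * math.log(1.10)
print(round(approx_effect, 3), round(exact_effect, 3))  # 0.072 and 0.069
```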
Training
We found that each of the two types of training has a direct relationship with quality rating when included individually in the model (see Table 20, columns 3 and 4). When controlling for wage (see Table 20, column 7), only dementia training retains a direct association with quality ratings. A one percentage point increase in the proportion of dementia-trained staff increases the probability of a ‘good’ or ‘outstanding’ quality rating by 1%, all other things being equal.
Job vacancies/staff turnover
We found that vacancies and turnover both had a significant negative association with the likelihood of a care home having a ‘good’ or ‘outstanding’ quality rating when included separately in the quality ratings model (see Table 20, columns 5 and 6). When controlling for wage (see Table 20, column 7), the significant negative relationship with job vacancies remained, with a one percentage point increase in the job vacancy rate decreasing the likelihood of a ‘good’ or ‘outstanding’ rating by 0.3%.
Other findings
The relationships between other care home-level factors and quality are consistent with previous research on the English care home market. We find that nursing homes, larger homes, homes facing greater levels of local competition and homes that primarily support those living with dementia are of significantly lower quality, whereas voluntary sector care homes are of significantly higher quality.
The results for the overall quality rating are also largely replicated when looking at the ratings for the five KLOEs (see Appendix 4, Table 30). Voluntary sector care homes have a significantly higher likelihood of a ‘good’ or ‘outstanding’ quality rating for three of the five KLOEs (‘well led’, ‘safe’ and ‘responsive’).
Robustness of findings
Tables 25 and 26 in Appendix 4 present the results of estimating pooled linear probability multiple imputation and fixed-effects linear probability multiple imputation specifications. Tables 27 and 28 in Appendix 4 present random-effects and pooled specifications of quality ratings using complete cases (i.e. no multiple imputation). Finally, Appendix 4, Table 29, presents the findings of a random-effects probit multiple imputation specification of quality ratings. All specifications yield results similar to the main findings.
We further estimated quality ratings models for the five KLOEs used to assess care home quality (‘safe’, ‘effective’, ‘caring’, ‘well led’ and ‘responsive’) using a REIV specification (see Appendix 4, Table 30). For all five, we found a significant positive impact of the average hourly wage, with marginal effects ranging from 0.260 (‘caring’) to 0.634 (‘well led’).
Adequacy of instrument
Appendix 4, Table 31, presents the manual first-stage regressions of wage, the complete-case two-stage least squares models of quality ratings and the relevant tests for the adequacy of the instrument. Wage is endogenous to quality and so the use of an instrument is appropriate. The results of weak identification, overidentification and strict exogeneity tests suggest that the proportion of staff paid below the future minimum wage both strongly explained wage and was a suitable instrument with little relationship to quality ratings.
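The weak-identification check amounts to testing whether the excluded instrument adds explanatory power in the first-stage wage regression; a common rule of thumb is a first-stage F-statistic above 10. A sketch on synthetic data (the study relied on the diagnostics built into ivreg2; all values below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1500
z = rng.normal(size=n)                       # excluded instrument
x = rng.normal(size=n)                       # included exogenous control
s = 0.6 * z + 0.3 * x + rng.normal(size=n)   # endogenous regressor (log wage)

def rss(y, X):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_full = np.column_stack([np.ones(n), x, z])
X_restricted = X_full[:, :2]  # first stage without the excluded instrument

# F-test for the single exclusion restriction on z
f_stat = (rss(s, X_restricted) - rss(s, X_full)) / (
    rss(s, X_full) / (n - X_full.shape[1])
)
print(f_stat > 10)  # a strong instrument clears the rule-of-thumb threshold
```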
Discussion
Staff play a crucial role in the quality of care provided in a care home and form relationships with those for whom they care. 9,139,275,276 Given the importance of staff, it seems likely that improving conditions of work, either intrinsically through the working environment (e.g. job autonomy, training) or extrinsically through wages and other benefits, could have a noticeable impact on the quality of a care home. 8,277 Furthermore, because of the low-wage nature of the sector, improvements to pay and conditions could have positive, knock-on impacts on both the portrayal and the perception of employment in adult social care139 and on levels of staff turnover and job vacancies,278 a long-standing problem for social care in England. 136 However, there is only limited evidence for England on the effects that staff and their employment conditions have on the quality of care homes. 148,149
This study sought to assess whether or not employment conditions and care home-level staffing factors have an impact on the quality of English care homes using a large, national data set of homes for the years 2016 to 2018. The study utilised appropriate statistical methods, including use of longitudinal data to tackle omitted variables and addressing both missing data and the potentially endogenous relationship between wages and care home quality. Care home-level and local geographical area-level variables were included to control for supply and demand factors likely to influence quality.
Given the statistical methods employed, the findings suggest that the average hourly wage of care workers has a significant positive effect on the quality of a care home, as measured by CQC quality ratings. For the otherwise average care home in England, a 10% increase in the average wage would increase the likelihood of a ‘good’ or ‘outstanding’ rating by 7%. Moreover, certain forms of training, particularly dementia training, also had a positive association with care home quality. This is an important finding for England, as there are high levels of cognitive impairment in the care home resident population2 and care homes with a particular focus on dementia have been found to be of lower quality. 279
Our results also show that staff turnover and, in particular, the number of job vacancies have a significant negative association with the quality of English care homes. For the average care home in England, a one percentage point increase in the job vacancy rate would reduce the likelihood of a ‘good’ or ‘outstanding’ quality rating by 0.3%. This finding both confirms and extends a cross-sectional study that found a significant negative correlation between job vacancies and care home quality in England. 148
In line with previous literature, this study found a strong negative association between quality and competition between care homes. 279,280 The effect of competition on quality may work through price, which is strongly influenced by the dominant market position of local authorities with respect to commissioning social care services. 279 There are sometimes large differences in the fees paid for care home places by funding source, with private, self-funding residents tending to pay more than residents whose placement is publicly funded by local authorities. 266,281 The analysis of residents’ SCRQoL (see Chapter 5) found no significant relationship between residents’ funding source and care outcomes. However, it remains likely that there are quality differences between care homes based on the proportion of residents funding their own care. Nonetheless, by controlling for wealth and income at a local level (as a proxy for self-funding residents) and utilising longitudinal data to address omitted variables, this study has shown that increasing staff wages and training levels may improve quality.
Increasing the pay of staff and improving training provision in a generally low-wage sector would probably require increased funding from the public purse, as local authorities fund a large proportion of resident placements, in whole or in part. 3 This study therefore raises important questions as to what is considered an ‘acceptable’ level of quality and who should be able to access (and ultimately pay for) care that is beyond the ‘acceptable’ level.
Conclusion
The results of this analysis suggest that improving conditions of work for care home employees may have an important positive association with quality. Controlling for care home-level factors related to job quality, local area-level measures of needs, income and supply, and the potential for reverse causality in the case of wage, we found that improved pay and some forms of staff training lead to higher care home quality ratings. There is also evidence that a lack of adequate staffing (i.e. a greater number of job vacancies) is related to lower care home quality. These findings, which extend the evidence base on the relationship between staffing and care home quality in England, have important policy implications for funding the social care sector.
Limitations
Data on the proportion of self-funding residents are not available nationally and, therefore, local geographical area measures of wealth and income were utilised in the analysis as proxy variables to control for the relationship between quality and funding source. Elements of staffing are assessed within all KLOEs used by the CQC when rating a home and, therefore, will be implicit in care homes’ quality ratings. However, for certain KLOEs (e.g. ‘safe’), much of the focus will be on staffing systems and procedures. Additionally, we estimated models of the quality ratings for the five underlying KLOEs, which showed some variation in the size of the significant wage effect.
Chapter 7 Patient and public involvement co-research
Aim of this chapter
The aim of this chapter is to review the experiences of the co-researchers themselves and their involvement with the project, as well as the experience of an academic researcher (NS) who collaborated with them in producing the research in WP1.
Introduction
Patient and public involvement in social and health care research has become fairly commonplace over the last couple of decades both in the UK and internationally. 282,283 Indeed, research funders encourage, and often require, projects to include a PPI component. 284
At the core of PPI is a shift from research that is carried out ‘to’, ‘about’ or ‘for’ the public to research that is carried out ‘with’ or ‘by’ members of the public. 285 In social care research, ‘patients’ and ‘public’ can refer to a range of different people. Although PPI can include members of the general public, it can also be operationalised as people who have experience of social care, either as someone who receives support or lives with a condition that is often supported by social care services, or as someone with experience of caring for a friend or relative who needs support. Although PPI may reflect the democratic principles of public accountability or transparency, it is also posited as a way of improving the quality of research. 285 This stems from the recognition that those who use social care services – or support someone who does – gain experiential knowledge286 that may differ from that of an academic researcher whose knowledge draws primarily from academic study. 287 In other words, PPI representatives can bring an alternative perspective to research projects, one that can feed into and improve the design, implementation and quality of a research project.
Patient and public involvement in health and social care research has most often taken an advisory form, with PPI representatives sitting on advisory or study steering groups. However, PPI representatives are beginning to take on roles once reserved for researchers, such as becoming co-applicants on proposals and co-researchers on projects. 285,288 Although there is no shared definition of the role of PPI co-researcher, it is clear that it entails a greater level of involvement in a research project than being part of a group that might meet two or three times throughout the project to offer advice. In projects reported by Bindels et al.289 and Marks et al.,288 the definition of a PPI co-researcher drew on the idea of an equal partnership or collaboration between academic researchers and PPI representatives. Marks et al.288 also noted that co-research requires the PPI representatives to engage in ‘some or all’ of the project’s research activities. This approach is also encouraged by the 2012 INVOLVE guidelines,290 which outline ways in which PPI representatives can be involved in social and health care research projects. The INVOLVE guidelines290 suggest that, beyond the purely advisory role, PPI representatives may have a role in both the collection and the analysis of research data.
There are different approaches to reviewing and evaluating the role of PPI in research projects. Some studies have focused on measurable impacts,282,283 whereas others have noted that the evidence of PPI impact is not only quite minimal287,291 but may not be the appropriate focus of attempts to understand PPI. As Staley and Barron287 argue, focusing on what they call ‘short-term’ outcomes, such as recruitment and dissemination, runs the risk of ignoring more important long-term impacts on research agendas and culture. Missing from the dissemination of many projects involving patients and the public292 is the experience of carrying out collaborative work from both the academic researcher and the PPI co-researcher perspective. Such personal reflections can shed light on the impact of PPI on health and social care research. 287
MiCareHQ patient and public involvement strategy
The MiCareHQ project aimed to include PPI at all stages of the research. PPI in the project, although not PPI co-research, was a requirement of the funding call. 293 Prior to submission, the application was reviewed by five members of the public from the Quality and Outcomes of Person-Centred Care Policy Research Unit (further information can be found at www.qoru.ac.uk/about/public-involvement; last accessed 24 July 2020).
In addition, during the study PPI was delivered in the following ways:
-
Study Steering Committee – at the beginning of the study, two members of the public with experience of social care were recruited as lay representatives to the Study Steering Committee. One was a retired care home manager and the other an informal carer whose relative had lived in a care home. Their role in the Study Steering Committee was to attend meetings, give a public/patient perspective on issues that arose and comment on emerging findings.
-
Co-researchers in WP1 – we recruited three lay co-researchers from the PSSRU’s Research Advisors Panel to assist with the focus groups with care home staff and contribute to the development of the new measures of pain, anxiety and depression.
-
Contributions to study outputs – lay advisors from the study steering group contributed to the drafting of the Plain English summary of this final report and reviewed it for written quality and clarity. The co-researchers from WP1 co-authored this chapter of the final report.
Thus, although there was a strong advisory component to the PPI in the study, it also included elements that are more in keeping with PPI co-research.
Defining the co-researcher role
During the initial funding application, the project team was keen to go beyond the almost ubiquitous use of PPI project advisors and work with PPI representatives to co-produce the research. Two members of the PSSRU’s Research Advisors Panel were invited to collaborate on aspects of the project’s data collection and analysis. Specifically, PPI co-researchers were invited to collaborate on WP1, which aimed to develop a set of new ASCOT domains and associated tools for three health-related areas: pain, anxiety and depression.
Development of the measures consisted of three activities: two rapid reviews (see Chapter 2), focus groups with care home staff, and cognitive interviews with care staff and the relatives of residents (see Chapter 3). As it was important for our co-researchers to feel immersed in the research process, but also supported in their roles, we agreed that they would be best placed working on the design, conduct and analysis of the staff focus groups. Unlike the cognitive interviews, focus group methodology was ideally suited to a dual moderator approach, whereby the co-researcher could facilitate the focus groups, supported by an experienced academic researcher. See Chapter 3 for a full discussion of the methods used in this section of the study.
Recruiting the co-researchers
An advertisement explaining the project, what the co-researcher role would involve and the skills required for the role was sent to the Research Advisors Panel (see Report Supplementary Material 15). The criteria for joining this panel reflect the public element of Fredriksson and Tritter’s294 distinction between patient and public in PPI. The panel does not require members to have direct previous experience of health or social care as a user or family carer. Instead, it comprises members of the public who have an interest in health and social care.
The aim was to recruit two PPI co-researchers. However, we received three high-quality applications from the group of research advisors. All three were invited to join the research team and the co-researcher roles were split evenly between the three applicants. Although we note above that the PSSRU’s Research Advisors Panel is formally a public rather than a patient panel, two of the three research advisors had direct experience of supporting a person with a long-term condition – including supporting that person while they lived in a care home – and experience of working professionally with people living with dementia. Each of the co-researchers also brought professional experience of working either with older people or in social or health care settings. In this way, the three co-researchers brought considerable experiential knowledge to the project.
The co-research process
The role undertaken by the PPI co-researchers had five phases: phase 1, helping the academic researcher to plan the focus groups; phase 2, conducting the focus groups alongside the academic researchers; phase 3, reflecting on the findings of the focus groups; phase 4, reflecting on the experience of being a co-researcher; and phase 5, being involved in dissemination.
Phase 1
A day-long planning workshop was arranged in central London, this being a more convenient location for the co-researchers than the University of Kent. The aim of the workshop was for the academic and co-researchers to get to know one another and for the PPI co-researchers to familiarise themselves with the project. During this workshop, the academic researchers shared their initial ideas about how the focus groups might work. Working together, the co-researchers and the academic researchers turned this into a final guide for facilitating the groups (see Appendix 3). We also discussed how to facilitate a focus group and the role each of the co-researchers felt comfortable adopting. It was agreed that each co-researcher would co-facilitate one focus group with an academic researcher.
Phase 2
The focus groups are described in detail in Chapter 3. Here we focus on the role of the PPI co-researchers.
Three focus groups were carried out by the team of PPI co-researchers and academic researchers. Each group consisted of three sections. The PPI co-researcher facilitated the first two sections, looking at, respectively, how care home staff recognise pain, anxiety and depression, and the words that care home staff use to describe residents’ pain, anxiety and depression. The final section, testing draft questions, was facilitated by an academic researcher. The co-researchers were asked to express a preference for which focus group they attended. Immediately prior to each group, the academic researcher and co-researcher met to run through the plan for the group.
Phase 3
Following the focus groups, the PPI co-researchers were sent copies of verbatim transcripts of the audio-recordings of the three groups. They were asked to review them and think about what we could learn from the data collected and, in particular, how this might feed into the development of our new ASCOT domains. This was then discussed at length in a face-to-face team meeting at the University of Kent Canterbury campus.
Phase 4
The PPI co-researchers were asked to reflect on their experiences of working as a co-researcher. They were sent five questions as a guide to help them write a short reflection:
-
What was it like to be a PPI co-researcher on this project?
-
What impact did being a co-researcher have on you?
-
What aspects of your involvement went well and which did not?
-
What impact do you think the co-researchers had on the project as a whole?
-
What can others learn from your experience on the project?
Two of the three co-researchers submitted a reflection on their experiences, both using the questions to structure their accounts. Their reflections form the core of this chapter.
The same questions were answered by the academic researcher who worked most closely with the PPI co-researchers.
Phase 5
As co-authors, the co-researchers were invited to comment on both drafts and final versions of this report and other publications.
Experiences and reflections of co-research
Patient and public involvement co-researcher 1
What was it like to be a patient and public involvement co-researcher on this project?
This was a particularly interesting project because of the neglected but important topic area, especially for me, having been a carer for my father with dementia who struggled to convey feelings of pain and having known many care home residents. It was also interesting in that traditional PPI was replaced with a co-researcher role, which meant a greater feeling of engagement and a greater sense of ownership in the project and as a result I await news of the impact of the findings with more excited anticipation than I would usually.
What impact did being a co-researcher have on you?
Because of the role I felt more empowered to have greater input and I felt that my role was more useful to the research team. I learnt about a subject and methodology that I was not familiar with and by being a co-researcher I am now able to see how I might help disseminate the findings in a more useful way to aid the impact on care home staff and residents.
What impact do you think the co-researchers had on the project as a whole?
We have different experiences and different expertise but share a common goal, which is to improve the lives of those in care homes, both staff and residents. I found that the combination of our perspectives and the practical running of the focus groups that we did enabled both the researchers and the recipients (care home staff participants) to see public involvement as more than a tick-box exercise. I felt that by us running some of the sessions we were able to relate in a way that made the care home staff feel more at ease than had they just had academics running all the sessions. With us representing the public, there felt a greater sense of comfort and openness about the sessions than there is in traditional academic led focus group work.
What aspects of your involvement in the study went well, and which did not?
The research team were helpful and welcoming and made me feel like a part of the team and were there to answer questions when I wanted. During the practical focus group, the two researchers were great to work with and really interesting and made me feel at ease.
One of the things that makes for successful PPI is to get to know your public advisors and one thing that this research team could have improved on is in this regard. To his credit, the lead researcher admitted he was inexperienced in PPI and was ‘strongly encouraged’ to involve members of the public in this study; he was not hostile to the idea but simply had not considered it. With this lack of experience came a reticence to get to know us as individuals; if time had been spent doing this a host of skills open up that study teams are able to benefit from, ones that they may not have themselves. I also at times felt a little out of the loop, but again this comes both from experience of PPI and also negotiation with the public advisors as I appreciate that researchers are sometimes careful not to bombard public advisors with too much information. A newsletter is always a good way to keep all interested parties, including public advisors, informed about the progress of a study and this is something the team could consider in future.
What can others learn from your experience on the project?
Researchers can learn that there is something to gain and nothing to lose in trying PPI and that, as with everything, it will not be perfect the first time, but it will improve; crucially, it will improve your study and, hopefully, the impact of the results. Public advisors can learn to give researchers who are new to co-research or to working with PPI representatives a break. Not everyone is committed to PPI, the demands on researchers’ time are great, and PPI is just one of many; get to know your researchers as people, as you expect them to get to know you, and through building a relationship you will build a better study.
Patient and public involvement co-researcher 2
What was it like to be a patient and public involvement co-researcher on this project?
It was interesting and thought-provoking to be a co-researcher on this project. I was excited by the invitation to apply, as the domains of pain, anxiety and depression and the nature of their under-reporting and treatment in care homes felt like a very important topic, as did the fact that staff are the people most in contact with residents every day and have a wealth of knowledge about individuals. I felt that I could contribute to the thinking about how to develop outcome measures for these areas, drawing on both my professional experience of developing services that help older people living with dementia to live well, and my personal experience of supporting two family members who have lived in care homes.
I enjoyed the meetings in London, meeting the staff and other co-researchers, and going to the care home to talk to staff in a focus group. Everything was very well prepared and thought through, and there was time to discuss the topics in detail and the aims of the project. Both researchers were welcoming and collaborative in their approach.
What impact did being a co-researcher have on you?
It enabled me to connect with aspects of my professional experience and research about older people, which is a personal and professional interest. Importantly, through meeting care home staff in a focus group and working with them, it brought together, for me, a compelling mix of policy and practice drawing on the experts – the care home staff.
I found on rereading the focus group notes that sentences leapt out that convey the powerful nature of the subtle and skilled experience of the care staff. It reminded me again of the care home staff’s abilities, skill and experience with residents, importantly based on their ability/capacity to relate well to residents in their care and understand them as unique individuals.
The exercise also reminded me of the value of care staff as absolutely key in delivering care to residents given their knowledge and experience, which is so undervalued in terms of pay and status.
What impact do you think the co-researchers had on the project as a whole?
The three co-researchers brought diverse views, skills and experiences to the project that have enriched its findings.
What aspects of your involvement went well and which did not?
My experience was only marred by administrative issues with payment by the university, which I understand are now resolved.
What can others learn from your experience on the project?
That involving public/lay people needs to be well organised, supported and resourced.
It can be a satisfying experience contributing personal and professional knowledge and skills to research.
Academic researcher
What was it like collaborating with patient and public involvement co-researchers on this project?
Collaborating with the PPI co-researchers involved a range of feelings. Most of the actual work we did together was really positive and great to be involved in. For me, the highlight was how they ran their part of the focus groups.
It was also thought-provoking. In previous studies I have worked on, PPI has followed the standard approach of including a PPI representative on an advisory group and nothing more. Carrying out co-research was not only new, but something I had not given much thought to before I volunteered WP1, which I was leading, as the work stream most suited to co-research methods.
At times, though, it was challenging. Collaborating with the PPI co-researchers created some additional tasks for WP1 that were not considered and therefore not resourced when the project was originally developed.
Working together on the analysis was also challenging as we found there was a tension between how the PPI co-researchers approached the data from the focus groups and how we approached it. WP1, which the co-researchers worked on, had, in essence, to develop and deliver a set of tools for WP2 to use in its fieldwork. The format of the tools was also quite constrained by existing methods, which had to fit with the mixed-methods approach of the ASCOT (see Chapter 1). This meant that while we were using a qualitative approach to explore and understand the experiences and views of the staff who work in care homes, our actual output, the design of items for a research instrument, was quite specific. In our meetings to understand the data from the groups, the PPI co-researchers focused on the data in a much broader and more exploratory way than I did. It felt like their question was ‘What is interesting in the data?’, whereas mine was ‘How do we take what we found and feed that into the research instruments we are developing?’.
What impact did collaborating with the patient and public involvement co-researchers have on you?
Having PPI co-researchers on this project had a significant impact on me. In particular, it made me think about my practice as a researcher and how it can develop. As noted above, despite working in social care research for almost two decades, this was the first time I have collaborated with PPI co-researchers. It was a positive enough experience for me to want to do it again, but it demonstrated that if you want it to be as impactful as it has the potential to be, it has to be well planned, adequately costed (including academic time to support co-researchers in their roles) and embedded from the design phase of the study.
What impact do you think the co-researchers had on the project as a whole?
The PPI co-researchers had a very positive impact on the focus groups in WP1. They helped us think about how we might approach the focus groups with care staff and reflect on what came out of those focus groups. I also cannot praise them highly enough for the way they each conducted their focus groups. So, in this way, their work and their views fed into the tools that were tested in other parts of the project.
We have also asked them to review our outputs, but we are unable to comment on the impact their different perspective may have had, as we are currently in the process of writing draft outputs.
I feel that, for the project as a whole, the impact of the co-researchers was unfortunately limited. This, I think, is a direct result of (a) the nature of this particular project and (b) the section of the project the PPI co-researchers collaborated on.
This project was essentially a tool development project and as such was quite technical in places and entailed a number of constraints. The PPI co-researchers were recruited after the study began and their roles had to fit with the existing study design and the aims of WP1, which were to develop new research measures. This did not allow their thoughtful observations to be properly explored by the study. It actually felt like a slight mismatch between what a PPI co-researcher could bring to the project and what the project as it was set up could truly engage with. In a more exploratory study, the perspectives of our co-researchers could have had a greater influence. Of course, this alternative design would also have required greater resources than we had costed for the purposes of this research.
To give the PPI co-researchers some coherence to their collaboration, their input was all focused on a single aspect of one WP, the focus groups in WP1. The advantage of this approach was that this was a self-contained piece of work that the co-researchers could see through from planning and administering the focus groups to helping with analysis and data interpretation. The downside was that they were not involved in the rest of the project, and so there was a disconnect between their input here and the rest of the project, in which the measures were refined and eventually piloted. Organising the co-research in this way unintentionally limited their impact on the project as a whole.
What aspects of the co-researchers’ involvement went well and which did not?
The most positive aspect of our collaboration was that we achieved what we set out to do, completing a set of focus groups that could input into the development of draft tools for testing. As I have stated, all three of the co-researchers facilitated the focus groups in a skilful and professional manner. If I reflect on how I came to that conclusion, I start to recognise that, from my perspective, doing a good job in these tasks is very much informed by what I see as good academic research practice. In other words, the way that the PPI co-researchers conducted the focus groups was not dissimilar to how an academic researcher would.
Another thing that went well in the collaboration with the PPI co-researchers was that it felt like everyone got something positive from the collaboration. I have noted that it has made me reflect on a way to improve my research practice and, indeed, the subsequent research funding application I submitted contained a much more substantial PPI co-researcher component than the MiCareHQ project did.
There were a couple of things that did not go so well. First, some of the university payment systems were not set up in a manner that was appropriate for all of our PPI co-researchers. Second, and as I have mentioned previously, the ability of the PPI co-researchers to really shape the project as a whole was limited by constraints beyond their control. Finally, I felt that focusing the PPI collaboration on a self-contained piece of work, while it had a number of positives, meant that at times the PPI co-researchers may have been unintentionally excluded from the wider project.
What can others learn from your experience of working with co-researchers on the project?
Reflecting on my experience of working with the co-researchers, there are a number of ways in which it could have been improved. These are also things that other projects could learn from.
For me, the key would be to collaborate with the PPI co-researchers earlier to plan the project so that their perspectives actually shape the project rather than having to fit into a pre-existing plan. This would also enable projects to better fit the co-researchers’ skills, experiential knowledge and interests with the study design. Our project was very lucky in that we had very competent, skilled and confident co-researchers who were able to go out into the field and conduct fieldwork after only one session. I do not think this would always be the case, and making sure that there is enough time and resources to allow greater training and support for PPI co-researchers would be vital.
Discussion
In our reflections on our involvement in a piece of co-research between PPI and academic researchers, we see links to conclusions made by others who have also engaged in co-research. Like others,282 the PPI co-researchers (and, indeed, the academic researchers) on this project broadly found it a positive experience. In particular, the feelings of empowerment and engagement with the project that our PPI co-researchers reported mirror those in other studies. 282,283,285 The reflections by our PPI co-researchers also demonstrate just how much insight they have gained into the topics that this part of the MiCareHQ project was looking at. This is knowledge that probably would not have been gained by a PPI representative had the standard advisory model been the sole approach utilised by the project. This gaining of knowledge is a feature found in other examples of co-research. 282,283 The literature on PPI involvement and co-research also argues that it can have an impact on academic researchers. Staley and Barron287 suggest that PPI involvement in research has subtle long-term impacts. In the academic researcher’s account, it is noted that one of the more important consequences of the collaboration was the learning and experience that he gained. This, he felt, would enable him in his future research practice to engage more fully with PPI co-research and hopefully address what he felt were the limitations of co-research in the MiCareHQ project.
Less positively, as in some other projects,282 at least one of the co-researchers felt a little on the edge of things. This is a view also supported by the academic researcher’s account of the co-research. In the co-researcher’s account, it was felt that the academic researchers’ lack of experience in conducting co-research meant that they lacked the skills (and possibly the time) to fully engage with the co-researchers, get to know them as people and understand what skills they could bring to the study. It has been noted that discussions of skills and training in co-research often focus on the PPI co-researchers and very rarely consider the skills (or indeed the lack of them) and the training needed by academic researchers involved in collaborating in co-research. 287 This was certainly the case in the MiCareHQ project, where it was probably assumed that if one was experienced in conducting research in a traditional non-collaborative manner, one was in possession of all the skills needed to collaborate in co-research.
One impact that is noted in the literature is that often co-researchers learn how to manage their conditions better following involvement in a research project. This is because often PPI co-research means researching one’s own condition or people receiving similar support. 288 This is not the approach we took in MiCareHQ, opting for the public aspect of PPI as opposed to the patient aspect. Importantly, residents in care homes are not patients but people who need daily care on an ongoing basis. As noted earlier, our co-researchers did still bring relevant experiential knowledge, but it does raise the question of whether or not our recruitment strategy was the most appropriate one. Our co-researchers were not drawn from those who live or work in care homes. However, involving people who live in older adult care homes as co-researchers is a challenging endeavour. This is a population who struggle to self-report their own needs, characteristics and quality of life, which is why our study sought to measure these aspects using a mixed-methods approach.
Indeed, a recent systematic review of older care home residents as collaborators in research found that only small-scale, action research projects involved residents as collaborators, and even then not as co-researchers as we have outlined here. 295 The review identified several barriers to resident involvement in research, including cognitive impairment of residents, resources required to support co-research and organisational factors (of both the research and the care home culture). 295 Thus, although PPI co-research has been carried out in collaboration with lots of different groups of service users and people with long-term conditions, involving people living with dementia in co-research is still relatively rare. 296 Where work has involved people living with dementia, this has tended to involve people with less severe dementia than is often found among the older adult care home population. 297,298
Does this mean that collaborative work with co-researchers should focus its efforts with either the wider public or people who have experience as informal carers, rather than with people who live in older adult care homes? Certainly, the high level of impairment, particularly cognitive impairment, presents a challenge. However, there is evidence that people living with dementia have the desire to be involved in research as co-researchers. 298 Therefore, the question may not be whether or not people living with dementia can collaborate as co-researchers, but rather how they can be supported and enabled to collaborate in research projects. Possible facilitators include (but are not limited to) creative methodologies to engage and enable residents to adopt a co-researcher role, financial resources and time to support meaningful participation in a population who will require significant support to engage, accessible methods of communication and the selection of topics that really matter to and inspire residents. 295
Although all projects should be able to include meaningful consultation and input from PPI advisors, this may not be the case for co-research. Vogsen et al. 291 note that the bulk of the studies that have a PPI co-researcher element are qualitative, and the involvement of the PPI co-researchers in this project also fitted this pattern. However, the way the data were used in this project was not typical of many qualitative studies in that they were used to inform the development of tools to measure pain, anxiety and depression. This very specific and technical use of the data did not really allow the project to use many of the interesting broader insights that the PPI co-researchers brought to it. This suggests that more thought might need to be given to what sort of projects or parts of projects best fit the skills that PPI co-researchers bring to any collaboration.
Alternatively, if we recognise that it is unlikely that PPI co-researchers will bring highly specialist and technical research skills – for example, experience of economic analysis – to a project, would providing co-researchers with more extensive training be preferable to including co-research on only certain projects or specific parts of projects? This is not, however, without its tensions. Concerns have been raised that, as PPI representatives undertake training, they begin to lose their unique ‘lay’ perspective and start to think more like an academic researcher. 299 Perhaps a better approach is for projects unsuited to PPI co-research to consider how lay researchers with relevant experiential knowledge (e.g. a family member of a care home resident) might help the academic team interpret findings and help translate them into messages for other non-academic stakeholders, including service users and their families. Such research summaries are often written as ‘add-ons’ by research teams and sent only to study participants. PPI co-researchers might be ideally placed to lead on the writing and dissemination of these findings, even when their input into other stages of the research is minimal.
Conclusion
This chapter has outlined and reflected on the collaboration between academic researchers and PPI co-researchers that fed into the development of the new domains and tools for measuring pain, anxiety and low mood in this project. For all involved, it was broadly a positive experience. If a wider aim is to help shift the balance in research away from research dominated solely by professional academic researchers, it feels as if this enterprise was reasonably successful, especially if we think of the potential for more subtle long-term impacts, such as those outlined by Staley and Barron. 287
Collectively, our reflections indicate that the way we conducted our collaboration could have been improved. Both PPI and academic researchers felt that the PPI co-researchers were not fully integrated into the project, and a question was posed about how thoughtfully the PPI co-research had been planned at the proposal stage. Our reflections noted that we rarely think about the skills academic researchers need to work collaboratively with PPI co-researchers, and that even experienced researchers may benefit from training in this area. A further issue specific to research in care homes for older adults is the challenge of enabling residents themselves to collaborate in research projects. The methods we currently use may enable members of the public and those with experiential knowledge of care homes to participate in co-research, but they are not sufficient to support residents to collaborate in this manner.
An important challenge for future work is to build on the sort of PPI collaboration conducted in this study and find ways of enabling those who live in care homes to collaborate with academic research to produce research that reflects their lives, experiences and perspectives.
Chapter 8 Discussion and conclusions
Summary of findings
This mixed-methods study was about care home quality. Following a Donabedian perspective, we examined indicators relating to the structure (e.g. size of home, type of provider and sector), process (e.g. whether a home was ‘caring’ or ‘well led’) and outcomes (e.g. health and social care-related quality of life) of care. 9 There were three interlinked packages of work to meet the following objectives:
-
to develop and test measures of pain, anxiety and depression for residents unable to self-report (WPs 1 and 2)
-
to assess how far CQC’s quality ratings of the home are consistent with indicators of residents’ quality of life (WPs 2 and 3)
-
to assess the relationship between aspects of the staffing of care homes and the quality of care homes (WP3).
In this chapter, we summarise how the study met each research objective before turning to the conclusions, study limitations and recommendations for further research.
Objective 1: measuring the health and social care-related quality of life of care home residents
As outlined in Chapter 1, many older care home residents are living with frailty and dementia and have difficulty reporting their own needs, outcomes and characteristics using conventional methods such as questionnaires and structured interviews. The ASCOT is a suite of tools designed to measure the aspects of people’s quality of life most affected by social care. As well as conventional interview and self-completion tools, there is a mixed-methods version, ASCOT-CH4, for use in care homes with those who cannot self-report. Although this tool has been previously used in research and shown to have excellent inter-rater reliability and face validity, the psychometric properties of the care homes tool had not yet been established. This study sought to address this evidence gap, and in Chapter 4 and Chapter 5 we described how the measure was used in two studies to collect information about care home residents’ current and expected SCRQoL.
In line with previous research using the self-completion measures, the results indicated that the ASCOT-CH4 had acceptable internal reliability (α = 0.76, eight items) and structural validity. The feasibility of the mixed-methods approach was measured by examining missing data. There were no missing data for final ratings. Every resident had a final ‘current’, ‘expected’ and ‘gain’ score for social care-related quality of life (see Chapter 1), based on researcher ratings. Conversely, had we relied on resident self-report only, using the structured interviews, we would have had missing data for 75–85% of residents in each domain. Staff proxy ratings helped inform final ratings and had very little missing information, but family members were harder to access. Had we relied on proxy report by staff only, residents’ outcomes would have been overestimated, as, where they diverged, researcher ratings for ‘current’ SCRQoL were often one outcome state lower than the perspective given by staff. Thus, the mixed-methods approach was both necessary and feasible.
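The internal reliability statistic reported above (α = 0.76 over eight items) is a standard Cronbach’s alpha. As a minimal sketch of how such a coefficient is computed, the following uses simulated item scores (the data, scale points and sample size are illustrative assumptions, not the study’s data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: simulated ratings for 8 items on a 4-point (0-3) scale,
# driven by a shared latent trait so the items correlate
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
scores = np.clip(np.round(latent + rng.normal(scale=0.8, size=(100, 8))) + 2, 0, 3)
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

Because the simulated items share a common latent trait, the resulting alpha is comfortably in the acceptable range; with real ASCOT-CH4 ratings the same function would reproduce the reported 0.76.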
However, a rapid review (see Chapter 2) of existing measures of pain, anxiety and depression found that there was no equivalent way of measuring the impact of care homes on these important aspects of HRQoL. Despite a significant number of care home residents experiencing symptoms of pain, anxiety and depression, evidence from the research literature indicates that symptoms are often under-recognised and undertreated in care homes (see Chapter 1). Existing tools tend to focus on severity of symptoms, with a view to diagnosing clinical conditions (see Chapter 2). However, the frequency, not just the severity, of symptoms can have a significant, negative impact on a person’s quality of life. Therefore, informed by the review, we drafted new tools measuring these concepts, using the ASCOT mixed-methods approach. We tested and refined the concepts and draft wording through focus groups with care home staff, and then cognitively tested these with staff and family members using a combination of verbal probing techniques and thinking aloud (see Chapter 3). Lay co-researchers were involved in the focus groups and the subsequent revisions of the new measures (see Chapter 7).
Staff in three focus groups were able to describe the verbal and non-verbal (behavioural) signs of pain, depression and anxiety in residents, which were helpful when conceptualising the observable indicators relevant to our tools. Although staff in the focus groups, and participants in the subsequent cognitive testing, were able to answer our draft questions for the new items, some changes were required. First, we changed the depression item to low mood, to avoid clinical, diagnostic or screening connotations. Second, the use of a four-point scale anchored to ‘hardly ever’ at the ‘ideal state’, instead of ‘never’, improved both ease of answer and clarity of meaning.
The final domains were called pain, anxiety and low mood. The question wording and response options for ‘current’ and ‘expected’ interviews and ratings were finalised in line with the ASCOT approach. In addition, as these measures were designed for use with people who cannot always self-report, observational guidance with non-verbal and observable indicators of outcome states in each domain was developed ready for piloting (see Chapter 3 and Appendix 3, Figures 3, 4, 16 and 17). A primary data collection was undertaken, and 182 residents from 20 care homes for older adults (10 nursing and 10 residential) were recruited to the study from four local authorities in South East England (see Chapter 4). To explore the construct validity of the new items, questionnaires also contained staff-rated, validated scales relating to these constructs so that we could explore hypothesised relationships with the attributes in the analysis.
As expected, psychometric testing of the three new items found similar results to the ASCOT-CH4 in terms of feasibility (see Chapter 4), indicating that self-report alone would not be feasible for this population but that a mixed-methods approach adds another perspective to one purely gathered by staff proxy interviews. The new domains did not form a unidimensional scale with the ASCOT-CH4 but could be used alongside it to measure these important aspects of HRQoL. The structural validity analysis indicated that, rather than being considered a three-item scale representing aspects of ‘health-related quality of life’ (pain, anxiety and low mood), it may be better to conceptualise these three items as separate ‘modules’ that relate to the concepts of psychological health and pain. These may be added flexibly alongside ASCOT-CH4 (as a separate scale), with low mood and anxiety combined and pain standing alone.
Taken together, the results indicate that the mixed-methods approach to measuring health (pain, anxiety and low mood) and social care-related quality of life (ASCOT-CH4) presented in this report offers a robust methodology for measuring the outcomes of care home residents who cannot self-report.
Objective 2: assessing how far Care Quality Commission quality ratings are consistent with indicators of residents’ care-related quality of life
We used the data from the study described in Chapter 4 and also from a previous study, MOOCH, described in Chapter 5, to examine the relationship between CQC quality ratings and residents’ SCRQoL, controlling for confounding variables. Multivariate regression analyses described in Chapter 5 replicated the findings of previous research and found a significant, positive association between residents’ SCRQoL and regulator quality ratings. The analysis also extended our understanding by finding that the impact of quality was greatest for high-need residents, with the equivalent of a 12% improvement in mean current SCRQoL for residents in ‘good/outstanding’ homes compared with homes rated ‘requires improvement’. Of the five KLOEs, ratings of ‘good/outstanding’ for ‘caring’ and ‘well led’ were significantly associated with SCRQoL, with ‘well led’ the most important for high-need residents. Good management is positively associated with better care outcomes in those with the highest care needs. It is likely that well-led services have a better and more effective working environment and ensure adequate skill development for care staff (see Chapter 6 and the results of objective 3).
Objective 3: assessing how much the skill mix and employment conditions of the care workforce matter for quality
We conducted an analysis of secondary data of English care homes (n = 12,052 observations of 5555 care homes) for the years 2016 to 2018 to model the relationship between CQC quality ratings and workforce characteristics (see Chapter 6). We focused on the effect that training provision to staff and staff terms and conditions (wages and turnover/vacancy rates) had on the quality of care homes. We expected all of these variables, which can be affected by policy, to be important determinants of quality. To assess the relationship between care quality and workforce characteristics we used longitudinal panel data models, multiple imputation to address missing data, and an instrumental variable approach to control for the potential endogeneity between quality and workforce characteristics, in particular staff wages.
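The instrumental-variable logic described above can be illustrated with a minimal two-stage least squares sketch. Everything here is hypothetical: the simulated instrument stands in for whatever exogenous wage-shifter the analysis used, and the data are generated so that an unobserved factor biases the naive regression, which the IV step then corrects:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical instrument (e.g. local labour-market wage level), assumed to
# shift care-home wages but to affect quality only through wages
z = rng.normal(size=n)
u = rng.normal(size=n)  # unobserved factor creating endogeneity
log_wage = 0.8 * z + 0.5 * u + rng.normal(size=n)
quality = 0.7 * log_wage - 0.9 * u + rng.normal(size=n)  # true effect = 0.7

def ols(y, x):
    """OLS slope and intercept via least squares."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(quality, log_wage)[1]  # biased: u drives both wage and quality

# Two-stage least squares: stage 1 predicts wages from the instrument;
# stage 2 regresses quality on the predicted (exogenous) part of wages
stage1 = ols(log_wage, z)
wage_hat = stage1[0] + stage1[1] * z
iv = ols(quality, wage_hat)[1]  # close to the true effect of 0.7

print(round(naive, 2), round(iv, 2))
```

In this toy setup the naive estimate is pulled below the true coefficient by the shared unobserved factor, while the IV estimate recovers it, which is the rationale for instrumenting staff wages in the Chapter 6 models.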
We found a significant positive association between quality and both wages and training (in dementia and dignity-/person-centred care). Staff turnover and job vacancy rates had a significant negative association. Specifically for wages, we found that a 10% increase in care worker average hourly wage increased the likelihood of a ‘good’ or ‘outstanding’ rating by 7%. Improving employment conditions, including pay and training, may have an important positive association with quality in care homes.
Robustness of the results and limitations
A limitation of the primary data collection in care homes (see Chapters 4 and 5), which informed the multivariate analysis of residents’ SCRQoL (see Chapter 5), was that we did not have any ‘inadequate’ homes in our sample. According to the national distribution of quality ratings, 1 of the 54 care homes should have been ‘inadequate’. However, recruiting ‘inadequate’ homes is incredibly challenging, as there are very few of them and they have 6 months to improve before being reinspected. 132 For the MOOCH study (see Chapter 5), one ‘inadequate’ home agreed to take part. However, the home was subsequently reinspected and rated as ‘requires improvement’. 20
Furthermore, owing to the small numbers of homes rated ‘outstanding’, we had to group these homes with homes rated ‘good’ and conduct the analysis on a binary rather than a four-point scale. This limited the ability of the analysis to explore the impact of really poor quality care on residents’ quality of life, as well as what ‘outstanding’ homes are doing above and beyond the majority, which are rated ‘good’. However, overall, the split between the two groups analysed (i.e. ‘inadequate’/‘requires improvement’ and ‘good/outstanding’) was largely comparable to the national average.
Another limitation of the analysis of residents’ SCRQoL (see Chapter 5) was that the sampled care homes were recruited from only five local authorities in South East England, meaning that one should be cautious about drawing conclusions from this study for England as a whole.
A potential limitation of the analysis of the relationship between CQC quality ratings and workforce characteristics (see Chapter 6) is that data on the proportion of residents of care homes who self-fund are not available nationally, and we used local geographical area measures for wealth and income as confounding variables. How well the wealth and income measures used correlate with numbers of self-funding residents in care homes is unknown. The findings of the multivariate analysis of care home quality ratings and residents’ SCRQoL (see Chapter 5) showed no difference in outcomes for residents in a care home based on their source of funding, but the potential remains for differences between care homes.
Omitted confounding variables (such as the proportion of self-funding residents in a care home) and simultaneity (i.e. changes to a care home’s quality rating occurring at the same time as changes to its staffing conditions) are potential threats to identifying a causal relationship in any statistical analysis. For the analysis of the relationship between CQC quality ratings and workforce (see Chapter 6), we used both longitudinal data and an instrumental variable (for staff wages), which are standard methods in the economics literature for estimating a causal relationship. 300
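To illustrate the instrumental-variable logic described above, the simulated sketch below (entirely hypothetical data and effect sizes, not the study’s actual model or variables) shows how two-stage least squares removes the upward bias that an unobserved confounder, such as management quality, introduces into a naive regression of care quality on wages:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical instrument: local labour-market wage level (affects care-worker
# wages but not care quality directly).
z = rng.normal(10.0, 1.0, n)
# Unobserved confounder (e.g. management quality) driving both wages and quality.
u = rng.normal(0.0, 1.0, n)
# Endogenous regressor: care-worker hourly wage.
wage = 0.8 * z + 0.5 * u + rng.normal(0.0, 0.5, n)
# Outcome: latent quality score, with a true wage effect of 0.7.
quality = 0.7 * wage + 1.0 * u + rng.normal(0.0, 1.0, n)

def ols(X, y):
    """Ordinary least squares coefficients via least-squares solve."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS of quality on wage: biased upward because u is omitted.
b_naive = ols(np.column_stack([np.ones(n), wage]), quality)

# Stage 1: project the endogenous wage onto the instrument.
Z = np.column_stack([np.ones(n), z])
wage_hat = Z @ ols(Z, wage)
# Stage 2: regress quality on the fitted (exogenous) part of the wage.
b_iv = ols(np.column_stack([np.ones(n), wage_hat]), quality)

# The IV estimate recovers approximately the true effect of 0.7,
# while the naive estimate is inflated by the confounder.
print(f"naive: {b_naive[1]:.2f}, IV: {b_iv[1]:.2f}")
```

This two-stage construction is the textbook mechanics behind the approach; the study’s actual specification in Chapter 6 uses real longitudinal data and controls.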
Conclusions
The limitations of this study outlined above could be partially addressed through better linkage of national care homes data sets in England. The ASCOT SCRQoL outcomes are collected at a national level through the Adult Social Care Survey for both care home residents and home care recipients. However, we cannot currently link the Adult Social Care Survey data at establishment level to the CQC care directory or other data sets, such as the ASC-WDS. Doing so would allow a larger-scale, more robust and generalisable analysis of the relationship between residents’ outcomes and CQC quality ratings (see Chapter 5), as well as of the effects of workforce and job characteristics on care home quality ratings (see Chapter 6) and residents’ outcomes. Such data linkage would be possible if the Adult Social Care Survey included care establishment identifiers (e.g. the CQC care establishment identifier).
The Adult Social Care Survey, however, is not without limitations. First, it is restricted to publicly funded service users. Second, it employs a ‘self-report’ methodology through postal questionnaires, supplemented with some telephone interviews. As discussed in Chapters 1, 2 and 4, self-report methodologies are not feasible for most care home residents. Thus, it is likely that any analysis relying on Adult Social Care Survey data alone would not be representative of those with the highest care needs, who are also those with the greatest capacity to benefit from high-quality, ‘caring’ and ‘well-led’ services. There is a clear and urgent need for comprehensive, rigorous and meaningful data on care home residents at a national level. We do not currently have a ‘minimum data set’ for care home residents in England, but work is under way to develop and test one,301,302 and the findings of this study will be able to feed into decisions about the inclusion of residents’ outcomes, not just their needs and characteristics.
Limitations aside, this study has demonstrated that improving working conditions and reducing staff turnover are associated with better care quality and outcomes for residents in care homes for older adults. This is particularly important for care home residents with the greatest needs, who gain the most from homes rated ‘good’ or ‘outstanding’, especially with respect to social participation and feeling in control of their daily lives. These findings are of significant public interest, especially for those commissioning, regulating, working and living in care homes. However, measuring residents’ needs and outcomes is challenging, as many cannot self-report. We have presented a robust, alternative method of measuring the SCRQoL of care home residents, using the ASCOT-CH4. This methodology was found to be feasible, and the ASCOT-CH4 (eight items) was found to be structurally valid, performing comparably to other ASCOT ‘self-report’ tools. We applied this methodology to new items of pain, anxiety and low mood and found that the mixed-methods approach could be applied to these items while still capturing the underlying constructs. The new items do not form a scale with the ASCOT-CH4 domains, as they measure constructs of pain and psychological health, but they could be used alongside it as separate ‘modules’ of interest to capture the impact of care homes on pain, anxiety and low mood.
Research implications
The findings of this study suggest that future care home research should:
- Consider using a mixed-methods approach to data collection in care homes, which includes the voices and experiences of those unable to self-report.
- Explore the relationship between pain, anxiety and low mood and other indicators of care homes’ quality.
- Use a larger sample of care homes – potentially oversampling ‘inadequate’ and ‘outstanding’ homes – to explore the impact of bottom- and top-rated homes on residents’ outcomes.
- Assess the productivity of care home staff by examining the relationship between resident outcomes (i.e. health and social care-related quality of life) and staffing characteristics and employment conditions (e.g. wages and training) directly. This would require a large-scale data collection of residents’ characteristics and outcomes in care homes to be linked with a workforce data set such as the ASC-WDS.
Acknowledgements
We are pleased to acknowledge the work of our Study Steering Committee, chaired by Professor Claire Goodman, in providing advice throughout the study. We particularly thank the lay members of our committee, Joy Fletcher and Joy Scholl, who contributed to the oversight and management of the study and provided critical comments on study information sheets and findings. Thank you also to Clare Ockwell, who replaced another committee member as a lay representative in 2020 and commented on study findings, including a review of the Plain English summary included in this report.
We thank Claire Cox, Senior Clinical Research Nurse, for assisting with the primary data collection and recruitment of homes and including the MiCareHQ project as an early social care case study for the CRN Kent, Surrey, Sussex.
We thank Skills for Care for sharing with us the ASC-WDS and Roy Price for helpful assistance. We also thank the four care home managers who assisted by setting up the field work in WP1 but cannot be named to protect the anonymity of research participants.
Finally, we acknowledge, with thanks, the contributions of the following colleagues and research staff: Sarah Godfrey for assistance with the finances and administration; Madeline Naick for research assistance with ethics, governance, the rapid reviews and some of the primary data collection in WPs 1 and 2; and Raffaella Tate for her work as a co-researcher on WP1.
Contributions of authors
Ann-Marie Towers (https://orcid.org/0000-0003-3597-1061) contributed to the design of the study, led WP2, contributed to primary data collection, prepared the Abstract, Scientific summary, Plain English summary, and Chapters 1 and 8. She co-authored Chapters 2 and 5 and contributed to Chapters 3, 4, 6 and 7.
Nick Smith (https://orcid.org/0000-0001-9793-6988) contributed to the design of the study, led WP1, contributed to primary data collection, prepared Chapters 2, 3 and 7, contributed to Chapter 1 and reviewed other sections of the report for clarity and content.
Stephen Allan (https://orcid.org/0000-0002-1208-9837) contributed to the design of the study, co-led WP3 and conducted the econometric analysis on workforce and CQC quality ratings, prepared Chapter 6, contributed to Chapters 1, 5 and 8 and reviewed other sections of the report for clarity and content.
Florin Vadean (https://orcid.org/0000-0001-7882-3400) contributed to the design of the study, co-led WP3, conducted the multivariate analysis on care home residents’ quality of life and CQC quality ratings, contributed to the econometric analysis on workforce and CQC quality ratings, co-authored Chapter 5, contributed to Chapter 6 and reviewed other sections of the report for clarity and content.
Grace Collins (https://orcid.org/0000-0002-0144-9411) collected research data in WP1, assisted with the psychometric analysis and co-authored Chapter 4, assisted with the preparation of the report and reviewed other sections for clarity and content.
Stacey Rand (https://orcid.org/0000-0001-9071-2842) conducted the psychometric analysis, reported the results in Chapter 4, contributed to the method and results, and reviewed other sections for clarity and content.
Jennifer Bostock (https://orcid.org/0000-0001-9261-9350) was a co-researcher on WP1 and contributed to authorship of Chapter 7.
Helen Ramsbottom was a co-researcher on WP1 and contributed to authorship in Chapter 7.
Julien Forder (https://orcid.org/0000-0001-7793-4328) contributed to the design of the study, provided advice and expertise on the analysis reported in Chapters 5 and 6 and reviewed other sections of the report.
Stefania Lanza (https://orcid.org/0000-0003-4857-7151) provided project management, contributed to the recruitment of care homes in WP2 and provided specific comments on Chapters 1 and 4.
Jackie Cassell (https://orcid.org/0000-0003-0777-0385) contributed to the design of the study, supervised the research team and edited and reviewed drafts of the final report.
Data-sharing statement
The ASC-WDS used for the econometric analysis in Chapter 6 is the property of Skills for Care. The quantitative data generated in this study concern care home residents’ needs, characteristics and quality of life and are held by the University of Kent. We do not have permission or ethics approval from study participants to share these data more widely. The ASCOT toolkit and new measures of pain, anxiety and low mood are available, subject to a free licence for not-for-profit use, here: www.pssru.ac.uk/ascot/. Any queries should be addressed to the corresponding author for consideration.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care.
References
- Laing W. Care of Older People: UK Market Report 2013/14. 2014.
- Gordon AL, Franklin M, Bradshaw L, Logan P, Elliott R, Gladman JR. Health status of UK care home residents: a cohort study. Age Ageing 2014;43:97-103. https://doi.org/10.1093/ageing/aft077.
- Care Quality Commission (CQC). The State of Health Care and Adult Social Care in England 2018/19. 2019.
- Barron DN, West E. The quasi-market for adult residential care in the UK: Do for-profit, not-for-profit or public sector residential care and nursing homes provide better quality care? Soc Sci Med 2017;179:137-46. https://doi.org/10.1016/j.socscimed.2017.02.037.
- Burton J, Goodman C, Quinn T. The Invisibility of the UK Care Home Population – UK Care Homes and a Minimum Dataset. International Long-term Care Policy Network; 2020.
- Osborne SP. The quality dimension. Evaluating quality of service and quality of life in human services. Br J Soc Work 1992;22:437-53.
- Malley J, Fernández JL. Measuring quality in social care services: theory and practice. Ann Public Coop Econ 2010;81:559-82. https://doi.org/10.1111/j.1467-8292.2010.00422.x.
- Wiener JM. An assessment of strategies for improving quality of care in nursing homes. Gerontologist 2003;43:19-27. https://doi.org/10.1093/geront/43.suppl_2.19.
- Donabedian A. The quality of care. How can it be assessed? J Am Med Assoc 1988;260:1743-8. https://doi.org/10.1001/jama.1988.03410120089033.
- Great Britain. Care Act 2014. 2014.
- Trigg L, Kumpunen S, Holder J, Maarse H, Solé Juvés M, Gil J. Information and choice of residential care provider for older people: a comparative study in England, the Netherlands and Spain. Ageing Soc 2018;38:1121-47. https://doi.org/10.1017/S0144686X16001458.
- Usman A, Lewis S, Hinsliff-Smith K, Long A, Housley G, Jordan J, et al. Measuring health-related quality of life of care home residents: comparison of self-report with staff proxy responses. Age Ageing 2019;48:407-13. https://doi.org/10.1093/ageing/afy191.
- Netten A, Burge P, Malley J, Potoglou D, Towers AM, Brazier J, et al. Outcomes of social care for adults: developing a preference-weighted measure. Health Technol Assess 2012;16. https://doi.org/10.3310/hta16160.
- Personal Social Services Research Unit, University of Kent. ASCOT: Adult Social Care Outcomes Toolkit. n.d. www.pssru.ac.uk/ascot/domains (accessed 12 October 2021).
- Rand S, Towers A-M, Razik K, Turnpenny A, Bradshaw J, Caiels J, et al. Feasibility, factor structure and construct validity of the easy-read Adult Social Care Outcomes Toolkit (ASCOT-ER). J Intellect Dev Disabil 2020;45:119-32. https://doi.org/10.3109/13668250.2019.1592126.
- Rand S, Malley J, Towers AM, Netten A, Forder J. Validity and test-retest reliability of the self-completion adult social care outcomes toolkit (ASCOT-SCT4) with adults with long-term physical, sensory and mental health conditions in England. Health Qual Life Outcomes 2017;15. https://doi.org/10.1186/s12955-017-0739-0.
- Malley J, Rand S, Netten A, Towers A-M, Forder J. Exploring the feasibility and validity of a pragmatic approach to estimating the impact of long-term care: the ‘expected’ ASCOT method. J Long Term Care 2019:67-83. https://doi.org/10.31389/jltc.11.
- Prince M, Knapp M, Guerchet M, McCrone P, Prina M, Comas-Herrera A, et al. Dementia UK: Second Edition – Overview. London: Alzheimer’s Society; 2014.
- Towers AM, Smith N, Palmer S, Welch E, Netten A. The acceptability and feasibility of using the Adult Social Care Outcomes Toolkit (ASCOT) to inform practice in care homes. BMC Health Serv Res 2016;16. https://doi.org/10.1186/s12913-016-1763-1.
- Towers AM, Palmer S, Smith N, Collins G, Allan S. A cross-sectional study exploring the relationship between regulator quality ratings and care home residents’ quality of life in England. Health Qual Life Outcomes 2019;17. https://doi.org/10.1186/s12955-019-1093-1.
- Smith N, Towers A, Palmer S, Beecham J, Welch E. Being occupied: supporting ‘meaningful activity’ in care homes for older people in England. Ageing Soc 2018;38:2218-40. https://doi.org/10.1017/S0144686X17000678.
- Netten A, Trukeschitz B, Beadle-Brown J, Forder J, Towers A-M, Welch E. Quality of life outcomes for residents and quality ratings of care homes: is there a relationship? Age Ageing 2012;41:512-71. https://doi.org/10.1093/ageing/afs050.
- Nolan M, Davies S, Brown J, Wilkinson A, Warnes T, McKee K, et al. The role of education and training in achieving change in care homes: a literature review. J Res Nurs 2008;13:411-33. https://doi.org/10.1177/1744987108095162.
- Clare L, Whitaker R, Woods RT, Quinn C, Jelley H, Hoare Z, et al. AwareCare: a pilot randomized controlled trial of an awareness-based staff training intervention to improve quality of life for residents with severe dementia in long-term care settings. Int Psychogeriatr 2013;25:128-39. https://doi.org/10.1017/S1041610212001226.
- Andrade DC, Faria JW, Caramelli P, Alvarenga L, Galhardoni R, Siqueira SR, et al. The assessment and management of pain in the demented and non-demented elderly patient. Arq Neuropsiquiatr 2011;69:387-94. https://doi.org/10.1590/S0004-282X2011000300023.
- van Kooten J, Smalbrugge M, van der Wouden JC, Stek ML, Hertogh CMPM. Prevalence of Pain in nursing home residents: the role of dementia stage and dementia subtypes. J Am Med Dir Assoc 2017;18:522-7. https://doi.org/10.1016/j.jamda.2016.12.078.
- Achterberg WP, Pieper MJ, van Dalen-Kok AH, de Waal MW, Husebo BS, Lautenbacher S, et al. Pain management in patients with dementia. Clin Interv Aging 2013;8:1471-82. https://doi.org/10.2147/CIA.S36739.
- Leong IY, Nuo TH. Prevalence of pain in nursing home residents with different cognitive and communicative abilities. Clin J Pain 2007;23:119-27. https://doi.org/10.1097/01.ajp.0000210951.01503.3b.
- Leong IY, Chong MS, Gibson SJ. The use of a self-reported pain measure, a nurse-reported pain measure and the PAINAD in nursing home residents with moderate and severe dementia: a validation study. Age Ageing 2006;35:252-6. https://doi.org/10.1093/ageing/afj058.
- Lichtner V, Dowding D, Esterhuizen P, Closs SJ, Long AF, Corbett A, et al. Pain assessment for people with dementia: a systematic review of systematic reviews of pain assessment tools. BMC Geriatr 2014;14. https://doi.org/10.1186/1471-2318-14-138.
- Lin PC, Lin LC, Shyu YI, Hua MS. Predictors of pain in nursing home residents with dementia: a cross-sectional study. J Clin Nurs 2011;20:1849-57. https://doi.org/10.1111/j.1365-2702.2010.03695.x.
- Neville C, Ostini R. A psychometric evaluation of three pain rating scales for people with moderate to severe dementia. Pain Manag Nurs 2014;15:798-806. https://doi.org/10.1016/j.pmn.2013.08.001.
- Rajkumar AP, Ballard C, Fossey J, Orrell M, Moniz-Cook E, Woods RT, et al. Epidemiology of pain in people with dementia living in care homes: longitudinal course, prevalence, and treatment implications. J Am Med Dir Assoc 2017;18. https://doi.org/10.1016/j.jamda.2017.01.024.
- Torvik K, Kaasa S, Kirkevold Ø, Saltvedt I, Hølen JC, Fayers P, Rustøen T. Validation of Doloplus-2 among nonverbal nursing home patients – an evaluation of Doloplus-2 in a clinical setting. BMC Geriatr 2010;10. https://doi.org/10.1186/1471-2318-10-9.
- Zwakhalen SM, Koopmans RT, Geels PJ, Berger MP, Hamers JP. The prevalence of pain in nursing home residents with dementia measured using an observational pain scale. Eur J Pain 2009;13:89-93. https://doi.org/10.1016/j.ejpain.2008.02.009.
- Bruneau B. Barriers to the management of pain in dementia care. Nurs Times 2014;110:12, 14-16.
- Ersek M, Nash PV, Hilgeman MM, Neradilek MB, Herr KA, Block PR, et al. Pain patterns and treatment among nursing home residents with moderate-severe cognitive impairment. J Am Geriatr Soc 2020;68:794-802. https://doi.org/10.1111/jgs.16293.
- Gilmore-Bykovskyi AL, Bowers BJ. Understanding nurses’ decisions to treat pain in nursing home residents with dementia. Res Gerontol Nurs 2013;6:127-38. https://doi.org/10.3928/19404921-20130110-02.
- Helme RD, Gibson SJ. Measurement and management of pain in older people. Aust J Ageing 1998;17:5-9. https://doi.org/10.1111/j.1741-6612.1998.tb00216.x.
- Hemmingsson ES, Gustafsson M, Isaksson U, Karlsson S, Gustafson Y, Sandman PO, et al. Prevalence of pain and pharmacological pain treatment among old people in nursing homes in 2007 and 2013. Eur J Clin Pharmacol 2018;74:483-8. https://doi.org/10.1007/s00228-017-2384-2.
- Kaasalainen S, Coker E, Dolovich L, Papaioannou A, Hadjistavropoulos T, Emili A, et al. Pain management decision making among long-term care physicians and nurses. West J Nurs Res 2007;29:561-80. https://doi.org/10.1177/0193945906295522.
- Martin R, Williams J, Hadjistavropoulos T, Hadjistavropoulos HD, MacLean M. A qualitative investigation of seniors’ and caregivers’ views on pain assessment and management. Can J Nurs Res 2005;37:142-64.
- Jones KR, Vojir CP, Hutt E, Fink R. Determining mild, moderate, and severe pain equivalency across pain-intensity tools in nursing home residents. J Rehabil Res Dev 2007;44:305-14. https://doi.org/10.1682/jrrd.2006.05.0051.
- Abdulla A, Adams N, Bone M, Elliott AM, Gaffin J, Jones D, et al. Guidance on the management of pain in older people. Age Ageing 2013;42:i1-57. https://doi.org/10.1093/ageing/afs200.
- Gregory J. The complexity of pain assessment in older people. Nurs Older People 2015;27:16-21. https://doi.org/10.7748/nop.27.8.16.e738.
- Hadjistavropoulos T, Fitzgerald TD, Marchildon GP. Practice guidelines for assessing pain in older persons with dementia residing in long-term care facilities. Physiother Can 2010;62:104-13. https://doi.org/10.3138/physio.62.2.104.
- Kaasalainen S. Pain assessment in older adults with dementia: using behavioral observation methods in clinical practice. J Gerontol Nurs 2007;33:6-10. https://doi.org/10.3928/00989134-20070601-03.
- McCaffery M. Nursing Practice Theories Related to Cognition, Bodily Pain, and Man-Environment Interactions. Los Angeles, CA: University of California, Los Angeles; 1968.
- Kaasalainen S, Akhtar-Danesh N, Hadjistavropoulos T, Zwakhalen S, Verreault R. A comparison between behavioral and verbal report pain assessment tools for use with residents in long term care. Pain Manag Nurs 2013;14:e106-14. https://doi.org/10.1016/j.pmn.2011.08.006.
- Mckee K, Houston D, Barnes S. Methods for assessing quality of life and well-being in frail older people. Psychol Health 2002;17:737-51. https://doi.org/10.1080/0887044021000054755.
- Hubbard G, Downs MG, Tester S. Including older people with dementia in research: challenges and strategies. Aging Ment Health 2003;7:351-62. https://doi.org/10.1080/1360786031000150685.
- Hadjistavropoulos T, Gibson S, Weiner D. Pain in the Elderly. Seattle, WA: IASP Press; 2005.
- Hellström I, Nolan M, Nordenfelt L, Lundh U. Ethical and methodological issues in interviewing persons with dementia. Nurs Ethics 2007;14:608-19. https://doi.org/10.1177/0969733007080206.
- Bullock L, Chew-Graham CA, Bedson J, Bartlam B, Campbell P. The challenge of pain identification, assessment, and management in people with dementia: a qualitative study. BJGP Open 2020;4. https://doi.org/10.3399/bjgpopen20X101040.
- Buffum MD, Hutt E, Chang VT, Craine MH, Snow AL. Cognitive impairment and pain management: review of issues and challenges. J Rehabil Res Dev 2007;44:315-30. https://doi.org/10.1682/jrrd.2006.06.0064.
- Morrison RS, Siu AL. A comparison of pain and its treatment in advanced dementia and cognitively intact patients with hip fracture. J Pain Symptom Manage 2000;19:240-8. https://doi.org/10.1016/S0885-3924(00)00113-5.
- Kaasalainen S, Middleton J, Knezacek S, Hartley T. Pain and cognitive status in the institutionalized elderly. J Gerontol Nurs 1998;24:24-31. https://doi.org/10.3928/0098-9134-19980801-07.
- Nakashima T, Young Y, Hsu WH. Do nursing home residents with dementia receive pain interventions? Am J Alzheimers Dis Other Demen 2019;34:193-8. https://doi.org/10.1177/1533317519840506.
- Cassell JA, Middleton J, Nalabanda A, Lanza S, Head MG, Bostock J, et al. Scabies outbreaks in ten care homes for elderly people: a prospective study of clinical features, epidemiology, and treatment outcomes. Lancet Infect Dis 2018;18:894-902. https://doi.org/10.1016/S1473-3099(18)30347-5.
- Buffum MD, Sands L, Miaskowski C, Brod M, Washburn A. A clinical trial of the effectiveness of regularly scheduled versus as-needed administration of acetaminophen in the management of discomfort in older adults with dementia. J Am Geriatr Soc 2004;52:1093-7. https://doi.org/10.1111/j.1532-5415.2004.52305.x.
- Herr K, Coyne PJ, Key T, Manworren R, McCaffery M, Merkel S, et al. Pain assessment in the nonverbal patient: position statement with clinical practice recommendations. Pain Manag Nurs 2006;7:44-52. https://doi.org/10.1016/j.pmn.2006.02.003.
- Jacobs JM, Hammerman-Rozenberg R, Cohen A, Stessman J. Chronic back pain among the elderly: prevalence, associations, and predictors. Spine 2006;31:E203-7. https://doi.org/10.1097/01.brs.0000206367.57918.3c.
- Scherder E, Herr K, Pickering G, Gibson S, Benedetti F, Lautenbacher S. Pain in dementia. Pain 2009;145:276-8. https://doi.org/10.1016/j.pain.2009.04.007.
- While C, Jocelyn A. Observational pain assessment scales for people with dementia: a review. Br J Community Nurs 2009;14:438, 439-42. https://doi.org/10.12968/bjcn.2009.14.10.44496.
- Herr K. Pain in the older adult: an imperative across all health care settings. Pain Manag Nurs 2010;11:1-10. https://doi.org/10.1016/j.pmn.2010.03.005.
- Herr K, Coyne PJ, McCaffery M, Manworren R, Merkel S. Pain assessment in the patient unable to self-report: position statement with clinical practice recommendations. Pain Manag Nurs 2011;12:230-50. https://doi.org/10.1016/j.pmn.2011.10.002.
- Herr K, Bjoro K, Decker S. Tools for assessment of pain in nonverbal older adults with dementia: a state-of-the-science review. J Pain Symptom Manage 2006;31:170-92. https://doi.org/10.1016/j.jpainsymman.2005.07.001.
- Rostad HM, Puts MTE, Cvancarova Småstuen M, Grov EK, Utne I, Halvorsrud L. Associations between pain and quality of life in severe dementia: a Norwegian cross-sectional study. Dement Geriatr Cogn Dis Extra 2017;7:109-21. https://doi.org/10.1159/000468923.
- Bjoro K, Herr K. Assessment of pain in the nonverbal or cognitively impaired older adult. Clin Geriatr Med 2008;24:237-62, vi. https://doi.org/10.1016/j.cger.2007.12.001.
- Zwakhalen SM, Hamers JP, Abu-Saad HH, Berger MP. Pain in elderly people with severe dementia: a systematic review of behavioural pain assessment tools. BMC Geriatr 2006;6. https://doi.org/10.1186/1471-2318-6-3.
- National Institute for Mental Health in England. Facts for Champions. 2005.
- Dening T, Milne A. Mental Health and Care Homes. Oxford: Oxford University Press; 2011.
- Dening T, Milne A. Depression and mental health in care homes for older people. Qual Ageing 2009;10:40-6. https://doi.org/10.1108/14717794200900007.
- Jongenelis K, Pot AM, Eisses AM, Beekman AT, Kluiter H, van Tilburg W, et al. Depression among older nursing home patients. A review. Tijdschr Gerontol Geriatr 2003;34:52-9.
- Seitz D, Purandare N, Conn D. Prevalence of psychiatric disorders among older adults in long-term care homes: a systematic review. Int Psychogeriatr 2010;22:1025-39. https://doi.org/10.1017/S1041610210000608.
- Teresi J, Abrams R, Holmes D, Ramirez M, Eimicke J. Prevalence of depression and depression recognition in nursing homes. Soc Psychiatry Psychiatr Epidemiol 2001;36:613-20. https://doi.org/10.1007/s127-001-8202-7.
- Arthur A, Savva GM, Barnes LE, Borjian-Boroojeny A, Dening T, Jagger C, et al. Changing prevalence and treatment of depression among older people over two decades. Br J Psychiatry 2020;216:49-54. https://doi.org/10.1192/bjp.2019.193.
- Australian Institute of Health and Welfare. Depression in Residential Aged Care 2008–12. 2013.
- Mitchell AJ, Bird V, Rizzo M, Meader N. Diagnostic validity and added value of the Geriatric Depression Scale for depression in primary care: a meta-analysis of GDS30 and GDS15. J Affect Disord 2010;125:10-7. https://doi.org/10.1016/j.jad.2009.08.019.
- Snowdon J. Depression in nursing homes. Int Psychogeriatr 2010;22:1143-8. https://doi.org/10.1017/S1041610210001602.
- van der Linde RM, Dening T, Stephan BC, Prina AM, Evans E, Brayne C. Longitudinal course of behavioural and psychological symptoms of dementia: systematic review. Br J Psychiatry 2016;209:366-77. https://doi.org/10.1192/bjp.bp.114.148403.
- Stewart R, Hotopf M, Dewey M, Ballard C, Bisla J, Calem M, et al. Current prevalence of dementia, depression and behavioural problems in the older adult care home sector: the South East London Care Home Survey. Age Ageing 2014;43:562-7. https://doi.org/10.1093/ageing/afu062.
- Gum AM, King-Kallimanis B, Kohn R. Prevalence of mood, anxiety, and substance-abuse disorders for older Americans in the national comorbidity survey-replication. Am J Geriatr Psychiatry 2009;17:769-81. https://doi.org/10.1097/JGP.0b013e3181ad4f5a.
- Yochim BP, Mueller AE, June A, Segal DL. Psychometric properties of the Geriatric Anxiety Scale: comparison to the Beck Anxiety Inventory and Geriatric Anxiety Inventory. Clin Gerontol J Aging Ment Health 2011;34:21-33. https://doi.org/10.1080/07317115.2011.524600.
- Frost R, Nair P, Aw S, Gould RL, Kharicha K, Buszewicz M, et al. Supporting frail older people with depression and anxiety: a qualitative study. Aging Ment Health 2020;24:1977-84. https://doi.org/10.1080/13607863.2019.1647132.
- Creighton AS, Davison TE, Kissane DW. The prevalence of anxiety among older adults in nursing homes and other residential aged care facilities: a systematic review. Int J Geriatr Psychiatry 2016;31:555-66. https://doi.org/10.1002/gps.4378.
- Hoe J, Hancock G, Livingston G, Orrell M. Quality of life of people with dementia in residential care homes. Br J Psychiatry 2006;188:460-4. https://doi.org/10.1192/bjp.bp.104.007658.
- Goyal AR, Bergh S, Engedal K, Kirkevold M, Kirkevold Ø. Trajectories of quality of life and their association with anxiety in people with dementia in nursing homes: a 12-month follow-up study. PLOS ONE 2018;13. https://doi.org/10.1371/journal.pone.0203773.
- Selwood A, Thorgrimsen L, Orrell M. Quality of life in dementia – a one-year follow-up study. Int J Geriatr Psychiatry 2005;20:232-7. https://doi.org/10.1002/gps.1271.
- Michèle J, Guillaume M, Alain T, Nathalie B, Claude F, Kamel G. Social and leisure activity profiles and well-being among the older adults: a longitudinal study. Aging Ment Health 2019;23:77-83. https://doi.org/10.1080/13607863.2017.1394442.
- Moussavi S, Chatterji S, Verdes E, Tandon A, Patel V, Ustun B. Depression, chronic diseases, and decrements in health. Lancet 2007;370:851-8. https://doi.org/10.1016/S0140-6736(07)61415-9.
- Beekman AT, Deeg DJ, Braam AW, Smit JH, Van Tilburg W. Consequences of major and minor depression in later life: a study of disability, well-being and service utilization. Psychol Med 1997;27:1397-409. https://doi.org/10.1017/s0033291797005734.
- Gruber-Baldini AL, Zimmerman S, Boustani M, Watson LC, Williams CS, Reed PS. Characteristics associated with depression in long-term care residents with dementia. Gerontologist 2005;45:50-5. https://doi.org/10.1093/geront/45.suppl_1.50.
- Phillips LJ, Rantz M, Petroski GF. Indicators of a new depression diagnosis in nursing home residents. J Gerontol Nurs 2011;37:42-5. https://doi.org/10.3928/00989134-20100702-03.
- Khambaty T, Callahan CM, Perkins AJ, Stewart JC. Depression and anxiety screens as simultaneous predictors of 10-year incidence of diabetes mellitus in older adults in primary care. J Am Geriatr Soc 2017;65:294-300. https://doi.org/10.1111/jgs.14454.
- Strine TW, Chapman DP, Kobau R, Balluz L. Associations of self-reported anxiety symptoms with health-related quality of life and health behaviors. Soc Psychiatry Psychiatr Epidemiol 2005;40:432-8. https://doi.org/10.1007/s00127-005-0914-1.
- Schiel JE, Spiegelhalder K. Interaction of insomnia in old age and associated diseases: Cognitive, behavioral and neurobiological aspects. Z Gerontol Geriatr 2020;53:112-18. https://doi.org/10.1007/s00391-020-01694-6.
- Gibbons L, Teri L, Logsdon R, McCurry S, Kukull W, Bowden J. Anxiety symptoms as predictors of nursing home placement in patients with Alzheimer’s disease. J Clin Geropsychol 2002;8:335-42. https://doi.org/10.1023/A:1019635525375.
- Brenes GA, Guralnik JM, Williamson JD, Fried LP, Simpson C, Simonsick EM, et al. The influence of anxiety on the progression of disability. J Am Geriatr Soc 2005;53:34-9. https://doi.org/10.1111/j.1532-5415.2005.53007.x.
- Rozzini L, Chilovi BV, Peli M, Conti M, Rozzini R, Trabucchi M, et al. Anxiety symptoms in mild cognitive impairment. Int J Geriatr Psychiatry 2009;24:300-5. https://doi.org/10.1002/gps.2106.
- Meeks S, Looney SW. Depressed nursing home residents’ activity participation and affect as a function of staff engagement. Behav Ther 2011;42:22-9. https://doi.org/10.1016/j.beth.2010.01.004.
- van Dalen-Kok AH, Pieper MJ, de Waal MW, Lukas A, Husebo BS, Achterberg WP. Association between pain, neuropsychiatric symptoms, and physical function in dementia: a systematic review and meta-analysis. BMC Geriatr 2015;15. https://doi.org/10.1186/s12877-015-0048-6.
- Starkstein SE, Jorge R, Petracca G, Robinson RG. The construct of generalized anxiety disorder in Alzheimer disease. Am J Geriatr Psychiatry 2007;15:42-9. https://doi.org/10.1097/01.JGP.0000229664.11306.b9.
- Sinoff G, Werner P. Anxiety disorder and accompanying subjective memory loss in the elderly as a predictor of future cognitive decline. Int J Geriatr Psychiatry 2003;18:951-9. https://doi.org/10.1002/gps.1004.
- Krishnan KR, Delong M, Kraemer H, Carney R, Spiegel D, Gordon C, et al. Comorbidity of depression with other medical diseases in the elderly. Biol Psychiatry 2002;52:559-88. https://doi.org/10.1016/S0006-3223(02)01472-5.
- Lu PH, Edland SD, Teng E, Tingus K, Petersen RC, Cummings JL, Alzheimer’s Disease Cooperative Study Group. Donepezil delays progression to AD in MCI subjects with depressive symptoms. Neurology 2009;72:2115-21. https://doi.org/10.1212/WNL.0b013e3181aa52d3.
- Leontjevas R, van Hooren S, Mulders A. The Montgomery–Åsberg Depression Rating Scale and the Cornell Scale for Depression in Dementia: a validation study with patients exhibiting early-onset dementia. Am J Geriatr Psychiatry 2009;17:56-64. https://doi.org/10.1097/JGP.0b013e31818b4111.
- Lyketsos CG, Steele C, Galik E, Rosenblatt A, Steinberg M, Warren A, et al. Physical aggression in dementia patients and its relationship to depression. Am J Psychiatry 1999;156:66-71. https://doi.org/10.1176/ajp.156.1.66.
- Manthorpe J, Iliffe S. Suicide among older people. Nurs Older People 2006;17:25-9. https://doi.org/10.7748/nop2006.01.17.10.25.c2404.
- Djernes JK. Prevalence and predictors of depression in populations of elderly: a review. Acta Psychiatr Scand 2006;113:372-87. https://doi.org/10.1111/j.1600-0447.2006.00770.x.
- Mykletun A, Bjerkeset O, Overland S, Prince M, Dewey M, Stewart R. Levels of anxiety and depression as predictors of mortality: the HUNT study. Br J Psychiatry 2009;195:118-25. https://doi.org/10.1192/bjp.bp.108.054866.
- Rodda J, Walker Z, Carter J. Depression in older adults. BMJ 2011;343. https://doi.org/10.1136/bmj.d5219.
- Thakur M, Blazer DG. Depression in long-term care. J Am Med Dir Assoc 2008;9:82-7. https://doi.org/10.1016/j.jamda.2007.09.007.
- Leontjevas R, Gerritsen DL, Vernooij-Dassen MJ, Smalbrugge M, Koopmans RT. Comparative validation of proxy-based Montgomery–Åsberg depression rating scale and Cornell scale for depression in dementia in nursing home residents with dementia. Am J Geriatr Psychiatry 2012;20:985-93. https://doi.org/10.1097/JGP.0b013e318233152b.
- Portugal Mda G, Coutinho ES, Almeida C, Barca ML, Knapskog AB, Engedal K, et al. Validation of Montgomery–Åsberg Rating Scale and Cornell Scale for Depression in Dementia in Brazilian elderly patients. Int Psychogeriatr 2012;24:1291-8. https://doi.org/10.1017/S1041610211002250.
- Laks J, Engelhardt E. Peculiarities of geriatric psychiatry: a focus on aging and depression. CNS Neurosci Ther 2010;16:374-9. https://doi.org/10.1111/j.1755-5949.2010.00196.x.
- Chen YH, Lin LC, Watson R. Validating nurses’ and nursing assistants’ report of assessing pain in older people with dementia. J Clin Nurs 2010;19:42-5. https://doi.org/10.1111/j.1365-2702.2009.02950.x.
- Baller M, Boorsma M, Frijters DHM, Van Marwijk HWJ, Nijpels G, Van Hout HPJ. Depression in Dutch homes for the elderly: under-diagnosis in demented residents? Int J Geriatr Psychiatry 2010;25:712-18. https://doi.org/10.1002/gps.2412.
- Davidson S, Koritsas S, O’Connor DW, Clarke D. The feasibility of a GP led screening intervention for depression among nursing home residents. Int J Geriatr Psychiatry 2006;21:1026-30. https://doi.org/10.1002/gps.1601.
- Phillips LJ. Measuring symptoms of depression: comparing the Cornell Scale for Depression in Dementia and the Patient Health Questionnaire-9-Observation Version. Res Gerontol Nurs 2012;5:34-42. https://doi.org/10.3928/19404921-20111206-03.
- Snowdon J, Fleming R. Recognising depression in residential facilities: an Australian challenge. Int J Geriatr Psychiatry 2008;23:295-300. https://doi.org/10.1002/gps.1877.
- Boehlen FH, Freigofas J, Herzog W, Meid AD, Saum KU, Schoettker B, et al. Evidence for underuse and overuse of antidepressants in older adults: results of a large population-based study. Int J Geriatr Psychiatry 2019;34:539-47. https://doi.org/10.1002/gps.5047.
- Chopra MP, Sullivan JR, Feldman Z, Landes RD, Beck C. Self-, collateral- and clinician assessment of depression in persons with cognitive impairment. Aging Ment Health 2008;12:675-83. https://doi.org/10.1080/13607860801972412.
- Li Z, Jeon YH, Low LF, Chenoweth L, O’Connor DW, Beattie E, et al. Validity of the geriatric depression scale and the collateral source version of the geriatric depression scale in nursing homes. Int Psychogeriatr 2015;27:1495-504. https://doi.org/10.1017/S1041610215000721.
- Strober LB, Arnett PA. Assessment of depression in three medically ill, elderly populations: Alzheimer’s disease, Parkinson’s disease, and stroke. Clin Neuropsychol 2009;23:205-30. https://doi.org/10.1080/13854040802003299.
- Chen CY, Liu CY, Liang HY. Comparison of patient and caregiver assessments of depressive symptoms in elderly patients with depression. Psychiatry Res 2009;166:69-75. https://doi.org/10.1016/j.psychres.2007.11.023.
- Luff R, Ferreira Z, Meyer J. Care Homes: Methods Review 8. London: NIHR School for Social Care Research; 2011.
- Schofield P. Pain management of older people in care homes: a pilot study. Br J Nurs 2006;15:509-14. https://doi.org/10.12968/bjon.2006.15.9.21092.
- Beary T. Living with the ‘black dog’: depression in care homes. Nurs Resid Care 2013;15:83-7. https://doi.org/10.12968/nrec.2013.15.2.83.
- Achterberg WP, Gambassi G, Finne-Soveri H, Liperoti R, Noro A, Frijters DHM, et al. Pain in European long-term care facilities: cross-national study in Finland, Italy and the Netherlands. Pain 2010;148:70-4. https://doi.org/10.1016/j.pain.2009.10.008.
- Care Quality Commission (CQC). Key Lines of Enquiry, Prompts and Ratings Characteristics for Adult Social Care Services 2017.
- Care Quality Commission (CQC). How CQC Monitors, Inspects and Regulates Adult Social Care Services 2020.
- Chou SC, Boldy DP, Lee AH. Factors influencing residents’ satisfaction in residential aged care. Gerontologist 2003;43:459-72. https://doi.org/10.1093/geront/43.4.459.
- Lucas JA, Levin CA, Lowe TJ, Robertson B, Akincigil A, Sambamoorthi U, et al. The relationship between organizational factors and resident satisfaction with nursing home care and life. J Aging Soc Policy 2007;19:125-51. https://doi.org/10.1300/J031v19n02_07.
- Netten A, Jones K, Sandhu S. Provider and care workforce influences on quality of home-care services in England. J Aging Soc Policy 2007;19:81-97. https://doi.org/10.1300/J031v19n03_06.
- Skills for Care. The State of the Adult Social Care Sector and Workforce in England 2019.
- National Audit Office. The Adult Social Care Workforce in England 2018.
- Hayes L, Johnson E, Tarrant A. Professionalisation at Work in Adult Social Care. London: GMB Union; 2019.
- Hussein S. ‘We don’t do it for the money’ . . . The scale and reasons of poverty-pay among frontline long-term care workers in England. Health Soc Care Community 2017;25:1817-26. https://doi.org/10.1111/hsc.12455.
- Barron DN, West E. The financial costs of caring in the British labour market: is there a wage penalty for workers in caring occupations? Br J Ind Relations 2013;51:104-23. https://doi.org/10.1111/j.1467-8543.2011.00884.x.
- Castle NG, Engberg J. Nurse aide agency staffing and quality of care in nursing. Med Care Res Rev 2008;65:232-52. https://doi.org/10.1177/1077558707312494.
- Bourbonniere M, Feng Z, Intrator O, Angelelli J, Mor V, Zinn JS. The use of contract licensed nursing staff in U.S. nursing homes. Med Care Res Rev 2006;63:88-109. https://doi.org/10.1177/1077558705283128.
- Spilsbury K, Hewitt C, Stirk L, Bowman C. The relationship between nurse staffing and quality of care in nursing homes: a systematic review. Int J Nurs Stud 2011;48:732-50. https://doi.org/10.1016/j.ijnurstu.2011.02.014.
- Becker GS. Human Capital: A Theoretical and Empirical Analysis, with Special Reference to Education. Chicago, IL: University of Chicago Press; 1993.
- Akerlof GA, Yellen JL. Efficiency Wage Models of the Labor Market. Cambridge: Cambridge University Press; 1987.
- Mukamel DB, Spector WD, Limcangco R, Wang Y, Feng Z, Mor V. The costs of turnover in nursing homes. Med Care 2009;47:1039-45. https://doi.org/10.1097/MLR.0b013e3181a3cc62.
- Castle NG. Use of agency staff in nursing homes. Res Gerontol Nurs 2009;2:192-201. https://doi.org/10.3928/19404921-20090428-01.
- Allan S, Vadean F. The association between staff retention and English care home quality [published online ahead of print January 20 2021]. J Aging Soc Policy 2021. https://doi.org/10.1080/08959420.2020.1851349.
- Netten A, Williams J, Darton R. Care-home closures in England: causes and implications. Ageing Soc 2005;25:319-38. https://doi.org/10.1017/S0144686X04002910.
- Konetzka RT, Stearns SC, Park J. The staffing-outcomes relationship in nursing homes. Health Serv Res 2008;43:1025-42. https://doi.org/10.1111/j.1475-6773.2007.00803.x.
- Akosa Antwi Y, Bowblis JR. The impact of nurse turnover on quality of care and mortality in nursing homes: evidence from the great recession. Am J Heal Econ 2018;4:131-63. https://doi.org/10.1162/ajhe_a_00096.
- McColl E. Cognitive interviewing. a tool for improving questionnaire design. Qual Life Res 2006;15:571-3. https://doi.org/10.1007/s11136-005-5263-8.
- Smith N, Towers AM, Collins G, Palmer S, Allan S, Beecham J. Encouraging managers of care homes for older adults to participate in research. Qual Ageing Older Adults 2019;20:120-9. https://doi.org/10.1108/QAOA-04-2019-0017.
- Smith N, Towers AM, Palmer S, Collins G. Quality of life in older adult care homes: comparing office hours with out-of-office hours. J Long Term Care 2019:153-63. https://doi.org/10.31389/jltc.29.
- Great Britain. Mental Capacity Act 2005. 2005.
- Department for Constitutional Affairs. Code of Practice 2007.
- Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al. Cochrane Handbook for Systematic Reviews of Interventions. Chichester: John Wiley & Sons, Inc.; 2019.
- Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci 2010;5. https://doi.org/10.1186/1748-5908-5-56.
- Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al. A scoping review of rapid review methods. BMC Med 2015;13. https://doi.org/10.1186/s12916-015-0465-6.
- Moher D, Liberati A, Tetzlaff J, Altman DG. PRISMA Group . Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLOS Med 2009;6. https://doi.org/10.1371/journal.pmed.1000097.
- Hirdes JP, Ljunggren G, Morris JN, Frijters DH, Finne Soveri H, Gray L, et al. Reliability of the interRAI suite of assessment instruments: a 12-country study of an integrated health information system. BMC Health Serv Res 2008;8. https://doi.org/10.1186/1472-6963-8-277.
- Warden V, Hurley AC, Volicer L. Development and psychometric evaluation of the Pain Assessment in Advanced Dementia (PAINAD) scale. J Am Med Dir Assoc 2003;4:9-15. https://doi.org/10.1097/01.JAM.0000043422.31640.F7.
- Feldt KS. The checklist of nonverbal pain indicators (CNPI). Pain Manag Nurs 2000;1:13-21. https://doi.org/10.1053/jpmn.2000.5831.
- Bieri D, Reeve RA, Champion DG, Addicoat L, Ziegler JB. The Faces Pain Scale for the self-assessment of the severity of pain experienced by children: development, initial validation, and preliminary investigation for ratio scale properties. Pain 1990;41:139-50. https://doi.org/10.1016/0304-3959(90)90018-9.
- Jensen MP, Miller L, Fisher LD. Assessment of pain during medical procedures: a comparison of three scales. Clin J Pain 1998;14:343-9. https://doi.org/10.1097/00002508-199812000-00012.
- Herr K, Spratt KF, Garand L, Li L. Evaluation of the Iowa pain thermometer and other selected pain intensity scales in younger and older adult cohorts using controlled clinical pain: a preliminary study. Pain Med 2007;8:585-600. https://doi.org/10.1111/j.1526-4637.2007.00316.x.
- Melzack R. The McGill Pain Questionnaire: major properties and scoring methods. Pain 1975;1:277-99. https://doi.org/10.1016/0304-3959(75)90044-5.
- Wood BM, Nicholas MK, Blyth F, Asghari A, Gibson S. Assessing pain in older people with persistent pain: the NRS is valid but only provides part of the picture. J Pain 2010;11:1259-66. https://doi.org/10.1016/j.jpain.2010.02.025.
- McGrath PA, Seifert CE, Speechley KN, Booth JC, Stitt L, Gibson MC. A new analogue scale for assessing children’s pain: an initial validation study. Pain 1996;64:435-43. https://doi.org/10.1016/0304-3959(95)00171-9.
- Ware LJ, Epps CD, Herr K, Packard A. Evaluation of the Revised Faces Pain Scale, Verbal Descriptor Scale, Numeric Rating Scale, and Iowa Pain Thermometer in older minority adults. Pain Manag Nurs 2006;7:117-25. https://doi.org/10.1016/j.pmn.2006.06.005.
- Hicks CL, von Baeyer CL, Spafford PA, van Korlaar I, Goodenough B. The Faces Pain Scale-Revised: toward a common metric in pediatric pain measurement. Pain 2001;93:173-83. https://doi.org/10.1016/S0304-3959(01)00314-1.
- Hadjistavropoulos T, Herr K, Prkachin KM, Craig KD, Gibson SJ, Lukas A, et al. Pain assessment in elderly adults with dementia. Lancet Neurol 2014;13:1216-27. https://doi.org/10.1016/S1474-4422(14)70103-6.
- Ferrell BA, Ferrell BR, Rivera L. Pain in cognitively impaired nursing home patients. J Pain Symptom Manage 1995;10:591-8. https://doi.org/10.1016/0885-3924(95)00121-2.
- Gagliese L, Melzack R, Wall PD, Melzack R. Textbook of Pain. Edinburgh: Churchill Livingstone; 1999.
- Wynne CF, Ling SM, Remsburg R. Comparison of pain assessment instruments in cognitively intact and cognitively impaired nursing home residents. Geriatr Nurs 2000;21:20-3. https://doi.org/10.1067/mgn.2000.105793.
- Beuscher L, Grando V. Challenges in conducting qualitative research with persons with dementia. Res Gerontol Nurs 2009;2:6-11. https://doi.org/10.3928/19404921-20090101-04.
- Liu JY, Briggs M, Closs SJ. The psychometric qualities of four observational pain tools (OPTs) for the assessment of pain in elderly people with osteoarthritic pain. J Pain Symptom Manage 2010;40:582-98. https://doi.org/10.1016/j.jpainsymman.2010.02.022.
- Lukas A, Barber JB, Johnson P, Gibson SJ. Observer-rated pain assessment instruments improve both the detection of pain and the evaluation of pain intensity in people with dementia. Eur J Pain 2013;17:1558-68. https://doi.org/10.1002/j.1532-2149.2013.00336.x.
- Jones KR, Fink R, Hutt E, Vojir C, Pepper GA, Scott-Cawiezell J, et al. Measuring pain intensity in nursing home residents. J Pain Symptom Manage 2005;30:519-27. https://doi.org/10.1016/j.jpainsymman.2005.05.020.
- Abbey J, Piller N, De Bellis A, Esterman A, Parker D, Giles L, et al. The Abbey Pain Scale: a 1-minute numerical indicator for people with end-stage dementia. Int J Palliat Nurs 2004;10:6-13. https://doi.org/10.12968/ijpn.2004.10.1.12013.
- Hurley AC, Volicer BJ, Hanrahan PA, Houde S, Volicer L. Assessment of discomfort in advanced Alzheimer patients. Res Nurs Health 1992;15:369-77. https://doi.org/10.1002/nur.4770150506.
- Husebo BS, Strand LI, Moe-Nilssen R, Husebo SB, Snow AL, Ljunggren AE. Mobilization–Observation–Behavior–Intensity–Dementia Pain Scale (MOBID): development and validation of a nurse-administered pain assessment tool for use in dementia. J Pain Symptom Manage 2007;34:67-80. https://doi.org/10.1016/j.jpainsymman.2006.10.016.
- Lefebvre-Chapiro S. The DOLOPLUS 2 scale – evaluating pain in the elderly. Eur J Palliat Care 2001;8:191-4.
- Fuchs-Lacelle S, Hadjistavropoulos T. Development and preliminary validation of the pain assessment checklist for seniors with limited ability to communicate (PACSLAC). Pain Manag Nurs 2004;5:37-49. https://doi.org/10.1016/j.pmn.2003.10.001.
- Snow AL, Weber JB, O’Malley KJ, Cody M, Beck C, Bruera E, et al. NOPPAIN: a nursing assistant-administered pain assessment instrument for use in dementia. Dement Geriatr Cogn Disord 2004;17:240-6. https://doi.org/10.1159/000076446.
- McAuliffe L, Brown D, Fetherstonhaugh D. Pain and dementia: an overview of the literature. Int J Older People Nurs 2012;7:219-26. https://doi.org/10.1111/j.1748-3743.2012.00331.x.
- Husebo BS, Strand LI, Moe-Nilssen R, Husebo SB, Ljunggren AE. Pain behaviour and pain intensity in older persons with severe dementia: reliability of the MOBID Pain Scale by video uptake. Scand J Caring Sci 2009;23:180-9. https://doi.org/10.1111/j.1471-6712.2008.00606.x.
- Lukas A, Niederecker T, Günther I, Mayer B, Nikolaus T. Self- and proxy report for the assessment of pain in patients with and without cognitive impairment: experiences gained in a geriatric hospital. Z Gerontol Geriatr 2013;46:214-21. https://doi.org/10.1007/s00391-013-0475-y.
- Burrows AB, Morris JN, Simon SE, Hirdes JP, Phillips C. Development of a minimum data set-based depression rating scale for use in nursing homes. Age Ageing 2000;29:165-72. https://doi.org/10.1093/ageing/29.2.165.
- Sunderland T, Minichiello M. Dementia Mood Assessment Scale. Int Psychogeriatr 1996;8:329-31. https://doi.org/10.1017/s1041610297003578.
- Sunderland T, Alterman IS, Yount D, Hill JL, Tariot PN, Newhouse PA, et al. A new scale for the assessment of depressed mood in demented patients. Am J Psychiatry 1988;145:955-9. https://doi.org/10.1176/ajp.145.8.955.
- Hamilton M. A rating scale for depression. J Neurol Neurosurg Psychiatry 1960;23:56-62. https://doi.org/10.1136/jnnp.23.1.56.
- Riskind JH, Beck AT, Brown G, Steer RA. Taking the measure of anxiety and depression. Validity of the reconstructed Hamilton scales. J Nerv Ment Dis 1987;175:474-9. https://doi.org/10.1097/00005053-198708000-00005.
- Snaith RP, Zigmond AS. The hospital anxiety and depression scale. Br Med J 1986;292. https://doi.org/10.1136/bmj.292.6516.344.
- Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatr Scand 1983;67:361-70. https://doi.org/10.1111/j.1600-0447.1983.tb09716.x.
- Farner L, Wagle J, Flekkøy K, Wyller TB, Fure B, Stensrød B, et al. Factor analysis of the Montgomery Åsberg Depression Rating Scale in an elderly stroke population. Int J Geriatr Psychiatry 2009;24:1209-16. https://doi.org/10.1002/gps.2247.
- Montgomery SA, Åsberg M. A new depression scale designed to be sensitive to change. Br J Psychiatry 1979;134:382-9. https://doi.org/10.1192/bjp.134.4.382.
- Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001;16:606-13. https://doi.org/10.1046/j.1525-1497.2001.016009606.x.
- Kroenke K, Spitzer RL, Williams JB, Löwe B. The Patient Health Questionnaire Somatic, Anxiety, and Depressive Symptom Scales: a systematic review. Gen Hosp Psychiatry 2010;32:345-59. https://doi.org/10.1016/j.genhosppsych.2010.03.006.
- Saliba D, Buchanan J. Development & Validation of a Revised Nursing Home Assessment Tool: MDS 3.0. Santa Monica, CA: Rand Health Corporation; 2008.
- Evans ME. Development and validation of a brief screening scale for depression in the elderly physically ill. Int Clin Psychopharmacol 1993;8:329-31. https://doi.org/10.1097/00004850-199300840-00022.
- Yesavage JA. Geriatric Depression Scale. Psychopharmacol Bull 1988;24:709-11.
- Yesavage JA, Brink TL, Rose TL, Lum O, Huang V, Adey M, et al. Development and validation of a geriatric depression screening scale: a preliminary report. J Psychiatr Res 1982;17:37-49. https://doi.org/10.1016/0022-3956(82)90033-4.
- Jongenelis K, Gerritsen DL, Pot AM, Beekman AT, Eisses AM, Kluiter H, et al. Construction and validation of a patient- and user-friendly nursing home version of the Geriatric Depression Scale. Int J Geriatr Psychiatry 2007;22:837-42. https://doi.org/10.1002/gps.1748.
- Abrams RC, Alexopoulos GS. Assessment of depression in dementia. Alzheimer Dis Assoc Disord 1994;8:S227-9.
- Brown T, DiNardo P, Barlow D. Anxiety Disorders Interview Schedule for DSM-IV. San Antonio, TX: Psychological Corporation; 1994.
- Hamilton M. The assessment of anxiety states by rating. Br J Med Psychol 1959;32:50-5. https://doi.org/10.1111/j.2044-8341.1959.tb00467.x.
- Segal DL, June A, Payne M, Coolidge FL, Yochim B. Development and initial validation of a self-report assessment tool for anxiety among older adults: the Geriatric Anxiety Scale. J Anxiety Disord 2010;24:709-14. https://doi.org/10.1016/j.janxdis.2010.05.002.
- Mueller AE, Segal DL, Gavett B, Marty MA, Yochim B, June A, et al. Geriatric Anxiety Scale: item response theory analysis, differential item functioning, and creation of a ten-item short form (GAS-10). Int Psychogeriatr 2015;27:1099-111. https://doi.org/10.1017/S1041610214000210.
- Pachana NA, Byrne GJ, Siddle H, Koloski N, Harley E, Arnold E. Development and validation of the Geriatric Anxiety Inventory. Int Psychogeriatr 2007;19:103-14. https://doi.org/10.1017/S1041610206003504.
- Smalbrugge M, Jongenelis L, Pot AM, Beekman ATF, Eefsting JA. Screening for depression and assessing change in severity of depression. Is the Geriatric Depression Scale (30-, 15- and 8-item versions) useful for both purposes in nursing home patients? Aging Ment Health 2008;12:244-8. https://doi.org/10.1080/13607860801987238.
- Beck AT, Ward CH, Mendelson M, Mock J, Erbaugh J. An inventory for measuring depression. Arch Gen Psychiatry 1961;4:561-71. https://doi.org/10.1001/archpsyc.1961.01710120031004.
- Beck AT, Steer RA, Ball R, Ranieri W. Comparison of Beck Depression Inventories -IA and -II in psychiatric outpatients. J Pers Assess 1996;67:588-97. https://doi.org/10.1207/s15327752jpa6703_13.
- Spielberger C, Gorush R, Lushene R, Vagg G. Manual for the State Trait Anxiety Inventory STAI (Form Y): Self Evaluation Questionnaire. Palo Alto, CA: Consulting Psychologists Press; 1983.
- Radloff L. The CES-D Scale: A self-report depression scale for research in the general population. Appl Psychol Meas 1977;1:385-401. https://doi.org/10.1177/014662167700100306.
- Lee AE, Chokkanathan S. Factor structure of the 10-item CES-D scale among community dwelling older adults in Singapore. Int J Geriatr Psychiatry 2008;23:592-7. https://doi.org/10.1002/gps.1944.
- Towers A, Nelson K, Smith N, Razik K. Integrating ASCOT in care planning. Aust J Dement Care 2018;7:31-5.
- EuroQol Group. EQ-5D™ Health Questionnaire 2009. https://euroqol.org/support/how-to-obtain-eq-5d/ (accessed 12 February 2018).
- Holder J, Smith N. Outcomes and Quality for Social Care Services for Carers: Kent County Council Carers Survey Development Project 2007–8. Canterbury; 2009.
- Towers AM, Holder J, Smith N, Crowther T, Netten A, Welch E, et al. Adapting the adult social care outcomes toolkit (ASCOT) for use in care home quality monitoring: conceptual development and testing. BMC Health Serv Res 2015;15. https://doi.org/10.1186/s12913-015-0942-9.
- O’Brien K, Morgan D. Successful Focus Groups: Advancing the State of the Art. London: SAGE Publications Ltd; 1993.
- Finch H, Lewis J, Richie J, Lewis J. Qualitative Research Practice: A Guide for Social Science Students and Researchers. London: SAGE Publications Ltd; 2003.
- Krueger RA, Casey MA. Focus Groups. A Practical Guide for Applied Research. Thousand Oaks, CA: SAGE Publications Ltd; 2000.
- Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3:77-101. https://doi.org/10.1191/1478088706qp063oa.
- Haeger H, Lambert AD, Kinzie J, Gieser J. Cognitive interviews to improve survey instruments. Annu Forum Assoc Institutional Res 2012.
- Willis GB, Artino AR. What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ 2013;5:353-6. https://doi.org/10.4300/JGME-D-13-00154.1.
- Willis G. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: SAGE Publications; 2005.
- Caiels J, Forder J, Malley J, Netten A, Windle K. Measuring the Outcomes of Low-Level Services: Final Report 2010.
- Phillipson L, Smith L, Caiels J, Towers AM, Jenkins S. A cohesive research approach to assess care-related quality of life: lessons learned from adapting an easy read survey with older service users with cognitive impairment. Int J Qual Methods 2019;18:1-13. https://doi.org/10.1177/1609406919854961.
- Smith N, Holder J. Measuring Outcomes for Carers n.d.
- Turnpenny A, Caiels J, Whelton B, Richardson L, Beadle-Brown J, Crowther T, et al. Developing an easy read version of the Adult Social Care Outcomes Toolkit (ASCOT). J Appl Res Intellect Disabil 2018;31:e36-e48. https://doi.org/10.1111/jar.12294.
- Jabine TB, Straf ML, Tanur JM, Tourangeau R. Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines. Washington, DC: National Academies Press; 1984.
- D’Ardenne J, Collins D. Cognitive Interviewing Practice. London: SAGE Publications Ltd; 2015.
- Knafl K, Deatrick J, Gallo A, Holcombe G, Bakitas M, Dixon J, et al. The analysis and interpretation of cognitive interviews for instrument development. Res Nurs Health 2007;30:224-34. https://doi.org/10.1002/nur.20195.
- Blair J, Brick PD. Methods for the Analysis of Cognitive Interviews n.d.
- Morris JN, Fries BE, Mehr DR, Hawes C, Phillips C, Mor V, et al. MDS Cognitive Performance Scale. J Gerontol 1994;49:M174-82. https://doi.org/10.1093/geronj/49.4.m174.
- Fries BE, Simon SE, Morris JN, Flodstrom C, Bookstein FL. Pain in U.S. nursing homes: validating a pain scale for the minimum data set. Gerontologist 2001;41:173-9. https://doi.org/10.1093/geront/41.2.173.
- Kroenke K, Spitzer RL, Williams JB, Monahan PO, Löwe B. Anxiety disorders in primary care: prevalence, impairment, comorbidity, and detection. Ann Intern Med 2007;146:317-25. https://doi.org/10.7326/0003-4819-146-5-200703060-00004.
- Herdman M, Gudex C, Lloyd A, Janssen M, Kind P, Parkin D, et al. Development and preliminary testing of the new five-level version of EQ-5D (EQ-5D-5L). Qual Life Res 2011;20:1727-36. https://doi.org/10.1007/s11136-011-9903-x.
- Hegeman JM, De Waal MWM, Comijs HC, Kok RM, Van Der Mast RC. Depression in later life: a more somatic presentation? J Affect Disord 2015;170:196-202. https://doi.org/10.1016/j.jad.2014.08.032.
- Hutcheson G. The Multivariate Social Scientist. London: SAGE Publications Ltd; 1999.
- Tennant A, Conaghan PG. The Rasch measurement model in rheumatology: what is it and why use it? When should it be applied, and what should one look for in a Rasch paper? Arthritis Care Res 2007;57:1358-62. https://doi.org/10.1002/art.23108.
- Tennant A, McKenna SP, Hagell P. Application of Rasch analysis in the development and application of quality of life instruments. Value Health 2004;7:22-6. https://doi.org/10.1111/j.1524-4733.2004.7s106.x.
- Tabachnick BG, Fidell LS. Using Multivariate Statistics. Boston, MA: Pearson/Allyn & Bacon; 2007.
- Linacre JM. Sample size and item calibration stability. Rasch Meas Trans 1994;7.
- Rasch G. Probabilistic Models for Some Intelligence and Attainment Tests. Chicago, IL: University of Chicago; 1960.
- Rosenbaum PR. Criterion-related construct validity. Psychometrika 1989;54:625-33. https://doi.org/10.1007/BF02296400.
- Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res 2010;19:539-49. https://doi.org/10.1007/s11136-010-9606-8.
- Masters GN. A Rasch model for partial credit scoring. Psychometrika 1982;47:149-74. https://doi.org/10.1007/BF02296272.
- Andrich D. A rating formulation for ordered response categories. Psychometrika 1978;43:561-73. https://doi.org/10.1007/BF02293814.
- Bond T, Fox C. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Lawrence Erlbaum Associates; 2007.
- Wright B, Masters GN. Rating Scale Analysis: Rasch Measurement. Chicago, IL: Mesa; 1982.
- Smith AB, Rush R, Fallowfield LJ, Velikova G, Sharpe M. Rasch fit statistics and sample size considerations for polytomous data. BMC Med Res Methodol 2008;8. https://doi.org/10.1186/1471-2288-8-33.
- Smith RM, Maio CY, Wilson M. Objective Measurement: Theory into Practice Volume 2. Greenwich: Ablex; 1994.
- Raîche G. Critical eigenvalue sizes in standardized residual principal components analysis. Rasch Meas Trans 2005;19.
- Linacre JM. A Users’ Guide to Winsteps Ministep Rasch Model Computer Programs: Program Manual 4.5.1. n.d. www.winsteps.com/winman/copyright.htm.
- Mantel N, Haenszel W. Statistical aspects of the analysis of data from retrospective studies of disease. J Natl Cancer Inst 1959;22:719-48.
- Mantel N. Chi-square tests with one degree of freedom: extensions of the Mantel-Haenszel procedure. J Am Stat Assoc 1963;58:690-700. https://doi.org/10.1080/01621459.1963.10500879.
- Cortina JM. What is coefficient alpha? An examination of theory and applications. J Appl Psychol 1993;78:98-104. https://doi.org/10.1037/0021-9010.78.1.98.
- Hair JF, Tatham RL, Anderson RE, Black W. Multivariate Data Analysis. Upper Saddle River, NJ: Prentice Hall; 1998.
- Forder J, Vadean F, Rand S, Malley J. The impact of long-term care on quality of life. Health Econ 2018;27:e43-e58. https://doi.org/10.1002/hec.3612.
- Darton R, Netten A, Forder J. The cost implications of the changing population and characteristics of care homes. Int J Geriatr Psychiatry 2003;18:236-43. https://doi.org/10.1002/gps.815.
- West E, McGovern P, Chandler V, Banaszak-Holl J, Barron D, Docking RE, et al. International Handbook of Positive Aging. Abingdon: Taylor & Francis; 2017.
- Malley JN, Towers AM, Netten AP, Brazier JE, Forder JE, Flynn T. An assessment of the construct validity of the ASCOT measure of social care-related quality of life with older people. Health Qual Life Outcomes 2012;10. https://doi.org/10.1186/1477-7525-10-21.
- Netten A, Forder J. Measuring productivity: an approach to measuring quality weighted outputs in social care. Public Money Manag 2010;30:159-66. https://doi.org/10.1080/09540961003794295.
- Allan S, Gousia K, Forder J. Exploring differences between private and public prices in the English care homes market. Health Econ Policy Law 2020;16:138-53. https://doi.org/10.1017/S1744133120000018.
- Skills for Care. Adult Social Care Workforce Data Set (ASC-WDS) 2021.
- Office for National Statistics. Official Labour Market Statistics n.d. www.nomisweb.co.uk/ (accessed 7 September 2021).
- Her Majesty’s Land Registry. Price Paid Data 2021. www.gov.uk/guidance/about-the-price-paid-data (accessed 7 September 2021).
- Care Quality Commission . Care Directory With Filters 2021. www.cqc.org.uk/about-us/transparency/using-cqc-data (accessed 7 September 2021).
- White IR, Royston P, Wood AM. Multiple imputation using chained equations: issues and guidance for practice. Stat Med 2011;30:377-99. https://doi.org/10.1002/sim.4067.
- Machin S, Wilson J. Minimum wages in a low-wage labour market: care homes in the UK. Econ J 2004;114:C102-9. https://doi.org/10.1111/j.0013-0133.2003.00199.x.
- Wooldridge JM. Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: The MIT Press; 2010.
- Greene WH. Econometric Analysis. Boston, MA: Pearson; 2018.
- Wilson CB, Davies S. Developing relationships in long term care environments: the contribution of staff. J Clin Nurs 2009;18:1746-55. https://doi.org/10.1111/j.1365-2702.2008.02748.x.
- Brown Wilson C, Davies S, Nolan M. Developing personal relationships in care homes: realising the contributions of staff, residents and family members. Ageing Soc 2009;29:1041-63. https://doi.org/10.1017/S0144686X0900840X.
- Devi R, Goodman C, Dalkin S, Bate A, Wright J, Jones L, et al. Attracting, recruiting and retaining nurses and care workers working in care homes: the need for a nuanced understanding informed by evidence and theory. Age Ageing 2021;50:65-7. https://doi.org/10.1093/ageing/afaa109.
- Vadean F, Saloniki E. Determinants of Staff Turnover and Vacancies in the Social Care Sector in England 2020.
- Forder J, Allan S. The impact of competition on quality and prices in the English care homes market. J Health Econ 2014;34:73-8. https://doi.org/10.1016/j.jhealeco.2013.11.010.
- Allan S, Forder J. The determinants of care home closure. Health Econ 2015;24:132-45. https://doi.org/10.1002/hec.3149.
- Competition and Markets Authority. Care Homes Market Study: Final Report 2017.
- Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, et al. A systematic review of the impact of patient and public involvement on service users, researchers and communities. Patient 2014;7:387-95. https://doi.org/10.1007/s40271-014-0065-0.
- Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, et al. Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect 2014;17:637-50. https://doi.org/10.1111/j.1369-7625.2012.00795.x.
- National Institute for Health Research (NIHR). Going the Extra Mile: Involving People in Healthcare Policy and Practice 2015.
- INVOLVE. Briefing Notes for Researchers: Public Involvement in NHS, Public Health and Social Care Research 2012.
- Beresford P. Developing the theoretical basis for service user/survivor-led research and equal involvement in research. Epidemiol Psichiatr Soc 2005;14:4-9. https://doi.org/10.1017/s1121189x0000186x.
- Staley K, Barron D. Learning as an outcome of involvement in research: what are the implications for practice, reporting and evaluation?. Res Involv Engagem 2019;5:1-9. https://doi.org/10.1186/s40900-019-0147-1.
- Marks S, Mathie E, Smiddy J, Jones J, Da Silva-Gane M. Reflections and experiences of a coresearcher involved in a renal research study. Res Involv Engagem 2018;4:1-10. https://doi.org/10.1186/s40900-018-0120-4.
- Bindels J, Cox K, Widdershoven G, van Schayck OC, Abma TA. Care for community-dwelling frail older people: a practice nurse perspective. J Clin Nurs 2014;23:2313-22. https://doi.org/10.1111/jocn.12513.
- INVOLVE. Briefing Notes for Researchers: Involving the Public in NHS, Public Health and Social Care Research 2012.
- Vogsen M, Geneser S, Rasmussen ML, Hørder M, Hildebrandt MG. Learning from patient involvement in a clinical study analyzing PET/CT in women with advanced breast cancer. Res Involv Engagem 2020;6. https://doi.org/10.1186/s40900-019-0174-y.
- Dawson S, Ruddock A, Parmar V, Morris R, Cheraghi-Sohi S, Giles S, et al. Patient and public involvement in doctoral research: reflections and experiences of the PPI contributors and researcher. Res Involv Engagem 2020;6. https://doi.org/10.1186/s40900-020-00201-w.
- National Institute for Health Research (NIHR). Health Services and Delivery Research Programme: Commissioning Brief 15/144 – Improving the Quality of Care in Care Homes by Care Home Staff, Closing Date: 17 December 2015. 2015.
- Fredriksson M, Tritter JQ. Disentangling patient and public involvement in healthcare decisions: why the difference matters. Sociol Health Illn 2017;39:95-111. https://doi.org/10.1111/1467-9566.12483.
- Backhouse T, Kenkmann A, Lane K, Penhale B, Poland F, Killett A. Older care-home residents as collaborators or advisors in research: a systematic review. Age Ageing 2016;45:337-45. https://doi.org/10.1093/ageing/afv201.
- Waite J, Poland F, Charlesworth G. Facilitators and barriers to co-research by people with dementia and academic researchers: findings from a qualitative study. Health Expect 2019;22:761-71. https://doi.org/10.1111/hex.12891.
- Littlechild R, Tanner D, Hall K. Co-research with older people: perspectives on impact. Qual Soc Work 2015;14:18-35. https://doi.org/10.1177/1473325014556791.
- Tanner D. Co-research with older people with dementia: experience and reflections. J Ment Health 2012;21:296-30. https://doi.org/10.3109/09638237.2011.651658.
- Ives J, Damery S, Redwod S. PPI, paradoxes and Plato: who’s sailing the ship?. J Med Ethics 2012;39:186-7. https://doi.org/10.1136/medethics-2011-100150.
- Angrist JD, Pischke J-S. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press; 2009.
- Hanratty B, Burton JK, Goodman C, Gordon AL, Spilsbury K. Covid-19 and lack of linked datasets for care homes. BMJ 2020;369. https://doi.org/10.1136/bmj.m2463.
- Musa MK, Akdur G, Hanratty B, Kelly S, Gordon A, Peryer G, et al. Uptake and use of a minimum data set (MDS) for older people living and dying in care homes in England: a realist review protocol. BMJ Open 2020;10. https://doi.org/10.1136/bmjopen-2020-040397.
- Caiels J, Rand S, Crowther T, Collins G, Forder J. Exploring the views of being a proxy from the perspective of unpaid carers and paid carers: developing a proxy version of the Adult Social Care Outcomes Toolkit (ASCOT). BMC Health Serv Res 2019;19. https://doi.org/10.1186/s12913-019-4025-1.
- Netten A, Beadle-Brown J, Trukeschitz B, Towers A-M, Welch E, Forder J, et al. Measuring the Outcomes of Care Homes: Final Report ‘Measuring Outcomes for Public Service Users’ Project 2010. www.pssru.ac.uk/pub/dp2696_2.pdf (accessed October 2021).
- Brooker D. Dementia Care Mapping (DCM): changing and evaluating the culture of care for people with dementia. Gerontologist 2005;45:11-8. https://doi.org/10.1093/geront/45.suppl_1.11.
- Ni Thuathail A, Welford C. Pain assessment tools for older people with cognitive impairment. Nurs Stand 2011;26:39-46. https://doi.org/10.7748/ns2011.10.26.6.39.c8756.
- MacFarlane S, Charlton K, Ferguson A, Barlogie J, Lynch P, McDonell L, et al. Difficulties in recruiting frail older inpatients to intervention studies. Nutr Diet 2016;73:348-55. https://doi.org/10.1111/1747-0080.12229.
- Somes J, Donatelli NS. Pain assessment in the cognitively impaired or demented older adult. J Emerg Nurs 2013;39:164-7. https://doi.org/10.1016/j.jen.2012.11.012.
- Williams A, Ackroyd R. Identifying and managing pain for patients with advanced dementia. GM 2017;47:35-8.
- Inelmen EM, Mosele M, Sergi G, Toffanello ED, Coin A, Manzato E. Chronic pain in the elderly with advanced dementia. Are we doing our best for their suffering?. Aging Clin Exp Res 2012;24:207-12.
- Scherder EJ, Plooij B. Assessment and management of pain, with particular emphasis on central neuropathic pain, in moderate to severe dementia. Drugs Aging 2012;29:701-6. https://doi.org/10.1007/s40266-012-0001-8.
- Herr K, Bursch H, Ersek M, Miller LL, Swafford K. Use of pain-behavioral assessment tools in the nursing home: expert consensus recommendations for practice. J Gerontol Nurs 2010;36:18-29. https://doi.org/10.3928/00989134-20100108-04.
- Gloth FM, Scheve AA, Stober CV, Chow S, Prosser J. The Functional Pain Scale: reliability, validity, and responsiveness in an elderly population. J Am Med Dir Assoc 2001;2:110-14. https://doi.org/10.1016/S1525-8610(04)70176-0.
- Landi F, Tua E, Onder G, Carrara B, Sgadari A, Rinaldi C, et al. Minimum data set for home care: a valid instrument to assess frail older people living in the community. Med Care 2000;38:1184-90. https://doi.org/10.1097/00005650-200012000-00005.
- Flaherty E. Try this: best practices in nursing care to older adults. Pain assessment for older adults. Am J Nurs 2008;108:45-6. https://doi.org/10.1097/01.NAJ.0000324375.02027.9f.
- Klein DG, Dumpe M, Katz E, Bena J. Pain assessment in the intensive care unit: development and psychometric testing of the nonverbal pain assessment tool. Heart Lung 2010;39:521-8. https://doi.org/10.1016/j.hrtlng.2010.05.053.
- Booker SS, Herr K. The state-of-‘cultural validity’ of self-report pain assessment tools in diverse older adults. Pain Med 2015;16:232-9. https://doi.org/10.1111/pme.12496.
- Ngu SS, Tan MP, Subramanian P, Abdul Rahman R, Kamaruzzaman S, Chin AV, et al. Pain assessment using self-reported, nurse-reported, and observational pain assessment tools among older individuals with cognitive impairment. Pain Manag Nurs 2015;16:595-601. https://doi.org/10.1016/j.pmn.2014.12.002.
- Teske K, Daut RL, Cleeland CS. Relationships between nurses’ observations and patients’ self-reports of pain. Pain 1983;16:289-96. https://doi.org/10.1016/0304-3959(83)90117-3.
- Fuchs-Lacelle S, Hadjistavropoulos T, Lix L. Pain assessment as intervention: a study of older adults with severe dementia. Clin J Pain 2008;24:697-70. https://doi.org/10.1097/AJP.0b013e318172625a.
- Villanueva MR, Smith TL, Erickson JS, Lee AC, Singer CM. Pain Assessment for the Dementing Elderly (PADE): reliability and validity of a new measure. J Am Med Dir Assoc 2003;4:1-8. https://doi.org/10.1097/01.JAM.0000043419.51772.A3.
- Fry M, Arendts G, Chenoweth L. Emergency nurses’ evaluation of observational pain assessment tools for older people with cognitive impairment. J Clin Nurs 2017;26:1281-90. https://doi.org/10.1111/jocn.13591.
- Cohen-Mansfield J. Pain Assessment in Noncommunicative Elderly persons – PAINE. Clin J Pain 2006;22:569-75. https://doi.org/10.1097/01.ajp.0000210899.83096.0b.
- Flaherty E, Boltz M, Greenberg SA. Try This: Best Practices in Nursing Care to Older Adults. New York, NY: Hartford Institute for Geriatric Nursing, New York University, College of Nursing; 2007.
- Herr KA, Spratt K, Mobily PR, Richardson G. Pain intensity assessment in older adults: use of experimental pain to compare psychometric properties and usability of selected pain scales with younger adults. Clin J Pain 2004;20:207-19. https://doi.org/10.1097/00002508-200407000-00002.
- McCormack HM, Horne DJ, Sheather S. Clinical applications of visual analogue scales: a critical review. Psychol Med 1988;18:1007-19. https://doi.org/10.1017/s0033291700009934.
- Diefenbach GJ, Tolin DF, Meunier SA, Gilliam CM. Assessment of anxiety in older home care recipients. Gerontologist 2009;49:141-53. https://doi.org/10.1093/geront/gnp019.
- Gould CE, Segal DL, Yochim BP, Pachana NA, Byrne GJ, Beaudreau SA. Measuring anxiety in late life: a psychometric examination of the geriatric anxiety inventory and geriatric anxiety scale. J Anxiety Disord 2014;28:804-11. https://doi.org/10.1016/j.janxdis.2014.08.001.
- Beck AT, Epstein N, Brown G, Steer RA. An inventory for measuring clinical anxiety: psychometric properties. J Consult Clin Psychol 1988;56:893-7. https://doi.org/10.1037//0022-006x.56.6.893.
- Beck A, Steer R. Beck Anxiety Inventory Manual. San Antonio, TX: Psychological Corporation; 1993.
- Bentz BG, Hall JR. Assessment of depression in a geriatric inpatient cohort: a comparison of the BDI and GDS. Int J Clin Health Psychol 2008;8:93-104.
- Smarr KL, Keefer AL. Measures of depression and depressive symptoms: Beck Depression Inventory-II (BDI-II), Center for Epidemiologic Studies Depression Scale (CES-D), Geriatric Depression Scale (GDS), Hospital Anxiety and Depression Scale (HADS), and Patient Health Questionnaire-9 (PHQ-9). Arthritis Care Res 2011;63:454-66. https://doi.org/10.1002/acr.20556.
- Gladstone GL, Parker GB, Mitchell PB, Malhi GS, Wilhelm KA, Austin MP. A Brief Measure of Worry Severity (BMWS): personality and clinical correlates of severe worriers. J Anxiety Disord 2005;19:877-92. https://doi.org/10.1016/j.janxdis.2004.11.003.
- Alexopoulos GS, Abrams RC, Young RC, Shamoian CA. Cornell Scale for Depression in Dementia. Biol Psychiatry 1988;23:271-84. https://doi.org/10.1016/0006-3223(88)90038-8.
- Newman MG, Zuellig AR, Kachin KE, Constantino MJ, Przeworski A, Erickson T, et al. Preliminary reliability and validity of the generalized anxiety disorder questionnaire-IV: a revised self-report diagnostic measure of generalized anxiety disorder. Behav Ther 2002;33:215-33. https://doi.org/10.1016/S0005-7894(02)80026-0.
- Shear K, Belnap BH, Mazumdar S, Houck P, Rollman BL. Generalized anxiety disorder severity scale (GADSS): a preliminary validation study. Depress Anxiety 2006;23:77-82. https://doi.org/10.1002/da.20149.
- Creighton AS, Davison TE, Kissane DW. The assessment of anxiety in aged care residents: a systematic review of the psychometric properties of commonly used measures. Int Psychogeriatr 2017:1-13.
- Therrien Z, Hunsley J, Brodaty H, Donkin M, Creighton AS, Davison TE, et al. The psychometric properties, sensitivity and specificity of the geriatric anxiety inventory, hospital anxiety and depression scale, and rating anxiety in dementia scale in aged care residents. Aging Ment Health 2018;36:1-10.
- Gould CE, Beaudreau SA, Gullickson G, Tenover JL, Bauer EA, Huh JW. Implementation of a brief anxiety assessment and evaluation in a Department of Veterans Affairs geriatric primary care clinic. J Rehabil Res Dev 2016;53:335-44. https://doi.org/10.1682/JRRD.2014.10.0258.
- Hamilton M. Development of a rating scale for primary depressive illness. Br J Soc Clin Psychol 1967;6:278-96. https://doi.org/10.1111/j.2044-8260.1967.tb00530.x.
- Snaith RP. The Hospital Anxiety And Depression Scale. Health Qual Life Outcomes 2003;1. https://doi.org/10.1186/1477-7525-1-29.
- Meyer TJ, Miller ML, Metzger RL, Borkovec TD. Development and validation of the Penn State Worry Questionnaire. Behav Res Ther 1990;28:487-95. https://doi.org/10.1016/0005-7967(90)90135-6.
- Shankar KK, Walker M, Frost D, Orrell MW. The development of a valid and reliable scale for rating anxiety in dementia (RAID). Aging Ment Health 1999;3:39-4. https://doi.org/10.1080/13607869956424.
Appendix 1 ASCOT-CH4: description of the mixed-methods approach to collecting data on care home residents’ social care-related quality of life
Methodologically, ASCOT-CH4 involves structured observations of residents alongside a staff proxy interview (for each resident), a resident interview (to the extent that the resident is able) and a family proxy interview (where possible).
Structured observation
A 2-hour period of observation, leading up to and including lunchtime, is undertaken for every five care home residents. Therefore, in a home in which 10 residents were recruited to the study, two researchers would observe five residents each in this period. The observer rotates between the residents, making notes of what the resident is doing (e.g. sleeping in chair, watching television, engaged in a cake-making activity with a member of staff), recording behavioural, verbal and non-verbal indicators of how the resident is feeling (e.g. smiling and engaged, crying out and trying to make eye contact with staff) and linking the evidence to the domains (e.g. occupation, social participation, dignity). Notes are also made about how staff treat each resident and the warmth and nature of interactions, and about the care home environment itself (accommodation domain).
Resident interviews (see Report Supplementary Material 11)
The methodology recommends that researchers always try to gather some information from each resident. A minority of residents will be able to self-report and undertake a full structured interview about their own current SCRQoL. For these residents, interviews are supported by large A4-sized show cards with the response options typed out in a large, easy-to-read font (see example in Report Supplementary Material 14). This helps residents stay on topic and reduces cognitive load, as they can look at the response options while these are read aloud by the researcher. In practice, however, most residents will only be able to manage a qualitative, semistructured conversation about some (rarely all) of the ASCOT domains (see Report Supplementary Material 13). Even semistructured or conversational interviews can be particularly helpful when trying to interpret observational evidence, and so interviews with residents are carried out after the observations have taken place. For example, the researcher might notice that a resident did not eat much at lunchtime. In this case, food and drink might be a topic to explore qualitatively with the resident that afternoon.
Staff and family interviews (see Report Supplementary Material 12)
These are ‘proxy-patient’ interviews, meaning that the proxy informant (staff or family) is asked to internally reconstruct what the resident might think or feel about each domain. 303 It is therefore important that this interview is undertaken with a member of staff who knows the resident well. 303 Sometimes, the proxy will recognise that their view and the view of the resident differ. This is a well-established phenomenon in the literature; it is why proxy interviews should be considered a different perspective from self-report303 and why we do not rely exclusively on them to inform our ratings. Where such situations arise, interviewers ask the proxy to rate how they think the resident feels, but then make qualitative notes about why the proxy holds a different opinion, as this too can provide very valuable information and context.
The only difference between the family and staff proxy interviews relates to the dignity domain. In ASCOT, the dignity domain asks the respondent how the way in which they are helped and treated (by staff) makes them feel. When undertaking the proxy interviews, we asked this question of family members but not of care workers, as in the development stages of this toolkit we found that this was problematic for staff who were being asked to directly rate the impact of their own care. 304
Making final ratings
The minimum data that ASCOT-CH4 collects on each resident are a structured observation and a staff interview. Usually, the researcher will also have the resident’s own voice for at least some of the domains, and occasionally a full proxy interview with a family member.
The role of the researcher is to use the evidence collected from these sources to rate each resident’s SCRQoL according to one of four outcome states in each domain: ideal state, no (unmet) needs, some (unmet) needs and high (unmet) needs. 19,22 As with other observational tools for use in care homes, such as Dementia Care Mapping,305 a mandatory training course must be completed before the tool can be used. As well as covering the observational aspects of the methodology, the training outlines how to deal with ‘conflicts in evidence’ when making final ratings of residents’ SCRQoL. Such conflicts may arise when people have different perspectives or when the evidence relating to a domain is mixed. Under these circumstances, a hierarchy of evidence is used to help make a final rating (Figure 15).
Conceptually, ASCOT is a subjective measure of SCRQoL, meaning that, where a resident can self-report, their view should take priority and be reflected in the final ratings. However, this is often not possible, or the interviewer may feel that the resident has not fully understood the question or has been able to give only a qualitative response. For example, when asked about food and drink, the resident might say ‘It’s OK but the meat is really tough and I usually have to leave it’, rather than selecting a specific response from the structured interview. We would then turn to the observational evidence. Did the person eat their lunch? Did they have access to food and drink/snacks throughout the day? Finally, we reflect on what the proxy informants told us: which outcome state did they tick and why?
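The priority ordering described above (self-report first, then observation, then proxy reports) can be sketched as a simple selection rule. This is an illustrative sketch only, not part of the ASCOT-CH4 toolkit: the function name, data structure and the relative ordering of the two proxy sources are our own assumptions.

```python
# Illustrative sketch of the hierarchy of evidence used when rating one
# ASCOT domain. Names and structure are hypothetical, not from ASCOT-CH4.
# Priority order assumed: resident self-report, then structured
# observation, then staff proxy, then family proxy.
PRIORITY = ["self_report", "observation", "staff_proxy", "family_proxy"]

def final_rating(evidence):
    """Return the highest-priority available rating and its source.

    `evidence` maps a source name to a rating in
    {"ideal", "no needs", "some needs", "high needs"}, or None
    where that source is unavailable for the domain.
    """
    for source in PRIORITY:
        rating = evidence.get(source)
        if rating is not None:
            return rating, source
    raise ValueError("No evidence available for this domain")

# Example: the resident could not self-report, so the structured
# observation determines the rating for this domain.
rating, source = final_rating({
    "self_report": None,
    "observation": "some needs",
    "staff_proxy": "no needs",
})
```

In practice, of course, the researcher weighs qualitative context rather than applying a mechanical rule; the sketch only captures the priority ordering.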
Thus, training aims to help researchers make ratings consistently based on the evidence collected, and research comparing independent ratings of the same residents has found excellent levels of inter-rater reliability. 19,20
Appendix 2 Rapid reviews of tools measuring pain, anxiety and depression
Search strategy rapid review 1: pain
Search terms
(Pain) AND (Scale OR assessment OR measure) AND (Elderly OR residents OR geriatric OR nursing homes OR care homes OR cognitive impairment OR dementia OR Alzheimer OR older people)
(Pain) AND (Scale OR assessment OR measure) AND (observation* OR behaviour*)
Databases
MEDLINE, CINAHL (Cumulative Index to Nursing and Allied Health Literature), Cochrane Library, PsycInfo (via EBSCOhost) and Abstracts in Social Gerontology.
Inclusion and exclusion criteria for pain
Inclusion criteria | Exclusion criteria |
---|---|
Articles/tools in English | Articles/tools not in English |
Articles or tools that focus on older adults | Articles or tools focusing solely on children, adolescents or adults aged under 65 years |
Articles published since 1 January 2007a | Articles published before 1 January 2007 |
Articles that test or review more than one tool | Articles that test or review only one tool |
Search strategy rapid review 2: anxiety and depression
Search terms
(anxiety OR depression) AND (Scale OR assessment OR measure) AND (Elderly OR residents OR geriatric OR nursing homes OR care homes OR cognitive impairment OR dementia OR Alzheimer OR older people)
(anxiety OR depression) AND (Scale OR assessment OR measure) AND (observation* OR behaviour*)
Databases
MEDLINE, CINAHL, Cochrane Library, PsycInfo (via EBSCOhost) and Abstracts in Social Gerontology.
Inclusion and exclusion criteria for depression
Inclusion criteria | Exclusion criteria |
---|---|
Articles/tools in English | Articles/tools not in English |
Articles or tools that focus on older adults | Articles or tools focusing solely on children, adolescents or adults aged under 65 years |
Articles published since 1 January 2007 | Articles published before 1 January 2007 |
Articles that test or review more than one tool | Articles that test or review only one tool |
Name of tool | Review(s) | Original article(s) | Population (main) | Dimensions of pain | Methods of data collection | Scale/scoring/rating |
---|---|---|---|---|---|---|
21-Point Box Scale | 46,306 | 165 | General population | Intensity | Self-report | The 21-Point Box Scale is an intensity-based scale that ranges from 0 (which indicates no pain) to 100 (which indicates pain as bad as it could be). The scale is broken down into 21 sections, or boxes, which increase in increments of five. Those completing the scales are required to indicate which box best matches the intensity of the pain they are experiencing |
APS | 32,45,46,172,177,178,307–310 | 180 | Dementia nursing homes | Intensity, change over time | Observation | Contains six items (vocalisation, facial expression, change in body language, behavioural changes, physiological changes, physical changes), which are rated on a four-point scale (0 = absent, 1 = mild, 2 = moderate, 3 = severe). Total score ranges from 0 to 18 and is interpreted as 0–2 = no pain, 3–7 = mild pain, 8–13 = moderate pain and ≥ 14 = severe pain |
CNPI | 25,32,65,308,310 | 163 | Dementia | Presence of pain | Observation | Scoring involves patient observation at rest and during movement. Six items (vocal complaints: non-verbal, facial grimaces/winces, bracing, restlessness, rubbing, vocal complaints: verbal) scored yes or no on the presence of pain. Scores added together but no cut-off value for the intensity of pain |
Colour Analogue Scale | 46 | 169 | Childrena | Intensity | Self-report | Patients indicate the severity or intensity of their pain by sliding a bar along an image that resembles a thermometer. The bottom is thin and white and says ‘no pain’, the top is wide and red and says ‘most pain’. The location on the scale is compared with a 1–10 scale on the back of the tool |
Doloplus-2 | 32,46,47,172,308,310,311 | 183 | Dementia | Presence of pain, experience of pain | Observation | Tool contains 10 items organised under three categories (somatic reactions, psychomotor reactions and psychosocial reactions). Each item is scored on a four-point scale from 0 (behaviour not present or abnormal for the individual) to 3 (significant behavioural disturbance). Overall score runs from 0 to 30. A score of 5–30 indicates pain |
DS-DAT | 32,47,172,312 | 181 | Dementia | Frequency, duration, intensity | Observation | A nine-item scale (noisy breathing, negative vocalisation, content of facial expression, sad facial expression, frightened facial expression, frown, relaxed body language, tense body language, fidgeting). Each item is scored on a 0–3 scale based on frequency, intensity and duration. Possible score of 0–27. A higher score equals a higher level of discomfort |
FPS (Bieri version and revised version) | 43,65,172,309 | 164,171 | Childrena | Intensity | Self-report | Patients choose the picture of a facial expression that best corresponds to their pain. Has six or seven (depending on the version) distinct images that range from a smiling face (intensity = 0) to a harsh grimace (intensity = 10). Faces often categorised into four groups (no, mild, moderate and severe). There are several versions of this tool. Information here is based on the Bieri and revised versions of the FPS |
Functional Pain Scale | 25 | 313 | General population | Intensity, functional status | Self-report | Uses a combination of verbal, numerical and visual descriptors to assess intensity of pain and impact on function |
interRAI Long-Term Care Facilities | 25 | 237,314 | Nursing homes | Frequency, intensity | Self-report/proxy – caregiver (can also draw on observations to answer the questions) | Two questions, one on the frequency of pain and the other on pain intensity, feed into a 0–4 scale running from 0 (no pain) to 4 (daily severe pain) |
Iowa Pain Thermometer | 65,172 | 166 | General population | Intensity | Self-report | Descriptor scale presented as a thermometer with pain intensity analogous to temperature. Thirteen levels with words to guide. Ranges from no pain to the most intense pain imaginable |
McGill Pain questionnaire | 25,188,315 | 167 | General population | Intensity, location of, description of pain | Self-report | Uses a verbal description scale: no pain, mild, discomforting, distressing, horrible and excruciating. Some versions also allow reporting of pain using respondents’ own words to describe the pain |
MOBID-2 | 309 | 182 | Dementia | Intensity, location, behaviours | Observation | Intensity is rated on a 1–10 visual scale, with 1 being no pain and 10 being as bad as it could possibly be. Assessor guides patient through five mobilisation activities. Pain behaviour is assessed during the suggested movements by observing (1) facial expression, (2) defence actions and (3) pain noises |
NOPPAIN | 65,172,178,308,310 | 185 | Dementia | Intensity, behaviour | Observation | Contains four sections: (1) care conditions under which pain behaviours are observed such as bathing, dressing, transfers; (2) six items about presence/absence of pain behaviours (pain words, pain noises, pain faces, bracing, rubbing and restlessness); (3) pain behaviour intensity ratings using a six-point Likert scale; and (4) a pain thermometer for rating overall pain intensity |
NPAT | 310 | 316 | Non-verbal patients in ICU | Intensity | Observation | Five behavioural categories are rated on a scale of 0–2, which results in a total score between 0 and 10. 0 represents no pain, 10 represents the worst pain ever experienced |
NRS | 45,46,49,172,178,315,317,318 | 168 | General population | Intensity | Self-report | There are several different versions of the NRS. They tend to utilise a 1–10 scale marked at even intervals. Some scales use 0 = no pain, 10 = worst pain imaginable. Others use alternative scales, such as 1–3 = mild pain, 4–6 = moderate pain, 7–10 = severe pain |
Observational Pain Behaviour Assessment Instrument | 25 | 319 | Unknown | Intensity | Observation | 17 behavioural items with seven levels from none to extreme |
PACSLAC | 46,47,65,172,308,310,312 | 184,320 | Dementia | Intensity | Observation | Consists of 60 behaviours grouped into four subscales: (1) measures of facial expression; (2) social, personality or mood; (3) activity and body movement; (4) physiological changes, eating or sleeping behaviour. 58 of these items rate present or not. Two rated on the binary scale: 1 (decreased) and 2 (increased). The higher the overall score the worse pain is being experienced |
Pain Assessment for the Dementing Elderly | 310 | 321 | Dementia | Intensity, function | Proxy report (caregivers) | Has 24 items organised into three categories (behaviour, intensity and function). All items rated between 1 and 4 but use different methods (multiple choice, visual-based Likert scale). Overall score between 0 and 96. Lower scores are associated with more functional behaviour and less pain |
PAINAD | 25,46,65,172,308–310,312,318,322 | 162 | Dementia | Intensity | Observation | PAINAD is a five-item scale: (1) breathing independent of vocalisation; (2) negative vocalisation; (3) facial expression; (4) body language; and (5) consolability. Each item is rated 0–2, with different descriptions of each rating for each item; a rating of 1 or 2 on an item may indicate pain. The overall score runs from 0 (no pain) to 10 (severe pain). A score of 2 has been identified as signifying the need for pharmacological pain management |
PAINE | 172 | 323 | Dementia | Frequency | Proxy rating | 22-item scale (based on four behavioural categories: facial expressions, verbalisations, body movements and changes in activity patterns or routines). Each item has seven frequency levels ranging from never to every hour |
VDSa | 25,43,45,46,172,177,324 | 325 | General population | Intensity | Self-report | There are many different versions of the VDS (sometimes referred to as Verbal Rating Scales). All versions use verbal descriptions of pain, usually numbered and ascending in severity. For example, the six-level version where the verbal descriptor for 1 is no pain and for 6 is worst possible pain. The seven-level version runs from no pain to pain as bad as it could be. Interpreting the results of the verbal descriptor scale should focus on the words used to describe the pain |
VASa | 25,46,172 | 326 (note: not original article; tool first used in 1921) | General population | Intensity | Self-report | There are many different versions of the VAS. They use a vertical or horizontal line, usually 10 cm in length, typically anchored by verbal descriptors at each end, such as no pain and pain as bad as it can be. Participants make a mark on the line that corresponds to their level of pain. The scale is scored by measuring the distance between the beginning of the line and the participant’s mark; this distance is the pain intensity score |
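To illustrate how a banded observational total such as the APS works in practice, the following sketch sums the six item ratings (each 0–3) and maps the 0–18 total onto the bands given in the table. The function and variable names are ours, for illustration only; they are not part of the APS itself.

```python
# Illustrative scoring sketch for the APS as summarised above:
# six items each rated 0-3, total 0-18, banded as 0-2 no pain,
# 3-7 mild, 8-13 moderate, >= 14 severe.
APS_ITEMS = ["vocalisation", "facial_expression", "body_language",
             "behavioural_changes", "physiological_changes",
             "physical_changes"]

def aps_band(ratings):
    """Sum six 0-3 item ratings and return (total, band)."""
    if set(ratings) != set(APS_ITEMS):
        raise ValueError("Expected a rating for each of the six APS items")
    if any(not 0 <= r <= 3 for r in ratings.values()):
        raise ValueError("Each item rating must be between 0 and 3")
    total = sum(ratings.values())
    if total <= 2:
        band = "no pain"
    elif total <= 7:
        band = "mild pain"
    elif total <= 13:
        band = "moderate pain"
    else:
        band = "severe pain"
    return total, band

# Example: every item rated 1 (mild) gives a total of 6, "mild pain".
total, band = aps_band({item: 1 for item in APS_ITEMS})
```

The same sum-then-band pattern applies, with different items and cut-off points, to several of the other observational tools in the table (e.g. Doloplus-2, DS-DAT).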
APS180 | CNPI163 | Doloplus-2183 | DS-DAT181 | MOBID-2182 | NOPPAIN185 | NPAT316 | PACSLACa184 | PAINAD162 |
---|---|---|---|---|---|---|---|---|
Vocalisation | Vocal complaints: non-verbal | Somatic complaints | Noisy breathing | Facial expression | Facial expressions | Emotion | Measures of facial expression | Breathing independent of vocalisation |
Facial expression | Facial grimaces/winces | Protective body postures adopted at rest | Negative vocalisation | Defence actions | Pain-related words | Movement | Social, personality or mood | Negative vocalisation |
Change in body language | Bracing | Protection of sore areas | Content of facial expression | Pain noises | Rubbing | Verbal clues | Activity and body movement | Facial expression |
Behavioural changes | Restlessness | Expression | Sad facial expression | | Bracing | Facial clues | Physiological changes, eating or sleeping behaviour | Body language |
Physiological changes | Rubbing | Sleep pattern | Frightened facial expression | | Pain noises | Positioning/guarding | | Consolability |
Physical changes | Vocal complaints: verbal | Washing and/or dressing | Frown | | Restlessness | | | |
 | | Mobility | Relaxed body language | | | | | |
 | | Communication | Tense body language | | | | | |
 | | Social life | Fidgeting | | | | | |
 | | Problems of behaviour | | | | | | |
Name of tool | Anxiety or depression | Review(s) | Original article(s) | Population (main) | Dimensions of anxiety/depression | Methods of data collection | Scale/scoring/rating |
---|---|---|---|---|---|---|---|
ADIS-IV | Anxiety | 327 | 206 | General population | Diagnosis | Clinical interview | Scoring based on a semistructured clinical interview. Details of scoring not found in review |
Beck Anxiety Inventory | Anxiety | 84,327,328 | 329,330 | General population | Severity | Self-report | 21 items that reflect symptoms of anxiety. Each item rated on a four-point scale (0 = not at all, 1 = mildly, but did not bother me much, 2 = moderately, it was not pleasant at times, 3 = severely, it bothered me a lot). Item scores added together for overall score. 0–21 represents low anxiety, 22–35 represents moderate anxiety and a score of ≥ 36 suggests potentially concerning levels of anxiety |
Beck Depression Inventory/Beck Depression Inventory II | Depression | 125,331,332 | 212,213 | General population | Severity | Self-report | 21 items that reflect symptoms of depression. Each item rated on a four-point scale that indicates degree of severity; items are rated from 0 (not at all) to 3 (extreme form of each symptom). Item scores added together for overall score. Range runs from 0 to 63, with 1–10 representing normal mood, 11–16 mild mood disturbance, 17–20 borderline clinical depression, 21–30 moderate depression, 31–40 severe depression and > 40 extreme depression |
Brief Measure of Worry Severity | Anxiety | 327 | 333 | General population | Experience and impact of severity | Self-report | Eight-item measure assessing several aspects of pathological worry (e.g. impairment/distress, uncontrollability, associated negative affect). Items are rated from 0 (not at all true) to 3 (definitely true) |
CES-D | Depression | 125,332 | 215 | General population | Severity | Self-report | Contains 20 items that are rated by frequency. Scale runs from 0 (rarely or none of the time) to 3 (most or all of the time). Overall score created by combining the scores for each item. Possible range of scores is 0 to 60, with higher scores indicating the presence of more symptomatology |
CSDD | Depression | 107,115,120 | 334 | General population | Severity | Clinician interviews (proxy and patient) | 19 items, each rated as absent (0), mild or intermittent (1) or severe (2). The time frame evaluated for most of the 19 items is the previous week. Items combined to create overall score. 0–7 equals no depression, 8–12 mild depression and > 12 is indicative of major depression |
DMAS | Depression | see text | 190,191 | Dementia | Severity | Clinician interview (input from proxies) | 24-item scale designed to focus on observable mood and functional capabilities. Most items scaled from 0 (within normal limits) to 6 (most severe) |
Evans Liverpool Depression Rating Scale | Depression | 125 | 201 | Older people | Screening | Self and proxy report | 15-item questionnaire; uses yes/no to detect presence of symptoms. Ten questions are administered to the patient and five questions are administered to a proxy. Positive answers score 1 and are summed for overall score. Scores of ≥ 6 indicate need for further investigation |
Generalised Anxiety Disorder Questionnaire for DSM-IV | Anxiety | 327 | 335 | General population | Screening | Self-report | Consists of nine items. Most items ask about the presence of anxiety/worry symptoms on a yes/no or presence/absence scale. Two items use a nine-point scale to measure impairment and distress. By combining item scores, an overall score of 0–12 can be created. The original cut-off value for further investigation was 5.7. Other studies have suggested other cut-off points |
Generalised Anxiety Disorder Severity Scale | Anxiety | 327 | 336 | General population | Severity | Clinician interview | Six-item scale. Each item is rated on a 0–4 scale, and total scores range from 0 to 24. Little information found in review |
GAI | Anxiety | 84,327,328,337,338 | 100,210 | Older people | Presence of symptoms | Self-report | 20-item, dichotomous agree/disagree format. Items reflect symptoms of anxiety. No details of overall score |
GAS | Anxiety | 84,328 | 208 | Older people | Severity | Self-report | Contains 30 items. Items 1–25 are scorable items. They rate the frequency of symptoms over the past week ranging from 0 (not at all) to 3 (all of the time). Items 26–30 are used to help clinicians identify areas of concern for the respondent. They are not used to calculate the total score of the GAS or any subscale. Can produce overall score and subscores |
GDS | Depression | 79,123–126,211,327,331,332 | 202,203 | Older people | Diagnosis/screening | Self-report (plus proxy) | There are different versions of the tool, including GDS-15 and GDS-30. All versions have items that are rated on a yes/no scale. Items ask about symptoms and feelings associated with depression. Overall score calculated by combining scores from individual items. In the GDS-15 the possible score runs from 0 to 15. A score of > 5 suggests depression |
HAM-A | Anxiety | 339 | 207 | General population | Severity | Clinician interview | 14-item rating scale focusing on symptoms of anxiety using a five-point scale. Each item is scored on a scale of 0 (not present) to 4 (severe). Scale has a total score range of 0–56, where < 17 indicates mild severity, 18–24 mild to moderate severity and 25–30 moderate to severe |
HDRS | Depression | 125 | 192,340 | General population | Severity | Clinician interview | Most commonly a 17-item scale (but other versions exist) with an emphasis on melancholic and physical symptoms of depression. Method for scoring varies by version. Questions have different numbers of response items and attached descriptions. For the HDRS17, a score of 0–7 is generally accepted to be within the normal range, 8–13 to reflect mild depression, 14–18 moderate depression, 19–22 severe depression and scores of 23 and above reflect very severe depression |
HADS | Depression and anxiety | 332,337,338 | 194,195,341 | General population | Screening/diagnosis | Self-report | 14-item scale (seven items anxiety/seven items depression). Each item has a four-point scale from 0 to 3. Each item focuses on the frequency or severity of a symptom associated with anxiety or depression. Items populate separate anxiety and depression scores. Both scales run from 0 to 21, with scores of > 11 indicating the presence of anxiety or depression |
MDS-DRS | Depression | See Chapter 2 | 189 | Nursing homes | Screening | Observation (staff) | A seven-item scale, based on common symptoms of depression. Each item is scored 0–3 based on frequency. For overall score, items recoded into 0, 1 or 2 to give a possible score of 0–14. 0 equals no mood symptoms, 14 equals all mood symptoms present in the last 3 days. Scores of ≥ 3 indicate depression |
MADRS | Depression | 107,115,125,211 | 197 | General population | Severity | Observation and interview by clinical staff | Ten items focusing on symptoms of depression are scored on severity. Scale in each item runs from 0 to 6. Overall score calculated by combining item score and has a range of 0–60. Usual cut-off points are (0–6) normal, (7–19) mild depression, (20–34) moderate depression and (> 34) severe depression |
PHQ-9/PHQ-9 OV | Depression | 120,327,332 | 198–200 | General population/nursing homes | Severity | Self-report/observation (staff) | PHQ-9 contains nine items. Each item uses a four-point scale based on frequency. It ranges from 0 (not at all) to 3 (nearly every day). Overall score has possible scores in the range of 0–27, with the following cut-off points: 1–4, minimal depression; 5–9, mild depression; 10–14, moderate depression; 15–19, moderately severe depression; 20–27, severe depression. The observation version contains an extra item and the potential score is 0–30 |
Penn State Worry Questionnaire (PSWQ) | Anxiety | 327,339 | 342 | General population | Screening | Self-report | Contains 16 items based on symptoms and feelings associated with worry. Items are rated from 1 (not at all typical) to 5 (very typical). For the overall score, items are summed (some reverse-scored). Potential scores range from 16 to 80. Cut-off scores have not been developed |
RAID | Anxiety | 337,338 | 343 | Dementia and older people | Severity | Clinician interview (patient and proxy) | 18 items assess symptoms in four categories: worry, apprehension and vigilance, motor tension and autonomic hyperactivity. Each item rated on a four-point scale: absent (0), mild (1), intermittent or moderate (2) and severe (3). No details of overall score |
State–Trait Anxiety Inventory | Anxiety | 337 | 214 | General population | Severity | Self-report | A 40-item tool: 20 items assess trait anxiety and 20 items assess state anxiety. All items are rated on a four-point scale (from ‘almost never’ to ‘almost always’). Higher scores indicate greater anxiety |
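Most of the instruments above share the same scoring arithmetic: ordinal item responses are summed and the total is compared against published cut-off points. As a minimal illustrative sketch (not part of the report, and simplified — real administration handles missing and reverse-scored items), the PHQ-9 scoring described above could be implemented as:

```python
# Severity bands for the PHQ-9 total (0-27), from the cut-off points above.
# A total of 0 is treated as falling in the minimal band here.
PHQ9_BANDS = [
    (4, "minimal depression"),
    (9, "mild depression"),
    (14, "moderate depression"),
    (19, "moderately severe depression"),
    (27, "severe depression"),
]


def score_phq9(responses):
    """Sum nine 0-3 item responses and return (total, severity band)."""
    if len(responses) != 9 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("PHQ-9 expects nine responses coded 0-3")
    total = sum(responses)
    for upper, label in PHQ9_BANDS:
        if total <= upper:
            return total, label


# Example: a mix of 'several days' and 'more than half the days' answers.
total, band = score_phq9([1, 1, 2, 1, 1, 2, 1, 1, 1])  # total 11, moderate band
```

The same sum-then-band pattern applies to the GDS-15, BDI, MADRS and CSDD rows above, with different item counts and cut-off points.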
Appendix 3 Developing new measures of pain, anxiety and depression for older adult care home residents
- I haven’t felt at all anxious.
- I have rarely felt anxious.
- I sometimes felt anxious.
- I have felt anxious a lot of the time.

- I feel completely free from worry and concerns on a day-to-day basis.
- I feel free from worry and concerns on a day-to-day basis.
- Sometimes I feel worried and concerned.
- I feel very worried and concerned on a daily basis.

- I do not feel at all depressed.
- I generally do not feel depressed.
- I sometimes feel depressed.
- I often feel depressed.

- I never feel down or depressed.
- I rarely feel down or depressed.
- I sometimes feel down or depressed.
- I often feel down or depressed and struggle to cope.

- I am completely free of pain.
- I am generally free of pain.
- I am sometimes free of pain.
- I am never free of pain.

- I never feel pain.
- Most days I am free from pain.
- I sometimes feel pain.
- I feel severe pain on a daily basis.
I think [the resident/your relative] . . .

- never feels worried or anxious
- rarely feels worried or anxious
- sometimes feels worried or anxious
- often feels worried or anxious.

I think [the resident/your relative] . . .

- hardly ever feels worried or anxious
- occasionally feels worried or anxious
- often feels worried or anxious
- constantly feels worried or anxious.

I think [the resident/your relative] . . .

- never feels down or depressed
- rarely feels down or depressed
- sometimes feels down or depressed
- often feels down or depressed.

I think [the resident/your relative] . . .

- hardly ever feels down or depressed
- occasionally feels down or depressed
- often feels down or depressed
- constantly feels down or depressed.

If [the resident/your relative] has pain but it is well managed through medication or other techniques, base your answer on how often [the resident/your relative] is in pain with these things in place. If [the resident’s/your relative’s] pain is not well managed, base your answer on that.

I think [the resident/your relative] is . . .

- never in pain
- rarely in pain
- sometimes in pain
- often in pain.

If [the resident/your relative] has pain but it is well managed through medication or other techniques, base your answer on how often [the resident/your relative] is in pain with these things in place. If [the resident’s/your relative’s] pain is not well managed, base your answer on that.

I think [the resident/your relative] is . . .

- hardly ever in pain
- occasionally in pain
- often in pain
- constantly in pain.
Research team (this includes co-researchers) to introduce themselves briefly (name, job title/role/institution).
Researcher to introduce the session: what the aims of the session are, a few ground rules (these two items supported by a poster), and a quick double check of consent (in terms of informing those present that recording will begin).
Will also cover why we are asking care workers their views: care home staff have a wealth of knowledge and experience identifying and supporting residents’ pain, anxiety and depression. We would like your knowledge and experience to inform how we measure pain, anxiety and depression.
Section 1: recognising residents’ anxiety, depression and pain (15 minutes)

Moderator to lead discussion (see below).
Assistant(s) watch the time (use cards to indicate 10 minutes left, 5 minutes left, 1 minute left to moderator), administer the recording, and write on the flip chart. Aid with discussion where needed.
We would like to know how you, as people who support older adults who live in care homes, recognise when they are in pain or are feeling anxious or depressed.
Let us take each of those separately. First, think about pain (try to broadly split the 15 minutes between the three areas):
- How do you recognise when the people you support are in pain?
- How do you recognise when the people you support are feeling anxious?
- How do you recognise when the people you support are feeling depressed?

Prompts you might like to use if the participants in the group are struggling:

- Think about someone you support: what are the signs they are in pain/feeling anxious/feeling depressed?
- If residents cannot tell you if they are in pain/feeling anxious/feeling depressed, how do you know how they are feeling?
- What are the non-verbal signs that the residents you support are in pain/feeling anxious/feeling depressed?
Moderator to lead discussion (see below)
Assistant(s) watch the time (use cards to indicate 10 minutes left, 5 minutes left, 1 minute left to moderator), administer the recording, write on the flip chart. Aid with discussion where needed.
What words do you use to describe residents’ anxiety, pain and depression? Let’s take each of these separately (try to broadly split the 15 minutes between the three areas):
- What words do you use to describe anxiety?
- What words do you use to describe pain?
- What words do you use to describe depression?

Prompts to help the discussion:

- Think about words you use at work.
- Think about words you use to talk about the intensity of pain/anxiety/depression.
- Think about words you use to talk about how often residents experience pain/anxiety/depression.
- What about words colleagues or residents use?

Can we order the words? (Use words on the flip chart: which words are worse, which are better – for example, lots of pain may be worse than a little pain, or always anxious may be worse than almost never anxious.)
Section 3: questions on pain, anxiety and depression

Moderator to lead the discussion (see below).
Assistant(s) watch the time (use cards to indicate 10 minutes left, 5 minutes left, 1 minute left to moderator), administer the recording and place the questions on the flip chart (two per session – different domains, different scales).
Why do we ask staff to tell us about a resident’s pain/anxiety/depression?
Assistant to hand out two questions to each participant (remember sheets and pens).
Participants asked to complete based on somebody they know (we do not want to know their name or too much that will identify them).
Participants to be given 1–2 minutes to complete the questions.
Discussion points (repeat for each question):
- Were you able to answer the question? If not, why?
- Would you feel able to answer a question like this for all the residents you support?
- What did you think it was asking?
- Were there any terms that were not clear or that you did not understand?
- Was there an answer option that matched how you felt?

If time:

- Run through the options and ask if anyone picked that answer option and, briefly, why.
I think [the resident/your relative] . . .

- hardly ever feels worried or anxious
- occasionally feels worried or anxious
- often feels worried or anxious
- constantly feels worried or anxious.

I think [the resident/your relative] . . .

- hardly ever feels down or has a low mood
- occasionally feels down or has a low mood
- often feels down or has a low mood
- constantly feels down or has a low mood.

I think [the resident/your relative] . . .

- never or hardly ever feels down or has a low mood
- sometimes feels down or has a low mood
- often or constantly feels down or has a low mood.

If [the resident/your relative] has pain but it is well managed through medication or other techniques, base your answer on how often [the resident/your relative] is in pain with these things in place. If [the resident’s/your relative’s] pain is not well managed, base your answer on that.

I think [the resident/your relative] is . . .

- hardly ever in pain
- occasionally in pain
- often in pain
- constantly in pain.

If [the resident/your relative] has pain but it is well managed through medication or other techniques, base your answer on how often [the resident/your relative] is in pain with these things in place. If [the resident’s/your relative’s] pain is not well managed, base your answer on that.

I think [the resident/your relative] is . . .

- never or hardly ever in pain
- sometimes in pain
- often or constantly in pain.

I think [the resident/your relative] would . . .

- hardly ever feel worried or anxious
- occasionally feel worried or anxious
- often feel worried or anxious
- constantly feel worried or anxious.

I think [the resident/your relative] . . .

- hardly ever feels down or has a low mood
- occasionally feels down or has a low mood
- often feels down or has a low mood
- constantly feels down or has a low mood.

I think [the resident/your relative] would . . .

- hardly ever feel down or have a low mood
- occasionally feel down or have a low mood
- often feel down or have a low mood
- constantly feel down or have a low mood.

If [the resident/your relative] has pain but it is well managed through medication or other techniques, base your answer on how often [the resident/your relative] is in pain with these things in place. If [the resident’s/your relative’s] pain is not well managed, base your answer on that.

I think [the resident/your relative] is . . .

- hardly ever in pain
- occasionally in pain
- often in pain
- constantly in pain.

If [the resident/your relative] has pain but you feel it would be well managed through medication or other techniques, base your answer on how often [the resident/your relative] would be in pain with these things in place. If [the resident’s/your relative’s] pain would not be well managed, base your answer on that.

I think [the resident/your relative] would . . .

- hardly ever be in pain
- occasionally be in pain
- often be in pain
- constantly be in pain.
Who we are: a researcher at the University of Kent.
What this research is about: developing ways of measuring pain, anxiety and depression among care home residents who struggle to report it themselves. We often get the views of people who know them – care workers/friends/relatives – to help us understand their lives when they cannot tell us directly.
We will be asking you to answer questions about someone you know. It can be somebody you currently support in your job or somebody you used to support and help. How would you like me to refer to them? We do not need to know lots of other information about them, as our interest today is in testing our questions.
We want to make sure that the questions we ask are suitable, understandable, clear, etc.
What we are going to do today is run through a selection of the questions that might be used in future work. We already ask questions about other areas of people’s lives.
Consent/anonymity/recording
I am testing our questions not you! There are no right or wrong answers. If you cannot answer any of the questions or you find it difficult to, tell me why. Or if you do not wish to answer that question please tell me and we can move on to the next question.
What I would like to do is go through the questions in turn. There are six of them. Some may seem very similar, but we are testing different ways of asking things. I would like to test the questions by reading them to you. When you hear them, I want you to tell me whatever comes into your mind. This is called ‘thinking aloud’. If you are unsure of what to say, you could talk about the following:
- what you think the question is about
- the option you would pick
- why you chose it
- if you are finding the question difficult to understand or answer, and why.
But do feel free to say anything that comes into your head when you see the question, for example ‘why are they asking this?’.
I will probably follow up what you say about each question with a couple of further questions. We will take each question in turn.
1. Which of the following statements best describes how often you think [the resident] feels down or has a low mood?

I think [the resident] . . .
- hardly ever feels down or has a low mood
- occasionally feels down or has a low mood
- often feels down or has a low mood
- constantly feels down or has a low mood.
Prompts to use if it is not covered by ‘thinking aloud’.

- What do you think this question is asking about?
- What do you think the question means by the terms ‘feeling down or has a low mood’?
- Are there any words or phrases that are unclear?
- Was there an answer option you felt fitted with the answer you wanted to give?

If struggling to pick an option, consider asking if there is a word or phrase that would summarise the answer they would like to give.

- Why did you choose [the answer option]?
- What are the things about [the resident’s] life that made you choose [the answer option]?
- In what situation do you think you could choose ‘constantly feels down or has a low mood’? (Consider repeating for all levels if appropriate.) OR What do the different answer options mean? You could ask this for each level.
- Overall, how difficult/easy was it to answer this question?
I think [the resident] . . .

- hardly ever feels worried or anxious
- occasionally feels worried or anxious
- often feels worried or anxious
- constantly feels worried or anxious.
Prompts to use if it is not covered by ‘thinking aloud’.

- What do you think this question is asking about?
- What do you think the question means by the terms ‘worried’ and ‘anxious’?
- Are there any words or phrases that are unclear?
- Was there an answer option you felt fitted with the answer you wanted to give?

If struggling to pick an option, consider asking if there is a word or phrase that would summarise the answer they would like to give.

- How confident do you feel answering a question on how often [the resident] feels worried or anxious?
- Why did you choose [the answer option]?
- What are the things about [the resident’s] life that made you choose [the answer option]?
- In what situation do you think you could choose ‘constantly feels worried or anxious’? (Consider repeating for all levels if appropriate.) OR What do the different answer options mean? You could ask this for each level.
- Overall, how difficult/easy was it to answer this question?
If [the resident] has pain but it is well managed through medication or other techniques, base your answer on how often [the resident] is in pain with these things in place. If [the resident’s] pain is not well managed, base your answer on that.
I think [the resident] is . . .

- hardly ever in pain
- occasionally in pain
- often in pain
- constantly in pain.
Prompts to use if it is not covered by ‘thinking aloud’.

- What do you think this question is asking about?
- What do you think the question means by the term ‘pain’?
- What do you think the question means by the term ‘well managed’?
- Are there any words or phrases that are unclear?
- Was there an answer option you felt fitted with the answer you wanted to give?

If struggling to pick an option, consider asking if there is a word or phrase that would summarise the answer they would like to give.

- How confident do you feel answering a question on how often [the resident] is in pain?
- Why did you choose [the answer option]?
- What are the things about [the resident’s] life that made you choose [the answer option]?
- In what situation do you think you could choose ‘constantly in pain’? (Consider repeating for all levels if appropriate.) OR What do the different answer options mean? You could ask this for each level.
- Overall, how difficult/easy was it to answer this question?
I think [the resident] . . .

- never or hardly ever feels down or has a low mood
- sometimes feels down or has a low mood
- often or constantly feels down or has a low mood.
Prompts to use if it is not covered by ‘thinking aloud’.

- What do you think this question is asking about?
- What do you think the question means by the terms ‘feeling down or has a low mood’?
- Are there any words or phrases that are unclear?
- Was there an answer option you felt fitted with the answer you wanted to give?

If struggling to pick an option, consider asking if there is a word or phrase that would summarise the answer they would like to give.

- How confident do you feel answering a question on how often [the resident] feels down or has a low mood?
- Why did you choose [the answer option]?
- What are the things about [the resident’s] life that made you choose [the answer option]?
- What do the different answer options mean? You could ask this for each level.

If appropriate, ask the participant to compare the two questions: which did they prefer, and which was easier to answer?

- Overall, how difficult/easy was it to answer this question?
I think [the resident] . . .

- never or hardly ever feels worried or anxious
- sometimes feels worried or anxious
- often or constantly feels worried or anxious.
Prompts to use if it is not covered by ‘thinking aloud’.

- What do you think this question is asking about?
- What do you think the question means by the terms ‘worried’ and ‘anxious’?
- Are there any words or phrases that are unclear?
- Was there an answer option you felt fitted with the answer you wanted to give?

If struggling to pick an option, consider asking if there is a word or phrase that would summarise the answer they would like to give.

- How confident do you feel answering a question on how often [the resident] feels worried or anxious?
- Why did you choose [the answer option]?
- What are the things about [the resident’s] life that made you choose [the answer option]?
- What do the different answer options mean? You could ask this for each level.

If appropriate, ask the participant to compare the two questions: which did they prefer, and which was easier to answer?

- Overall, how difficult/easy was it to answer this question?
If [the resident] has pain but it is well managed through medication or other techniques, base your answer on how often [the resident] is in pain with these things in place. If [the resident’s] pain is not well managed, base your answer on that.
I think [the resident] is . . .

- never or hardly ever in pain
- sometimes in pain
- often or constantly in pain.
Prompts to use if it is not covered by ‘thinking aloud’.

- What do you think this question is asking about?
- What do you think the question means by the term ‘pain’?
- What do you think the question means by the term ‘well managed’?
- Are there any words or phrases that are unclear?
- Was there an answer option you felt fitted with the answer you wanted to give?

If struggling to pick an option, consider asking if there is a word or phrase that would summarise the answer they would like to give.

- How confident do you feel answering a question on how often [the resident] is in pain?
- Why did you choose [the answer option]?
- What are the things about [the resident’s] life that made you choose [the answer option]?
- What do the different answer options mean? You could ask this for each level.

If appropriate, ask the participant to compare the two questions: which did they prefer, and which was easier to answer?

- Overall, how difficult/easy was it to answer this question?
Ask the participant if needed:
- How they felt about answering the questions.
- Overall, how easy/difficult were the questions?
- What was it like answering about the experience of someone else?
- Is there anything they would like to add that we have not asked?
Thank participant for taking part.
Explain how helpful their answers have been.
Give the participant their voucher.
Appendix 4 Additional results for Care Quality Commission ratings and social care workforce
Variable | (1) Quality rating (LPM) | (2) Quality rating (LPMIV) | (3) Quality rating (LPM) | (4) Quality rating (LPM) | (5) Quality rating (LPM) | (6) Quality rating (LPM) | (7) Quality rating (LPMIV) |
---|---|---|---|---|---|---|---|
Staffing measures | |||||||
Mean wage (log) | 0.195*** (0.076) | 0.817*** (0.120) | 0.794*** (0.118) | ||||
Dementia trained (%) | 0.098*** (0.015) | 0.103*** (0.018) | |||||
Person-centred care or dignity trained (%) | 0.079*** (0.021) | 0.018 (0.024) | |||||
Staff turnover rate | –0.054*** (0.018) | –0.033* (0.020) | |||||
Job vacancy rate | –0.383*** (0.083) | –0.368*** (0.090) | |||||
Care home controls | |||||||
Registration: nursing | –0.050*** (0.011) | –0.046*** (0.011) | –0.043*** (0.011) | –0.050*** (0.011) | –0.049*** (0.011) | –0.049*** (0.011) | –0.035*** (0.011) |
Sector: voluntary | 0.071*** (0.015) | 0.030* (0.016) | 0.090*** (0.014) | 0.087*** (0.014) | 0.084*** (0.013) | 0.087*** (0.013) | 0.042*** (0.016) |
Clients: living with dementia | –0.045*** (0.010) | –0.039*** (0.010) | –0.056*** (0.010) | –0.049*** (0.010) | –0.045*** (0.010) | –0.049*** (0.010) | –0.050*** (0.010) |
Care home competition (HHI) | 0.237*** (0.062) | 0.225*** (0.063) | 0.251*** (0.062) | 0.247*** (0.062) | 0.243*** (0.063) | 0.237*** (0.063) | 0.236*** (0.063) |
Size (beds, log) | –0.038*** (0.008) | –0.037*** (0.008) | –0.036*** (0.008) | –0.036*** (0.008) | –0.037*** (0.008) | –0.035*** (0.008) | –0.032*** (0.008) |
Staff-to-resident ratio | 0.077 (0.071) | 0.081 (0.072) | 0.090 (0.071) | 0.079 (0.071) | 0.066 (0.071) | 0.063 (0.072) | 0.080 (0.071) |
Staff-to-resident ratio (squared) | –0.043 (0.032) | –0.053 (0.033) | –0.040 (0.032) | –0.041 (0.032) | –0.037 (0.032) | –0.036 (0.032) | –0.046 (0.032) |
Female staff (%) | 0.121** (0.059) | 0.135** (0.060) | 0.098* (0.059) | 0.117** (0.059) | 0.113* (0.059) | 0.098* (0.059) | 0.092 (0.060) |
Local area controls | |||||||
Attendance Allowance (%) | 0.003 (0.004) | 0.004 (0.004) | 0.002 (0.004) | 0.003 (0.004) | 0.003 (0.004) | 0.002 (0.004) | 0.003 (0.004) |
Pension Credit (%) | –0.001 (0.001) | –0.002 (0.001) | –0.001 (0.001) | –0.001 (0.001) | –0.001 (0.001) | –0.001 (0.001) | –0.002 (0.001) |
Jobseeker’s Allowance (%) | –0.001 (0.013) | 0.004 (0.013) | –0.001 (0.013) | 0.000 (0.013) | –0.002 (0.013) | –0.003 (0.013) | 0.004 (0.013) |
Average house price (log) | 0.038* (0.021) | 0.009 (0.022) | 0.048** (0.021) | 0.050** (0.021) | 0.047** (0.021) | 0.052** (0.021) | 0.019 (0.022) |
Year controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Region controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Constant | –0.109 (0.309) | –1.045*** (0.340) | 0.135 (0.289) | 0.126 (0.290) | 0.195 (0.289) | 0.154 (0.290) | –1.099*** (0.336) |
Observations | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 |
Care homes | 5555 | 5555 | 5555 | 5555 | 5555 | 5555 | 5555 |
Imputations | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
Average RVI | 0.129 | 0.131 | 0.120 | 0.119 | 0.118 | 0.122 | 0.176 |
Largest FMI | 0.382 | 0.345 | 0.357 | 0.358 | 0.357 | 0.358 | 0.357 |
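The columns above report coefficient (standard error) estimates from linear probability models (LPMs) of care home quality ratings. Purely as an illustration of the estimator on invented data (this is not the report's code, and the toy omits the instrumental variables, panel estimators and multiple imputation used in the real analysis), an LPM is ordinary least squares applied to a binary outcome:

```python
import numpy as np

# Synthetic stand-in data; all variable names and magnitudes are invented.
rng = np.random.default_rng(0)
n = 500
log_wage = rng.normal(2.2, 0.1, n)              # log mean hourly wage
nursing = rng.integers(0, 2, n).astype(float)   # nursing registration dummy

# Assumed data-generating process: higher wages raise, and nursing
# registration lowers, the probability of a 'good/outstanding' rating.
p = np.clip(0.6 + 2.0 * (log_wage - 2.2) - 0.1 * nursing, 0.0, 1.0)
rating = (rng.random(n) < p).astype(float)      # 1 = rated good/outstanding

# LPM: OLS of the 0/1 rating on a constant and the covariates.
X = np.column_stack([np.ones(n), log_wage, nursing])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
# beta[1] estimates the change in P(good rating) per unit of log wage,
# which is how the 'Mean wage (log)' coefficients above are read.
```

Because the outcome is binary, each coefficient is read directly as a change in the probability of a good/outstanding rating, which is why the tables can report, for example, a wage effect without any link-function transformation.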
Variables | (1) Quality rating (FE) | (2) Quality rating (FEIV) | (3) Quality rating (FE) | (4) Quality rating (FE) | (5) Quality rating (FE) | (6) Quality rating (FE) | (7) Quality rating (FEIV) |
---|---|---|---|---|---|---|---|
Staffing measures | |||||||
Mean wage (log) | 0.098 (0.120) | 0.408** (0.201) | 0.411** (0.203) | ||||
Dementia trained (%) | 0.062** (0.028) | 0.076** (0.032) | |||||
Person-centred care or dignity trained (%) | 0.019 (0.036) | –0.027 (0.041) | |||||
Staff turnover rate | 0.004 (0.027) | 0.012 (0.029) | |||||
Job vacancy rate | –0.112 (0.117) | –0.134 (0.124) | |||||
Care home controls | |||||||
Registration: nursing | –0.004 (0.077) | –0.008 (0.079) | –0.000 (0.077) | –0.003 (0.077) | –0.003 (0.077) | –0.002 (0.077) | –0.004 (0.079) |
Sector: voluntary | –0.019 (0.190) | –0.008 (0.197) | –0.009 (0.186) | –0.022 (0.188) | –0.022 (0.188) | –0.021 (0.186) | 0.010 (0.193) |
Clients: living with dementia | –0.065 (0.052) | –0.062 (0.053) | –0.075 (0.052) | –0.067 (0.052) | –0.067 (0.052) | –0.065 (0.052) | –0.072 (0.053) |
Care home competition (HHI) | –0.318 (0.733) | –0.287 (0.736) | –0.319 (0.731) | –0.329 (0.733) | –0.329 (0.732) | –0.333 (0.734) | –0.283 (0.737) |
Size (beds, log) | –0.037 (0.086) | –0.032 (0.087) | –0.037 (0.085) | –0.039 (0.085) | –0.039 (0.085) | –0.040 (0.085) | –0.029 (0.087) |
Staff-to-resident ratio | –0.070 (0.121) | –0.070 (0.121) | –0.061 (0.120) | –0.069 (0.121) | –0.069 (0.121) | –0.078 (0.121) | –0.067 (0.121) |
Staff-to-resident ratio (squared) | 0.030 (0.051) | 0.029 (0.051) | 0.028 (0.051) | 0.030 (0.051) | 0.030 (0.051) | 0.032 (0.051) | 0.029 (0.051) |
Female staff (%) | 0.024 (0.107) | 0.027 (0.107) | 0.011 (0.107) | 0.022 (0.106) | 0.023 (0.106) | 0.020 (0.107) | 0.010 (0.108) |
Local area controls | |||||||
Attendance Allowance (%) | 0.002 (0.012) | 0.002 (0.012) | 0.002 (0.012) | 0.002 (0.012) | 0.002 (0.012) | 0.002 (0.012) | 0.002 (0.012) |
Pension Credit (%) | 0.009 (0.010) | 0.009 (0.010) | 0.009 (0.010) | 0.009 (0.010) | 0.009 (0.010) | 0.009 (0.010) | 0.009 (0.010) |
Jobseeker’s Allowance (%) | 0.029 (0.024) | 0.030 (0.024) | 0.029 (0.024) | 0.030 (0.024) | 0.029 (0.024) | 0.029 (0.024) | 0.029 (0.024) |
Average house price (log) | 0.119 (0.134) | 0.117 (0.134) | 0.122 (0.134) | 0.119 (0.134) | 0.120 (0.134) | 0.121 (0.134) | 0.123 (0.134) |
Year controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Constant | –0.949 (1.675) | –0.790 (1.654) | –0.744 (1.658) | –0.750 (1.658) | –0.757 (1.658) | ||
Observations | 12,052 | 10,454 | 12,052 | 12,052 | 12,052 | 12,052 | 10,454 |
Care homes | 5555 | 3957 | 5555 | 5555 | 5555 | 5555 | 3957 |
Imputations | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
Average RVI | 16.66 | 0.247 | 16.95 | 15.68 | 17.67 | 17.92 | 0.333 |
Largest FMI | 0.527 | 0.530 | 0.529 | 0.524 | 0.526 | 0.531 | 0.538 |
Variables | (1) Quality rating (RE) | (2) Quality rating (REIV) | (3) Quality rating (RE) | (4) Quality rating (RE) | (5) Quality rating (RE) | (6) Quality rating (RE) | (7) Quality rating (REIV)
---|---|---|---|---|---|---|---|
Staffing measures | |||||||
Mean wage (log) | 0.398*** (0.091) | 0.935*** (0.147) | 1.071*** (0.177) | ||||
Dementia trained (%) | 0.080*** (0.017) | 0.077*** (0.025) | |||||
Person-centred care or dignity trained (%) | 0.048** (0.023) | 0.006 (0.033) | |||||
Staff turnover rate | –0.046** (0.021) | –0.014 (0.025) | |||||
Job vacancy rate | –0.245** (0.100) | –0.295** (0.119) | |||||
Care home controls | |||||||
Registration: nursing | –0.043*** (0.015) | –0.041*** (0.015) | –0.038*** (0.014) | –0.042*** (0.014) | –0.040** (0.016) | –0.020 (0.016) | –0.028 (0.018) |
Sector: voluntary | 0.055*** (0.018) | 0.024 (0.020) | 0.087*** (0.017) | 0.085*** (0.017) | 0.066*** (0.020) | 0.061*** (0.021) | –0.010 (0.026) |
Clients: living with dementia | –0.027** (0.013) | –0.020 (0.014) | –0.044*** (0.013) | –0.038*** (0.012) | –0.031** (0.014) | –0.031** (0.014) | –0.028* (0.016) |
Care home competition (HHI) | 0.242*** (0.080) | 0.233*** (0.081) | 0.229*** (0.077) | 0.228*** (0.078) | 0.208** (0.085) | 0.188* (0.099) | 0.233** (0.098) |
Size (beds, log) | –0.067*** (0.012) | –0.067*** (0.013) | –0.057*** (0.011) | –0.059*** (0.011) | –0.050*** (0.012) | –0.069*** (0.013) | –0.052*** (0.015) |
Staff-to-resident ratio | 0.050 (0.094) | –0.030 (0.100) | 0.052 (0.088) | 0.045 (0.088) | 0.052 (0.101) | 0.040 (0.104) | –0.084 (0.123) |
Staff-to-resident ratio (squared) | –0.027 (0.044) | 0.005 (0.047) | –0.019 (0.041) | –0.020 (0.041) | –0.031 (0.047) | –0.033 (0.048) | 0.021 (0.058) |
Female staff (%) | 0.042 (0.073) | 0.064 (0.078) | 0.014 (0.064) | 0.028 (0.064) | 0.066 (0.075) | 0.074 (0.077) | 0.051 (0.095) |
Local area controls | |||||||
Attendance Allowance (%) | 0.005 (0.005) | 0.007 (0.005) | 0.004 (0.004) | 0.004 (0.004) | 0.001 (0.005) | –0.001 (0.005) | 0.000 (0.006) |
Pension Credit (%) | –0.002 (0.002) | –0.003* (0.002) | –0.002 (0.002) | –0.002 (0.002) | 0.001 (0.002) | 0.001 (0.002) | –0.000 (0.002) |
Jobseeker’s Allowance (%) | 0.016 (0.016) | 0.015 (0.017) | 0.009 (0.015) | 0.010 (0.015) | –0.007 (0.017) | –0.005 (0.017) | 0.010 (0.019) |
Average house price (log) | 0.017 (0.027) | –0.012 (0.029) | 0.048* (0.025) | 0.050** (0.025) | 0.053* (0.029) | 0.035 (0.029) | –0.008 (0.036) |
Year controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Region controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Constant | –0.115 (0.389) | –0.853** (0.426) | 0.278 (0.346) | 0.277 (0.348) | 0.213 (0.398) | 0.528 (0.407) | –1.151** (0.517) |
Observations | 6509 | 5733 | 7385 | 7385 | 5733 | 5438 | 4002 |
Care homes | 3392 | 3237 | 3861 | 3861 | 3087 | 2973 | 2361 |
Variables | (1) Quality rating (LPM) | (2) Quality rating (LPMIV) | (3) Quality rating (LPM) | (4) Quality rating (LPM) | (5) Quality rating (LPM) | (6) Quality rating (LPM) | (7) Quality rating (LPMIV) |
---|---|---|---|---|---|---|---|
Staffing measures | |||||||
Mean wage (log) | 0.406*** (0.093) | 0.971*** (0.145) | 1.142*** (0.174) | ||||
Dementia trained (%) | 0.082*** (0.017) | 0.073*** (0.025) | |||||
Person-centred care or dignity trained (%) | 0.062*** (0.023) | 0.016 (0.033) | |||||
Staff turnover rate | –0.048** (0.022) | –0.008 (0.026) | |||||
Job vacancy rate | –0.327*** (0.105) | –0.398*** (0.124) | |||||
Care home controls | |||||||
Registration: nursing | –0.048*** (0.015) | –0.042*** (0.015) | –0.043*** (0.014) | –0.048*** (0.014) | –0.044*** (0.015) | –0.025 (0.016) | –0.031* (0.018) |
Sector: voluntary | 0.049*** (0.018) | 0.021 (0.020) | 0.082*** (0.017) | 0.080*** (0.017) | 0.064*** (0.019) | 0.066*** (0.021) | –0.010 (0.025) |
Clients: living with dementia | –0.023* (0.013) | –0.015 (0.014) | –0.038*** (0.013) | –0.032** (0.012) | –0.024* (0.014) | –0.024* (0.014) | –0.023 (0.016) |
Care home competition (HHI) | 0.255*** (0.075) | 0.223*** (0.079) | 0.251*** (0.072) | 0.252*** (0.072) | 0.233*** (0.078) | 0.221** (0.093) | 0.247*** (0.095) |
Size (beds, log) | –0.066*** (0.012) | –0.068*** (0.013) | –0.054*** (0.011) | –0.057*** (0.011) | –0.050*** (0.012) | –0.066*** (0.013) | –0.049*** (0.015) |
Staff-to-resident ratio | 0.096 (0.097) | 0.014 (0.104) | 0.117 (0.090) | 0.109 (0.090) | 0.076 (0.102) | 0.079 (0.106) | –0.061 (0.127) |
Staff-to-resident ratio (squared) | –0.046 (0.045) | –0.017 (0.049) | –0.045 (0.042) | –0.046 (0.042) | –0.039 (0.048) | –0.046 (0.049) | 0.012 (0.060) |
Female staff (%) | 0.050 (0.075) | 0.075 (0.080) | 0.012 (0.066) | 0.030 (0.066) | 0.085 (0.076) | 0.075 (0.079) | 0.054 (0.096) |
Local area controls | |||||||
Attendance Allowance (%) | 0.006 (0.005) | 0.007 (0.005) | 0.005 (0.004) | 0.005 (0.004) | 0.002 (0.005) | –0.000 (0.005) | 0.002 (0.006) |
Pension Credit (%) | –0.002 (0.002) | –0.003* (0.002) | –0.002 (0.002) | –0.002 (0.002) | 0.001 (0.002) | 0.000 (0.002) | –0.001 (0.002) |
Jobseeker’s Allowance (%) | 0.017 (0.016) | 0.021 (0.017) | 0.011 (0.015) | 0.012 (0.015) | –0.000 (0.017) | 0.002 (0.018) | 0.024 (0.020) |
Average house price (log) | 0.020 (0.027) | –0.011 (0.029) | 0.049** (0.025) | 0.051** (0.025) | 0.058** (0.028) | 0.042 (0.029) | –0.001 (0.036) |
Year controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Region controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Constant | –0.214 (0.389) | –0.955** (0.425) | 0.219 (0.347) | 0.207 (0.348) | 0.114 (0.398) | 0.398 (0.410) | –1.413*** (0.518) |
Observations | 6509 | 5733 | 7385 | 7385 | 5733 | 5438 | 4002 |
R² | 0.051 | 0.046 | 0.052 | 0.049 | 0.044 | 0.046 | 0.048
Variables | (1) Quality rating (RE) | (2) Quality rating (REIV) | (3) Quality rating (RE) | (4) Quality rating (RE) | (5) Quality rating (RE) | (6) Quality rating (RE) | (7) Quality rating (REIV) |
---|---|---|---|---|---|---|---|
Staffing measures | |||||||
Mean wage (log) | 0.856** (0.392) | 3.436*** (0.585) | 3.389*** (0.579) | ||||
Dementia trained (%) | 0.412*** (0.075) | 0.446*** (0.088) | |||||
Person-centred care or dignity trained (%) | 0.312*** (0.104) | 0.045 (0.120) | |||||
Staff turnover rate | –0.189** (0.080) | –0.150* (0.086) | |||||
Job vacancy rate | –1.463*** (0.354) | –1.398*** (0.378) | |||||
Care home controls | |||||||
Registration: nursing | –0.220*** (0.050) | –0.205*** (0.050) | –0.193*** (0.050) | –0.220*** (0.050) | –0.219*** (0.050) | –0.216*** (0.050) | –0.159*** (0.050) |
Sector: voluntary | 0.379*** (0.080) | 0.215** (0.084) | 0.454*** (0.077) | 0.443*** (0.077) | 0.433*** (0.076) | 0.444*** (0.076) | 0.260*** (0.084) |
Clients: living with dementia | –0.232*** (0.050) | –0.206*** (0.049) | –0.274*** (0.050) | –0.246*** (0.050) | –0.233*** (0.050) | –0.248*** (0.050) | –0.248*** (0.050)
Care home competition (HHI) | 1.130*** (0.364) | 1.082*** (0.359) | 1.195*** (0.361) | 1.175*** (0.363) | 1.156*** (0.365) | 1.130*** (0.364) | 1.128*** (0.356) |
Size (beds, log) | –0.178*** (0.040) | –0.178*** (0.040) | –0.171*** (0.039) | –0.172*** (0.040) | –0.177*** (0.040) | –0.170*** (0.040) | –0.158*** (0.039) |
Staff-to-resident ratio | 0.240 (0.323) | 0.280 (0.322) | 0.299 (0.320) | 0.252 (0.321) | 0.207 (0.323) | 0.182 (0.325) | 0.274 (0.319) |
Staff-to-resident ratio (squared) | –0.133 (0.149) | –0.183 (0.149) | –0.126 (0.148) | –0.125 (0.148) | –0.113 (0.149) | –0.105 (0.150) | –0.161 (0.148) |
Female staff (%) | 0.422 (0.274) | 0.479* (0.272) | 0.328 (0.273) | 0.403 (0.272) | 0.389 (0.274) | 0.335 (0.275) | 0.318 (0.274) |
Local area controls | |||||||
Attendance Allowance (%) | 0.011 (0.017) | 0.016 (0.016) | 0.009 (0.016) | 0.010 (0.017) | 0.010 (0.017) | 0.007 (0.017) | 0.012 (0.016) |
Pension Credit (%) | –0.006 (0.006) | –0.010* (0.006) | –0.005 (0.006) | –0.006 (0.006) | –0.006 (0.006) | –0.005 (0.006) | –0.009 (0.006) |
Jobseeker’s Allowance (%) | 0.019 (0.059) | 0.035 (0.059) | 0.017 (0.059) | 0.021 (0.059) | 0.014 (0.059) | 0.009 (0.059) | 0.036 (0.058) |
Average house price (log) | 0.176* (0.099) | 0.059 (0.101) | 0.220** (0.097) | 0.225** (0.098) | 0.215** (0.098) | 0.235** (0.098) | 0.093 (0.100) |
Year controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Region controls | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Constant | –2.756* (1.493) | –6.641*** (1.627) | –1.662 (1.362) | –1.669 (1.370) | –1.409 (1.368) | –1.581 (1.368) | –6.931*** (1.615) |
Observations | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 | 12,052 |
Care homes | 5555 | 5555 | 5555 | 5555 | 5555 | 5555 | 5555 |
Imputations | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
Average RVI | 0.160 | 0.163 | 0.153 | 0.149 | 0.147 | 0.152 | 0.223 |
Largest FMI | 0.450 | 0.446 | 0.455 | 0.453 | 0.454 | 0.457 | 0.449 |
Variables | (1) ‘Safe’ rating (REIV) | (2) ‘Caring’ rating (REIV) | (3) ‘Effective’ rating (REIV) | (4) ‘Well led’ rating (REIV) | (5) ‘Responsive’ rating (REIV) |
---|---|---|---|---|---|
Staffing measures | |||||
Mean wage (log) | 0.502*** (0.129) | 0.260*** (0.075) | 0.543*** (0.118) | 0.634*** (0.125) | 0.424*** (0.106) |
Dementia trained (%) | 0.075*** (0.019) | 0.047*** (0.011) | 0.064*** (0.017) | 0.077*** (0.019) | 0.084*** (0.016) |
Person-centred care or dignity trained (%) | –0.006 (0.026) | –0.015 (0.017) | –0.004 (0.024) | –0.007 (0.026) | –0.045** (0.023) |
Care home controls | |||||
Registration: nursing | –0.037*** (0.012) | –0.029*** (0.007) | –0.034*** (0.011) | –0.030** (0.012) | –0.035*** (0.011) |
Sector: voluntary | 0.056*** (0.018) | 0.014 (0.009) | 0.026* (0.016) | 0.053*** (0.017) | 0.035** (0.014) |
Clients: living with dementia | –0.039*** (0.012) | –0.022*** (0.006) | –0.041*** (0.010) | –0.053*** (0.011) | –0.039*** (0.010) |
Care home competition (HHI) | 0.087 (0.081) | 0.085** (0.036) | 0.248*** (0.057) | 0.261*** (0.067) | 0.158** (0.065) |
Size (beds, log) | –0.045*** (0.009) | –0.020*** (0.005) | –0.030*** (0.008) | –0.028*** (0.009) | –0.035*** (0.007) |
Staff-to-resident ratio | 0.072 (0.072) | 0.050 (0.047) | 0.006 (0.068) | 0.043 (0.072) | 0.037 (0.061) |
Staff-to-resident ratio (squared) | –0.036 (0.032) | –0.029 (0.021) | –0.013 (0.031) | –0.024 (0.033) | –0.020 (0.027) |
Female staff (%) | 0.036 (0.059) | 0.079** (0.038) | 0.017 (0.055) | 0.065 (0.057) | 0.053 (0.057) |
Local area controls | |||||
Attendance Allowance (%) | 0.004 (0.004) | 0.001 (0.002) | 0.009** (0.004) | 0.005 (0.004) | 0.007* (0.003) |
Pension Credit (%) | –0.004** (0.001) | –0.001* (0.001) | –0.003** (0.001) | –0.002 (0.001) | –0.002 (0.001) |
Jobseeker’s Allowance (%) | –0.001 (0.014) | 0.001 (0.008) | 0.010 (0.012) | 0.006 (0.013) | 0.003 (0.011) |
Average house price (log) | 0.010 (0.024) | –0.003 (0.014) | 0.020 (0.022) | 0.011 (0.023) | 0.001 (0.020) |
Year controls | Yes | Yes | Yes | Yes | Yes |
Region controls | Yes | Yes | Yes | Yes | Yes |
Constant | –0.362 (0.373) | 0.373* (0.212) | –0.586* (0.336) | –0.688* (0.354) | –0.094 (0.309) |
Observations | 11,003 | 11,003 | 11,003 | 11,003 | 11,003 |
Care homes | 5298 | 5298 | 5298 | 5298 | 5298 |
Imputations | 50 | 50 | 50 | 50 | 50 |
Average RVI | 0.0787 | 0.0986 | 0.0881 | 0.0835 | 0.0829 |
Largest FMI | 0.343 | 0.360 | 0.377 | 0.339 | 0.361 |
Variables | (1) Mean wage (MI) | (2) Mean wage (MI) | (3) Mean wage (CC) | (4) Mean wage (CC) | (5) Quality rating (CC-IV) | (6) Quality rating (CC-IV) |
---|---|---|---|---|---|---|
Wage | ||||||
Mean wage (log) | 0.973*** (0.145) | 1.145*** (0.174) | ||||
Instrument | ||||||
Staff below future minimum wage (%) | –0.195*** (0.004) | –0.196*** (0.004) | –0.191*** (0.004) | –0.179*** (0.004) | ||
Spatial lag of Pension Credit uptake | –0.000 (0.000) | –0.000 (0.000) | ||||
Staffing measures | ||||||
Dementia trained (%) | –0.004 (0.002) | 0.009*** (0.003) | 0.073*** (0.025) | |||
Person-centred care or dignity trained (%) | –0.004 (0.003) | –0.008** (0.004) | 0.017 (0.033) | |||
Staff turnover rate | –0.012*** (0.003) | –0.011*** (0.003) | –0.008 (0.026) | |||
Job vacancy rate | 0.018 (0.014) | 0.031** (0.013) | –0.398*** (0.124) | |||
Care home controls | ||||||
Registration: nursing | –0.007*** (0.002) | –0.007*** (0.002) | –0.009*** (0.002) | –0.009*** (0.002) | –0.042*** (0.015) | –0.031* (0.018) |
Sector: voluntary | 0.039*** (0.003) | 0.038*** (0.003) | 0.030*** (0.003) | 0.026*** (0.004) | 0.021 (0.020) | –0.010 (0.025) |
Clients: living with dementia | –0.008*** (0.002) | –0.007*** (0.002) | –0.007*** (0.002) | –0.006*** (0.002) | –0.015 (0.014) | –0.023 (0.016) |
Care home competition (HHI) | –0.011 (0.009) | –0.011 (0.009) | –0.006 (0.009) | –0.002 (0.012) | 0.223*** (0.079) | 0.247*** (0.095) |
Size (beds, log) | –0.002 (0.002) | –0.002 (0.002) | 0.003* (0.002) | 0.005** (0.002) | –0.068*** (0.013) | –0.049*** (0.015) |
Staff-to-resident ratio | –0.016 (0.013) | –0.018 (0.013) | 0.022 (0.014) | 0.041*** (0.015) | 0.014 (0.104) | –0.061 (0.127) |
Staff-to-resident ratio (squared) | 0.015** (0.006) | 0.016** (0.006) | –0.005 (0.007) | –0.015** (0.007) | –0.017 (0.049) | 0.012 (0.060) |
Female staff (%) | 0.005 (0.010) | 0.006 (0.010) | 0.019 (0.012) | 0.022* (0.013) | 0.075 (0.080) | 0.055 (0.096) |
Local area controls | ||||||
Attendance Allowance (%) | –0.002*** (0.001) | –0.002*** (0.001) | –0.001** (0.001) | –0.001** (0.001) | 0.007 (0.005) | 0.002 (0.006) |
Pension Credit (%) | 0.001*** (0.000) | 0.001*** (0.000) | 0.001*** (0.000) | 0.001*** (0.000) | –0.003* (0.002) | –0.001 (0.002) |
Jobseeker’s Allowance (%) | –0.004* (0.002) | –0.004* (0.002) | –0.003 (0.002) | –0.001 (0.002) | 0.021 (0.017) | 0.024 (0.020) |
Average house price (log) | 0.011*** (0.004) | 0.010*** (0.004) | 0.017*** (0.004) | 0.021*** (0.004) | –0.012 (0.029) | –0.001 (0.036) |
Year controls | Yes | Yes | Yes | Yes | Yes | Yes |
Region controls | Yes | Yes | Yes | Yes | Yes | Yes |
Constant | 2.021*** (0.051) | 2.031*** (0.051) | 1.891*** (0.058) | 1.821*** (0.063) | –0.958** (0.425) | –1.417*** (0.518) |
Observations | 12,052 | 12,052 | 5733 | 4002 | 5733 | 4002 |
R² | 0.590 | 0.600 | 0.046 | 0.047 | ||
Weak identification | 2508.4*** | 2529.8*** | 1062.6*** | 1674.7*** | 1062.6*** | 1674.7*** |
Overidentification | 0.532a | 0.881a | 0.532a | 0.881a | ||
Strict exogeneity test | –0.067* | –0.066* | ||||
Endogeneity test | –0.879*** | –0.821*** | 27.03*** | 23.52*** | 27.03*** | 23.52*** |
Imputations | 50 | 50 | ||||
Average RVI | 0.482 | 0.548 | ||||
Largest FMI | 0.498 | 0.670 |
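The two-stage least squares (IV) logic behind the wage models in the tables above — instrumenting the potentially endogenous wage with exposure to the future minimum wage — can be illustrated with a minimal simulated sketch. All numbers and variable names here are hypothetical and do not come from the study data; the point is only to show why the IV estimate differs from OLS when a predictor shares an unobserved confounder with the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Hypothetical data-generating process: true effect of (log) wage on quality is 1.0.
z = rng.normal(size=n)                           # instrument (affects wage, not quality directly)
u = rng.normal(size=n)                           # unobserved confounder
wage = 0.8 * z + 0.6 * u + rng.normal(size=n)    # endogenous regressor
quality = 1.0 * wage + 0.9 * u + rng.normal(size=n)

X = np.column_stack([np.ones(n), wage])
Z = np.column_stack([np.ones(n), z])

# OLS is biased upward because wage and quality share the confounder u.
beta_ols = np.linalg.lstsq(X, quality, rcond=None)[0]

# 2SLS: first stage projects wage onto the instrument;
# the second stage uses the fitted values in place of wage.
gamma = np.linalg.lstsq(Z, wage, rcond=None)[0]
wage_hat = Z @ gamma
X_iv = np.column_stack([np.ones(n), wage_hat])
beta_iv = np.linalg.lstsq(X_iv, quality, rcond=None)[0]

print(f"OLS slope: {beta_ols[1]:.3f}, IV slope: {beta_iv[1]:.3f}")
```

In this simulation the OLS slope is inflated above the true value of 1.0, while the IV slope recovers it, mirroring the pattern in the tables where the IV wage coefficients differ from their uninstrumented counterparts. (Note that the second-stage standard errors from this naive sketch would need correcting; dedicated IV routines do this automatically.)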
Glossary
- Care Quality Commission
- The national health and social care regulator in England.
- Cognitive testing
- An interview technique used to improve questionnaire design.
- Confirmatory factor analysis
- A statistical technique used to evaluate a measure’s dimensionality.
- Construct
- A theoretical concept (e.g. quality of life) that cannot be measured directly but may be estimated from indicators that represent the underlying concept.
- Construct validity (by hypothesis testing)
- The degree to which the scores of a tool (or its items) are consistent with hypotheses, which is based on the assumption that the tool validly measures the construct in question.
- Dementia
- A general term for a set of symptoms caused by different disorders of the brain (e.g. Alzheimer’s disease).
- Dimensionality
- The number and nature of constructs captured by a measure.
- Domain or attribute
- An aspect of quality of life.
- Dummy variable
- A qualitative or categorical variable that is introduced into regression analysis to utilise information that cannot be measured on a numeric scale.
- Easy read
- An accessible presentation of text, usually with pictures, often used to convey written information to people with intellectual and developmental disabilities.
- Endogeneity
- Where a predictor variable is correlated with the error term of the statistical model in which it is included. This can arise from important predictor variables being omitted from the model (omitted variable bias) or from the outcome variable (e.g. quality) itself determining a predictor variable (simultaneity).
- Exogeneity
- Where a predictor variable included in a statistical model is not explained by the model.
- Factor structure
- See Dimensionality.
- Focus group
- A group interviewing technique.
- Instrumental variable
- A variable that is correlated with an endogenous predictor but not with the model’s error term, used in estimation techniques (e.g. two-stage least squares) to address endogeneity such as that caused by simultaneity.
- Internal reliability
- The degree of inter-relatedness among the items (questions) within a measure. Also referred to as internal consistency.
- Latent construct
- See Construct.
- Multiple imputation
- A statistical technique for addressing missing data.
- National Living Wage
- The UK minimum wage rate for employees aged ≥ 25 years.
- Person-centred care
- Care that meets the wishes and preferences of care recipients.
- Psychometric testing
- Generally, the analyses used to evaluate the properties (e.g. validity, reliability) of a measurement instrument.
- Rasch analysis
- A method, based on item response theory, used to evaluate a measure’s psychometric properties.
- Structural validity
- The degree to which the scores of a measure adequately reflect the dimensionality of the construct to be measured.
- Unidimensionality
- The relationship of all items in a measure to one underlying construct.
- Validity
- An assessment of whether or not an instrument measures the construct(s) it claims to measure.
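The multiple imputation entries in the glossary, and the ‘Imputations’, ‘Average RVI’ and ‘Largest FMI’ rows in the tables, can be tied together with a small simulated sketch of the pooling step (Rubin’s rules). This is a deliberately crude illustration with made-up numbers — it imputes from the marginal distribution, which attenuates the slope; a proper imputation model would condition on the outcome and other covariates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 20                        # observations, number of imputed data sets
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
x_obs = x.copy()
x_obs[rng.random(n) < 0.3] = np.nan   # ~30% missing completely at random

miss = np.isnan(x_obs)
mu, sd = np.nanmean(x_obs), np.nanstd(x_obs)

estimates, variances = [], []
for _ in range(m):
    x_imp = x_obs.copy()
    # Crude stochastic imputation: draw missing x from the observed distribution.
    x_imp[miss] = rng.normal(mu, sd, miss.sum())
    X = np.column_stack([np.ones(n), x_imp])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    var_slope = s2 * np.linalg.inv(X.T @ X)[1, 1]
    estimates.append(beta[1])
    variances.append(var_slope)

# Rubin's rules: pool the m estimates and combine within- and
# between-imputation variance.
qbar = np.mean(estimates)                 # pooled slope estimate
ubar = np.mean(variances)                 # within-imputation variance
b = np.var(estimates, ddof=1)             # between-imputation variance
total_var = ubar + (1 + 1 / m) * b
rvi = (1 + 1 / m) * b / ubar              # relative variance increase (cf. 'Average RVI')

print(f"pooled slope: {qbar:.3f}, RVI: {rvi:.3f}")
```

The RVI reported per model in the tables plays the same role as `rvi` here: it quantifies how much the missing data (and the uncertainty added by imputing it) inflate the variance of the pooled estimates.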
List of abbreviations
- ADASS
- Association of Directors of Adult Social Services
- ADIS-IV
- Anxiety Disorders Interview Schedule for DSM-IV
- ADL
- activities of daily living
- ANOVA
- analysis of variance
- APS
- Abbey Pain Scale
- ASC-WDS
- Adult Social Care Workforce Data Set
- ASCOT
- Adult Social Care Outcomes Toolkit
- ASCOT-CH4
- Adult Social Care Outcomes Toolkit Care Homes, four levels
- CES-D
- Centre for Epidemiologic Studies Depression Scale
- CNPI
- Checklist of Nonverbal Pain Indicators
- CQC
- Care Quality Commission
- CSDD
- Cornell Scale for Depression in Dementia
- df
- degrees of freedom
- DIF
- differential item functioning
- DMAS
- Dementia Mood Assessment Scale
- DS-DAT
- Discomfort Scale in Dementia of the Alzheimer’s Type
- EFA
- exploratory factor analysis
- EQ-5D
- EuroQol-5 Dimensions
- EQ-5D-5L
- EuroQol-5 Dimensions, five-level version
- FPS
- Faces Pain Scale
- GAD-2
- Generalised Anxiety Disorder 2-item
- GAI
- Geriatric Anxiety Inventory
- GAS
- Geriatric Anxiety Scale
- GDS
- Geriatric Depression Scale
- HADS
- Hospital Anxiety and Depression Scale
- HAM-A
- Hamilton Anxiety Rating Scale
- HDRS
- Hamilton Depression Rating Scale
- HRQoL
- health-related quality of life
- KLOE
- key line of enquiry
- KMO
- Kaiser–Meyer–Olkin
- MADRS
- Montgomery–Åsberg Depression Rating Scale
- MDS-DRS
- Minimum Data Set Depression Rating Scale
- MDSCPS
- Minimum Data Set Cognitive Performance Scale
- MiCareHQ
- Measuring and Improving Care Home Quality
- MOBID-2
- Mobilisation–Observation–Behaviour–Intensity–Dementia-2 Pain Scale
- MOOCH
- Measuring Outcomes Of Care Homes
- NIHR
- National Institute for Health Research
- NOPPAIN
- Non-Communicative Patient’s Pain Assessment Instrument
- OLS
- ordinary least squares
- PACSLAC
- Pain Assessment Checklist for Seniors with Limited Ability to Communicate
- PAINAD
- Pain Assessment in Advanced Dementia Scale
- PHQ-9
- Patient Health Questionnaire-9
- PHQ-9 OV
- Patient Health Questionnaire-9 Observational Version
- PPI
- patient and public involvement
- PSSRU
- Personal Social Services Research Unit
- RAID
- Rating Anxiety and Dementia Scale
- REIV
- random-effects model with instrumental variables
- SCRQoL
- social care-related quality of life
- WP
- work package
Notes
- Information for family members acting as personal consultees
- Care home resident qualitative interview schedule (CH3INT-Qual)
Supplementary material can be found on the NIHR Journals Library report page (https://doi.org/10.3310/hsdr09190).
Supplementary material has been provided by the authors to support the report. Any files provided at submission will have been seen by peer reviewers, but not extensively reviewed. Any supplementary material provided at a later stage in the process may not have been peer reviewed.