Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 14/04/48. The contractual start date was in November 2015. The final report began editorial review in October 2018 and was accepted for publication in June 2019. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
John Powell declares current membership of the National Institute for Health Research (NIHR) Health Technology Assessment and Efficacy and Mechanism Evaluation Editorial Board (2005 to present), of which he is chairperson and editor-in-chief (since April 2019). In addition, John Powell is a co-investigator on another NIHR Health Services and Delivery Research (HSDR)-funded project, which was funded under the same call [Understanding how frontline staff use patient experience data for service improvement: an exploratory case study evaluation and national survey (HSDR 14/156/06)]. Louise Locock declares personal fees from the Point of Care Foundation (London, UK) outside the submitted work. In addition, Louise Locock is principal investigator on another NIHR HSDR-funded project, which was funded under the same call [Understanding how frontline staff use patient experience data for service improvement: an exploratory case study evaluation and national survey (HSDR 14/156/06)]. Sue Ziebland declares her work as programme director of the NIHR Research for Patient Benefit programme (2017 to present). Sue Ziebland is also a co-investigator on another NIHR HSDR-funded project, which was funded under the same call [Understanding how frontline staff use patient experience data for service improvement: an exploratory case study evaluation and national survey (HSDR 14/156/06)]. Sue Ziebland is a NIHR Senior Investigator. We acknowledge support from the NIHR Oxford Collaboration for Leadership in Applied Health Research and Care at Oxford Health NHS Foundation Trust for salary support to John Powell, Anne-Marie Boylan and Michelle van Velthoven.
Disclaimer
This report contains transcripts of interviews conducted in the course of the research and contains language that may offend some readers.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2019. This work was produced by Powell et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Background and rationale
Introduction
Digital health is fast becoming a new determinant of health. Access to and use of digital services will soon influence both the care options available to individuals and the outcomes they gain from them. 1 In this context, a new challenge for the NHS is to know how to interpret online patient feedback in relation to other sources of data on patient experience, and whether and how to act on this content to improve services. Online feedback may have advantages, such as timeliness and transparency, but anecdotally it is sometimes seen as unrepresentative, reflecting only a few users, often those at the extremes with very negative or very positive experiences. The overarching aim of this study was therefore to provide the NHS with the evidence required to make best use of online patient feedback to improve health-care delivery, in combination with other local qualitative and quantitative information on patients’ experiences.
Background
Person-centredness is a fundamental pillar of health-care quality,2,3 and patient experience is associated with patient safety and self-rated and objectively measured health outcomes for a wide range of disease and service areas. 4–6 Despite the importance placed on creating a patient-centred, responsive health system, a series of high-profile investigations, including those by Sir Robert Francis into the Mid Staffordshire NHS Foundation Trust,7 Sir Bruce Keogh’s investigation into struggling trusts8 and Don Berwick’s national review of patient safety,9 noted failures at both team and organisational levels within the NHS to recognise and respond to feedback from patients and their families and carers.
At the same time, as most ‘traditional’ feedback mechanisms, such as surveys and complaints systems, struggle to elicit good response rates and to demonstrate impact, health-care providers are receiving large amounts of (often unacknowledged) commentary from patients and carers via the internet. 10–17 Gathering, interpreting and responding to solicited and unsolicited online consumer feedback is now established practice, and fundamental to success, in industries such as retail, travel and hospitality. 18,19 In 2015, the UK Competition and Markets Authority estimated that online reviews influence £23B of consumer spending each year. 20 The digital consumer has become accustomed to leaving such feedback on products and services, and these industries harness crowdsourced evaluations to drive consumer choice and to inform service improvements, although this has not been without challenges, including the potential gaming and manipulation of feedback.
The internet is having a major impact on people’s relationships within health care and people are already commenting on their health experiences online. 10,21–25 UK and US data show that online feedback on health care is increasing and is likely to continue to grow fast. 26,27 This includes comments on structured patient rating sites [e.g. NHS Choices (URL: www.nhs.uk), iWantGreatCare (URL: www.iwantgreatcare.org) and Care Opinion (URL: www.careopinion.org.uk)], as well as unstructured and unsolicited commentary about treatment, health services and illness in online settings, such as blogs, forums and social media. (Note: in this document we use terms such as feedback or comments to refer to all of this solicited and unsolicited content.)
When we started this project, NHS England had just committed to using internet feedback as part of its vision for a digital NHS founded on the concepts of participation, transparency and transaction. NHS managers and health-care practitioners will therefore need to understand how to interpret, respond to and harness online content from patients. Patients, carers and the public need to understand how they can provide useful feedback to the NHS and what influence this can have. Yet there is no consensus or clear policy on how online feedback should be used, and by whom, to deliver NHS and patient benefit, and the evidence base is very limited. Little is known about the people who provide online content on their experience of care, why they do this, whether or not there are issues of inequality and what influence this feedback has on other patients, practitioners and organisations. We need to understand the strengths, weaknesses and uses of the data. There is some limited work on this from outside the UK14,28,29 (e.g. from surveys conducted by the Pew Research Center). 14 However, research exploring motivation to provide feedback is sparse and has focused on administrative procedures for handling complaints, rather than on patients themselves. 30 In the USA, 40% of a nationally representative sample reported that online ratings were ‘very important’ in choosing a physician. 13 In Germany, online raters were more likely to be younger, female and more educated. 14 A small UK study suggested that the views of certain groups may be disproportionately represented in ratings. 15
We need better data to provide a robust understanding of online feedback from a user’s perspective and of the role of online feedback in improving health-care services, and more information about the authors and receivers of feedback. We also need to understand the individual, professional and organisational issues influencing the use of online feedback in health care. Many clinicians appear resistant to using online feedback, worrying about selection bias and vulnerability to gaming or malice, and concerned that subjective patient experience and objective care quality may be only tangentially related. 31 To the best of our knowledge, before this study, there were no representative data on health professionals’ attitudes to and experiences of online feedback and no in-depth analysis of the organisational issues to guide its use in NHS organisations.
Objectives
We therefore had three research objectives, each of which addressed gaps in the current evidence base:
- to identify current practice and future challenges for online patient feedback, and to determine the implications for the NHS
- to understand what online feedback from patients represents and who is excluded, with what consequences
- to understand the potential barriers to and facilitators of the use of online patient feedback by NHS staff and organisations, and the organisational capacity required to combine, interpret and act on patient experience data.
We also had a fourth ‘knowledge translation’ objective:
- to use the study findings to develop a toolkit and training resources for NHS organisations, to encourage appropriate use of online feedback in combination with other patient experience data.
Methods
The study comprised five projects, listed here and aligned with our three research objectives:
- stakeholder consultation and evidence synthesis (scoping review) regarding use of online feedback in health care (to address objective 1)
- questionnaire survey of the public on the use of online comment on health services (to address objectives 1 and 2)
- qualitative study of patients’ and carers’ experiences of creating and using online comment (to address objectives 1 and 2)
- survey and focus groups of health-care professionals (to address objectives 1 and 3)
- ethnographic organisational case studies with four NHS secondary care provider organisations (to address objectives 1 and 3).
There was one minor change to the protocol during the course of the study. Our questionnaire survey of the public was originally going to form one part of the Oxford Internet Surveys (OxIS) for 2015, but OxIS did not take place that year, so we used the same method as a standalone survey.
Research team and advisers
The research team had quantitative and qualitative expertise in the areas of digital health and patient experience research, and included people with disciplinary backgrounds in health services research (especially in primary care and public health), sociology, science and technology studies, psychology, epidemiology, nursing and statistics, as well as a lay co-investigator with experience as an expert patient and blogger. This range of perspectives has been a particular strength throughout the programme, enabling us to examine findings through several different lenses. We also participated in the learning set established to bring together the other projects funded by the National Institute for Health Research (NIHR) Health Services and Delivery Research (HSDR) programme under its themed call for patient experience projects. Our project was not originally submitted to this call, but given the obvious synergy of our topic area, the funders subsequently included it with these other studies. The learning set meetings were very helpful in building a community of collaborative researchers interested in this area, and in sharing our emerging findings and receiving constructive feedback to inform our methods, analyses and discussion.
The study was overseen by a Study Steering Committee (SSC) (see Appendix 1 for membership), which met approximately every 6 months. The full research team also met every 6 months, with a smaller core team meeting monthly. Our public and patient involvement (PPI) activity was led by the lay co-investigator, who was a full member of the project team, and we were advised by a Patients, Carers and Public Reference Group (PCPRG) chaired by an independent lay representative, which met as needed and also provided feedback via e-mail (full details of our PPI activity are given in Chapter 7).
Structure of monograph
Each of our five projects is described in a separate chapter (see Chapters 2–6), with full details of methods and findings. Chapter 7 describes our PPI activity, and an overarching synthesis of findings and their implications is provided in Chapter 8. Research ethics considerations are covered in each project chapter.
The next chapter describes the first of our projects: the stakeholder consultation and evidence synthesis work.
Chapter 2 A scoping review and stakeholder consultation charting the current landscape of the evidence on online patient feedback
Summary
As the initial project, this scoping review and stakeholder consultation aimed to identify and synthesise current practice, the state of the art and future challenges in the field of online patient feedback. We searched electronic bibliographic databases and conducted hand-searches up to January 2018. We included primary studies of internet-based reviews and other online feedback (e.g. from social media and blogs), from patients, carers or the public, about health-care providers (individuals, services or organisations). Key findings were extracted and tabulated for further synthesis, guided by the themes arising from a consultation with 15 stakeholders with online feedback expertise from a range of backgrounds, including health-care policy, practice and research. We found that, as with much digital innovation, research is lagging behind practice. The current literature helped to clarify the frequency of online commentary and challenged the assumption that feedback is usually negative. The review identified gaps in the evidence base, which can guide future work, especially in understanding how organisations can use feedback to deliver health-care improvement.
Method
When we began this work, to the best of our knowledge, no synthesis of the existing body of literature on online patient feedback (reviews and/or ratings) had been conducted. Collating knowledge and developing an understanding of current research was an important precursor to further work in this area. Adopting a scoping review methodology allowed us to access and review existing evidence, summarise and disseminate research findings and identify gaps in the existing literature.
Scoping studies ‘aim to map rapidly the key concepts underpinning a research area and the main sources and types of evidence available’. 32 They are useful when reviewing literature on complex topics or areas that have not been reviewed before. The depth of the subsequent analysis of findings depends on the purpose of the review. 33 Unlike other types of reviews, such as quantitative systematic reviews, the scoping review does not appraise the quality of research evidence. However, it does consider the strengths and limitations of individual studies and critique the existing body of knowledge.
To identify relevant literature, a list of free-text and thesaurus terms likely to retrieve articles about online patient feedback was compiled using an iterative process of consultation between the research team and an information specialist.
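Purely for illustration (the full strategy is not reproduced here, and these terms and line numbers are illustrative assumptions, not the actual strategy), a fragment of an Ovid MEDLINE search combining thesaurus and free-text terms might look like the following:

```
1. exp Internet/
2. (online adj3 (rating* or review* or feedback or comment*)).ti,ab.
3. (patient* adj3 (experience* or opinion*)).ti,ab.
4. 1 and 2 and 3
```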
Searches were run in May 2015 and updated in January 2018. Five databases were searched: MEDLINE (In-Process & Other Non-Indexed Citations and Ovid MEDLINE, 1948–present, accessed through OvidSP), EMBASE (1974–present, accessed through OvidSP), PsycINFO (1967–present, accessed through OvidSP), Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1981–present, accessed through EBSCOhost) and Social Science Citation Index (1956–present, accessed through Web of Knowledge). Titles and abstracts were subsequently screened for relevance using the following inclusion criteria.
- Topic area: the main focus of the article had to be about online feedback (to include internet-based reviews, ratings and other online feedback, such as found in social media and blogs) from patients, carers and/or the public, about health-care providers (individuals or organisations).
- Type of paper: original research.
- Study design: all study designs.
- Date: 2000 to present.
Titles and abstracts were screened independently by two authors (AMB and VW) using Covidence (Veritas Health Innovation, Melbourne, VIC, Australia), a software package designed to aid the screening process. Disagreements were resolved in discussion with a third author (JP). Full texts were screened using the same criteria and process (again by two authors, with referral to a third in cases of disagreement). All included articles can be found in Report Supplementary Material 1. We have used a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram to illustrate this, although as this was a scoping review of diverse studies we do not follow PRISMA reporting more generally (Figure 1).
Once full-text articles had been selected, they were randomly assigned to one of two authors for single data extraction. Data were extracted and tabulated using a standard pro forma (in a scoping review, it is not necessary to conduct double extraction). The following data were extracted: information on authors, date of publication, study aims, sample, methods and findings. Reference lists were checked for further articles to include.
The articles were thematically grouped by aim for further analysis in a process of data charting. A descriptive narrative synthesis of the findings was produced through individual analysis and discussion within the research team.
Stakeholder consultation
We conducted a stakeholder consultation with 15 representatives from a range of organisations, including policy-makers, senior clinicians working in patient experience, patient experience managers, regulators, representatives from patient feedback organisations and service users who have read or provided feedback. They had all collected or used online feedback. The aim of this exercise was to identify stakeholder priorities and questions to guide the literature review. In other words, we wanted to respond to the preoccupations of stakeholders and to see the extent to which the current evidence base could address the questions they had about online feedback. They were consulted about their perceptions and concerns about online feedback, including what they thought was important for the future. Consultations were conducted individually, in person or on the telephone, and notes were taken to capture the data, which were then analysed inductively. Ethics approval was not required.
In the stakeholder consultation, we identified six key issues to help navigate the online patient feedback landscape, which addressed evidence gaps identified by stakeholders:
- Who provides and who uses reviews?
- How do organisations currently use reviews?
- What is the content of reviews?
- Why is online feedback given?
- What are staff and service user attitudes towards online feedback?
- How reliable is online feedback?
Findings
Search results
The search yielded 29,039 papers. Twelve further papers were identified through hand-searching and citation checking. After duplicates (n = 14,221), animal studies, conference abstracts, non-English-language papers and those published before 2000 (n = 13,911) were excluded, 310 papers were accepted for full-text screening, after which 78 papers were included in the review.
Where, when and what kind of research has been conducted?
The majority of the 78 included papers described studies conducted in the USA (n = 44). 15,26,29,34–74 Others were from the UK (n = 12),27,75–85 Germany (n = 8),11,28,86–91 the Netherlands (n = 3),49,92,93 China (n = 3),94–96 Austria (n = 1),97 Canada (n = 1)98 and Switzerland (n = 1). 99 Five studies10,21,100–102 were conducted using patient feedback collected in more than one country.
As presented in Table 1, the majority of included studies used quantitative methods and were predominantly exploratory or descriptive cross-sectional studies, surveys or experiments, or employed machine learning. There were also qualitative and mixed-methods studies.
Design | Studies |
---|---|
Quantitative | Bardach et al.;34 Bidmon et al.;86 Black et al.;35 Burkle and Keegan;36 Emmert et al.;103 Emmert et al.;28 Emmert and Meier;87 Emmert et al.;11 Emmert et al.;88 Frost and Mesfin;37 Gao et al.;26 Gao et al.;38 Galizzi et al.;75 Gilbert et al.;39 Glover et al.;40 Gray et al.;41 Greaves et al.;76 Greaves et al.;104 Hanauer et al.;42 Hao;94 Johnson;43 Kadry et al.;44 Kinast et al.;45 Lagu et al.;15 Lewis;105 McCaughey et al.;46 Merrell et al.;47 Riemer et al.;48 Samora et al.;49 Segal et al.;50 Sobin and Goyal;51 Terlutter et al.;89 Thackeray et al.;52 Timian et al.;74 Trehan et al.;53 and van Velthoven et al.77 |
Experimental | Grabner-Kräuter and Waiguny;97 Hanauer et al.;54 Jans and Kranzbühler;92 Kanouse et al.;55 Li et al.;56 and Yaraghi et al.57 |
Machine learning | Brody and Elhadad;58 Brooks and Baker;78 Greaves et al.;79 Hao;94 Hawkins et al.;59 Hopper and Uriyo;60 Paul et al.;61 Ranard et al.;62 Rastegar-Mojarad et al.;63 and Wallace et al.64 |
Mixed methods | Ellimoottil et al.;65 Emmert et al.;90 Greaves et al.;80 Lagu et al.;100 Lagu et al.;66 MacDonald et al.;98 Reimann and Strech;101 Smith and Lipoff;67 and van de Belt et al.93 |
Qualitative | Adams;10 Adams;21 Bardach et al.;68 Brown-Johnson et al.;102 Detz et al.;69 Kilaru et al.;70 Kleefstra;106 López et al.;29 Nakhasi et al.;71 Patel et al.;81 Patel et al.;82 Rothenfluh et al.;99 Shepherd et al.;83 Speed et al.;84 Sundstrom et al.;72 and Zhang et al.95 |
Who provides and who uses online reviews?
From the literature, it is apparent that public awareness of rating sites differs, that at present the numbers of people providing online reviews are still low, but people are starting to use these sites more frequently. A German survey28 showed that 32% of the public were aware of health rating sites (people were more commonly aware of rating sites for other products and services) and health rating sites were seen as less important sources of health information than other sources (e.g. recommendations of friends and family). 54
Posting (providing) a rating was a slightly more established activity in Germany than in other countries: a German survey87 (2013) showed that 11% of participants had posted online feedback, whereas studies in the USA (2014)54 and Austria (2015)97 found a prevalence of 6%. The most recent UK figure, as identified in our own Improving NHS Quality Using Internet Ratings and Experiences (INQUIRE) survey (see Chapter 3), was 8%. 77 Other studies showed that women were more likely than men to post feedback. 89,90
People who provide online feedback are likely to be younger, have higher levels of education89 and have a long-term condition. 52,89 Likelihood of using online review sites may also be influenced by the doctor–patient relationship:75 perceiving the relationship to be friendly, feeling listened to and being the same sex as the general practitioner (GP) seemed to predict use, and willingness to use was predicted by autonomy in health-care decisions. However, patients who felt that they had clear explanations from their GP were less likely to use online review sites. Men and those with less formal education were less likely to use these sites,52,89 as were people with higher incomes. 75
How are online reviews used by organisations?
This scoping review identified a clear gap in the literature: we found no papers that considered the purpose of online patient feedback or that uncovered the practices and processes governing its use in health-care organisations. However, we did find evidence that some services have begun to incorporate online reviews into service improvement: in Germany, a survey88 found that ophthalmology and gynaecology services were the most likely to implement change based on online patient feedback. Similarly, there was limited research on the value of online reviews to health-care inspection or monitoring agencies, although their potential was noted, despite some concerns, for example by staff in a study of the Dutch Health Inspectorate. 49 This was particularly true of structured patient feedback websites, which were thought to contain more pertinent additional information than other social media platforms [e.g. Facebook (Facebook, Inc., Menlo Park, CA, USA; www.facebook.com) and Twitter (Twitter, Inc., San Francisco, CA, USA; www.twitter.com)]. 93 The structured websites were considered by patients to provide ‘on the ground’ or ‘bottom-up’ quality monitoring.
What is the content of reviews?
Characteristics of reviews
Strikingly, the included studies repeatedly showed that the majority of reviews were positive and that numeric ratings for health-care providers tended to be high. 51,90,91,100 Reviewers often recommended the health service to other patients. 85,100 Positive reviews were more likely to be posted by females, older adults and those with private health insurance. 91 Having a long-standing relationship with a health-care professional was also linked to providing a positive review. 69
Reviews tend to be short. A sentiment analysis of 33,654 reviews of 12,898 medical practitioners in the New York State area found that, on average, reviews were 4.17 sentences long and 15.5% contained only one line of text. 58 Lengthier commentaries were more likely to be negative. 90 When family members reviewed health services, they were more likely to comment on matters of patient safety. 68
In general, comments tended to concern services or providers, clinical and administrative staff, and the physical environment. They often related specifically to clinicians and focused on knowledge and competency,10,98 patient-centred communication,10,95,98 personal character traits,10,29 professional conduct,10 dignified care100 and co-ordination of care. 69 Waiting times and length of appointments often featured in the reviews29,45,95,100 and other themes focusing on the service or environment pertained to cleanliness,67,100 scheduling appointments,67 insurance,45 access,69 administrative staff45,69 and parking. 11 Again, these facets of patients’ experiences were more frequently commented on in positive (rather than negative) terms. 78
Who is reviewed?
Male staff were more likely to be the subject of reviews than female staff, who were more likely to receive positive feedback. 87,91 When comments and ratings were specifically aimed at professional staff, they mainly concerned generalists. 29 Two studies indicated that some specialties (e.g. radiology) were less frequently commented on, whereas some subspecialties (e.g. facial plastic surgery) received a higher number of reviews than other services. 51 Two studies showed that surgeons were reviewed quite frequently on German and US websites, and that plastic surgeons in California had a large number of online ratings and reviews. 73
How and why do service users use these sites?
In general, the included studies report that these sites are used to post feedback, or to help choose a doctor or another health professional (e.g. dentist). 28,42,52,54,97 Twenty-eight per cent of respondents in a 2014 US survey54 had used reviews and ratings websites to find a doctor. A 2013 German survey28 showed that 25% of respondents had used the websites for this purpose. In the latter survey, 65.35% of the 1505 respondents had chosen a doctor based on the reviews and ratings, whereas 52.23% had used the online feedback to identify which doctors to avoid. In a nationally representative survey in the USA,42 35% of those who had used rating sites in the last year said that good ratings had a positive effect and 37% said that poor ratings had a negative effect on physician choice. Further evidence of the impact of the valence of reviews on physician choice was found in an experimental study, which confirmed that negative review content reduces the willingness to choose a doctor and that presenting negative reviews before positive ones has a greater (negative) effect than if the positive reviews are presented first. 56
Qualitative research exploring the motivations to post or read online reviews is limited. An English interview study82 with primary care patients who had never posted feedback found that they would consider doing so only to report an extremely positive or extremely negative experience at their general practice. They did not see the value in providing feedback on routine or ordinary experiences.
The number of negative reviews read, and the order in which they were read, was also found to have an impact. Reading negative reviews before positive reviews led to patients becoming less willing to consult a particular doctor. 56 Characteristics of the reviewers, including perceived trustworthiness, credibility and expertise, were also found to have an impact. When it came to content, fact-oriented reviews were preferred over emotion-oriented reviews, or those containing slang or humour. 97 In addition, those who used a rating or review website to read comments were then more likely to rate a health-care experience in the future. 89 Patients are more likely to spend more time on websites that contain comments (i.e. not just numeric ratings),55 which, the authors speculate, may increase the potential for ‘suboptimal choices’. 55
How do staff and service users feel about online feedback?
Based on the extant literature, health professionals hold a range of concerns about online feedback, but patients’ attitudes are more varied. Further to the research reported earlier about the perceptions of monitoring agency staff, three studies using interviews and surveys explored health professionals’ views. 43,70,81 In a qualitative interview study in England, GPs expressed their apprehension, particularly on the validity and representativeness of online feedback. 81 Twelve per cent of doctors responding to a survey deemed online rating websites useful and 39% agreed with the feedback they had received. 43 In another survey, 65% of US hand surgeons said that they were sceptical of feedback websites, with 82% reporting that it had no implications for their practice. 70
Patients have varied attitudes towards online feedback. A qualitative interview study82 about reviews and ratings in general practice in England found that participants questioned the need for online feedback and were unsure if GPs would use it. For some, the benefits of online feedback were that it could be posted remotely, could be shared with other patients and would be taken seriously by GPs. Others were concerned about privacy and security, and believed that online feedback could be ignored. In a qualitative study99 conducted in Switzerland, parents reported that review websites were more like a directory of services than a decision aid, asserting that there was not enough information to guide a choice. For them, the most effective way to evaluate a health professional was to do so in person.
How reliable are online ratings and reviews?
Comparisons have been made between traditional measures of experience and satisfaction, such as the NHS Inpatient Survey in England and the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) in the USA, and feedback on rating and review websites (e.g. Yelp or NHS Choices). These have shown similarities between online feedback and standardised measures of patient satisfaction and experience, although the online reviews tend to contain more information.
Online reviews and ratings of specific hospitals were correlated with survey responses in both England and the USA. 34,58,76,79 A strong correlation (r = 0.49; p < 0.001) was found in the USA between Yelp scores and overall scores on the HCAHPS. 34 In an English study,76 the number of patients willing to ‘recommend the hospital to a friend’ was correlated with a hospital’s overall rating on the national inpatient survey (Spearman’s ρ = 0.41; p < 0.001). Weak correlations were established between positive online recommendations and lower hospital mortality ratios (Spearman’s ρ = –0.20; p = 0.01), and better ratings of hospital cleanliness were weakly associated with lower rates of infections, particularly meticillin-resistant Staphylococcus aureus (MRSA) (Spearman’s ρ = –0.30; p = 0.001) and Clostridium difficile (C. difficile) (Spearman’s ρ = –0.16; p = 0.04). 85
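As a point of method, correlations of this kind are rank based. The following is a minimal sketch, using invented per-hospital figures rather than the studies’ data, of how such a Spearman correlation between online recommendation rates and survey ratings would be computed:

```python
from scipy.stats import spearmanr

# Hypothetical per-hospital figures; the real studies used national datasets.
online_recommend_pct = [92, 85, 78, 88, 70, 95, 60, 82]            # % recommending online
inpatient_survey_score = [8.1, 7.9, 7.2, 8.0, 6.8, 8.5, 6.5, 7.6]  # survey rating

# Spearman's rho correlates the ranks of the two variables, so it captures
# monotonic association without assuming a linear relationship.
rho, p_value = spearmanr(online_recommend_pct, inpatient_survey_score)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```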
Analysis of online reviews in both England and the USA showed that their content was similar to the domains on patient surveys, suggesting that the surveys do cover items that matter to patients. Significant associations were found between scores on the NHS Inpatient Survey in England and online feedback on domains such as hospital cleanliness, dignified and respectful treatment, and involvement in decisions about care. 79 However, studies in the USA showed that more topics were raised in online feedback than on the HCAHPS,34,62 indicating the potential for it to supplement existing measures of patient experience. However, the additional topics may have been more salient to some services than others. 58
Discussion
As with many digital health innovations, research in the field of online ratings and reviews lags behind practice and behind the issues of interest to stakeholders. We know that there are many websites collecting online patient feedback and we know that people are using them; however, this scoping literature review has shown that the current evidence base is limited to a relatively small number of, often small-scale, studies from which it is hard to draw definitive conclusions. Our initial consultation with stakeholders about their priorities helped guide our questions, but it is clear from the literature that, as yet, current research does not address all areas of interest.
We can conclude that patients in several high-income countries are using online feedback sites to choose health professionals and to gauge public opinion about them, and that this use is increasing. These sites are also beginning to be used to monitor health services, especially in the Netherlands, demonstrating their potential for care quality regulators.
Papers examining the content of online reviews reveal that patients commented on a range of factors about their health-care experience, including waiting times, environmental factors and staff. 100 Comments about staff predominantly related to medics themselves and centred on their perceived knowledge, skills, competence and communication ability. 10,42 Such findings illustrate that patients can, and do, comment on a range of aspects of their experience. Studies also consistently demonstrate that the majority of reviews are positive, and that negative reviews may be influential both in terms of their content106 and the order in which they are read. 56 Several studies found that negative reviews tend to be expressed with more words than positive comments. 63,90 This could mean that negative comments provide detail that could be used in locating and addressing the problem. Such findings also have implications for the design of online platforms for capturing feedback, which should allow the option of free-text comments, as well as check boxes and scales.
In general, online feedback has been shown to complement standardised patient surveys and can correlate with other measures of quality. Two US studies (Bardach and colleagues68 and Ranard and colleagues62) showed that more topics were raised online than in a patient survey. We can speculate that without the constraints of a structured survey, patients might be able to provide a more diverse range of data for use in quality and service improvement.
Few studies have focused on the attitudes and perceptions of health professionals in relation to online patient feedback. Patel and colleagues81 found that health professionals were concerned about its usability, validity and transparency. Our initial search uncovered numerous editorials and opinion pieces written by, and for, health professionals who were sceptical about online reviews.
Two other literature reviews have sought to examine the research in this field. Verhoef and colleagues107 followed Arksey and O’Malley’s33 protocol to conduct a scoping review of literature about the relationship between quality of care and social media and rating sites. Their 29 papers included opinion pieces and original research, and focused on the relationship between social media and care quality.
Emmert and colleagues’28 systematic review aimed to answer eight questions, covering the percentage of physicians who were rated, the average number of ratings, the relationship of ratings to physician characteristics, whether ratings were more likely to be positive or negative, the significance of patient narratives, and the problems with rating sites and how they could be improved. The current review provides updated information on these questions; we have provided a synthesis of research on the content of patient comments, a more complete description of users of these sites, including patients who post reviews or are influenced by the reviews they read, and other use by inspectorate bodies.
Strengths and weaknesses of the study
This was a broad scoping review of the literature that was guided by a stakeholder consultation. It included a large number of diverse peer-reviewed primary research studies. We employed rigorous, systematic and transparent processes throughout and were guided by a protocol that was reviewed by an information specialist. Reference management (EndNote X7.4; Clarivate Analytics, Philadelphia, PA, USA) and Cochrane-recommended systematic review (Covidence) software were used to manage the review, ensuring that all papers were accounted for.
We conducted a broad search, ensuring that all relevant databases were included. However, as we did not search the grey literature, it is possible that we have failed to find some relevant non-peer-reviewed work. The search was conducted in English and, although it yielded some papers in other languages, these were excluded as we did not have capacity to assess them for inclusion. However, we did review German-language papers, as one team member is a native speaker, although none of these were subsequently included.
The majority of studies included in the review were quantitative, descriptive or small scale, although some included qualitative analyses and machine learning approaches. The included research was mainly conducted on US patient feedback sites, so its application to the UK context is potentially limited. Health care in the USA is largely privatised and patients exercise more choice in seeking out a health-care provider, so it is perhaps unsurprising that much of the academic activity in this area of ‘reputation’ sites has been undertaken in the USA. All of the research was conducted in high-income countries.
We chose to conduct a scoping literature review rather than a systematic review because this is an emerging field and previous reviews on related topics indicated that there was limited but varied literature on the subject. We also felt that it was important to include a wide range of study designs. However, a limitation of the adopted method, acknowledged by its proponents Arksey and O’Malley,33 is that it does not present a clear process for synthesising data. Arksey and O’Malley33 also state that quality appraisal is not necessary in scoping reviews, so we did not explicitly aim to appraise the quality of the included studies.
Conclusion
By systematically searching for and presenting research evidence that addresses the preoccupations of the stakeholders we consulted, this scoping review charts the current landscape of online patient feedback research.
We have demonstrated that research in this area has emerged rapidly in recent years, but remains limited in both quantity and quality, given the spread of the phenomenon of online feedback. Many of the concerns of stakeholders remain unaddressed in the extant literature and therefore informed our own primary work in the INQUIRE project. For example, in the next few chapters we describe our findings about which patients provide and use reviews, and why they do this; what the attitudes of professionals are towards online feedback; and how health-care organisations approach online feedback. The evidence gaps also inform our other recommendations for further research presented in the final chapter (see Chapter 8).
Chapter 3 A cross-sectional survey of the UK public to understand use of online ratings and reviews of health services
Summary
We conducted a face-to-face cross-sectional survey of a representative sample of the UK population to investigate the self-reported behaviour of the public in reading and writing online feedback in relation to health services. Descriptive and logistic regression analyses were used to describe and explore the use of online feedback. A total of 2036 participants were surveyed and, of the 1824 internet users (90% of the sample), 42% (n = 760) had read online health-care feedback in the last year and 8% (n = 147) had provided such feedback in the same period. People who were more likely to read feedback were younger, female, had a higher income, had a health condition, lived in urban areas and used the internet more frequently. For providing feedback, the only significant association was with more frequent internet use. The most frequent reasons for reading feedback were finding out about a drug, treatment or test, and informing a choice of treatment or provider. For writing feedback, the most frequent reasons were to inform other patients, praise a service or improve standards of services. Ninety-four per cent of internet users in the general population had never been asked to leave online feedback by their health-care provider. In conclusion, many people read online feedback from others and some write feedback, although few are encouraged to do so. This emerging phenomenon can support patient choice and quality improvement, but needs to be better harnessed.
This chapter is based on material reproduced from van Velthoven and colleagues. 77 This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/. The text below includes minor additions and formatting changes to the original text.
Introduction
Given the absence of any recent or robust data on the use of online feedback about UK health services, despite huge interest in this area in the UK and elsewhere, we undertook what was, to our knowledge, the first nationally representative UK survey on providing and using online feedback about health and health services among the general population. In this chapter, we describe the results of a survey measuring the frequency of use, user characteristics and self-reported behaviour of members of the public in reading and writing online feedback on health services, health professionals and medical treatments or tests. Previous work on the use of online feedback by patients has been relatively limited. 14,36,75,106 Surveys found that those who are more likely to use online feedback on health services include people who are younger14,75 and people who live in (sub)urban areas and have higher levels of education. 75 Prior to our starting this project, to the best of our knowledge, the last UK survey75 had been published in 2012. That survey75 was conducted among a small non-representative sample of 200 people living in one London borough and showed that just 29 people (15%) were aware of doctor-rating websites and only six had used them. In a US survey conducted in 2012, 65% of 2137 participants were aware of online patient feedback websites and 23% had used them. 42 Of 854 respondents in another US survey in 2013, 16% said that they had previously visited a patient feedback website. 36 Although there are some caveats about the non-comparability of studies conducted in different settings using different questionnaires, it seems that the number of people using online feedback is rising rapidly over time from a very low baseline. Subsequent to our undertaking this project, a separate study108 conducted in 2016 has been published examining the prevalence of knowledge and use of online feedback specifically in relation to UK general practice, showing a very low prevalence of usage in relation to feedback specifically about GPs (0.4% prevalence), in combination with a low awareness among the public of GP rating sites (15% awareness).
Methods
Study design
A cross-sectional face-to-face questionnaire-based household survey was conducted with members of the UK public about their use of online ratings and reviews (see Appendix 2). A market research agency, ICM Unlimited (London, UK), conducted the fieldwork. ICM Unlimited had previously conducted the OxIS, which uses a similar methodology, on behalf of the Oxford Internet Institute; the Institute collaborated on this project, advising on the design of the survey and the choice of provider. 109 Similar to the OxIS, a two-stage design was used for sampling. First, a random sample of output areas stratified by region was selected. Second, within each selected output area a random selection of addresses was used. ICM Unlimited recruited and interviewed participants by sending interviewers to the homes of selected people in February 2017.
Ethics approval and consent
The survey received institutional ethics approval from the University of Oxford Central University Research Ethics Committee (reference SSH_OII_C1A_074).
Participants
We included adult members of the UK general public who were willing and able to give informed consent for participation in the study, lived in the UK, were able to speak and read English and were aged ≥ 16 years. To select participants, a random location sampling system was used, in which we randomly selected output areas as the geographical sampling unit. Each output area consisted of around 150 households, and all properties were available to the interviewer to achieve the target number of interviews (usually four or five per point). Demographic quotas were applied to ensure that the profile of achieved interviews in each sample point reflected the known population of the area. 109
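A minimal sketch of this two-stage design, using an invented sampling frame and illustrative allocation numbers (the quota step and subsequent weighting are omitted):

```python
import random

random.seed(42)

# Hypothetical frame: region -> output area -> ~150 household addresses.
frame = {
    region: {
        f"{region}-OA{i}": [f"{region}-OA{i}-addr{j}" for j in range(150)]
        for i in range(200)
    }
    for region in ["North", "Midlands", "South"]
}

AREAS_PER_REGION = 5      # stage 1 allocation per regional stratum (illustrative)
INTERVIEWS_PER_POINT = 5  # stage 2: 'usually four or five per point'

sample = []
for region, areas in frame.items():
    # Stage 1: random selection of output areas within each regional stratum.
    for area in random.sample(sorted(areas), AREAS_PER_REGION):
        # Stage 2: random selection of addresses within the selected area.
        sample.extend(random.sample(areas[area], INTERVIEWS_PER_POINT))

print(len(sample), "addresses selected")  # 3 regions x 5 areas x 5 addresses = 75
```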
Variables
We collected data on participants’ characteristics, including age, sex, ethnicity, annual household income, education level, living in an urban or rural area, health status and internet use (see Appendix 3). There were also 20 questions relating to online feedback (see Appendix 2, Table 15).
These questions were designed principally on the basis of items from previous surveys14,75 and of policy documents and reports by online feedback organisations,110 and were informed by our concurrent survey of health-care professionals (see Chapter 4). We piloted the questionnaire with a patient and public reference group and tested it using two rounds of cognitive interviews (also with the public). Questions were asked about whether, where and why participants read or wrote online ratings or reviews of health services, individuals, drugs, treatments or tests.
Data sources
All data were obtained through face-to-face interviews with participants. Surveys were completed on a tablet and transferred to the study team in a Microsoft Excel® spreadsheet (Microsoft Corporation, Redmond, WA, USA). The names and any other identifying details of participants were not collected in any of the surveys.
The survey was designed to yield a fully representative sample of the population of Great Britain aged ≥ 16 years. A sample size of 2000, giving a margin of error of approximately 2 percentage points, was chosen to maximise accuracy within reasonable resource constraints. 109 Data were weighted to the sociodemographic profile of the target population (UK citizens aged ≥ 16 years), using census data that included sex, age, socioeconomic grade, region and ACORN (A Classification Of Residential Neighbourhoods) group.
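As a point of reference, for a simple random sample the maximum margin of error at 95% confidence occurs at an observed proportion of p = 0.5, and with n = 2000 this gives roughly the 2 percentage points quoted above (a sketch for the idealised case; clustering and weighting would widen it somewhat):

$$
\mathrm{ME} = z_{0.975}\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{2000}} \approx 0.022 \quad (\approx 2.2\ \text{percentage points})
$$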
Quantitative variables and statistical methods
All analyses were conducted using the statistical software package IBM SPSS Statistics version 22 (IBM Corporation, Armonk, NY, USA). Descriptive analyses of participants’ characteristics and the prevalence of providing and of reading online feedback were conducted. Non-internet users were excluded from these analyses, as they would not be reading or writing online content.
We coded the outcome as binary: use of any type of feedback compared with no use. Logistic regression was used to explain the use of online feedback (as the dependent variable), with the following independent variables considered potentially relevant: age, sex, education, income, living in a rural or urban area and frequency of internet use. These sociodemographic and internet use variables have been shown to influence the uptake of a wide range of online activities, including health. 111 Ethnicity was not included in the logistic regression analyses because of the small number of participants in the ethnicity subgroups. In the results, we present the model fit (%), chi-squared, p and R2 (Nagelkerke) values. We used binary logistic regression in SPSS and included in the model all variables found to be statistically significant in univariate analysis. Missing data were not imputed.
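For readers who want to reproduce this kind of analysis outside SPSS, the following is a minimal sketch in Python with statsmodels, using simulated data and hypothetical column names rather than the INQUIRE dataset; the reference categories mirror those marked in Table 3:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated stand-in for the survey data: one row per internet-using respondent.
df = pd.DataFrame({
    "read_feedback": rng.binomial(1, 0.42, 1824),  # 1 = read feedback in past year
    "age_band": rng.choice(["16-34", "35-54", "55-64", "65plus"], 1824),
    "sex": rng.choice(["male", "female"], 1824),
    "internet_use": rng.choice(["several_daily", "daily", "less"], 1824),
})

# Binary logistic regression with explicit reference levels
# (>= 65 years, female and less-than-daily internet use as baselines).
model = smf.logit(
    "read_feedback ~ C(age_band, Treatment('65plus'))"
    " + C(sex, Treatment('female'))"
    " + C(internet_use, Treatment('less'))",
    data=df,
).fit(disp=False)

# Exponentiate coefficients to obtain odds ratios with 95% confidence intervals.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)

# Nagelkerke pseudo-R^2, computed from the fitted and null log-likelihoods.
n = len(df)
cox_snell = 1 - np.exp((2 / n) * (model.llnull - model.llf))
nagelkerke = cox_snell / (1 - np.exp((2 / n) * model.llnull))
print(f"Nagelkerke R^2: {nagelkerke:.3f}")
```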
Results
This section has been reproduced from van Velthoven and colleagues. 77 This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/. The text below includes minor additions and formatting changes to the original text.
Our total sample comprised 2036 participants, of whom 1824 used the internet over the past year; it is this group of internet users in the general population who were included in further analyses (their characteristics are shown in Table 2, as well as the characteristics of those who read and provided feedback). Appendices 3 and 4 show characteristics of the 10% of our sample who were non-users of the internet (n = 212). Our main findings were that of the 1824 internet users, 42% (n = 760) had read feedback about health services, or about health professionals or about medical tests or treatments during the past year, whereas 8% (n = 147) had written such feedback in the same period.
Characteristic | Total (N = 1824; 100%), n | % of total sample | Readers (N = 760; 42%), n | % within demographic subgroup | Writers (N = 147; 8%), n | % within demographic subgroup |
---|---|---|---|---|---|---|
Age (years) | ||||||
16–34 | 616 | 34 | 290 | 47 | 58 | 9 |
35–54 | 639 | 35 | 253 | 40 | 49 | 8 |
55–64 | 256 | 14 | 110 | 43 | 20 | 8 |
≥ 65 | 313 | 17 | 107 | 34 | 20 | 6 |
Sex | ||||||
Male | 904 | 45 | 344 | 38 | 65 | 7 |
Female | 920 | 50 | 416 | 45 | 82 | 9 |
Educationa | ||||||
No formal qualifications | 177 | 10 | 61 | 35 | 11 | 6 |
GCSE/O level/CSE/vocational qualifications/A level or equivalent | 864 | 47 | 348 | 40 | 66 | 8 |
Bachelor’s degree or equivalent/MSc/PhD or equivalent | 636 | 35 | 307 | 48 | 58 | 9 |
Still studying | 14 | 1 | 7 | 47 | 0 | 0.0 |
Other | 119 | 7 | 37 | 31 | 12 | 10 |
Household income | ||||||
≤ £24,999 | 470 | 26 | 213 | 45 | 45 | 10 |
£25,000–49,999 | 431 | 24 | 178 | 41 | 40 | 9 |
£50,000–74,999 | 141 | 8 | 62 | 44 | 9 | 6 |
£75,000–99,999 | 72 | 4 | 37 | 51 | 3 | 4 |
≥ £100,000a | 76 | 4 | 45 | 60 | 8 | 11 |
Ethnic origina | ||||||
White | 1563 | 86 | 635 | 41 | 120 | 8 |
Other | 252 | 14 | 120 | 48 | 25 | 10 |
Health status: long-term illness, health problem or disabilitya | ||||||
Yes | 373 | 21 | 183 | 49 | 39 | 10 |
No | 1449 | 80 | 576 | 40 | 108 | 8 |
Area | ||||||
Urban | 499 | 27 | 240 | 48 | 52 | 10 |
Suburban | 1057 | 58 | 424 | 40 | 75 | 7 |
Rural | 251 | 14 | 89 | 36 | 19 | 8 |
Internet access frequencya | ||||||
Several times a day | 1490 | 82 | 669 | 45 | 132 | 9 |
Around once a day | 185 | 10 | 56 | 30 | 10 | 5 |
Less than once a day | 148 | 8 | 35 | 24 | 5 | 3 |
Associations between people’s characteristics and use of online feedback
Age, sex and ethnicity
The highest proportions of feedback readers and writers were among those aged 16–34 years and the lowest proportions were among those aged ≥ 65 years (see Table 2). People aged 16–34 years were significantly more likely to read online feedback [odds ratio (OR) 1.695, 95% confidence interval (CI) 1.278 to 2.246; p < 0.001] than those aged ≥ 65 years (Table 3). Of women, 45% (n = 416) read and 9% (n = 82) gave feedback, compared with 38% (n = 344) and 7% (n = 65) of men, respectively (see Table 2). Men were significantly less likely to read online feedback than women (OR 0.742, 95% CI 0.615 to 0.894; p = 0.002) (see Table 3). Among people with an ethnicity other than white, 48% (n = 120) read and 10% (n = 25) wrote reviews, compared with 41% (n = 635) and 8% (n = 120) of people with white ethnicity, respectively (see Table 2).
Predictor variable (individual data) | Readers (n = 760),a OR | 95% CI | p-value | Writers (n = 147), OR | 95% CI | p-value |
---|---|---|---|---|---|---|
Age (years) | ||||||
16–34 | 1.695 | 1.278 to 2.246 | < 0.001 | 1.496 | 0.885 to 2.529 | 0.133 |
35–54 | 1.250 | 0.942 to 1.657 | 0.122 | 1.190 | 0.696 to 2.035 | 0.525 |
55–64 | 1.446 | 1.029 to 2.031 | < 0.005 | 1.204 | 0.633 to 2.291 | 0.571 |
≥ 65b | NR | NR | NR | NR | NR | NR |
Sex | ||||||
Male | 0.742 | 0.615 to 0.894 | < 0.005 | 0.786 | 0.560 to 1.105 | 0.166 |
Femaleb | NR | NR | NR | NR | NR | NR |
Education | ||||||
No formal qualifications | 1.185 | 0.720 to 1.950 | 0.504 | 0.583 | 0.249 to 1.364 | 0.213 |
GCSE/O level/CSE, vocational qualifications (= NVQ 1 + 2), A level or equivalent (= NVQ 3) | 1.519 | 1.006 to 2.296 | < 0.05 | 0.722 | 0.379 to 1.375 | 0.322 |
Bachelor’s degree or equivalent (= NVQ 4), master’s degree/PhD or equivalent | 2.102 | 1.382 to 3.198 | 0.001 | 0.877 | 0.457 to 1.682 | 0.692 |
Still studying | 1.933 | 0.641 to 5.834 | 0.242 | –c | –c | –c |
Other | NR | NR | NR | NR | NR | NR |
Household income | ||||||
≥ £100,000b | 1.784 | 1.088 to 2.924 | < 0.05 | 1.113 | 0.503 to 2.463 | 0.792 |
£75,000–99,999 | 1.237 | 0.754 to 2.029 | 0.400 | 0.424 | 0.131 to 1.372 | 0.152 |
£50,000–74,999 | 0.955 | 0.654 to 1.395 | 0.812 | 0.644 | 0.307 to 1.351 | 0.244 |
£25,000–49,999 | 0.846 | 0.650 to 1.102 | 0.216 | 0.957 | 0.612 to 1.498 | 0.848 |
≤ £24,999 | NR | NR | NR | NR | NR | NR |
Health status: long-term condition | ||||||
Yes | 1.463 | 1.164 to 1.839 | 0.001 | 1.434 | 0.974 to 2.110 | 0.067 |
Nob | NR | NR | NR | NR | NR | NR |
Area | ||||||
Urban | 1.697 | 1.241 to 2.320 | 0.001 | 1.426 | 0.823 to 2.473 | 0.206 |
Suburban | 1.226 | 0.920 to 1.633 | 0.164 | 0.934 | 0.552 to 1.578 | 0.798 |
Ruralb | NR | NR | NR | NR | NR | NR |
Internet use | ||||||
Several times a day | 2.680 | 1.808 to 3.974 | < 0.001 | 3.206 | 1.216 to 8.449 | < 0.05 |
Around once a day | 1.440 | 0.880 to 2.357 | 0.147 | 1.965 | 0.629 to 6.141 | 0.245 |
Less than once a dayb | NR | NR | NR | NR | NR | NR |
Education and household income
The highest proportions of readers and writers were also among those with degree-level qualifications and above (see Table 2), and these people were significantly more likely to read online feedback than those with other qualifications (see Table 3). People in the highest income bracket of ≥ £100,000 were significantly more likely to read online feedback than those with the lowest income (≤ £24,999) (OR 1.784, 95% CI 1.088 to 2.924; p = 0.022).
Health status
Of people with a long-term condition, health problem or disability, 49% (n = 183) read and 10% (n = 39) wrote online feedback (see Table 2), and they were significantly more likely to read it than those without such a health condition (OR 1.463, 95% CI 1.164 to 1.839; p = 0.001) (see Table 3).
Area and internet use
Of people living in urban areas, 48% (n = 240) read and 10% (n = 52) wrote online feedback (see Table 2), and they were significantly more likely to read it than those living in rural areas (OR 1.697, 95% CI 1.241 to 2.320; p = 0.001) (see Table 3). People accessing the internet several times a day were significantly more likely to read (OR 2.680, 95% CI 1.808 to 3.974; p < 0.001) and write (OR 3.206, 95% CI 1.216 to 8.449; p = 0.018) online feedback than those who went online less than once a day (see Table 3).
Regression analysis
Our multivariate regression model for ‘reading feedback’ showed a model fit of 55%, which increased to 61% when the following significant variables were included: age, sex, education, income, health status, area and internet use (see Table 3). For writing reviews, the only significant variable was internet use, so no multivariate model is presented.
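To illustrate the shape of this analysis, the sketch below fits a comparable binary logistic model and reports ORs, 95% CIs and the percentage of cases classified correctly. It is an illustration only, not the study’s analysis code (which was run in SPSS): the data file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the survey data; file and column names are
# illustrative only, one row per internet-using respondent
df = pd.read_csv("survey_internet_users.csv")

# Binary outcome: 1 if the respondent read online feedback in the past year
model = smf.logit(
    "read_feedback ~ C(age_group) + C(sex) + C(education) + C(income)"
    " + C(long_term_condition) + C(area) + C(internet_use)",
    data=df,
).fit()

# Odds ratios and 95% CIs for each predictor, in the form reported in Table 3
ors = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
ors.columns = ["OR", "CI lower", "CI upper"]
print(ors.round(3))

# One common 'model fit' summary: percentage of respondents whose reading
# behaviour the model classifies correctly at a 0.5 probability cut-off
accuracy = ((model.predict(df) > 0.5) == df["read_feedback"]).mean()
print(f"Correctly classified: {accuracy:.0%}")
```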
Frequency of reading and writing online feedback for different domains: health services, health professionals, and medical treatments and tests
Of the 1824 internet users, 28% (n = 507) had read feedback about (NHS) health-care organisations, 18% (n = 331) had read feedback about health professionals and 32% (n = 579) had read feedback about drugs, treatments or tests (see Appendices 7 and 8). Far fewer participants had written reviews: 6% (n = 105) about health-care organisations, 4% (n = 69) about health professionals and 4% (n = 69) about drugs, treatments or tests (see Appendix 9). Most participants who read or wrote feedback had done so once, or monthly/every few months, over the past year (see Table 4 and Appendix 10).
Frequency | NHS organisations: read (N = 507), n (%) | NHS organisations: written (N = 105), n (%) | Individuals: read (N = 331), n (%) | Individuals: written (N = 69), n (%) | Drugs, treatments or tests: read (N = 579), n (%) | Drugs, treatments or tests: written (N = 69), n (%)
---|---|---|---|---|---|---
Daily/every couple of days | 14 (3) | 1 (1) | 9 (3) | 3 (5) | 11 (2) | 1 (2)
Weekly/fortnightly | 44 (9) | 9 (9) | 42 (13) | 6 (9) | 49 (9) | 6 (9)
Monthly/every few months | 230 (45) | 29 (27) | 149 (45) | 22 (32) | 335 (58) | 30 (43)
Once in the last year | 220 (43) | 66 (63) | 131 (40) | 37 (54) | 183 (32) | 32 (46)
Of the 760 participants who read feedback about a health-care organisation, a health professional or a treatment or test, 42.1% (n = 320) read feedback about one of these, 29.3% (n = 223) about two and 28.6% (n = 217) about all three. Appendices 9 and 11 show that, of the 147 participants who wrote feedback about a health-care organisation, a health professional or a treatment or test, 53% (n = 79) wrote feedback about one of them, 26% (n = 39) about two and 20% (n = 29) about all three. In comparing readers and non-readers with writers and non-writers, we first found that 7% of the whole sample of internet users (128/1824) had both read and written a review. Of the 760 participants who read feedback, 83% (n = 633) had not written a review; of the 147 participants who wrote feedback, 13% reported not reading feedback. Fifty-seven per cent of the whole sample of internet users (1044/1824) had neither read nor written feedback over the past year.
Websites on which online feedback of health services was read and written
The most frequently used formal review website for both reading and writing feedback was NHS Choices (used by 49% of ‘readers’ and 35% of ‘writers’), followed by WebMD (15% and 5%, respectively) and Care Opinion, formerly Patient Opinion (6% and 9%, respectively) (see Appendix 12). The most frequently used social media outlets for reading and writing online feedback were Google Reviews (Google Inc., Mountain View, CA, USA) (31% and 14%, respectively) and Facebook (25% and 23%, respectively).
Reasons for using online feedback of health services
Table 5 shows the most frequent reasons for reading reviews among the 760 ‘readers’: finding out about a drug, treatment or test (41%); choosing where to have treatment (19%); and choosing a health-care professional (18%). The most common reasons for writing reviews were to inform other patients (39%), to praise a service (36%) and to improve standards of NHS services (15%). Of the total sample, only 112 (6%) participants had been asked to write a review, and of these only 28 (25%) had done so. The eight people who said they had often been asked to write a review had not done so.
Response | n | %
---|---|---
Reasons for readinga (N = 760) |  | 
To find out about a particular drug, medical treatment or test | 313 | 41
To choose where to have my treatment | 145 | 19
To choose a health-care professional | 134 | 18
Before booking an appointment, to find out about which NHS services were available | 84 | 11
After an appointment, I wanted to compare my NHS experience with others | 67 | 9
Example for writing my own online review | 22 | 3
Was looking for general information/just browsing | 16 | 2
Used it to research my medical condition/symptoms | 11 | 2
Used it for professional reasons/work/study | 11 | 2
Came across it accidentally/was not looking for it | 7 | 1
Was looking for general feedback | 5 | 1
Was looking for information for a friend/someone else | 3 | 0.4
Other | 47 | 6
Do not know | 60 | 8
Reasons for writinga (N = 147) |  | 
To inform other patients | 57 | 39
To praise the service received from my doctor or other health-care professional | 53 | 36
To improve standards of care in the NHS | 23 | 15
To complain about a NHS service | 9 | 6
To complain about a treatment | 7 | 5
Do not know | 6 | 4
To complain about a health-care professional | 5 | 4
Asked to by a medical professional | 3 | 2
I was asked to (unspecified by whom) | 3 | 2
Other | 12 | 9
Asked to write (N = 1824b) |  | 
No | 1711 | 94
Yes | 112 | 6
Asked to write and written a review (N = 28) |  | 
Asked once | 20 | 71
Asked a few times | 8 | 29
Often asked | 0 | 0
Asked to write and not written a review (N = 84) |  | 
Asked once | 41 | 49
Asked a few times | 35 | 42
Often asked | 8 | 9
Among the 147 ‘writers’, providing praise for a service (36%) was a far more common motivation than complaining about a service (6%), a treatment (5%) or a health-care professional (4%).
Discussion
The striking findings from this work are that, in the past year, about 1 in 12 (8%) of internet-using members of the general population had provided online feedback about some aspect of health care and two in five (42%) had read such feedback. To the best of our knowledge, this survey provides the first representative UK population data on the use of online feedback about health care and thereby key baseline prevalence data for future engagement with online feedback by patients. Although the majority of the population had not used online feedback about health services over the past year, these figures show that the phenomenon can now be considered a mainstream activity. Writing feedback remains unusual, although not rare, but the frequency of reading feedback suggests that this user-generated content has the potential for wide influence. As might be expected, the least represented users of online feedback about health services were those aged ≥ 65 years, those without formal qualifications, those at lower social grades, those accessing the internet less than once a day and those living in rural areas.
The findings of this survey are representative of the general population of internet users in the UK. Not everyone in the general population uses health services in a 1-year period, so it is not surprising that reading feedback is not universal. Overall, people remain far less likely to read and write reviews of health services than of non-health-related commercial services. 112 In our survey, 42% of internet users had read online feedback on some aspect of health care. This is higher than shown in previous studies. 36,42 For example, previous UK work, from 2012, showed very low awareness (15%) and usage (3%) of doctor rating sites in a convenience sample survey of 200 people in London. 75 More recently, a study by Patel and colleagues,82 conducted in 2016, looked only at the use of rating sites in relation to GPs and showed a low prevalence (0.4%) for this very specific form of online feedback. The higher figures found in our survey can be explained by our broader scope across the whole of health care, as well as by increasing use over time.
Our findings on age and sex are in line with those of a German study87 that examined the characteristics of patients using a national public reporting instrument to leave feedback on their health-care experiences. This study87 found that 60% of 107,148 patients rating physicians were female and 51% were aged 30–50 years. Only 14% of writers in our study left feedback to complain, which is in line with a survey in the USA,36 in which 9% of 854 patients provided an unfavourable review. Likewise, the German study87 found that only 3% of 127,192 ratings of 53,585 physicians gave an ‘insufficient’ overall performance score and 5% a ‘deficient’ score, and in a UK study78 NHS services received three times as many positive reviews (223,439 in total) as negative reviews (73,363 in total).
About 1 in 10 people in our study did not use the internet, which is in line with Ofcom data112 and represents an increase in internet use compared with the OxIS conducted in the UK in 2013, in which about 2 in 10 people were non-internet users. 14 In line with previous research, people with a lower level of education, lower income or social grade, of older age or living in rural areas were less likely to be regular internet users. 111 We also found that these variables were associated with less reading of online feedback. It may be that people in urban areas use feedback more because they have more genuine choice of health-care provider in their locality.
Strengths and weaknesses of the study
To the best of our knowledge, this is the largest representative general population survey of online health-care feedback use conducted across the UK. It addressed an evidence gap in a fast-moving and under-researched area. The survey method relies on participant self-report to a face-to-face questionnaire and may therefore be influenced by recall bias, presentation bias and social desirability bias. Cognitive interviews with members of the public were conducted to optimise the design of questions, with the aim of minimising other response biases caused by question wording or item order. As a result, we had a relatively small number of ‘other’ and ‘do not know’ responses. Non-English speakers were excluded, as the survey was conducted in English. Data from cross-sectional surveys can be used only to investigate associations between variables, not causation, and, although our quantitative findings identify the prevalence of use, they cannot provide a deeper, qualitative understanding of the phenomenon of using online feedback about health services.
Conclusion
To the best of our knowledge, we have provided the first UK-wide representative data on the use of online feedback, which show that although many people (> 40% of internet users) read online feedback about health care, fewer currently provide it and very few have been asked to provide it. Encouragingly, users are motivated to become more informed, to make choices, to provide praise and to improve standards of care.
Chapter 4 Cross-sectional surveys of doctors and nurses to identify UK health-care professionals’ attitudes to and experiences of online feedback
Summary
We conducted cross-sectional self-completed online questionnaires of 1001 registered doctors and 749 nurses or midwives involved in direct patient care in the UK, and a focus group with five allied health professionals (AHPs). A total of 27.7% of doctors and 21.0% of nurses were aware that patients or carers had provided online feedback about an episode of care in which they were involved, and 20.5% of doctors and 11.1% of nurses had experienced online feedback about them as an individual practitioner. Feedback on reviews or ratings sites was seen as more useful than social media feedback for helping to improve services. Both types were more likely to be seen as useful by nurses than by doctors, and by hospital-based professionals than by community-based professionals. Doctors were more likely than nurses to believe that online feedback is unrepresentative and generally negative in tone. The majority of respondents had never encouraged patients or carers to leave online feedback. The findings from the focus group and from free-text comments in the survey showed concerns about representativeness, and a reported lack of communication from management about what feedback is for, whether it has been received and how it should be used. Despite enthusiasm from policy-makers, many health-care professionals have little direct experience of online feedback, rarely encourage it, and often view it as unrepresentative and of limited value for improving the quality of health services. Differences in opinion between doctors and nurses have the potential to disrupt the use of online patient feedback.
This chapter is based on material reproduced from Atherton and colleagues. 110 This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/. The text below includes minor additions and formatting changes to the original text. It is also based in part on material under review: Turk A, Fleming J, Powell J, Atherton H, University of Warwick and University of Oxford, 2019.
Introduction
In this chapter we address the question of how health-care professionals, who may be subject to both institutional- and personal-level feedback, regard and interact with online patient feedback. There is some evidence that medical professionals, including GPs, hospital doctors and surgeons, are cautious about the value of online content, particularly regarding the validity of feedback, the representativeness of the patient population and a perceived lack of any fundamental relationship between subjective patient experience and objective care quality. 31,70,81,88 Nurses’ and midwives’ attitudes towards online patient feedback have not previously been reported. Given that the attitudes held by health-care professionals are a major influence on the speed and success of adoption of new technological initiatives in health-care settings, there is a need to understand their viewpoints and establish current usage,113,114 and thereby to guide both practitioners and policy-makers in responding to this new form of feedback. Guidance to date has focused on patient experience data gathered using traditional methods, including surveys and focus groups. 115
We therefore conducted surveys of UK doctors, and of nurses and midwives, and a focus group with AHPs, with the aim of defining the characteristics, attitudes and self-reported behaviours and experiences of health-care professionals towards online patient feedback.
Our objectives were, for doctors, nurses and midwives, to outline attitudes, behaviours and experiences, and to determine whether or not these differed by clinician type, professional setting and demographic variables, including age and sex. For AHPs, we sought to explore their attitudes, behaviours and experiences in the context of their role working alongside other health-care professionals.
Methods
Ethics approval and consent
The survey of doctors was approved by the Central University Research Ethics Committee at the University of Oxford. The survey of nurses and midwives was approved by the Joint Research Compliance Office at Imperial College London and Imperial College Healthcare NHS Trust. The questionnaire started with a statement regarding consent, with an option to give consent or to decline. All responses were anonymous.
The focus groups were approved along with the other elements of project 5, the organisational case studies. Approval was by the Medical Sciences Interdivisional Research Ethics Committee (reference R32336/RE001) and the Health Research Authority. Participants provided written informed consent.
Study design
We used a cross-sectional, self-completed online questionnaire design. The survey was administered to doctors and to nurses or midwives via different routes. We also conducted a focus group with AHPs.
Participants
Participants in the survey were registered UK doctors, nurses and midwives currently practising in the UK and involved in direct patient care. Participants in the focus group were AHPs working at one of the case study sites from the project 5 element of the INQUIRE study (see Chapter 6).
Survey variables
The survey was designed to identify who uses or has had experience of using online sources of patient feedback and their attitudes towards this type of commentary. We drew on previously conducted research49,81 and on policy documents and reports by online feedback organisations,116 to determine the key elements. The survey comprised eight questions on demographic and professional characteristics and six topic-based questions related to online feedback (see Appendix 13). For the doctors’ survey, there was an additional free-text question, ‘If you would like to leave a comment about online patient feedback please do so here’. It was not possible to add this question to the survey of nurses (see Recruitment and data collection).
Attitudinal questions used Likert scales. The survey questions were piloted in two ways: (1) the survey company commissioned to administer the survey (see Recruitment and data collection for details) to doctors provided guidance and feedback on the survey questions and possible response options based on its extensive experience of surveying doctors on a range of topics; and (2) individual local clinicians provided feedback on the wording and order of questions through various iterations of the survey. Our lay co-investigator provided feedback on the survey questions at each iteration.
Recruitment and data collection
The online survey of doctors was administered by Doctors.net.uk, a UK online portal and network for the medical profession with around 200,000 members that has been widely used in academic surveys of doctors. 117,118 The survey was administered online via this platform to a quota-sampled119 representative group of secondary care (across specialties) and primary care doctors. Doctors received a direct invitation via e-mail, based on information from their individual Doctors.net.uk profile, and invitations were sent until 1000 participants had been recruited. All study participants were entered into a prize draw.
There was no equivalent route available for surveying nurses and midwives. Instead, the same survey questions were included in a wider survey about how nurses and midwives use digital technologies. The online survey link was distributed by the Royal College of Nursing (RCN) via targeted e-mails to the RCN forums for e-health, midwifery, district nursing, and children and young people. It was also distributed via RCN online bulletins and the RCN Twitter feed (@theRCN; Twitter, Inc., San Francisco, CA, USA). To bolster the sample, the link to the survey was also distributed to 10,000 people registered with the Nursing Times. The survey ran from 17 May to 29 September 2016.
The funders of this research requested that we try to capture the views of AHPs in addition to those of doctors and nurses. We had no convenient sampling frame for a survey of AHPs and no extra resource with which to gain the views of this group, so, with the agreement of the funders, we proposed to hold focus groups instead. After pursuing various routes for AHP recruitment without success, including invitations within NHS trusts and through professional networks, and after taking the advice of our SSC that we had undertaken all reasonable efforts and that further attempts at recruitment were unlikely to succeed, we eventually held a single focus group in the one NHS trust that agreed to participate in this element of the study. Participants were recruited via an e-mail to AHP employees from the head of therapy services and lead AHPs at the participating NHS trust. The e-mail invited AHPs to participate in a lunchtime focus group at the education centre on site at the trust. Interested persons contacted the researcher to register participation and were then sent the information sheet and consent form to read before attending. Written informed consent was obtained on the day, before the group commenced.
The focus group topic guide utilised the domains of the survey (see Appendix 14 for the topic guide). We provided the group with a brief summary document, using images, to introduce the topic. The findings of the survey were put into an infographic, which was shared with the group during the discussion. Two researchers facilitated the focus group. The focus group was digitally recorded using an encrypted recorder. The files were transcribed verbatim by a professional transcription service.
Study size
The survey of doctors aimed to quota sample 1000 doctors in total (500 primary care doctors and 500 secondary care doctors). The survey of nurses aimed to sample at least 500 nurses. We intended to conduct up to four focus groups of AHPs, each comprising six to eight people.
Data analysis
Data from the two survey populations were merged for analysis. Descriptive statistics are presented for variables related to use, and chi-squared tests are presented for differences between health-care professional groups and for associations between key variables, including attitudes and having been the subject of online feedback. We used SPSS (IBM Corporation, Armonk, NY, USA) for data analysis.
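As a concrete example of these chi-squared comparisons, the short sketch below reproduces the doctors-versus-nurses test for awareness of feedback about an episode of care, using the counts reported later in Table 7. This is an illustrative reconstruction in Python, not the study’s SPSS output.

```python
from scipy.stats import chi2_contingency

# Counts from Table 7: 'aware of online feedback about an episode of care'
# (Yes / No / I do not know), for doctors and for nurses/midwives
observed = [
    [277, 292, 432],  # doctors (N = 1001)
    [157, 224, 368],  # nurses/midwives (N = 749)
]

chi2, p, dof, _expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# p evaluates to 0.004, matching the value reported in the text
```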
Multivariate logistic regression was used to investigate how different factors were associated with attitudes regarding online comments from patients. Dichotomous variables were created prior to analysis, either by collapsing the ‘disagree’ and ‘strongly disagree’ categories and the ‘agree’ and ‘strongly agree’ categories (excluding those who neither agreed nor disagreed) or by collapsing ‘never/rarely/sometimes’ and ‘more often than not/all the time’. We conducted analyses on seven different dependent variables relating to attitudes and behaviours. Predictor variables considered relevant to explaining attitudes and behaviours were age, sex, health-care professional type (doctor vs. nurse) and setting (community vs. hospital). The community setting included those working in general practice, a hospice or a care home, or describing themselves as working in the ‘community’. We did not impute missing data. ORs and CIs are presented for each independent variable (tables are presented in Appendices 15–17). For the purposes of presenting the data, we refer to ‘nurses’ throughout; however, this includes midwife participants.
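The dichotomise-then-model step described above might look as follows in code (again an illustrative Python sketch with hypothetical file and column names, not the study’s SPSS syntax):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("merged_professional_survey.csv")  # hypothetical file

# Collapse the five-point agreement scale into a binary outcome, leaving
# 'neither agree nor disagree' unmapped (NaN) so that it is excluded
agree_map = {
    "Strongly disagree": 0, "Somewhat disagree": 0,
    "Somewhat agree": 1, "Strongly agree": 1,
}
df["reviews_useful"] = df["reviews_useful_likert"].map(agree_map)

# Predictors: age, sex, professional type (doctor vs. nurse) and setting
# (community vs. hospital), all treated as categorical
model = smf.logit(
    "reviews_useful ~ C(age_group) + C(sex) + C(profession) + C(setting)",
    data=df.dropna(subset=["reviews_useful"]),
).fit()

# ORs with 95% CIs for each independent variable, as in Appendices 15-17
print(np.exp(pd.concat([model.params, model.conf_int()], axis=1)).round(3))
```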
Descriptive thematic analysis was applied to the verbatim free-text comments provided by doctors. The data were organised and coded by two researchers using NVivo 11 (QSR International, Warrington, UK). We developed an initial coding framework based on the domains of the survey and knowledge of the existing literature. This was iteratively developed during analysis. Once coding was complete, conceptual maps were developed and discussed and key themes were then identified and explored.
The focus group data were analysed using a descriptive thematic analysis. Two researchers (HA and VW) read the data and independently devised codes. During a meeting to discuss the data, a coding frame was created and the data were coded. Thematic analysis was used to examine and record patterns across the data. We drew on the thematic analysis of the free-text comments from the survey (see Nature of content), the results of the scoping review (see Chapter 2) and the results of the survey of the public (see Chapter 3) when interpreting the data.
Results
Participants and descriptive data
There were a total of 1750 respondents: 1001 were the quota-sampled doctors (n = 501 in primary care; n = 500 in secondary care) and 749 were nurses (n = 715) or midwives (n = 34). The characteristics of respondents are shown in Table 6.
Characteristic | Whole sample (N = 1750), % (n) | Doctors (N = 1001), % (n) | Nurses/midwives (N = 749), % (n) | Overall difference (doctors vs. nurses/midwives), p-value
---|---|---|---|---
Sex |  |  |  | 
Male | 41.0 (717) | 64.8 (649) | 9.1 (68) | p < 0.001
Female | 59.0 (1033) | 35.2 (352) | 90.9 (681) | 
Age (years) |  |  |  | 
< 30 | 4.0 (71) | 0.9 (9) | 8.3 (62) | p < 0.001
30–39 | 25.5 (446) | 33.7 (337) | 14.6 (109) | 
40–49 | 31.5 (551) | 36.1 (361) | 25.4 (190) | 
50–59 | 32.0 (559) | 22.6 (226) | 44.5 (333) | 
≥ 60 | 7.0 (123) | 6.8 (68) | 7.3 (55) | 
Working hours |  |  |  | 
Full time | 70.1 (1227) | 74.2 (743) | 64.6 (484) | p < 0.001
Part time | 29.9 (523) | 25.8 (258) | 35.4 (265) | 
Time in practice (years) |  |  |  | 
< 5 | 6.6 (115) | 2.9 (29) | 11.5 (86) | p < 0.001
5–10 | 13.4 (234) | 17.3 (173) | 8.1 (61) | 
11–20 | 34.6 (606) | 45.0 (450) | 20.8 (156) | 
21–30 | 24.4 (427) | 24.0 (240) | 25.0 (187) | 
31–40 | 18.2 (319) | 10.0 (100) | 29.2 (219) | 
> 40 | 2.8 (49) | 0.9 (9) | 5.3 (40) | 
Setting |  |  |  | 
General practice | 30.1 (527) | 50.0 (501) | 3.5 (26) | p = 0.004
Hospital | 51.6 (903) | 50.0 (500) | 50.1 (403) | 
Communitya | 14.5 (254) | N/A | 33.9 (254) | 
Other | 3.8 (66) | N/A | 8.8 (66) | 
The focus group included five AHPs, including a dietitian, a physiotherapist and three therapy assistants.
Survey
There were differences between doctors and nurses: most doctors were male (64.8%), whereas the majority of nursing respondents were female (90.9%). Most doctors were aged between 30 and 49 years (69.8%); for nurses and midwives, the most common age group was 50–59 years (44.5%). These proportions are broadly in line with the working population of doctors and nurses in the UK, although the nurses in our sample were slightly older than the general population of nurses (51.8% of nurses in our sample were aged ≥ 50 years, whereas UK data show that 46% of nurses are aged > 45 years). 120 Half of the nurses/midwives (50.1%) worked in hospital settings and around one-third (33.9%) in community settings; this compares with our quota-sampled 1 : 1 split among doctors [501 doctors working in general practice (community) and 500 based in hospitals].
Feedback on an episode of care
There was a significant difference between doctors’ and nurses’ experiences of receiving online feedback about an episode of care in which they were involved (p = 0.004) (Table 7). A total of 27.7% (277/1001) of doctors and 21% (157/749) of nurses said that they were aware that patients or carers had provided online feedback on an internet review or ratings site about an episode of care in which they were involved. However, 43.2% (432/1001) of doctors and 49.1% (368/749) of nurses did not know.
Experience of online feedback | All (N = 1750), % (n) | Doctors (N = 1001), % (n) | Nurses/midwives (N = 749), % (n) | Overall difference (doctors vs. nurses/midwives), p-value
---|---|---|---|---
Feedback from patient/carer about an episode of care they were involved in |  |  |  | 
Yes | 24.8 (434) | 27.7 (277) | 21.0 (157) | p = 0.004
No | 29.5 (516) | 29.2 (292) | 29.9 (224) | 
I do not know | 45.7 (800) | 43.2 (432) | 49.1 (368) | 
Feedback from patient/carer about them as an individual practitioner |  |  |  | 
Yes | 16.5 (288) | 20.5 (205) | 11.1 (83) | p < 0.001
No | 37.3 (653) | 37.3 (373) | 37.4 (280) | 
I do not know | 46.2 (809) | 42.3 (423) | 51.5 (386) | 
Feedback on an individual
A total of 20.5% (205/1001) of doctors and 11.1% (83/749) of nurses said that they had experienced feedback on an internet review or ratings site about them as an individual practitioner. There was a significant difference between doctors and nurses (p < 0.001) (see Table 7). Around half (386/749, 51.5%) of nurses and 42.3% (423/1001) of doctors did not know if any online patient feedback had ever been left about them as an individual practitioner.
Usefulness
When asked to what extent they thought ‘online patient feedback on experiences of NHS care which is captured on internet reviews and ratings sites is useful to help the NHS improve services’, only 6% (60/1001) of doctors strongly agreed and 32.8% (328/1001) somewhat agreed that it was useful. However, 25.3% (253/1001) somewhat disagreed with this statement and 15.6% (156/1001) strongly disagreed.
Views among nurses were more positive, with the majority either somewhat (393/749, 52.5%) or strongly (158/749, 21.1%) agreeing, and only a minority somewhat (48/749, 6.4%) or strongly (19/749, 2.5%) disagreeing. Overall, there was a difference between doctors’ and nurses’ views (p < 0.001) (Table 8).
Respondent group | Strongly disagree, % (n) | Somewhat disagree, % (n) | Neither agree nor disagree, % (n) | Somewhat agree, % (n) | Strongly agree, % (n) | Overall difference (doctors vs. nurses/midwives), p-value
---|---|---|---|---|---|---
Internet reviews and ratings sites |  |  |  |  |  | 
Doctors (N = 1001) | 15.6 (156) | 25.3 (253) | 20.4 (204) | 32.8 (328) | 6.0 (60) | p < 0.001
Nurses/midwives (N = 749) | 2.5 (19) | 6.4 (48) | 17.5 (131) | 52.5 (393) | 21.1 (158) | 
Social media |  |  |  |  |  | 
Doctors (N = 1001) | 26.6 (266) | 33.3 (333) | 16.2 (162) | 21.0 (210) | 3.0 (30) | p < 0.001
Nurses/midwives (N = 749) | 5.2 (39) | 16.7 (125) | 25.0 (187) | 42.2 (316) | 10.9 (82) | 
The same question was asked in relation to the use of social media. Over half of doctors either somewhat (333/1001, 33.3%) or strongly (266/1001, 26.6%) disagreed that this kind of feedback was useful for improving NHS services. Conversely, over half of nurses either somewhat (316/749, 42.2%) or strongly (82/749, 10.9%) agreed that it was useful. Overall, there was a difference between doctors’ and nurses’ views (p < 0.001) (see Table 8).
When explored using multivariate logistic regression, doctors were less likely than nurses to agree that ‘online patient feedback on experiences of NHS care which is captured on internet reviews and ratings sites is useful to help the NHS improve services’ (OR 0.101, 95% CI 0.070 to 0.146; p < 0.001), and community-based health-care professionals were less likely than hospital-based professionals to agree (OR 0.315, 95% CI 0.242 to 0.410; p < 0.001). There was no difference according to age or sex (see Appendix 18).
The same response pattern was observed for social media, with doctors less likely than nurses to agree that it was useful (OR 0.162, 95% CI 0.119 to 0.220; p < 0.001), and community-based health-care professionals were less likely than hospital-based professionals to agree that it was useful (OR 0.448, 95% CI 0.351 to 0.572; p < 0.001). There was no difference between the groups according to age or sex (see Appendix 18).
The presence (or absence) of positive attitudes towards the benefit of online patient feedback was not associated with whether or not a health professional had experienced feedback about an episode of care in which they were involved, whether that feedback was received through a review website (p = 0.292) or through social media (p = 0.251). For example, the proportions of health professionals with positive attitudes were similar regardless of whether they had received patient feedback through a review website (229/434, 52.8%), had not received patient feedback (290/516, 56.2%) or did not know (420/800, 52.5%).
Representativeness of online patient/carer feedback
Two-thirds of doctors thought that online patient/carer feedback was unrepresentative, with 26.2% (262/1001) saying that it is very unrepresentative and 40.1% (401/1001) saying that it is somewhat unrepresentative. Only 1.0% (10/1001) thought that it was very representative and 18.7% (187/1001) thought that it was somewhat representative. Views were again different among nurses: only 4.4% (33/749) thought that it was very unrepresentative and 19.0% (142/749) somewhat unrepresentative, whereas 2.8% (21/749) thought that it was very representative and 44.6% (334/749) somewhat representative. Overall, there was a difference between doctors’ and nurses’ views (p < 0.001) (Table 9).
Respondent group | Very unrepresentative, % (n) | Somewhat unrepresentative, % (n) | Neither unrepresentative nor representative, % (n) | Somewhat representative, % (n) | Very representative, % (n) | Overall difference (doctors vs. nurses/midwives), p-value
---|---|---|---|---|---|---
Doctors (N = 1001) | 26.2 (262) | 40.1 (401) | 14.1 (141) | 18.7 (187) | 1.0 (10) | p < 0.001
Nurses/midwives (N = 749) | 4.4 (33) | 19.0 (142) | 29.2 (219) | 44.6 (334) | 2.8 (21) | 
Nature of content
When asked to what extent they thought ‘online patient feedback on experiences of NHS care which is captured on internet reviews and ratings sites is generally negative’, over half of doctors either somewhat (420/1001, 42.0%) or strongly (154/1001, 15.4%) agreed, and fewer than one-fifth either somewhat (158/1001, 15.8%) or strongly (16/1001, 1.6%) disagreed. The views of nurses were different: the largest group (333/749, 44.5%) neither agreed nor disagreed that it was negative, and one-third either somewhat (220/749, 29.4%) or strongly (35/749, 4.7%) agreed. Overall, there was a difference between doctors’ and nurses’ views (p < 0.001) (Table 10).
Respondent group | Strongly disagree, % (n) | Somewhat disagree, % (n) | Neither agree nor disagree, % (n) | Somewhat agree, % (n) | Strongly agree, % (n) | Overall difference (doctors vs. nurses/midwives), p-value
---|---|---|---|---|---|---
Internet reviews and ratings sites |  |  |  |  |  | 
Doctors (N = 1001) | 1.6 (16) | 15.8 (158) | 25.3 (253) | 42.0 (420) | 15.4 (154) | p < 0.001
Nurses/midwives (N = 749) | 2.5 (19) | 19.0 (142) | 44.5 (333) | 29.4 (220) | 4.7 (35) | 
Social media |  |  |  |  |  | 
Doctors (N = 1001) | 1.4 (14) | 9.4 (94) | 23.8 (238) | 45.2 (452) | 20.3 (203) | p < 0.001
Nurses/midwives (N = 749) | 2.4 (18) | 17.2 (129) | 45.5 (341) | 28.7 (215) | 6.1 (46) | 
In relation to social media, 65.4% (655/1001) of doctors either somewhat or strongly agreed that feedback is generally negative, compared with 10.8% (108/1001) who either somewhat or strongly disagreed. Again, the views of nurses were different: 45.5% (341/749) neither agreed nor disagreed. Overall, there was a difference between doctors’ and nurses’ views (p < 0.001) (see Table 10).
When logistic regression was applied, doctors were more likely than nurses to agree that ‘online patient feedback on experiences of NHS care which is captured on internet reviews and ratings sites is generally negative’ (OR 1.887, 95% CI 1.324 to 2.689; p < 0.001), and community-based health-care professionals were more likely than hospital-based professionals to agree (OR 2.835, 95% CI 2.142 to 3.753; p < 0.001). There was no difference between groups according to age or sex (see Appendix 15).
For social media, again, doctors were more likely than nurses to agree that ‘online patient feedback on experiences of NHS care which is captured on social media is generally negative’ (OR 3.645, 95% CI 2.463 to 5.394; p < 0.001) and community-based health-care professionals were more likely than hospital-based professionals to agree (OR 2.450, 95% CI 1.792 to 3.348; p < 0.001). There was no difference between the groups according to age or sex (see Appendix 15).
Behaviours
The majority of doctors never (436/1001, 43.6%) or rarely (283/1001, 28.3%) encourage their patients and/or carers to leave feedback on internet reviews and ratings sites; fewer than 1 in 10 do so all the time (18/1001, 1.8%) or more often than not (65/1001, 6.5%). Behaviours were similar among nurses, the majority of whom never (296/748, 39.6%) or rarely (171/748, 22.9%) encourage their patients and/or carers to leave such feedback; only a small proportion do so all the time (41/748, 5.5%) or more often than not (75/748, 10.0%). Overall, there was a difference between doctors’ and nurses’ behaviours (p < 0.001) (Table 11).
Respondent group | Never, % (n) | Rarely, % (n) | Sometimes, % (n) | More often than not, % (n) | All the time, % (n) | Overall difference (doctors vs. nurses/midwives), p-value
---|---|---|---|---|---|---
Encourage patients to leave feedback |  |  |  |  |  | 
Doctors (N = 1001) | 43.6 (436) | 28.3 (283) | 19.9 (199) | 6.5 (65) | 1.8 (18) | p < 0.001
Nurses/midwives (N = 748) | 39.6 (296) | 22.9 (171) | 22.1 (165) | 10.0 (75) | 5.5 (41) | 
Make a change to practice |  |  |  |  |  | 
Doctors (N = 1001) | 25.9 (259) | 32.5 (325) | 33.2 (332) | 6.8 (68) | 1.6 (16) | p < 0.001
Nurses/midwives (N = 748) | 25.3 (189) | 18.2 (136) | 29.9 (224) | 19.3 (144) | 7.4 (55) | 
In terms of doctors making a change to their practice as a result of feedback from internet reviews and ratings sites, only 1.6% (16/1001) reported doing so all the time and 6.8% (68/1001) more often than not, whereas 33.2% (332/1001) did so sometimes and over half did so rarely (325/1001, 32.5%) or never (259/1001, 25.9%). More nurses made a change to their practice as a result of such feedback: one-quarter reported doing so more often than not (144/748, 19.3%) or all the time (55/748, 7.4%), whereas 29.9% (224/748) did so sometimes and 18.2% (136/748) rarely. One-quarter of nurses (189/748, 25.3%) said that they never made a change to practice. Overall, there was a difference between doctors’ and nurses’ behaviours (p < 0.001) (see Table 11).
There was an association between perceived representativeness of patient views and making a change to practice as a result of feedback (p < 0.001). Among the 838 health professionals who felt that views are unrepresentative, 787 (93.9%) never, rarely or only sometimes made a change to practice (low feedback use), compared with 51 (6.1%) who made a change to practice all of the time or more often than not (high feedback use).
In the logistic regression, doctors were less likely than nurses to have ‘encouraged patients/carers to leave feedback on internet reviews and ratings sites’ (OR 0.537, 95% CI 0.359 to 0.803; p = 0.002), as were those working in a community setting (OR 0.559, 95% CI 0.405 to 0.771; p < 0.001). There was no difference between the groups according to age or sex. Doctors were also less likely to agree that they had ‘made a change to practice because of feedback from internet reviews and ratings sites’ (OR 0.328, 95% CI 0.229 to 0.470; p < 0.001), and the same pattern was observed for those working in a community setting (OR 0.550, 95% CI 0.414 to 0.730; p < 0.001). There was no difference between the groups according to age or sex (see Appendix 16).
Free-text comments
Out of the 1001 doctors who completed the survey, 378 left a free-text comment. Free-text comments on online feedback were not collected in the survey of nurses. The characteristics of those leaving comments very closely matched those of all 1001 doctors who responded to the survey: more were male (64%) than female (36%) and 47.6% were GPs. The distributions of age (< 30 years, 1.1%; 30–39 years, 28.8%; 40–49 years, 36.8%; 50–59 years, 25.1%; ≥ 60 years, 8.2%), working hours (full time, 74.1%; part time, 25.9%) and time in practice (< 5 years, 2.1%; 5–10 years, 12.2%; 11–20 years, 47.1%; 21–30 years, 25.9%; 31–40 years, 11.4%; > 40 years, 1.3%) were similar. Of those leaving a comment, 102 (27%) said that they had personally received online feedback, 127 (33.6%) had not and 149 (39.4%) were unsure.
The comments were grouped thematically and related to (1) anonymity, (2) representativeness, (3) confidentiality, (4) type of platform and (5) moderation and regulation of online feedback. Notably, a common thread across these five thematic areas was a general caution and concern about the value of online feedback, as described briefly under each of the five headings below.
Anonymity
The anonymous nature of much online patient feedback was a prominent theme throughout the comments. Anonymity carried negative connotations: it was seen as encouraging negative or aggressive feedback and as making comments difficult to verify, address and contextualise:
Anonymous feedback is difficult as we are unable to identify the patient and clarify the problem.
Representativeness
Respondents questioned whether online patient feedback represents the opinions and experiences of the general population and, in turn, whether it is a valid basis on which to make changes to health services. There was a belief that online patient feedback tends to be an outlet for sharing negative experiences, or that it reports only the extremes of patient experience (only very positive or very negative accounts). It was also perceived that online patient feedback does not represent the views of those who may not regularly use the internet, such as the elderly:
It is a very blunt tool and tends to be used to make a point by those with an axe to grind.
Confidentiality
A key theme was patient confidentiality and its inextricable link with doctors’ ability to respond to online comments. Directly responding to patient feedback on public sites, by contextualising, explaining and addressing it, risks breaching patient confidentiality. This was expressed as a concern, with the perception that doctors do not have a right to reply:
Due to confidentiality we cannot respond adequately to reassure other patients and the comments can be very damaging to staff and doctor patient interactions.
Type of platform
The potential of online patient feedback as a valid driver of positive, transformative change was recognised, particularly when it was received through an official NHS platform. There were concerns that feedback from other sources, such as social media sites, is difficult to keep track of and too public to address without compromising confidentiality:
Feedback from formal systems in clinic for appraisal purposes are very useful. My experience of online feedback is less helpful.
Moderation and regulation of online feedback
There was an expressed need for online patient feedback to be moderated or regulated, largely to prevent deliberately harmful or offensive comments (trolling) and to verify the validity of the accounts posted online:
It is often seen as threatening, but my feedback has been universally positive on RateMDs website. It would be good to mandate a right of reply, like TripAdvisor, and for nefarious criticism to be removed.
Focus group
The focus group included five AHPs (a dietitian, a physiotherapist and three therapy assistants) and was conducted at one of the four NHS trusts included in the ethnographic fieldwork outlined in Chapter 6. Four key themes arose from the focus group data. The online nature of feedback formed only a small element of the discussion, which focused on feedback as a concept and its many forms.
The feedback landscape in the workplace
As AHPs, participants questioned whether collecting feedback, and subsequently acting on it, was part of their role; it was not clear to them how it fitted into their caring profession:
As front-line staff, I definitely want that personalised feedback to go ’when I saw you last week, it was great, and I really – I’m really grateful for what you did’. What people say about the hospital is wonderful, but it doesn’t mean anything to me, because it’s not feedback for me and my practice, and the service we deliver.
Their experience at an organisational level was that the feedback that captured the attention of management concerned occasions when staff had gone ‘above and beyond’, rather than the quality of care that health professionals deliver on an everyday basis. In addition, it was unclear to participants what mattered to the NHS trust, or what should matter to teams and individual health-care practitioners:
Often it’s about things that have gone the extra mile, rather than just things that have gone well. It’s not just ’thank you very much, you did a great job looking after my baby’. Much more ’because you did this’. [um] And it’s about that extra mile feedback. But I don’t know the source of that feedback.
Workplace culture was deemed to have an impact on the giving and receiving of feedback, with participants referring to the specialist nature of the setting and to their role as AHPs, in which they felt that they delivered holistic care and in which continuity and relationships were key:
I think health professionals work in a much more holistic way anyway. So they’ll see the bigger picture. They don’t see the person as a heart, or a gut, or a leg, or whatever else it is. And I just think that that’s part of our [um], our culture.
With this in mind, the participants talked about the different ways in which health-care practitioners may be receiving and dealing with feedback depending on their professional background and relationship with patients.
Communication with staff about feedback
Communication about feedback was a key theme. At a local level, the participants did not know whether feedback had been left about themselves or their team and, even if it had, whether it was intended to reach them at all:
And when I said to something – I had a comment [um], and a colleague showed it to me. But it doesn’t count in any stats or anything else. But I would never even have known it existed had [laughing] a colleague not point it out to me. Right, [X]?
When asked by a researcher about a new website at the trust that encourages people to leave feedback, a participant said:
I didn’t even know that existed until you just told us.
There was a perception that feedback, if it arose at all, ended with the organisation. This lack of communication raised questions about whether soliciting online feedback from patients was appropriate or simply happened to meet a target, and wider questions about the nature of online feedback and what it is for, given that all participants regularly received verbal or personal written feedback (such as cards).
Usefulness of feedback
This lack of communication led, in turn, to uncertainty about the usefulness of feedback and whether or not it has the potential to invoke change. There was a perception of the ‘futility’ of some feedback and uncertainty about what makes feedback meaningful and actionable: is feedback useful if it concerns something about health services that cannot be changed? Participants felt that asking patients to leave feedback on areas that cannot be changed or improved may be unethical and inappropriate:
It’s usually just ’yes, we like this a lot – tick the box’, and then on the back you’d be like, ’you’ve got lots of toys’, or ’I really like that person I saw today because they gave me a sticker’. Nothing is ever in context, or with enough detail to go ’oh OK, that was a problem and we need to fix it’. Or ’that was really nice, that’s brilliant, we should do more of that’. It’s like ’thanks for having a good reception space’. It’s like ’OK, well thanks, that’s great’.
The concept of legitimate feedback included concerns about whether or not feedback was representative of the patient population, and about how to determine when feedback should be acted on, whether it comes from one person or from many:
. . . if you have one complaint about something, is that a point you action? If you have 20 complaints? What’s the cut-off? Is it 1, or 20? Or 50? You know, because actually one person said ’we didn’t like the colour of the walls’. We’re not going to repaint the walls. You know? So I think that that’s the other thing about – it’s about – well we might say one person said, ‘actually, I had a near death experience in ITU [intensive therapy unit] because somebody gave me the wrong medication’. We absolutely have to do something about that.
Participants described a desire to receive authentic and tangible feedback, but were not sure how they should receive it. Concerns about unboundaried feedback, lacking context or detail, related to the perceived usefulness of what patients might feed back.
Nature of online feedback
Online feedback was regarded as ‘real-time feedback’, less boundaried and occurring in different forms. Social media, an unsolicited and unboundaried form of feedback, was regarded as more personal and ubiquitous: something that could not be avoided.
About social media, one participant said:
I think it’s more personal. It’s more – It’s an opportunity for them to leave much more personal feedback about their experience. Rather that ‘would you recommend this place to somebody else’.
This was at odds with the discussion around usefulness.
Comparison with themes from free text
We observed only a small amount of overlap between the themes identified in the thematic analysis of the free-text comments and those identified from the focus group. Concerns about representativeness were evident in both, as was a lack of communication from management about what feedback is for, whether it has been received and how it should be used. Doctors clearly had different perspectives on online feedback and what it means for them, their practice and their patients.
Discussion
To the best of our knowledge, this study presents the first large-scale UK survey exploring the attitudes and behaviours of health-care professionals towards online patient feedback. The majority of doctors felt that the feedback was not representative, in direct contrast to the majority of nurses, who thought that it was. All health-care professionals felt that formal internet reviews and ratings sites had more potential to be useful in shaping health services than unstructured feedback on social media. The majority of both doctors and nurses rarely or never encourage their patients or carers to leave feedback on internet reviews and ratings sites, and, when feedback is received, the majority of doctors do not change their practice, although nurses were more likely to do so in response to feedback. Being the subject of comment, either on an episode of care or as an individual practitioner, occurred in both groups but was more common among doctors than among nurses; the majority of participants were unaware whether or not feedback had ever been left about them. We found a difference in attitudes between doctors and nurses, with nurses more positive than doctors about the potential of online patient feedback for health service improvement. We also observed a difference between hospital- and community-based health-care professionals, with hospital-based staff regarding online feedback more positively.
Allied health professionals focused on the concept of feedback and how it fitted into their role. This was in contrast to the doctors, whose commentary focused on the ‘online’ nature of the feedback and what this meant for them.
Concerns about online feedback identified in this study reflect wider concerns about the provision and collection of patient feedback in general; the difficulty of determining how health-care professionals might make best use of feedback is not limited to comments left online. UK-based work on the collection of patient experience data in primary care found that staff were sceptical about the value of paper-based patient surveys and their ability to support service reconfiguration and quality improvement. 121–123
It is particularly evident that health-care professionals in community settings may require more convincing than their specialty-based colleagues that there are potential positive uses for online feedback. Mirroring our own findings, a survey103 in Germany found that physicians who reported having taken measures to improve patient care because of online ratings were more likely to be specialists (946/1637, 57.8%) than GPs (207/413, 50.1%) or other providers (137/310, 44.2%) (p < 0.001). Linked work, also in Germany, explored physicians’ responses to online feedback, finding that just 1.58% (16,640/1,052,347) of comments on a patient review website had received a response from a physician. 124
In Chapter 3, we showed that many patients now use online feedback and that their main motivations are to inform other patients, to praise a service and to improve standards of NHS services. 42 This is an interesting juxtaposition with the attitudes of many health professionals in the present study, who view online feedback as generally negative in content. The public survey did, however, confirm the belief that the people who provide feedback are not representative of the general population. 23
Strengths and weaknesses of the study
This work is limited by our sampling frames and recruitment strategies. Our survey of doctors used quota sampling and an online invitation, and the survey of nurses used an opt-in approach to a widely advertised survey invitation. We used online surveys, and it is likely that participants were more familiar and comfortable with online technologies than non-participants, especially in the nurses’ survey, which was specifically framed to potential participants as being about ‘nurses and technology’. These approaches were taken because there was no nationally representative sampling frame available for approaching these professional groups, and online survey methods using quota sampling or click-through from an advertisement are low cost. Reassuringly, the characteristics of the samples broadly reflect those of doctors, nurses and midwives in the UK in relation to age and sex. 22 Nurses and midwives were grouped for the purposes of the analysis; however, only 34 of 749 participants were midwives. This reflects the survey criteria, which did not exclude midwives but did not target them directly in the recruitment strategy.
As this was a cross-sectional study, we identified associations rather than causation, and these associations may be indirect, owing to a common factor unaccounted for in the current analysis. Furthermore, any self-reported measure is subject to potential response bias, particularly for questions relating to behaviour; however, the anonymity of the survey may have reduced this. The topic of online patient feedback is new and we developed the topic-based questions in the survey ourselves. It is best to use validated questions when conducting a survey but, in the absence of these, we based our questions on existing surveys, obtained input from the survey company administering the doctors’ survey and piloted the survey.
The study findings are specific to doctors and nurses working in the UK and, for this reason, are likely to be specific to the context of the NHS. This is important, as online feedback may have less of a role in driving competition in a nationalised model of health care than in other health systems or other sectors.
As we conducted only one focus group, we are limited to a descriptive summary from this single group. Thus, although this offers insights over and above those gained from our surveys, it is only a guide to what might be the important issues for AHPs; such hypothesis generation is useful in the context of future studies in this area. We had intended, as per our protocol, to conduct four focus groups in the trust sites participating in our ethnographic work (described in Chapter 6). However, arranging focus groups within these trusts proved extremely difficult. The key challenges were identifying the person responsible for AHPs so that we could advertise to and recruit this group, finding free meeting space within the hospital in a location suitable for staff working across the entire site, and finding times that worked for this diverse group of professionals. This was especially the case for one of our trusts, a mental health trust in which AHPs worked off site. We made repeated attempts over several months to hold multiple groups, but were not successful, and our SSC eventually recommended that we close this study with only the one group conducted.
Conclusion
Many health-care professionals view online feedback from patients as unrepresentative and of limited value for improving health services, especially feedback derived from social media. Doctors had more negative attitudes towards online feedback than nurses, as did community-based health-care professionals compared with those working in hospital care, and this has implications for how this feedback is solicited and utilised. We identified a very low proportion of professionals who encourage patients to leave feedback, and this may have implications for the successful introduction of feedback systems, especially if these systems do not engage front-line staff in how such feedback is to be promoted and integrated into everyday health service delivery.
Chapter 5 Conversations about care: interview study with patients and their family members to explore their perspectives on and experiences with online feedback about NHS services
Summary
There is little qualitative research on people’s experiences of providing and using online health-care-related feedback, ratings and reviews in different contexts. In this chapter we explore how and why patients and their family members provide and use online health-care-related feedback in the UK.
We conducted 37 qualitative semistructured interviews with people who had read others’ health-care service reviews and/or provided their own. A thematic analysis of the data was carried out, focusing on interviewees’ self-reported motivations for and experiences of reading others’ health-care experiences and sharing their own.
Interviewees described multiple overlapping motivations. In spite of this diversity, online feedback was persistently framed as a means of improving health-care services, supporting staff and other patients – conceptualised here as ‘caring for care’. The metaphor of engaging in a ‘conversation’ with health-care service providers was frequently evoked as the key mechanism through which online ratings, reviews and feedback could be used to improve health-care services.
Framing online feedback as ‘care’ opens up new ways of thinking about the meanings and consequences of these practices from the patient perspective, in the context of public health-care services and the NHS specifically. Moreover, it adds an important dimension to academic work on online feedback, which typically conceptualises ratings, reviews and feedback in terms of ‘choice’ or ‘voice’. We suggest that thinking of feedback in terms of ‘care’ and ‘conversation’ opens up productive ways of engaging with the sharing of health-care experiences online.
Introduction
This chapter explores the motivations, experiences and recommendations of people who have read comments about, reviewed or rated NHS health-care services online. This interview study was designed to illuminate how and why people provide and use online health-care-related feedback – an area in which there has been much speculation but little research.
Existing research on online patient feedback, reviews and ratings in health care rarely focuses on user perceptions and experiences, and the work that has been done on this tends to use relatively high-level questionnaire approaches. 82 In contrast, we took an in-depth qualitative approach to analyse (1) people’s self-reported motivations for why they read about other people’s health-care experiences and share their own online; (2) their experiences of using a variety of platforms to do so, including the responses they received and the implications this had for them; and (3) their recommendations for how the NHS should deal with online patient reviews, ratings and feedback.
Online patient feedback is often used as a catch-all term to describe a variety of practices and technologies, with associated norms and expectations. 125 Rather than attempting to provide a universal definition, we use the term loosely here to describe a range of practices through which people share their experiences of health-care services via one or more digital technologies. We include the intentional ‘feeding back’ of care experiences to health-care providers – either through the providers’ own systems or intermediaries – as well as comments, ratings and reviews shared on social media that may or may not be explicitly directed at a health-care provider but nonetheless are about them (e.g. experiences shared on Twitter or blogs). This intentional openness avoids excluding relevant practices that might not be straightforwardly considered reviews, ratings or feedback, but form an important part of the wider ‘digital patient experience economy’, within which these categories of experiential information sharing are embedded. 126 It also enables us to develop an empirically grounded understanding of what constitutes online patient feedback in practice, from the perspective of patients and other service users.
The people we spoke to had shared their own or family members’ health-care experiences and/or read about others’ across a wide variety of social media [e.g. Facebook, blogs, Twitter, YouTube (YouTube, LLC, San Bruno, CA, USA)]; commercial (e.g. Yelp) and non-profit feedback platforms (e.g. Care Opinion); and NHS and other health-care provider websites (e.g. NHS Choices, general practice websites). This included experiences of a range of health-care services across the UK (e.g. primary care, mental health services, accident and emergency, maternity services). Unsurprisingly, given this diversity, participants described different motivations, experiences and outcomes. Rather than describe these individual platforms in detail, we draw out key themes that cut across our interviews. In particular, we focus on how, from a patient perspective in the UK, online feedback is orientated towards improving care and conceptualised, ideally, as a form of conversation. Finally, we provide a high-level overview of research participants’ suggestions for how the NHS might better organise and respond to feedback.
Methods
We undertook 37 semistructured qualitative interviews with people who had used online platforms to provide and/or read other people’s feedback about health-care experiences. Participants were recruited through a range of mechanisms. A flyer about the study was posted on the project website and relevant social media sites. We drew on the professional network of project members (including our PCPRG) and colleagues to advertise the study as widely as possible. The organisation Care Opinion circulated the study to its users who had agreed to be contacted for research purposes. Through Google searches, the researcher (SK) identified bloggers and other individuals who had commented on their health-care experiences online; those with publicly available contact details were approached and invited to participate in the study.
People who expressed an interest in hearing more about the study were sent an information sheet. Those who agreed to take part were interviewed in their own home or elsewhere if they preferred. Interviews were audio- and/or video-recorded with permission. Consent was sought on the day of the interview. Participants were later sent a verbatim transcript of their interview to review before final consent was given and copyright agreed for the publication of extracts from the interviews online (e.g. for teaching or service improvement purposes and the INQUIRE toolkit). Data were stored according to the University of Oxford’s institutional data-governance requirements. Recruitment did not involve the NHS, and ethics permission was given by the Medical Sciences Interdivisional Research Ethics Committee (reference number R47871/RE001).
Through purposive sampling, we aimed for a maximum variation sample127 that included different ages, sexes and ethnicities, as well as people who had used a variety of online platforms and had commented on or read about different health-care services (including primary, emergency, maternity, chronic and specialist services). Interviewing continued until we assessed that data saturation had been reached on experiences of reading and providing online feedback about health. 128 This was assessed through an iterative process, with two researchers (SK and FM) reading, discussing and analysing the interviews throughout the data-collection process. See Table 12 for participants’ basic demographic information (age, sex, ethnicity, health condition and services). The boundary between patient and carer roles can be blurred and several participants identified with both roles: of the 37 participants, four did not have health conditions themselves but used or provided feedback in relation to more general issues or someone they cared for, and a further three provided feedback both as carers and as patients.
Table 12 Participants’ basic demographic information
Characteristic | n |
---|---|
Sex | |
Male, age (years) | |
20–35 | 0 |
36–50 | 2 |
51–65 | 4 |
≥ 66 | 6 |
Total | 12 |
Female, age (years) | |
20–35 | 7 |
36–50 | 9 |
51–65 | 8 |
≥ 66 | 1 |
Total | 25 |
Ethnicity | |
White British, age (years) | |
20–35 | 7 |
36–50 | 9 |
51–65 | 10 |
≥ 66 | 4 |
Total | 30 |
Other, age (years) | |
20–35 | 0 |
36–50 | 2 |
51–65 | 2 |
≥ 66 | 3 |
Total | 7 |
Health condition (main condition focused on in interview) | |
Multiple complex conditions | 10 |
Mental health | 6 |
Cancer | 4 |
No specific condition | 3 |
Diabetes mellitus | 3 |
Care of a parent or spouse | 3 |
Childbirth (no specific condition) | 2 |
Chronic pain | 1 |
Early-onset dementia | 1 |
Heart condition | 1 |
Multiple sclerosis | 1 |
Osteoarthritis and hip replacement | 1 |
Spinal problems | 1 |
Total | 37 |
A topic guide was used to explore interviewees’ general use of digital technologies and their motivations for and experiences of engaging with online health-care-related feedback specifically (see Appendix 17). Transcripts were coded using NVivo software. We adopted two inter-related approaches to analysing the interviews. Informed by framework analysis,129 we developed an initial frame for structuring the data, focused on the research questions that the work package set out to answer. Based on this, we created coding reports on (1) interviewees’ self-reported motivations for providing, seeking, reading and using online feedback; (2) interviewees’ actual experiences of providing and using online health-care-related feedback; (3) the perceived effects that providing or reading online feedback had on their health and well-being, their families, other patients, health-care practitioners and services; and (4) perceptions and experiences of NHS platforms specifically, including relationships with health-care services and practitioners. The first three of these coding reports were broken down according to the platform used (e.g. Twitter, blogs, Facebook, Care Opinion) to pinpoint similarities and differences across the different technologies, as well as to identify themes that cut across them. At the same time, we drew on the grounded theory techniques of constant comparison and deviant case analysis to draw out emergent themes across the corpus of interviews. 130 This resulted in a number of cross-cutting themes, such as choice, feeling heard, anonymity and navigation. In addition, the researcher created a summary of each interview, focusing on interviewee background and experiences. We used these summaries alongside the coding reports and thematic analysis to provide context and situate interview extracts.
Findings
We focus on three key higher-level categories that we developed through our analysis. First, we outline how interviewees overwhelmingly understood feedback as a means of contributing to, rather than undermining, the NHS. Second, feedback was framed as ‘conversation’, both explicitly and more subtly through the extensive use of conversational metaphors across the corpus of interviews. Third, interviewees spoke of needing to ‘navigate’ a fragmented feedback ‘landscape’, a process they described as complex and, at times, disheartening.
Feedback as improving and caring for NHS services
. . . there’s lots of reasons why I do it [provide online feedback]. It’s not just one. There have, in the situation that I described at the start, that was first and foremost to try and get a bloody answer out of them as about what was going to happen here next but, underlying all of this, was the sharing it with other people, letting other people know that they’re not alone and, hopefully, leading to change. But there’s been other times where my post has been purely to highlight good practice or to instigate change in some way.
INQ36, female, thirties, mental health
The excerpt above neatly encapsulates the complex set of factors that motivate people to rate, review or comment on health-care services online. Their starting point is their own or a family member’s care, and they turn to the internet as a means of communicating some aspect of that experience, whether frustrations about poor practice or recognition of good practice. At the same time, they share their experiences online because they want to help other patients and their families, by warning them or expressing solidarity with them or, crucially, by helping to improve the relevant health service. Indeed, a desire to make a positive change to a specific service and the NHS more generally, in some cases at the national level, was a key motivation described by all our interviewees.
Many of our participants had a chronic health condition (sometimes rare conditions that needed specialist knowledge), had multiple health problems or were caring for someone. Thus, they had a long-term relationship with, and a sense of dependency on and commitment to, the NHS. However, less frequent health service users also expressed a sense of responsibility for the NHS, with interviewees seeing their experiences as a potentially valuable resource for informing practice and improving care:
The NHS fails, we fail, like we need the NHS to not only survive but to thrive and keep going and any feedback, certainly I’m giving and I know a lot of people in my position are, it’s constructive, not because we’re being critical but because we need this to work.
INQ16, female, mid-thirties, multiple long-term conditions
Our participants’ emphasis on feedback as a mechanism for improving care went beyond their own care and that of other patients using the same service; it extended to caring about the NHS, and those who work within it, as a valued public service. The use of online feedback as a means of caring for care – one of the few ways in which patients can enact care for the NHS – was particularly striking when interviewees spoke of providing positive feedback (something they did frequently):
And it’s really useful to practitioners. Because they feel like they’re doing the right, it helps you with your job satisfaction morale, it helps you looking at your practice and developing professionally.
INQ09, female, late forties, breast cancer
Furthermore, online feedback was seen as a way of publicly thanking staff, boosting morale, encouraging best practice and providing other patients with a positive signal about good care. Significantly, participants also saw their feedback as a valuable resource that staff might use to promote, maintain or enhance the service they provided during a time of increased financial pressure and cuts. In the following excerpt, the participant makes explicit how their feedback could be used by the Care Quality Commission (CQC):
I wanted to say thank you to the GP that, who’d been really, really good with me. And the way I did that was by e-mailing the practice manager and just saying ‘I’ve had this good experience, thank you’. Because I was hoping, I guess, that they could use that somehow with the CQC or something.
INQ22, female, mid-thirties, mental health and eating disorders
Even when critical of the care they had received, participants enacted care for the NHS in subtle ways, such as mentioning positive experiences alongside negative ones, protecting the identity of individual practitioners and acknowledging the multiple challenges the NHS faced. Although our interviewees did occasionally leave feedback in lieu of, or alongside, a formal complaint, they generally perceived it as distinct from the formal complaints process. In particular, the public and anonymised nature of online feedback were both regularly mentioned as key features that differentiated it from a complaint, which was generally perceived as needing to be dealt with privately and requiring individuals to be identified.
Although the online feedback left by our participants usually focused on specific services and experiences, some of the people we spoke to conceptualised the sharing of experiences of health-care services through online feedback as part of a wider ‘movement’ aimed at improving care through democratising the NHS and empowering patients. Thus, even if their online feedback did not have immediate effects or benefits for them or their family, they were motivated to provide it because they believed that they were, in the words of one interviewee, contributing to improving care ‘in a subtle and perhaps longer-term way’ (INQ18, male, late fifties, type 1 diabetes mellitus).
Although this focus on democratisation and patient empowerment was not platform specific, Twitter was frequently referred to as one of the best media for breaking down barriers between patients and health-care professionals, and was particularly valued as a conduit for engaging with high-level NHS representatives and policy-makers:
Patients should be listened to . . . you should ask people, who are using your service, what they think of your service. And it is something that I think is a growing movement, where people just aren’t happy that they are not being listened to anymore and it’s becoming a bit more equal between health-care professionals and patients . . . On Twitter you are on a level playing field and you can share your opinions as well as anyone else can.
INQ06, female, mid-twenties, hypermobile Ehlers–Danlos syndrome, mobility problems
With regard to reading online feedback, our participants valued being able to consult other people’s experiences. They said that it helped them to prepare for their own appointment(s) and treatment(s), assisted them in navigating the system and gave them a sense of what to expect. Much of the (primarily US-based) literature on online patient feedback, ratings and reviews has foregrounded it as a means of enhancing patient choice (for more information, see Chapter 2). Of course, choice may be more apparent in a US-style private insurance-based health system, but there is still rhetorical emphasis placed on choice in health policy discourse even within the NHS. 24 However, the people we spoke to rarely said that reading online feedback influenced their own choices. Although they acknowledged that, in theory, other people’s experiences had the potential to inform choice, in practice this was rarely the case. This was contrasted with consumer services, such as hotels or restaurants, where they felt that they did genuinely have a choice. Our participants attributed their perceived lack of choice to numerous factors: the nature of the condition, with ‘choosing’ a service not being an option in acute and emergency situations; geographic region, with people in rural locations feeling that they had less choice than those in cities; the need for specialist care that was available at only a limited number of places; practical considerations, such as waiting lists, transport, work responsibilities and childcare; and a lack of the kinds of information that would enable them to make truly informed choices, most notably information about specific practitioners. Finally, some patients even reported feeling actively discouraged by health-care staff from making choices.
Feedback as conversation
Regardless of the specific technology used, conversational metaphors, such as being ‘listened to’ and being ‘heard’, were pervasive across our interviews, with participants frequently stating that they provided online feedback in the hope that they might, ideally, have a ‘conversation’ with the NHS. The metaphor of feedback as ‘conversation’ was more than simply a figure of speech; it constituted an overarching framework that structured understandings of what health-care feedback could and should be.
One key dimension of feedback as ‘conversation’ was being able to express one’s experiences freely through unstructured text. Our interviewees said that being able to share their ‘story’ in their own words enabled them to focus on the aspects of care that were important to them, rather than responding to predefined categories. They recognised the value of more structured formats, such as check boxes, but suggested that these were too restrictive to truly communicate health-care experiences on their own and ran the risk of being ‘tokenistic’. In other words, being able to ‘speak freely’ was not only seen as important for successful online feedback; systems that enabled health-care users to share their experiences in this way were seen as indicative of a genuine commitment to patient-centred care.
When it came to using others’ feedback, interviewees were familiar with online feedback, ratings and review technologies in other sectors, with TripAdvisor (www.tripadvisor.co.uk), eBay (www.ebay.co.uk) and Amazon (www.amazon.co.uk) being key examples. Yet, they repeatedly stressed that health care was a highly specific domain that should import practices from other sectors only after careful consideration. ‘Star rating’ systems were deemed to have some use (especially if comparing a large number of reviews or if users were able to rate different aspects of their care), but they were also critiqued as unable to capture the complexity of health care. Thus, numeric ratings were rarely seen as sufficient in themselves. Rather, they were juxtaposed and cross-referenced with free-text comments:
It’s [NHS Choices] a bit like an Amazon thing or a TripAdvisor thing where people can, you know, their view of the five-star model but everybody may have a slightly different interpretation. So I didn’t find that terribly helpful or comparable, but I thought a lot of comments gave me a flavour of what the patient experience was like.
INQ08, female, early fifties, osteoarthritis, hip replacement
A second key dimension of feedback as conversation was the question of audience and interlocutor: with whom did our interviewees think they were initiating a conversation when providing feedback? Patients, carers and their families were generally seen as the primary audience for health-care experiences shared over social media, especially health forums and Facebook (although in some cases health-care practitioners did take part in these conversations, this was usually in a limited form). Twitter and, to a lesser extent, blogs were often used to communicate with health-care professionals and service providers, including policy-makers and opinion leaders, while at the same time being accessible to the wider public. However, when our interviewees wanted to explicitly ‘feed back’ their experience to service providers and practitioners, they usually turned to either NHS websites – such as NHS Choices or local websites (these differed depending on the trust and/or service in question) – or third-party platforms, such as Care Opinion or iWantGreatCare.
In principle, our interviewees were happy for feedback platforms to be curated by third parties – some even preferred this – but they felt that there was insufficient clarity and transparency about who owned and moderated different platforms, their relationship to the NHS (an inherent part of it, an intermediary or a completely unconnected organisation) and who actually received the feedback they collected. Moderation emerged as an important and contested theme here. On the one hand, there was a preference for minimum interference in the interests of keeping the experience as ‘real’ and ‘authentic’ as possible. On the other, interviewees recognised that moderation work was needed for the feedback to be discoverable, readable and useable (see Ziewitz131 for more on online feedback moderation).
A third, related dimension of feedback as conversation was the extent to which participants expected a response. The idea of a response had different meanings for our interviewees. For some, a response required an action on the part of the NHS (e.g. the resolution of a problem, an indication of intent to instigate change or the utilisation of feedback as a learning tool). A response might also communicate the next stage of the feedback process, to ensure that people had clear expectations and an understanding of how their feedback would be dealt with. Regardless of the specifics of what constituted an appropriate response, our participants stressed the importance of feedback, including positive feedback, being acknowledged, and of transparency about who had received it and when. Moreover, our interviewees generally preferred individually tailored responses over generic ones, although there were cases in which they considered less personalised responses appropriate (e.g. to protect anonymity). Whatever the response, they felt that it should be timely and genuine rather than formulaic, and that any promises should be followed through (see Baines and colleagues132 for similar findings).
Navigating the complex online feedback landscape
As already discussed, the people we spoke to saw online feedback as a valuable resource for health-care service providers, for themselves and for other service users. Furthermore, internet technologies were an integral part of our interviewees’ lives, with most of them being online throughout the day. Yet, despite this, many initially had little knowledge of where or how they could rate, review or provide feedback about the NHS, and few reported that health-care providers had offered any guidance or encouragement for them to do so (this echoes the survey findings reported in Chapter 3). In the majority of cases (the situation was different for our three Scottish interviewees, who were aware of Care Opinion as the preferred feedback mechanism for NHS Scotland), our participants painted a picture of a fragmented and uneven feedback ‘landscape’ populated by both very good and poor practice. Thus, in contrast to the ideal type of feedback ‘conversation’, our interviewees’ actual experiences of giving feedback were divergent and often highly unsatisfactory.
The first challenge our interviewees described was deciding where (and, by association, to whom) to provide feedback. They often spent a considerable amount of time – searching the internet, speaking to family and friends, consulting other patients and health-care professionals – trying to find an appropriate avenue for sharing their experiences. Not only was this time-consuming and often frustrating, but it could also be disheartening, especially if they were simultaneously dealing with health problems and their wider consequences. Moreover, their confusion about how to provide feedback was increased by what they perceived as poorly designed NHS websites (e.g. those of particular trusts, hospitals and primary care practices), which often lacked clear signposting of where feedback could be left and whom it would reach:
I think it’s impossible to know, in a lot of situations. The staff are not listed. The service leads are not listed. Contact details are not given, except for generic phone numbers, which takes you to the main switchboard. I think, in many cases, actually, it’s impossible [to provide feedback], I would go that far.
INQ15, male, late thirties, mental health and orthopaedic treatment
Depending on the information they found, their situation, their preferences and, importantly, their intended audience (with whom they wanted to share their experiences), our participants used one or more of a number of feedback systems. These included Care Opinion; NHS Choices; local Healthwatch; trust and general practice websites; the Patient Advice and Liaison Service (PALS); the Friends and Family Test; and inpatient surveys administered by NHS staff (e.g. to inform the CQC). Although social media are not feedback platforms per se, our interviewees often conceptualised the sharing of health experiences via social media as a form of feedback:
So people, you know, now rely more on social media for feedback than something you’ve constructed where it says, you know, please fill in this form and people don’t want to be bothered with that. So it’s much easier for them to go and post something on Twitter or even write a blog and post a link to it than sit there filling in a feedback form that wants your address and phone number and the like.
INQ01, male, mid-sixties, infrequent health service user
We indicated earlier that a key motivation for many people who provide online feedback is a desire to make a positive change to a specific service or to the NHS more generally. In cases in which interviewees expected some action to be taken, they expressed high levels of frustration and disappointment when they felt that no changes were made. This could be heightened when they were aware of cases in which others’ feedback had received a response, giving them the feeling that some trusts and/or service providers were listening to other people in some situations. Such negative experiences could undermine trust in the health-care system, as well as demotivate people from engaging with it in the future:
It’s dispiriting if you if you . . . something has gone wrong and you’ve kind of alerted them and there’s no sign that anything has changed or there’s no reason given as to why actually, that’s not that much of a problem, it’s just a one off for you, yeah, it’s dispiriting because you feel you can’t make any difference or improve things.
INQ17, female, mid-thirties, mental health services
On the other hand, there were examples in which interviewees felt that sharing their experiences had made a very positive difference, either to their own care or to a service more generally. Experiences such as these could be transformative for individuals, radically changing their perception of a service and their relationship with the staff working in it. For example, after trying to access a particular treatment for over 1 year, one of our interviewees turned to Care Opinion, with the following result:
I got a response pretty quickly, actually. I think it was within maybe 24 hours but maximum 48 hours and I got a response basically saying, contact us and we’ll look into it. And I did contact them and they did look into it and . . . the problem was solved and I’m now able to access the correct care and treatment that I was told wasn’t available at all, and never would be . . .
INQ36, female, late thirties, mental health services, NHS Scotland
However, cases such as this were contingent on the health-care service provider – usually via patient experience leads or other designated staff members – actually engaging with the platform, which differed between trusts, services and the NHS in different countries (i.e. England, Scotland, Wales and Northern Ireland). Interestingly, for those who used it, Care Opinion, which keeps track of who has received the feedback and the responses given, was seen not only as a way to leave feedback, but also as a mechanism for keeping track of how responsive different services were. In other words, seeing responses and actions recorded on Care Opinion had the potential to influence how patients and their families perceived the service in question.
Participants’ recommendations for how the NHS should manage online patient feedback
The researcher undertaking the interviews asked each participant if they had any suggestions for what might improve their experiences of NHS feedback systems. In responding to this, many people drew on their experiences of feedback processes in other industries and domains to demonstrate what they felt worked best:
Organisations that gather information and do it well respond to the comments and demonstrate that they do something about the issues and that encourages a virtuous cycle, doesn’t it.
INQ08, female, early fifties, orthopaedic services
Our interviewees had a number of suggestions that they felt would enhance their experience of NHS feedback systems. First, they wanted there to be a clear feedback process and to be signposted to a dedicated feedback platform, where they could post their own feedback, receive responses and read other people’s comments:
[I]f they had their own little part of the internet that I could go on and say, ‘Look, you’re failing in doing this’. It would, I would love it, even if I went on every day and said, ‘Look, once again, this has happened’. Or, even if I went on once a year, I, to know that they’ve got their own platform that they’re reading . . . people from the trust taking note of what’s being said because I don’t think it would be abused, in that everyone is just going to go on and start slating them. I think, I don’t I don’t think that would happen really but if my trust did have that sort of platform, then I’d definitely use it and I’d definitely feel a lot more heard.
INQ19, female, early thirties, mental health service user
Although our interviewees expected their audiences to include other patients and service users, as well as health-care staff, they expected feedback directed at staff to be acknowledged, taken seriously and (when appropriate) acted on. As outlined above, the response sought by our interviewees took the form of a ‘conversation’ involving the exchange of information and ideas, rather than a one-way mechanism for reporting a problem or positive experience. Concern was expressed when feedback was not responded to:
I’ve posted one or two on NHS Choice but my trust is particularly bad at ignoring them, so it seems pretty pointless doing that.
INQ20, female, mid-sixties, mental health and chronic back pain
The frustrations and difficulties people experienced when trying to give feedback were reflected in their recommendation that, as well as having more effective systems for collecting feedback, the NHS must improve the culture around comments, ratings, reviews and feedback:
When you say, ‘Oh we’d really like to know what your experience is’. You have to mean it and the only way that people really believe you mean it is if you do something with the information and then you tell them, you know, about the improvements that you’ve made . . . you just have to be, you know, willing, interested to listen.
INQ08, female, early fifties, orthopaedic services
Interviewees said that, from their perspective, the NHS did not always value online feedback or respond to it effectively enough. They recommended that NHS staff be trained to recognise the importance of online feedback and to embed it within their practices, and that mechanisms be put in place to ensure that specific feedback reaches the appropriate teams and is translated into service improvement. They urged health-care professionals to regard negative comments as a learning tool. Equally, they thought it important that positive comments and feedback be used as exemplars of good practice and to boost staff morale:
I think it’s probably just like embedding it more and it becoming more an integral part of what they do . . . there’s huge potential to sort of just maybe ask more people and try and gather up more information and then you’re moving away from lots of kind of paper-based stuff, so it should be easier for people to analyse it and try and find themes and things, so they can help improve services.
INQ25, female, early thirties, Ehlers–Danlos syndrome and other chronic health conditions
A few of our interviewees highlighted examples of when they felt that NHS staff were genuinely trying to engage with patient feedback, either through a specific feedback platform or via social media:
@WeCommissioners and they run Twitter chats every week with a different focus and it creates such a platform levelling out having direct contact to people, you know, you, you can be tweeting with a patient, you can be tweeting with a, you know, chief exec [chief executive] of the NHS . . .
INQ09, female, late forties, breast cancer
These initiatives were valued and seen as indicators that NHS staff working at different levels and with different functions were taking patient feedback seriously and, in cases such as the example above, actively working towards democratising the NHS. There was widespread recognition that responding to online feedback consistently and systematically across the NHS would require considerable resources and create additional work for health-care professionals. Although our interviewees felt that the NHS had an obligation to engage with, and respond to, feedback provided via their own websites and platforms, such as NHS Choices, Care Opinion and iWantGreatCare, they expressed different opinions about how much and in what ways the NHS should draw on social media. A few felt that relevant social media platforms should be regularly monitored:
[T]he monitoring of social media by NHS organisations is, would be a really good thing . . . it’s very direct feedback. It’s feedback that everybody can access and I think an organisation these days, a health service organisation, as some do, needs to work out how they respond to that social media comment . . . I think it’s beholden on the NHS to have a strategy to deal with a better informed set of customers than previously was the case.
INQ01, male, mid-sixties, infrequent health service user
However, this was not a universal sentiment. There was widespread recognition that monitoring social media was unlikely to be feasible given staff and resource constraints and that a significant amount of social media content (especially on patient forums and Facebook) was not aimed at health-care practitioners and services, but intended for patients and their families. One important exception was Twitter, which was seen as a ‘public’ platform that the NHS should keep track of and respond to. Twitter was particularly valued as a ‘real-time’ source of information as opposed to more traditional modes of collecting feedback through surveys and questionnaires:
I think they should pay, you know, serious attention to Twitter. So if you see, if you see a photo of a toilet or an area, which is in a very bad state, I think that should be taken seriously and they should they should do something about it . . . One of the great things about social media on the flip side is that it’s real-time stuff, you know, instantly, you know, tells you what’s going on not in 7 days or in a year’s time, so it’s now, so the good thing is that things can be done if there’s a problem.
INQ15, male, late thirties, mental health and orthopaedic services
Discussion
As we have already shown in Chapter 3, although only a relatively small number of people in the UK rate, review or provide feedback on their health-care experiences online, many more seek out and read this type of feedback. 77 This means that those who do choose to share their experiences online can have a powerful influence on public and patient perceptions of a particular service. Considerable concern has been expressed about the negative potential of this influence, especially by doctors (see Chapter 4 and Menon133). However, research shows that online feedback is not dominated by disgruntled service users and complaints. 26,77 Rather, as we have elaborated throughout this chapter, people’s motivations for, and experiences of, providing and using online feedback are complex and multifaceted.
This complexity and variation notwithstanding, our research participants consistently framed their provision of online feedback as a means to improve and support, rather than to criticise or complain about, health-care services. At the same time, they found the landscape of online feedback fragmented and difficult to navigate. Based on their desire to contribute to the service, we suggest that online feedback should be understood as one of the rare ways in which patients and the public can perform care for the NHS. What we have conceptualised as caring for care reflects a particular orientation towards health-care services, in which users, even when voicing complaints or expressing disappointment or anger, do so as a means of improving health-care services and supporting (rather than undermining) staff, other patients and their families. This is significant for a number of reasons. An emphasis on online feedback as care (as opposed to dominant alternatives such as ‘choice’ and ‘voice’16) foregrounds specific relations and moral commitments,134 in this case people’s symbolic association with, and actual relationship to, the NHS and particular services within it. Furthermore, understanding online feedback as a form of care recognises that digital technologies are now a part of, rather than standing outside, contemporary health care. 135 Finally, and importantly, the notion of caring for care brings the mutuality of care to the fore, prompting further questions and research about the different ways in which patients and publics perform care for their health-care services (e.g. through campaigns, volunteering, bequests and donations).
An important way in which our participants saw their feedback, ratings and reviews as having the potential to enact care was through ‘conversation’, with frequent references to ‘talking’, ‘being listened to’ and ‘feeling heard’. As with the emphasis on care, the metaphor of feedback as ‘conversation’ was significant, as it foregrounds the highly specific and relational aspects of online experiential information sharing. This ‘conversation’, of course, had particular characteristics. It typically involved multiple audiences (health-care service providers and professionals, other patients and their families), was usually public and often anonymous (at least on the part of the patient or service user) and was enabled through free-text, as opposed to check boxes and ratings. Certain technologies were seen as facilitating different kinds of conversation; for example, Twitter was framed as a ‘leveller’ that was especially effective at breaking down traditional power hierarchies within health care.
Strengths and weaknesses of the study
As far as we are aware, this is the first qualitative study to explore patients’ and their family members’ actual experiences of reading and writing online feedback about the NHS. Moreover, in recruitment, data collection and analysis we paid particular attention to perceptions and practices of providing online feedback, a difficult area to research that has, to date, been overlooked. A key strength of the study is that it explores online feedback as a complex, multifaceted and situated phenomenon. This has enabled us to generate new ways of thinking about online feedback in the context of public health-care services in the UK, most notably through (1) proposing that online feedback is understood as a way in which patients and publics enact care for their health-care services and (2) unpacking the metaphor of feedback as ‘conversation’.
Clearly, as with any qualitative study, our findings are not statistically generalisable. We aimed for a maximum variation sample. We were able to recruit across a range of health-care services and platforms (including all major feedback platforms). We also managed to recruit older participants, people caring for family members and people from ethnic minority backgrounds. However, our sample is skewed towards women (n = 25) and we struggled, in particular, to recruit younger men. This may reflect who provides feedback online and/or have been influenced by our recruitment methods. Our sample was also dominated by people with long-term interactions with the health service, often with chronic conditions, and we have less to say about feedback on single acute episodes. Furthermore, more detailed research is required on how different groups do, or do not, provide and use online feedback.
As this was an initial exploratory study, we were not able to examine differences between platforms, conditions and/or types of services, regions or the four countries (England, Scotland, Wales and Northern Ireland) in any depth. More research is needed on the relationship between platforms and online feedback, for example on how platform structures and affordances shape the feedback provided and the effects that it may have. Similarly, further research is needed on how the approaches adopted by different national health services shape online feedback practices, expectations and experiences.
Conclusion
Despite our interviewees wanting to engage in ‘conversation’ with the NHS, in practice they often struggled to do this. As well as the challenges they encountered in knowing where and how to feed back, they often felt dissatisfied and frustrated with the response(s), or lack of response, that they received. When interviewees found ways to develop the conversations about care that they wanted, they felt that they were able to make changes to their own health care, to that of other patients and to the service more generally. In such cases, their relationship with the health-care service provider was often strengthened and, in some cases, even transformed through online feedback processes. It is widely acknowledged that online patient and service user feedback has the potential to play an important role in improving health-care services. In this chapter, we have added to this by showing that, from the perspective of patients and their family members, the appropriate management of online feedback constitutes a service improvement in and of itself: communication as, rather than simply for, service improvement.
Chapter 6 Responsibility, response-ability and responsivity: the new characteristics of accountability in the face of online patient feedback – ethnographic case studies in four NHS trusts
Summary
In this chapter, we describe ethnographic case studies carried out at four NHS trust sites across the UK to examine individual- and organisational-level issues in relation to online patient feedback on health services, focusing on various NHS staff groups. The insights from this work show how online patient feedback has, in various ways, shifted the ways in which trusts are held accountable and to whom. We show how online patient feedback, and the expectations around it, are changing work practices and shifting the locus of responsibility to include new forms of response-ability (having the infrastructure in place to deal with multiple channels of, and increasing amounts of, online feedback) and responsivity (ensuring that responses are swift and publicly visible, and thus accountable).
Introduction
We have presented findings from our evidence synthesis work (see Chapter 2), our public survey (see Chapter 3), our survey and focus group with health-care professionals (see Chapter 4) and our qualitative interviews with patients (see Chapter 5). Each of these parts of our study captures a different aspect of the role and impact of online patient feedback. The final part of our study, and the focus of this chapter, draws on four ethnographic case studies of different NHS trusts within the UK, to examine individual- and organisational-level issues in relation to online patient feedback on health services, focusing on various NHS staff groups. The case studies examined organisational and workforce factors, including the mechanisms in place for eliciting, gathering, moderating, recording and processing user comments, an area that has thus far been under-researched.
Although there have been studies examining how patients navigate and are affected by online health reporting and feedback,17,136,137 there are comparatively few studies focusing on the institutional side, particularly on how institutions are held to account through online patient feedback and how their responses are shaped by a number of competing motivations and factors. In this study, we aimed to fill this gap by examining what is being done with online patient feedback in trusts and how this particular mode of leaving feedback is changing accountability practices within them.
Drawing inspiration from Garfinkel’s138 ethnomethodological treatment of ‘accountability’, we begin from the idea that social action is carried out in such a way that it is rendered visible. As Button and Sharrock139 note, in this view of behaviour, social actions ‘are not only done, they are done so that they can be seen to have been done. The study of “accountability” therefore focuses upon the way actions are done so as to make themselves identifiable within the social setting’. We see online patient feedback and the response to it through this lens. In contrast to traditional modes of communication and complaints processes, such as letters to management, online patient feedback is often made public for all to see. What effect does this have on how trusts respond to this feedback? How do their practices of dealing with feedback change and shift as a result?
We argue that responding to feedback is crucial for NHS organisations, although doing so is not a straightforward task. In protecting patient anonymity and confidentiality, trusts must act (and be seen to act) responsibly, such that there is no breach of trust or privacy. In addition, there are infrastructural issues to be dealt with: there are many platforms and websites where patients may leave their feedback, and the trust needs to ensure that it is aware of, and able to respond to, these. Finally, the trust needs to respond to feedback in a timely manner, not only to meet the expectations of the patient, but also because these responses are visible to all potential and imagined audiences, and trusts feel the need to show that they are good at responding and to maintain a good reputation. We argue, therefore, that although trusts need to be constantly aware of and manage their responsibility, they must now also be cognisant of their response-ability (their ability to respond) and their responsivity (ensuring that responses are timely as well as visible).
Method
Four ethnographic case studies were conducted by one researcher (FD) at NHS trusts across the UK. A qualitative and focused ethnographic approach was chosen because of the rich engagement it allows with the subjects of the study, especially as ‘online patient feedback’ is a nebulous term that is often referred to in different ways by different actors. 125 Close proximity to people within the trusts who were engaging with online patient feedback allowed the researcher not only to examine what various staff members thought about it, based on what they said, but also to observe how online patient feedback was being dealt with (or not) in practice. This methodological approach included gathering data from various sources, such as face-to-face interviews, observations in meetings and workshops, online patient feedback demonstrations, documents, and noticeboard and television screen presentations. Rich data were thereby obtained to gain multiple perspectives and build a holistic picture.
The four NHS trusts were selected using several criteria. Within the resource constraints that limited fieldwork to four sites, we wanted to include both acute and community settings, as well as trusts with a track record of doing ‘well’ with patient experience and those that had previously struggled. The four selected sites were two mental health and community trusts, one large acute trust and one specialist trust. Of these, one had its own customised platform as the primary source for soliciting and receiving online feedback, one had outsourced to an external platform that provided it with analysis (taking the data from its collected feedback and turning this into graphs and statistics) and two relied mainly on pre-existing public feedback websites, such as iWantGreatCare and Care Opinion (Table 13). To protect the anonymity of the actual sites, the trusts are referred to throughout this chapter as site 1 community trust, site 2 community trust, site 3 acute trust and site 4 specialist trust. Ethics approval was given by the Medical Sciences Interdivisional Research Ethics Committee and the Central University Research Ethics Committee (reference R32336/RE001).
Table 13 The four trust sites and their primary feedback-gathering mechanisms
Trust | Type of trust | Primary feedback-gathering mechanisms |
---|---|---|
Site 1 | Community trust | Mainly uses pre-existing web-based public platforms (iWantGreatCare, Care Opinion, NHS Choices, etc.) |
Site 2 | Community trust | Outsourced to an independent external platform |
Site 3 | Acute trust | Uses its own platform, but also uses pre-existing web-based public platforms (iWantGreatCare, Care Opinion, NHS Choices, etc., as well as own Facebook page) |
Site 4 | Specialist trust | Fully customised self-built platform, NHS Choices, Care Opinion |
During the fieldwork, the researcher spent between 6 and 10 weeks at each of the four sites. In each trust, we had a point of contact (a champion and supporter of our research) who acted as the key local research sponsor or gatekeeper in providing initial access and introductions to other key members of the trust. In line with our ethics approval, the researcher first visited each of the sites and put up study information posters on noticeboards in the common areas. She also sent an e-mail to each local research sponsor and met with them to introduce the study in more detail. Our research sponsors worked in either patient experience or strategy roles for the trust; in other words, they were managerial rather than ‘front-line’ clinical staff and had teams for whom they were responsible. After this initial meeting, the researcher asked to be introduced to others in the trust who were involved in some capacity with patient feedback. The range of people whom the local sponsor considered to be involved with patient feedback was interesting in itself (see Table 14 for more information on the types of people who participated in the study).
Table 14 Number of interviewees across the four trusts, by general job title and sex
General job title | Male | Female | Total |
---|---|---|---|
Patient experience/feedback lead/team | 2 | 16 | 18 |
Medical director/clinical lead/chief nurse | 5 | 8 | 13 |
Senior matron | 0 | 3 | 3 |
Head of performance | 1 | 0 | 1 |
Head of quality improvement/assurance | 1 | 2 | 3 |
Communications manager/team | 3 | 5 | 8 |
PALS manager/team | 0 | 1 | 1 |
Corporate management | 3 | 2 | 5 |
Other | 2 | 6 | 8 |
Total | 17 | 43 | 60 |
Once the researcher had been introduced to various members of staff via e-mail by the research sponsor (who also sent them a copy of a participant information sheet), she contacted each of them individually to arrange an interview. These interviews took place in a number of different settings: personal offices, hospital cafeterias, common rooms and meeting rooms. When possible, the interviews were recorded; otherwise, notes were taken throughout the conversation, as well as immediately afterwards. Many interviewees took the researcher on a short tour of their office corridors, of the noticeboards displaying examples of patient experience initiatives, such as ‘You Said We Did’ posters, and of the television screens displaying trust information. They also introduced her to other teams nearby, or to other members of staff to whom they thought it would be useful to speak. This snowball sampling approach to the ethnography enabled the researcher to meet and speak to many more people in an informal setting and to get more of an idea of what was deemed important with regard to patient feedback in general and to online patient feedback specifically.
In total, the researcher interviewed or spoke with 60 members of staff across the four trusts, either one to one or in groups of two or three (see Table 14). Of these, 36 formal interviews were recorded, transcribed, anonymised and uploaded to NVivo software to support the organisation of data and coding. The conversations with the other 24 interviewees were not audio-recorded, either because the setting did not allow for it or because it was not deemed appropriate in that particular environment. Instead, these conversations and encounters were written up as field notes as soon as possible after the interaction, alongside written observations of the sites and participation in various team meetings and workshops. The field note journal extended to > 30,000 words. These two sources were also coded in NVivo to examine the themes and patterns that emerged from the data (see Lockyer140). Various documents and written material obtained from the trusts were also included in the analysis. These documents included ‘You Said We Did’ sheets, patient feedback analysis sheets and trust-wide patient experience strategy documents. Taken together, these documents give a detailed sense of the material being used in patient feedback conversations and of the kinds of feedback included in the analyses by different trusts.
The data were openly coded with a focus on content related to patient feedback. To support validity and reliability, selections of transcripts and field notes were also read by the supervising co-investigators and we held analysis meetings to discuss the emergent findings. There was agreement about the coding and main themes. From the initial creation of over 400 unique codes, it was apparent that these converged around three prominent themes in relation to how the trusts interacted with online patient feedback, which we characterised as (1) responsibility; (2) the ability to respond at all, given limited resources (response-ability); and (3) the speed with which a response could be visibly made (responsivity). In the next sections, we discuss each of these themes in more detail, drawing on vignettes from the interviews and ethnography to highlight particular points.
Responsibility and accountability
It is clear that ‘online patient feedback’ does not refer to just one specific activity, practice or place. During our research, we encountered many definitions of online patient feedback. Different actors co-opt different definitions and understandings depending on the context. 125 In particular, there seems to be a mismatch between the sites where patients leave feedback and those where trusts go to seek it. Although organisational routines and practices constrain trusts to utilise specific avenues and media when soliciting and seeking feedback, patients have no such restrictions. Patients will generally go to the avenue that is most convenient, be this Care Opinion, iWantGreatCare, YouTube, Facebook, Twitter, their own personal blog or whatever is available and desirable to them. Trusts, on the other hand, may have specific websites to which they have paid subscriptions, customised surveys that they have designed or a unique system that they have built themselves. Their focus is therefore likely to be on these particular channels.
The first main finding in the data is the ways in which notions of responsibility and accountability were reconfigured in the face of online patient feedback. In some trusts, there was a centralised ‘patient experience lead’ (or equivalent title) whose entire role was based around soliciting, collecting, analysing and communicating patient feedback, whereas, in other trusts, this responsibility was more dispersed. In particular, at an individual level, many staff felt that online patient feedback was not their responsibility or something they knew much about, but they suggested colleagues who would know more. However, when contacting these other recommended members of staff, the researcher often got the same response: that they were not, in fact, the ones who were best placed to talk to us about this, but it was someone else, and so it went on. This has interesting parallels with the case of the Lue, a tribe studied by Moerman141 in the mid-sixties: upon finally meeting one of the elusive Lue tribespeople, he was told that they were not in fact the Lue, but that it was the other people further down the river who were the real Lue. When Moerman got further down the river, these other people exclaimed that they were also not the Lue and that the actual Lue would be found further along, and so on. Moerman soon realised that, though by his own estimations he was looking for the ‘genuine’ Lue, the category was much more elusive than that, and the people themselves did not identify with, or feel comfortable with, being labelled as such. In our case, patient feedback was seen as very important, as evidenced by the existence of staff whose very role was to oversee this aspect of care (patient experience managers, patient experience leads, etc.), but this was sometimes accompanied by a diffusion of responsibility about who could, and could not, adequately speak for the ways in which practices involving patient feedback featured in the trust.
Against this background of diffusion and uncertainty of responsibility, there was also a feeling of disempowerment in terms of being able to take action to address the issues raised. In large part, this stemmed from one of the unique features of much online feedback: anonymity. Anonymity is often heralded as a way of encouraging patients to speak honestly about their experiences without revealing their identity and without fearing adverse consequences for their care (e.g. Speed and colleagues84). It is also seen as protecting patient confidentiality (so individuals do not reveal their health condition) and on some patient feedback sites staff names are specifically removed through ‘moderation’ to protect staff (and the host website) from defamation issues or malicious content. However, anonymity also makes it very difficult for staff members to address an issue directly and prevent it from happening again:
So, I think [um] one of the downfalls to online patient feedback is a lot of the time it’s anonymous. And that’s really hard. If someone’s making a really specific comment, they’ll say, you know, ‘My care co-ordinator has failed to come and see me for 3 months on the trot’. We can’t say, ‘OK, well let’s, you know . . . see all care co-ordinators and say, ‘What’s going on here?’. That’s quite difficult because you really want to help improve that one individual’s care. But you don’t know who they are. So sometimes it . . . can feel to the staff a bit meaningless perhaps when they’re asked to respond to this. You know, you have to give a generic answer. I think that’s quite hard.
MU11, patient experience lead, site 2 hospital
Here, the patient experience lead of the site 2 hospital expresses feelings of powerlessness and frustration. The patient experience lead maintains that the anonymity of the feedback makes it difficult to help improve an individual’s care. In this situation, staff can feel helpless:
What you tend to get with anonymous feedback is everyone in the [staff]room who really cares thinks it’s [about] them [laughs]. And they’ll then do the kind of existential, ‘Oh god, was it me? I did this,’ and you’re sitting there thinking, ‘No, it wasn’t you . . . ’ but that’s the problem with anonymous. In all my experience, anonymous feedback doesn’t work. Because it just leaves everyone not being able to fix it. We’re not here just to pay our mortgages. We come in every day to help people. So, if we get given something that we can do nothing about, it is really, really disempowering.
SITE 2-12, head of performance, Improving Access to Psychological Therapies, site 2 hospital
In this case, the head of performance for one of the trust’s services is adamant that the anonymisation of online comments makes it very difficult for staff to do anything about the issue being publicised, and is keen to reiterate the point that NHS staff are there to make a positive difference to patients and NHS services. We note the interesting contrast between his view on how disempowering anonymous feedback is for his staff and the empowerment commonly thought to accrue to patients providing that feedback.
Interviewees felt that, when posting anonymously, the patient is, in most cases, in control of what information they do or do not reveal about themselves. The researcher was told that even if the patient chooses to reveal identifying information about themselves in a public space online, according to data protection and confidentiality laws, the trust cannot respond in a way that might confirm or deny the identity of the patient. It was thus said that, under the current law, even when patients reveal personal information about themselves online, NHS staff are bound by a legal duty of confidence to protect patients’ personal information. The resulting perception among staff responsible for responding to online patient feedback was that they are able to give only very generic responses to elaborate accounts of care and patient experience:
Even if they’ve told you who they are, you can’t respond in a way because it’s a public forum, that it’s going to be letting any clinical information out . . . You know, so you do have to be very, very conscious of that. [um] And so your responses will often be . . . you’ll beat around the bush an awful lot about saying, ‘Oh, I’m sorry you’ve had this problem when you came in’.
TAU05, associate director, site 3 hospital
The extent to which legal concerns about anonymity constrained detailed and empathetic responses is unclear. We came across examples of full responses to feedback despite anonymity. Nonetheless, anonymity in posting was often cited as one reason why responses to negative feedback by representatives of the trust in question – on public sites such as NHS Choices, iWantGreatCare and Care Opinion – are often quite general and generic. Typically, the response includes an expression of gratitude for taking the time to write the feedback, an apology for the bad experience and then a request or invitation to contact the PALS team via e-mail or telephone, so that the trust can investigate further.
In some cases, even a short reply is not felt to be possible if the person’s identity and confidentiality are to be properly protected. In a meeting with the communications team at site 1 hospital, the researcher was told of the difficulties in knowing what to do if a patient repeatedly revealed identifying information about themselves online. In one particular case, a patient with a serious mental illness had tweeted multiple times about the services that they had received, with tweets that had become increasingly aggressive in tone. The trust was concerned about their lack of recourse in this situation. They did not want to inadvertently provoke the patient or make their symptoms worse by drawing them into an exchange. Yet, they did not want to acknowledge details about the patient’s care, as that would have been a breach of confidentiality. In this particular situation, they told the researcher that the best thing to do was to ignore the tweets, as they had a duty of care not to worsen their patients’ conditions. The important effect here is how the trust’s responsibility to patients can impact on their response-ability – the ways in which they can and cannot respond in the light of anonymity and confidentiality issues. More infrastructural issues are discussed in the next section.
Response-ability
Text in this section is reproduced in part from Dudhwala and colleagues. 125 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc-nd/4.0/).
On the whole, from the interviews, it is clear that online patient feedback is valued as a mechanism for gathering more information about patients’ experiences and their feedback about the services received. However, it is also clear that in many teams there is a sense that there are not enough resources to do something about the feedback that is being collected. As the patient experience lead of site 2 hospital told the researcher, ‘so, I think online feedback is useful in that way, but then you sort of have to ensure that you’ve got the resources and the teams available to be looking at it and say, “OK, you know, what’s the bigger picture here?”’. This extends the notion of responsibility discussed earlier, so that it is not just about bearing the responsibility of protecting patients’ confidentiality and respecting anonymity, but also the responsibility to do something about the feedback once it has been received; part of this responsibility is ensuring that there are enough of the right resources to deal with what comes through:
Within the context of this there’s no point in putting something in place where you then don’t have the resource to manage it . . . but your heart sinks because actually, I’ve got no resource [to do it] . . . and that’s just one more job for me . . . But you know, there’s no resource within the NHS; they keep just loading us with more and more and more little tasks to do, which are all noble and they’re not wrong things, but there’s no resource within . . . oh, I’ve got to go.
GAM14, head nurse, site 4 hospital
Once again, we see the tension between feeling that feedback is a good thing, useful and with the potential to improve the services and care being offered, and not having the capacity to use that feedback properly.
Elsewhere, we have characterised a dichotomy between sanctioned, solicited and sought (‘SSS’) and unsanctioned, unsolicited and unsought (‘UUU’) feedback: from an institutional perspective, there is a difference between feedback that is sanctioned (obtained through a medium that is approved by the trust as an official feedback channel), solicited (consistently asked for from patients or carers) and sought (actively searched for and used); and feedback that is unsanctioned (not officially approved), unsolicited (not asked for) and unsought (not searched for). 125 Although there is an overlap between SSS and UUU feedback, a vast amount of feedback left online is largely unseen by trusts, either because they are not looking in those places or because they do not think of those avenues as feedback channels. In many conversations with staff at the four trusts, examples were cited of websites on which patients left feedback but of which a particular trust was unaware or which it did not actively visit to seek feedback.
The almost ‘real-time’ dialogue that online patient feedback allows was welcomed by the trusts that formed part of our study, as it gave them the ‘ability to understand in real time what somebody’s experience is like’, as one member of site 1 hospital’s board put it. We got the sense that trusts felt that the immediate nature of online feedback can lead to quick and meaningful service improvement. A story that we heard on many occasions at site 1 hospital, and that has become part of the folklore of this trust, involved a patient who used Twitter to complain about how bad the food was in the particular ward in which they were receiving treatment. It is said that the tweet was picked up by the facilities and service manager at the time, who went straight down to that ward and tried the food for himself. He agreed that the food was of substandard quality, thanked the patient for flagging it up and then changed the catering company that was being used.
Similarly, the patient experience lead at site 3 hospital told the researcher about a comment made by the daughter of an elderly patient who had recently been admitted to the hospital. The daughter told how she was used to talking to her mother every day, but this communication had stopped after the mother had been admitted:
There was a comment from a lady who [um] had put a comment on the website, which the comments would come through in real time on e-mail, and it was basically something like, ‘My mother who’s, I don’t know, 85, has recently been admitted to [site 3] and I live in [X], which is a hundred miles, or whatever miles away, and I’m used to talking to her every night on the phone; I can’t any more because she’s in hospital’. And I thought, ‘Why can’t you talk to your mum because she’s in hospital?’. That’s just, you know there’s no technical reason. So, I said to our telecoms guy, ‘[um] Why don’t we go and get the wards these . . . phones that you can move around’, you know. So, in fact he did it. I said to him, ‘You can go and buy them at the weekend in Argos, or somewhere’. . . . So, by the Monday we’d set it up for her, so she was able to phone her mum.
TAU10, patient experience lead, site 3 community hospital
These examples illustrate the ‘real-time’ benefits and rapid changes that trusts told us online patient feedback can bring about. These are examples in which online patient feedback allowed the trusts to be response-able. There were, however, other instances when the trusts simply did not have enough resources to manage the amount of feedback or the multiple channels through which feedback could be left. We thus see two aspects of response-ability. On the one hand, trusts felt that their ability to respond to and answer feedback was constrained by a lack of resources and, in any case, often prevented by anonymity. On the other hand, trusts felt that the lack of resources limited their ability to act on feedback.
Responsivity
We have discussed the responsibility that trusts feel they have to people who feed back, and also the challenge trusts face in being able to respond at all, given limited resources to deal with the volume of feedback, the multiple feedback channels and the need for the right infrastructure to handle them. Our fieldwork suggests that there is also, however, a changing tide being felt in terms of the pressure to respond in a timely and visible manner, which we have referred to as the responsivity of the trusts.
Online feedback mechanisms extend the public reach of feedback in terms of who can potentially access it. Thousands of people can now come across an item of feedback and track the response left, if any, by the trust. Consequently, we found that trusts are feeling the pressure not only to improve their services as a result of the feedback, but also to leave a public account of having done so. We are once again reminded of the dictum that social actions ‘are not only done, they are done so that they can be seen to have been done’. 139 In practice, this means that it is no longer enough to make changes or learn as a result of feedback: this must now be done in a timely manner and followed up via the same platform on which the feedback was left, so that there is a publicly visible account of having done so. In a recent study of effective responses to online feedback, Baines and colleagues132 found that a response given within 7 days was deemed acceptable by the patients who were part of their research, although 3 days was the most desirable. The authors claim that, beyond this time period, there are important implications for the ‘reputation, perceived responsiveness and sensitivity of organisations concerned’. 132
In site 3 hospital, this new pressure to be visibly responsive was clearly apparent. One of the patient experience leads told of a frustrating experience the trust had had with NHS Choices. NHS Choices ‘tags’ a trust with a certain code when feedback is left about them, so that the trust is alerted as and when new feedback is left. This makes the trust staff aware that there is feedback on the site about them, so that they can access it and respond if they wish. The NHS Choices website developed an error in the codes ascribed to feedback being left about site 3 hospital. This resulted in hundreds of items of feedback being left without the trust being made aware of them, so the trust could not respond. The resulting backlog of patient feedback was of great concern to the patient experience team at the site 3 hospital, both because they worried that their patients would think that they did not care and, more especially, as a reputational issue, because others might come to think that site 3 hospital did not consider patient feedback important. The researcher was told that when the issue was eventually resolved and the correct codes were applied to the feedback, the trust went back to each and every piece of feedback that was left and wrote ‘due to a technical issue we haven’t been able to respond to your feedback, do you still need help with this?’. This account signals a real sense of obligation to respond to these patients immediately and to apologise for the delay, both so that any patients still needing help would get it and also so that hundreds of visibly unanswered messages would not be left on the site.
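The tagging arrangement described above is, in essence, a routing table: each item of feedback carries a code identifying the trust it concerns, and an alerting service uses that code to notify the relevant patient experience team. The sketch below is a minimal illustration of that idea and of the failure mode the trust experienced; it is not the NHS Choices implementation, and all names in it (route_feedback, TRUST_CONTACTS and so on) are hypothetical.

```python
# Minimal sketch of tag-based feedback alerting, assuming a hypothetical
# routing service (this is not the NHS Choices implementation). Each
# feedback item carries a trust code; the router looks the code up and
# alerts that trust's patient experience team.

from dataclasses import dataclass

# Hypothetical registry mapping trust codes to team inboxes.
TRUST_CONTACTS = {
    "SITE1": "feedback.team@site1.example.nhs.uk",
    "SITE3": "patient.experience@site3.example.nhs.uk",
}

@dataclass
class FeedbackItem:
    trust_code: str
    text: str

unrouted = []  # the silent backlog described above

def route_feedback(item: FeedbackItem) -> None:
    """Alert the tagged trust; a mis-coded item accumulates invisibly."""
    inbox = TRUST_CONTACTS.get(item.trust_code)
    if inbox is None:
        # Failure mode: a wrong code raises no error that the trust can
        # see, so nobody knows that a response is owed.
        unrouted.append(item)
        return
    print(f"Alert sent to {inbox}: {item.text[:40]}...")

route_feedback(FeedbackItem("SITE3", "The night staff were wonderful."))
route_feedback(FeedbackItem("S1TE3", "Still waiting to hear about my scan."))  # mis-coded
print(f"{len(unrouted)} item(s) never reached a trust")
```

The point the sketch makes is that a trust’s responsivity depends on metadata it does not control: a single wrong code upstream produces a silently growing set of visibly unanswered posts downstream.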
Responding to online patient feedback serves multiple purposes. Alongside helping the patient with any problems that they had, or apologising for situations that led to suboptimal experiences of care, there is also an imperative to respond so that imagined viewers of the feedback can see that the trust has responded and, consequently, that the trust is taking patients’ comments and feedback seriously. In effect, this creates specific types of patients for whom the trusts are catering. In the first instance, the patient takes on the imagined persona of ‘service user’, such that complaints are used to improve the service so as to provide better outcomes and a resolution for the patient. In the second instance, the patient takes on the imagined persona of a ‘deliberating customer’ who uses online patient feedback as a way to build up a picture of a trust and decide whether or not it is the type of trust in which they would want to be treated.
Discussion
Online patient feedback renders opinions and ratings about the trust visible to all with access to the internet, in a largely unsynthesised way and without the categorisation deemed relevant by some ‘expert’ audit bodies. This means that the potential audience for such feedback includes a wide range of other health-care providers, health-care professionals, patients, friends, families, potential patients, media and so on. In this chapter, we have argued that we need to add the dimensions of response-ability and responsivity to more traditional understandings of what it means to be a responsible organisation that is accountable and transparent.
Added to the responsibilities of health-care organisations to provide care and a good-quality service to their patients, online feedback generates a new responsibility of protecting anonymity and preventing breaches of patient confidentiality, even in those instances in which the patient has chosen to reveal details about themselves online. However, protecting anonymity can mean that trusts are unable to respond as effectively to feedback; even when a patient has revealed details of their care online, trusts and their health-care staff remain unable to respond fully for fear of breaching that patient’s confidentiality by confirming episodes of their care. In our case study sites, although there were many instances when online patient feedback had enabled them to make swift changes, we also found that this particular dilemma over anonymity and confidentiality sometimes had the effect of leaving trust staff feeling disempowered by this patient-empowering technology. There were, thus, instances when trusts were made aware of situations that could be improved, but had no means of making these changes. This challenge of anonymity was also a prominent finding in the free-text responses in our health professionals’ survey described in Chapter 4.
We found that trusts are now also having to change work practices in order to respond to the many different channels through which feedback is received online and the increased amount of feedback that these bring. The increased amount and visibility of online patient feedback seems to be putting more pressure on trusts to respond. Although in some cases we saw examples of how this led to positive changes to services and patient experience in real time, as in the examples of the daughter being able to phone her mother in hospital and of the patient whose ward food was substandard, in other cases staff were left feeling that this was just one more task to add to their already long list.
Finally, we have shown how the new sorts of accountability created by online patient feedback necessitate not only responding to the feedback (as Coulter and colleagues142 argue), but also responding visibly and within a certain time frame, and encourage the view that it is unethical even to ask for patient feedback if nothing will then be done about it. Importantly, it is no longer enough just to make changes based on feedback. Trusts are now expected to publicly account for the changes that they have made by responding to the person who gave feedback and stating how they have rectified the issue, or have learned from the feedback, or have planned changes as a result. This phenomenon of public dialogue and needing to close the feedback loop is also prominent in online feedback sites such as Care Opinion, which includes a dedicated space for trusts to respond and dedicated symbols to indicate whether or not a change has been made as a result of the feedback. This is, we found, as much to do with having a visible trace of communication and response, with reputation management and showing other potential readers that the trust is a caring trust that listens to its patients and acts accordingly, as it is to do with real improvement.
Strengths and weaknesses of the study
To the best of our knowledge, this is the first study to take an organisational perspective on the NHS approach to the new phenomenon of online patient feedback. The focused ethnographic method has allowed us to collect and explore a rich data set, taking account of multiple perspectives and generating new concepts that help to explain the processes and practices in this area. In particular, we have shown how this new technology (feedback) and the responses to it are constructed and performed by the organisation. Owing to resource constraints, we had a relatively small sample of four trust sites to study. Our selection process was designed to include a variety of sites, but we recognise that not focusing on a particular type of trust risks having too much variation in the findings. Having said that, the key thematic findings held across all four sites. This was cross-sectional work, with fieldwork occurring in each site for a short period of 6–10 weeks. A more longitudinal study over several months or years would probably reveal more about the processes and practices of the trusts in relation to feedback.
Conclusion
The potentially infinite and constant public auditing of health-care organisations by patients provokes new practices of accountability. This challenges traditional understandings of what it means to be a responsible organisation within the NHS by adding the imperatives of response-ability and responsivity. Online patient feedback brings with it new ideals for organisations to aspire to. Organisations such as NHS trusts are now required not only to put structures in place to be able to respond to the feedback that is left, but also to respond in a timely manner that is visible for all to see. In addition to the expectation to be responsible organisations, in the face of online patient feedback trusts must now also become ‘response-able’ and ‘responsive’.
Chapter 7 Patient and public involvement
Introduction
Given that we were studying the use of online feedback from patients and carers to improve NHS services, it was important that we engaged fully with patients and carers in designing this project. PPI shaped what we did, through dedicated events and through ongoing consultation from the application process onwards.
The lead for our PPI work was the lay co-investigator (AI), an active ‘e-patient’, a regular blogger and frequent user of social media to relay her experiences as a patient and patient leader. She was involved from the very beginning, contributing to the development and design of the funding proposal, and throughout the programme of work. We also had a second lay representative (DF) who sat on the SSC and who convened and chaired the PCPRG, an independent group that could ‘hold us to account’ and with which we could engage for input into issues such as study design, interpretation of findings and advice on our dissemination and ‘toolkit’. Douglas Findlay has experience as a carer and as an NHS patient, and works for Healthwatch, the independent national champion for people who use health and social care services. Both Anya de Iongh and Douglas Findlay have wide experience in patient and public involvement, with third-sector organisations and with NIHR, and have interests in NHS quality improvement.
In this chapter, we describe how PPI contributed to all stages of the project, from proposal development to dissemination. It should be noted that the original proposal was developed for a submission to the NIHR Programme Grants for Applied Research funding stream. Although not funded, many elements of the proposal were highly scored by reviewers and the team was encouraged to resubmit. The lead investigator then refocused the proposal for an HSDR application. This led to an unusually long lead time between the development of the first submission and the start date of the funded project, a gap further extended by co-investigator maternity leave.
Proposal development
The research team worked closely with local research design service support to involve patients and the public. The application was informed by four engagement activities:
- A stakeholder engagement event with patient groups and professional bodies informed much of the initial thinking. Participants agreed that the project ‘gave patients a voice’. There was enthusiasm for the programme to have practical benefit and not just be ‘academic’. There was also a general feeling that health services should be seeking to derive as much benefit from digital opportunities as other industries do.
- A series of one-to-one meetings with volunteer members of the public, inviting comment on the proposed research questions and priorities.
- Attendance at local PPI events, at which we identified two lay PPI collaborators (AI and another who did not have the capacity to continue once the project was funded, owing to the protracted application process).
- These two lay PPI collaborators co-designed and co-facilitated two workshops, which invited members of the general public to comment on the application (one held in Oxford, one in London). Electronic feedback was obtained from people who expressed interest but who were unable to attend.
The main points raised by Anya de Iongh in the original proposal included:
- The importance of including feedback from carers, as well as feedback from patients.
- Consideration of why individuals prefer to provide feedback online compared with other forms of feedback (e.g. paper based). Are there issues that are specific to online feedback?
- Consideration of the impact of feedback on health-care professionals, even when it does not relate to care or services directly provided at their trust or organisation.
- Sharing, as a potential resource, online social media communities that often discuss these topics.
- The importance of distinguishing between what is being fed back on (i.e. treatment, services, professionals or a combination of the three), as, experientially, they can be interchangeable.
- The importance of capturing experiences of mental health services when looking at primary and secondary care services in case studies.
- Tweeting from the workshop, Anya de Iongh sourced further comments from several patients and carers that highlighted the importance of feedback, specifically its use to drive improvements and the need to maximise the use of currently available channels.
Recruitment of the Patients, Carers and Public Reference Group
The PCPRG advertisement, which was co-created by Anya de Iongh, Douglas Findlay and the research team, requested individuals who had recent personal or family experience of hospital inpatient care on which they had provided online feedback, or people who had been involved in trying to improve hospital services. We advertised for lay members through a range of avenues, in order to attract as diverse as possible a mix in terms of age, ethnicity, geography, health condition and type of health-care experience. The opportunity was promoted via the University of Oxford, the Nuffield Department of Primary Care Health Sciences PPI co-ordinator, Patients Active in Research, Twitter and Douglas Findlay’s contacts at Healthwatch.
Eleven expressions of interest were received and the researchers worked with Anya de Iongh and Douglas Findlay to appoint a PCPRG of seven members (including DF). The appointment decisions were based on the range of perspectives we wanted from the panel and the quality of the applications in terms of experiences and insights. The panel was diverse in terms of sex, ethnicity and geography, but, on reflection, it would have benefited from some younger members who were more familiar with online technologies and platforms. An older demographic is not unusual in PPI panels. The motivation to take part in the project was mixed. Although some members genuinely had good experiences and wanted to give something back, others saw the project as more about NHS complaints. Discussing the project with these individuals highlighted the confusion between feedback and formal complaints, and how important the differences between the two are. This remained an ongoing challenge in subsequent panel discussions.
Patients, Carers and Public Reference Group involvement throughout the study
The PCPRG met for the first time in July 2016. The research team gave the group a briefing sheet in advance (co-produced with AI and DF), which described the programme in more detail. At the start of the meeting, the chairperson, Douglas Findlay, covered the role of the panel and the terms of reference.
Researchers provided an overview of the five interlinked projects. Comments were specifically sought on:
- the project 2 draft public survey questions
- the project 3 qualitative interview guide with patients and carers.
Further detail was sent by e-mail after the meeting.
For project 2, the group suggested that the survey ask about (1) online behaviour (such as use of social media and reviewing other services online, for example on TripAdvisor); (2) how people decide which websites to trust; and (3) participants’ level of English language comprehension, which can affect online engagement. Subsequent piloting work used cognitive interviewing with other members of the public, who were asked to ‘talk through’ their thought processes as they completed the draft. This work refined the phrasing and clarity of the questions to improve understanding and reduce ambiguity (e.g. by changing the question wording or the response code options).
For project 3, the group discussed general feedback comments as distinct from complaints and, although there was some difference of opinion, there was a consensus that, although complaints are important and difficult for the individuals involved, the more general feedback comments are very important as well. Specifically, the PCPRG gave their views on the types of questions that they personally would like to be asked and on question areas that they thought had been omitted. In addition, Anya de Iongh reviewed recruitment material (recruitment poster and invitation letter).
The PCPRG was invited to the first INQUIRE symposium, which was held in Oxford in December 2016. There they received an update on the INQUIRE programme of work and were able to feed back to us. The PCPRG chairperson, Douglas Findlay, gave a presentation entitled ‘online patient and carer feedback, a personal perspective’. The event was also attended by academics, policy-makers, senior managers from NHS trusts, people actively involved in patient feedback platforms and experts in social policy (on patient participation in health care and on public accountability in the NHS). In order to involve other people who used services or were carers but who were not able to attend on the day, the presentations were all live tweeted on the INQUIRE UK Twitter account. Anya de Iongh curated the Twitter activity and this was uploaded onto the project website. Douglas Findlay had separate conversations with each of the panel members in the days following the workshop, to seek their feedback on the programme as a whole.
As requested by the panel, an update was provided in April 2017 to share progress since the previous meeting in December and to support preparation for the next meeting. This directly addressed feedback about ensuring that the panel’s time together had the greatest impact for members. The research team shared the aims of the next meeting (June 2017) in advance and provided a summary of the key findings per project. A series of short questions was also provided to stimulate thinking in advance of the meeting. Douglas Findlay drafted a visual diagram of the different components of the project to help support the panel.
At the June 2017 meeting, the group discussed what the findings of project 2 (public survey) and project 3 (patient interviews) might mean and helped to interpret them from a patient’s perspective. The reference group was asked to suggest recruitment routes to increase the diversity of the sample. Additional research participants were recruited as a result. Subsequent discussion focused on what might be important to include in the toolkit/online resource and how to present it to make it relevant and useful for NHS organisations.
Formal PCPRG involvement was more limited in projects 1, 4 and 5. In these projects, the only PCPRG inputs were discussion of the findings and comments on the respective content in the toolkit. However, as outlined in Broader public and patient involvement (Anya de Iongh and Douglas Findlay), in addition to this PCPRG element, both Anya de Iongh and Douglas Findlay contributed to these projects as individual lay experts. PCPRG participation was instead concentrated on those projects where there was a clear role for the group.
During 2017, three of the seven PCPRG members withdrew for personal reasons. (Note that the programme ran 11 months longer than originally intended, owing to a delayed start to project 5.) The PCPRG was welcomed to the second symposium in June 2017, an important dissemination event and a chance for them to hear about the findings of the study. The chairperson, Douglas Findlay, again gave a presentation on his perspectives on the research.
The remaining PCPRG members were consulted on the online resource, the lay summary of the monograph and dissemination. Virtual meetings and electronic communication were favoured over a single face-to-face event. Douglas Findlay also gave a short video interview for use in the resource.
Broader public and patient involvement (Anya de Iongh and Douglas Findlay)
The programme benefited throughout from the insights of Anya de Iongh (as co-investigator) and Douglas Findlay (as SSC member), who were involved in discussions on all projects at full-team and SSC meetings, respectively, commenting critically on the findings and their implications. In addition, they did the following:
- Anya de Iongh reviewed the text for the INQUIRE website and contributed to the PPI section. Both Anya de Iongh and Douglas Findlay wrote and contributed to blogs published there.
- Anya de Iongh and Douglas Findlay contributed to a briefing sheet intended for policy-makers and also to update the PCPRG.
As mentioned in Patients, Carers and Public Reference Group involvement throughout the study, in projects 1, 4 and 5, in which there was limited involvement of the PCPRG, the PPI input came from Anya de Iongh and Douglas Findlay as individuals.
For project 4, Anya de Iongh reviewed a draft set of questions to be included in the surveys of doctors and nurses. For example, she highlighted the importance of making clear that the feedback referred to could include feedback from carers as well as from patients.
For project 5, both Anya de Iongh and Douglas Findlay contributed to discussions on the NHS trusts to be approached as case study sites.
Challenges faced and lessons learned
Anya de Iongh identified the following challenges for meaningful and appropriate PPI during the programme, and suggested lessons that may be learned:
- Feedback is a general term that captures a spectrum of activity, so clearly defining it, and distinguishing it from interlinked but distinct activities such as complaints, is a nuanced but critical task.
- Everyone involved in a PPI capacity needs to be supported to understand fully both the project and the nature of the feedback required by the research team. Clear terms of reference are necessary to ensure that lay advisors feel empowered to be proactive.
- The importance of regular updates should not be underestimated, particularly when the ‘slow’ pace of research means that there is necessarily a period in which there are few involvement opportunities.
- Challenges of working remotely exist; if overcome, these would generate greater opportunities to review material.
- Challenges also exist when working electronically (e.g. missed communications).
- Maintaining PPI engagement becomes more difficult when delays are encountered. Even without project extensions, people’s lives and other commitments change from year to year.
The significant commitment required of the patient and public representatives working on a project like this should be noted. For Anya de Iongh, this was a 5-year commitment from the first time the ideas were discussed with her to the completion of this report. She had to fit this in around her other roles, as well as her health needs. Furthermore, during the course of this project she has moved to a new role in the NHS and become patient editor at the British Medical Journal.
Chapter 8 Conclusions and implications
Summary of main findings
Health systems are under increasing pressure to save money and improve quality. There are hopes that, over the next decade, digital health and, specifically, the internet, harnessed as a health service tool, can address these aims by shaping individuals’ service use and health perceptions. At the same time, as the NHS embraces a responsive, patient-centred, listening culture, it is important that it is listening, interpreting and responding to the right signals. In line with many other sectors, these signals are increasingly coming from online user-generated content, as patients use the internet to comment on their experiences of health and care services.
In this context, we set out to examine this emergent phenomenon that was being harnessed in other sectors, such as travel and retail, with the intention of providing the NHS with the evidence required to understand and use online patient feedback.
Our first objective was to identify the current practice, state-of-the-art practice and future challenges for online patient feedback, and to determine the implications for the NHS. This was primarily achieved through the stakeholder and literature review work undertaken in the first project, described in Chapter 2. This was then enhanced by the findings of our own research, described in Chapters 3–6. The key findings from our initial evidence synthesis and consultation were that although research into online feedback has grown in recent years, it remains limited in quality and quantity, and lags behind the practice and the issues of interest to stakeholders. The evidence is predominantly from descriptive and small-scale studies. We can conclude that a minority of patients in the various developed countries that have been studied are using these sites to choose health professionals and to gauge public opinion about them, and that use appears to be increasing. There are examples of online feedback being used to monitor health services, for example in the Netherlands, demonstrating the potential for use by care quality regulators. Although the literature review (and our own study described in Chapter 4) showed that many health-care professionals remain somewhat sceptical about the validity and reliability of online patient feedback (and doctors more so than other health-care professionals), and worry about feedback being overly negative, previous work consistently shows that most feedback is positive. Our own projects showed that the main intention for many people who leave online feedback and comments is to contribute positively to the NHS, rather than to criticise.
Our second objective was to understand what online feedback from patients represents, who is excluded and with what consequences. As stated above, online feedback is generally positive. Patients ‘care for care’, wanting to help improve the NHS, and they also report wanting to engage in constructive conversations about their care. There seem to be several ‘disconnects’ here: patients are increasingly using online feedback, yet are rarely asked to provide it. Health professionals (especially doctors) are unsure about the value of the feedback, believe it to be mostly negative and rarely solicit it. Patients who are not invited to contribute through specific channels are often challenged by the fragmented landscape of feedback they face. Our survey findings suggest that although providing online feedback about health care is still an unusual activity for most, > 40% of the internet-using population read this feedback, indicating that this user-generated content about health services has the potential to have wide influence. People who read and write online feedback are not representative of the general population: those who read feedback are more likely to be younger, female, on a higher income, experiencing a health condition, urban-dwelling and more frequent users of the internet. For providing feedback, the only significant association was more frequent internet use. This needs to be taken into account when using online feedback as a way to improve and monitor health-care services.
Our third objective was to understand the factors that influence (which we previously referred to as potential barriers to and facilitators of) the use of online patient feedback by NHS staff and organisations, and the organisational capacity required to combine, interpret and act on patient experience data. As already stated above, despite patients wanting to provide constructive feedback and engage in conversations with the service, we uncovered some caution among health-care professionals about the usefulness and tone of online patient feedback. At an organisational level, there is a further ‘disconnect’: patients use a wide range of online settings (ratings websites, social media, etc.) to make comments and express their desire for responses and dialogue, whereas organisations lack the resources or awareness to monitor and respond to feedback, or monitor only certain sanctioned channels (perhaps literally disconnected from the routes used). Organisations are also challenged by the anonymity of much internet commentary; this ‘anonymity paradox’84 may empower patients who wish to provide feedback without fear of consequence, yet disempower organisations and professionals if they cannot locate the source of the problem and thus take action. In our ethnographic work, we showed that NHS trusts need clarity over where the responsibility for dealing with online patient feedback lies, as well as being response-able (having adequate resources) and responsive in providing timely and visible responses, which both deal with the issue in a transparent way and demonstrate to others that the organisation is one that can address such issues (reputation management).
We also had a fourth ‘knowledge translation’ objective. Our full list of outputs is provided in Appendix 19. In addition to academic papers and talks, as well as a briefing paper for policy-makers, seminars for CQC and NHS Digital, two INQUIRE workshops, a website (inquireuk.org) and Twitter account (@InquireUK), we used the study findings to develop an online resource for NHS organisations to encourage appropriate use of online feedback in combination with other patient experience data. The development of this resource was informed by a meeting of a learning set, bringing together several projects funded by NIHR HSDR under the same call. Several projects were planning some kind of ‘toolkit’ and our plans were shaped by the discussion at this learning set, including a presentation from a doctoral student involved in one of the other projects, on her work on the value of toolkits. 143 In particular, we worked with HSDR project 14/156/06 (hosted within the same university department), which was also developing an online resource. 144 Together with this other project, we concluded that although ‘toolkits’ are increasingly used for dissemination, both researchers and funders appear to have reservations about them, not least relating to the word ‘toolkit’ itself. The key messages that both of our projects took from these initial discussions were that these outputs were likely to be more useful if designed iteratively, produced with professional design and marketing input, and disseminated by a trusted ‘champion’. With the agreement of our SSC, and following the decision of HSDR project 14/156/06, we commissioned the Point of Care Foundation to produce and host this resource. The Point of Care Foundation is well established and known for hosting high-quality, useful resources in the area of patient experience. It also has the support and involvement of NHS England. We therefore worked iteratively with the Point of Care Foundation over several months to design and populate this online resource, with several rounds of comment from both the research team and our PPI representatives. The final resource provides a summary of our findings written for both health professional and public audiences, together with practical messages for NHS staff seeking to make best use of online feedback. This was published on the Point of Care Foundation website in 2019 (URL: www.pointofcarefoundation.org.uk/resource/using-online-patient-feedback/; accessed 1 October 2019).
Over and above these objectives, our overarching aim was to improve NHS capability to interpret online feedback from patients and the public, and to understand if and how to act on this to improve services. We believe that our findings will inform professionals and the organisations they work for about the potential of online feedback, its limitations and the context in which it should be interpreted. In particular, we feel that there are key overarching messages for the NHS, in terms of the frequency of use of online feedback by patients, their constructive motivations for providing feedback and their desire to engage in conversations with the service, in order to support service improvement. Other key messages for the NHS are that professionals and organisations demonstrate some caution and a lack of preparedness to harness online patient feedback. We discuss the implications for the NHS in more detail in Discussion. Given the limited evidence base that existed before our study, we were not able to fully address the issues of whether or not and how to act on feedback, although we believe that we have indirectly addressed these by filling evidence gaps (such as about who comments and why), which needed to be answered before services are able to consider the if and how questions. In terms of improving NHS capability, we believe that our online resource, hosted by the Point of Care Foundation, provides the key lessons from our work in a succinct and user-friendly format, designed for use by NHS practitioners and others.
The multimethod approach
We used a variety of methods across five distinct projects. This was a multimethod study rather than a mixed-methods one. In other words, we used multiple approaches to studying different aspects of online patient feedback, but we were not doing so as part of a single research study, whereby the different methods would be integrated in the analysis to triangulate the conclusions. Instead, this was a portfolio of studies, each answering different questions that all related to our objectives. There are both ontological and epistemological differences between our approaches and within our research team. This is why each study is presented in a separate chapter and our descriptive overview of findings takes a pragmatic approach. 145 We feel that bringing differing lenses to investigating different aspects of this phenomenon is valuable without the need to reconcile the differences in philosophical orientation between studies. We do not intend the evidence from any one study to carry more weight than that from the others. Nevertheless, it is important to compare and contrast the empirical findings of our four primary studies and, in particular, the two studies that looked at the public and patient perspective, and the two studies examining the professional and organisational aspects. Our quantitative survey of the public was of a representative sample of the general population, whereas our interview study of patients and carers who provide feedback included participants with particular experience of online feedback. The quantitative study took a very broad definition of online feedback, including feedback on tests and treatments, as well as on services, and investigated the characteristics and motivations of users with relatively closed questions. Our qualitative study had a narrower brief and explored in-depth personal experiences of giving and using feedback on services. The former study was able to quantify the phenomenon and answer questions about who comments and how often, whereas the latter was able to explore more of the ‘why’ questions and unpick specific motivations around ‘caring for care’ and wanting conversations. Our quantitative surveys of professionals allowed some free-text responses, but mainly quantified their (limited) experience of online feedback and measured their attitudes with Likert scales. In contrast, the ethnographic method of project 5 (reported in Chapter 6) allowed us to undertake a rich exploration of the practices and processes within a few NHS trusts, and to draw emergent conclusions about responsibility and reputation. These themes were not included in the quantitative survey and were therefore not captured (with hindsight, it would have been useful to include questions about reputation in the survey of professionals). At the same time, the ethnographic work was not able to provide more generalisable measures of the attitudes and experience of trust staff. Perhaps unusually, we undertook the surveys before and during the interviews and ethnographic work, in part due to resource and recruitment constraints. It was therefore not possible to draw on the findings of the qualitative work to inform the design of the surveys.
Limitations
Within the previous chapters, we have discussed the strengths and weaknesses of each individual study. The limitations of the scoping review method include the absence of a standardised method for synthesis and the fact that we did not formally quality appraise each study, given our aim to provide a broad overview of current work. The findings are limited by the weaknesses in the extant literature base. The surveys of the public and of professionals are both limited by having a narrow range of response categories and not allowing deeper exploration of issues, although we attempted to mitigate this with the inclusion of a free-text category in the professional survey. The surveys identify only associations and not causation. The surveys are also limited by the quota-sampling approaches used, and by their susceptibility to response and recall biases. The public survey was a household survey conducted face to face and the sampled population was demographically representative of the general population. However, the professional survey work was conducted online and therefore restricted to professionals able and willing to complete an online questionnaire. The qualitative interview study purposively sampled participants who had something to say about their use of online feedback and we therefore did not get much qualitative data from individuals who choose to eschew online feedback or who are excluded from it. The ethnographic study was limited to four NHS trust sites, as each case study required 6–10 weeks of fieldwork data collection.
As regards the limitations of the project as a whole, this work provides only a snapshot of a fast-emerging phenomenon. Some of our findings may become out of date relatively quickly, as the practices and processes of online patient feedback develop further. Our primary data-collection methods were cross-sectional and not able to provide longitudinal insights into issues such as patients’ interactions with online feedback as they navigate the journey of a chronic illness, an organisation’s response over time to an emerging feedback story or the consequences of a change in organisational policy regarding feedback. Our public survey and interview participants were English speakers. From a digital inclusion perspective, future work needs to consider how online feedback can address the needs of people who do not speak English (and who often experience worse care). More generally, we did not have a focus on non-users and, although we can describe their characteristics based on survey findings, we did not explore their experiences in the interview work. In addition, in places we may have tended to consider the ‘digital user’ as generic and future work could consider nuanced differences between users or types of users. 1 We also did not have any experimental element to test whether or not using online patient feedback as a specific intervention could improve services. We discuss these, and other suggestions, as areas for further work in Recommendations for future research.
Discussion
There are several bodies of literature relevant to our work. With reference to the patient safety and quality improvement literature, our findings speak to the concept of ‘soft intelligence’. 146,147 As Dixon-Woods and colleagues146 and Martin and colleagues147,148 have argued, soft intelligence, including narratives and comments from patients, can complement harder metrics to offer valuable insights into the performance of health services, particularly in relation to concerns about quality and safety. However, there are challenges for health services in harnessing soft data in this way. Martin and colleagues147,148 showed that structural processes within NHS organisations can encourage the systematising of soft data: aggregating it and attending to the majority views to provide it with a ‘hard’ legitimacy, when its value may lie in outlying, exceptional reports. 147,148 This echoes our own findings, in which professionals expressed concern that online feedback may not be ‘representative’ in a quantitative sense. We believe that this misunderstands the potential value of such feedback, which tends to follow a skewed U-shaped distribution (with more positive comments than negative) and for which taking a numerical average is not helpful: people are more likely to feed back on the extraordinary than the routine. The rich data contained in online feedback can highlight specific areas for services to improve or learn from and, importantly, patients expect it to be read, engaged with and responded to. However, as we found, organisations do not necessarily have the processes in place to deal with this new source of qualitative intelligence. It is interesting to note the attention given to various ‘big data’ initiatives, which are harvesting large amounts of patient comment and seeking to derive meaning through automated text mining and sentiment analysis techniques. 149 Other work by the principal investigator and members of the team on text mining (as yet unpublished) suggests that these computerised linguistic techniques are unable to capture the nuance of patient feedback and risk categorising complex comments in simplistic, binary ways (often positive vs. negative), thus losing the value of their ‘softness’. Although some have argued for more scientific rigour in the collection and reporting of narrative feedback,150 this should not lead to a quantitative reductionism, as patients themselves have told us that they want their stories to be heard (not just counted).
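To make this concern concrete, the following is a minimal, hypothetical sketch of the kind of lexicon-based polarity scoring that underpins many simple sentiment analysis tools. It is not drawn from our own (unpublished) text-mining work, and the word lists and example comment are invented for illustration; it simply shows how collapsing a mixed comment into a single binary label can discard exactly the ‘soft’, actionable detail described above.

```python
# A deliberately naive, lexicon-based polarity scorer. The word lists and
# the example comment are invented for illustration only.
import re

POSITIVE = {"kind", "caring", "excellent", "thank"}
NEGATIVE = {"delay", "rude", "dirty", "ignored"}

def polarity(comment: str) -> str:
    """Collapse a whole comment into one binary label by counting lexicon hits."""
    words = re.findall(r"[a-z]+", comment.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

comment = ("The nurses were kind and caring, thank you, but the four-hour "
           "delay meant my father felt ignored.")
print(polarity(comment))
# -> 'positive': the actionable concern about the delay disappears once
#    the comment is reduced to a single label.
```

More sophisticated models reduce, but do not remove, this problem: any pipeline whose output is a single tone label per comment will struggle to surface the outlying, exceptional reports in which the value of soft intelligence may lie.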
Another strand of the quality improvement literature that is relevant to this work comprises studies examining why, despite the widespread policy attention it has received, there is little evidence for patient feedback driving health service improvement. 142,151 As we found, organisational preparedness and competency to deal with patient feedback are important; as Gkeredakis and colleagues152 point out, simply presenting NHS staff with data will not lead to change. Sheard and colleagues153,154 undertook a study funded under the same NIHR HSDR call and participated with us in shared learning sets. In their study, they identified barriers to the effective use of patient experience feedback (not just online feedback) at macro and micro levels. 153 At a macro level, they noted that, despite a fast-growing industry for NHS organisations to collect their patients’ experiences (including the Friends and Family Test), there was little resource or know-how within the organisation as to how to interpret the data and take action as a result, and data collection became an end in itself. A second parallel NIHR HSDR study, led by Louise Locock, who is a co-investigator on the present study, found that, within NHS trust organisations, survey data remained the most recognised and valued form of patient experience data and that online feedback was rarely visible or used. 144 Online patient feedback was seen as interesting, but staff often felt that they did not have organisational endorsement to engage with it. Also relevant here is the boundary between feedback and complaints: in many organisations these are dealt with separately, by different teams, and, as Locock and colleagues144 found, there can be pressure to turn something into a complaint so that action can be taken. These findings link to our ethnographic study and the new practices of responsibility, responsivity and response-ability that organisations need to address in order to harness online feedback. At the micro level, Sheard and colleagues153,154 observed that the majority of (largely positive) feedback is generic and difficult for staff to act on, in contrast with the smaller amount of negative feedback, which was usually more precise and actionable. However, staff sometimes questioned the validity of patient experience data and were unsure how to interact with it or, as Locock and colleagues144 found, lacked the confidence to do so. In line with our own findings, these and other studies have shown that, although many professionals are positive about the principles of patient-centred care, many are sceptical regarding the value of online feedback, especially about its ‘representativeness’. 123,150
It is also interesting to reflect on our findings in the light of previous work in the field of patient safety on ‘speaking up’ about perceived breakdowns in care. People often do not feel comfortable speaking up: for example, those who are older, those with worse overall health or mental health and those who do not speak English at home are less likely to do so. 155 However, when supported and encouraged to speak up, many patients are able and willing to do so. 156 Key factors influencing speaking up are clinician support and the subsequent staff or organisational response. 157–159 Sadly, the literature also has plenty of examples of patients trying to draw attention to safety concerns only for these to be dealt with dismissively or ignored, or for the diffusion of responsibility within the organisation to inhibit taking action. 160 Our finding about ‘conversations about care’ is hugely relevant here: patients do not want online feedback to be a one-way transaction; they want to be engaged in a dialogue. Our findings also suggest particular challenges for online feedback: the perceived problems with patient confidentiality and with anonymity (seen as a benefit by many advocates of online feedback), which were reported in our professional survey and ethnographic case studies as factors that limit providing a meaningful response.
In common with the literature on patient empowerment in the digital age, our findings on online patient feedback suggest that such empowerment remains constrained and context dependent:24,161,162 constrained both by the fact that only a minority of people either choose to or are able to participate, and by the attitudes of professionals and the practices of organisations. The ‘disconnects’ we outlined in Summary of main findings add further constraints: people want to have conversations about care, but dialogue with health services is lacking and is limited by the anonymity paradox; feedback is often motivated by a desire to help services improve, but professionals are wary that the content will be negative and rarely encourage it; and the feedback landscape is complex and hard to navigate, with little clarity about which channels to use. Aside from giving voice to health consumers and directly informing service improvement, another aim of patient feedback is to support the choices of other patients, but choice in the NHS is restricted: in practice, users have limited choices and little control over the services they use. 163
Yet the idea of a digitally sophisticated health consumer at the centre of a technology-enabled health system, actively engaged in managing their own care, which elsewhere we have characterised as the ‘digital health citizen’,164 has caught the imagination of policy-makers seeking to address the well-rehearsed challenges of twenty-first-century health care. As Lupton165 and others have pointed out, this echoes idealised neoliberal discourses in the field of health promotion and elsewhere that position lay people as able and willing to participate actively in maintaining their good health and to use services appropriately. In this discourse, little attention is paid to the ‘hidden’ extra work that may be created for both staff and patients, and the challenges this causes.
To make progress towards such a techno-utopian vision of a reconfigured digital health service centred on patients, a new social contract may be required, whereby both the health service and its digital health citizens have new rights and responsibilities. 164 Our findings show that providing online feedback is a minority, but growing, activity and that there is some professional scepticism and a lack of organisational preparedness. Staff may need new responsibilities around signposting, monitoring and acting on online feedback, with an impact on their workload. The use of online feedback is likely to increase, perhaps exponentially if it mirrors other sectors. Given that patients are motivated to provide feedback to bring about improvement, perhaps among their new rights and responsibilities patients should have the responsibility and opportunity to rate every care encounter, alongside the right to receive a response to every piece of feedback. As Adams10 identified in a pioneering early study in this area, this would shift the notion of the reflexive patient from an active health-care participant who makes informed choices to one who is perpetually engaged in a quality improvement dialogue. Perhaps this is the level of disruptive innovation needed to bring about truly transformative change in health services struggling to deliver high-quality, continuously improving, responsive, patient-centred care.
Implications for policy and practice
This is an emerging, but increasingly important, area for policy-makers and practitioners seeking to deliver patient-centred health services that make best use of technology. We are not suggesting that online patient feedback should replace all other forms of patient experience data, but we believe that it can provide a valuable and timely adjunct to existing sources. Online feedback data are not perfect: the people who engage with online feedback are not representative of the general population, and taking averages of the content is not helpful, because people tend to comment when they have something to say about a particularly good or bad experience, so the distribution of the tone of content tends to be skewed. However, those working in policy or practice roles need to take note that reading online feedback from other patients is becoming a more mainstream activity for many people and has considerable potential to influence others’ behaviour. The number of people who provide online feedback is increasing, but it remains a relatively infrequent activity. Many people provide feedback because they want to give praise or constructive commentary, and previous work confirms that the content of most online feedback is positive in tone. In our interviews, people described caring about the NHS and wanting to help it as part of a conversation, rather than a one-way street.
Despite the above, medical professionals are somewhat sceptical and cautious about the usefulness of online patient feedback. They are becoming aware that patients are providing feedback online, but have concerns about non-representativeness, negative comments and the anonymity of most online mechanisms. Nursing staff are less sceptical, although they share some of these reservations. It is perhaps not surprising, then, that very few patients report being encouraged to provide online feedback and that few doctors or nurses ask their patients to do so. NHS trusts have varying approaches to capturing and using online patient feedback. A significant finding from our work is that different online channels are seen as ‘sanctioned’ or ‘unsanctioned’ by the organisation, varying from trust to trust, and that, in general, only the sanctioned channels are monitored and responded to (even though patients are often unaware of this and use a multiplicity of routes to give feedback). Staff working within trusts are aware that the public visibility of online patient feedback makes responding important for reputation management as well as for service improvement. However, trust staff are often unsure where the responsibility to respond to online feedback lies. They also do not always have the resources to respond to feedback (either to provide direct responses or to act on the information provided), or they feel powerless to do so because anonymous (or anonymised) comments restrict what response can be made (a response that is also constrained by patient confidentiality concerns). Attention must also be paid to any unanticipated consequences of the emergence of online patient feedback, and to the ‘hidden’ work it may create for both patients and staff.
Recommendations for future research
Our findings open up some key questions for future research:
-
Intervention research could examine the extent to which online patient feedback can deliver service improvements in settings such as general practice, residential homes or secondary care. This should build on the quality improvement work of organisations such as Care Opinion and iWantGreatCare, and on recent academic work in this area, such as that by Baines and colleagues,132 which produced a response framework for online feedback. Using experimental study designs, online patient feedback could be used in its ‘raw’ form or as part of an intervention package, as in the studies of experience-based co-design in service improvement. 166,167 To support such intervention research, further work is needed to articulate the theory of change and logic model underpinning the direct and indirect links between online patient feedback and service improvement.
-
Further observational studies could take a longitudinal perspective to understand how staff and organisations deal with online patient feedback over time. This should consider both comments on acute care and comments provided by people with chronic conditions over the course of their contacts with the NHS. Studies should also examine differences between types of use and user (or non-user). Observational work would also be useful to determine what proportion of contacts with the health service (such as consultations) leads to an online comment being made, and what the predictors of this are (e.g. patient or service characteristics, or other factors). It would be interesting to observe how the health service can embrace feedback when it is under increasing resource pressure and many practitioners lack time even for direct care provision. It would also be important to look for any unanticipated consequences of online feedback at individual, team and organisational levels.
-
Policy research could examine how regulators might use online feedback as part of their inspections and quality control of organisations or individuals. This could be done in a cross-sectional way (e.g. what does the online feedback say about this organisation?) or in a predictive way (e.g. can monitoring online feedback predict when a quality problem is emerging in an organisation?). Policy work could also examine digital inclusion issues, for example whether and how online feedback can address the needs of non-English speakers.
-
Another area of focus in the future could be examining patient comments about particular treatments or diagnostics and, especially, whether or not these could be used for vigilance of safety issues. This is most likely to be useful for device or procedure vigilance, which are less well-developed areas than pharmacovigilance.
-
Finally, methodological work is needed to determine the best approaches to analysing comments so as to provide the most useful data to the NHS. In our literature review work, we found papers that used both traditional qualitative analysis and machine learning techniques, such as sentiment analysis. The latter approaches have attracted a lot of interest, as have all areas of ‘big data’ analysis, but previous work has tended to conclude that computational approaches to online patient feedback are generally too insensitive to the nuanced nature of many comments, and that simply categorising comments as positive, negative or neutral in tone is not always very helpful for services seeking actionable feedback on which to base improvements. Future research could determine how best to derive actionable comments from large amounts of online feedback; a minimal sketch of one possible direction follows this list.
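As one illustration of what ‘deriving actionable comments’ might mean in practice, the following is a minimal, hypothetical sketch of keyword-based aspect extraction: rather than assigning one tone label per comment, it surfaces the sentences that mention particular service aspects. The aspect lexicon and example comment are invented for illustration, and a real system would need far richer vocabularies and validation against qualitative analysis.

```python
# A minimal, hypothetical sketch of keyword-based aspect extraction, returning
# the sentences of a comment grouped under the service aspects they mention.
# The aspect lexicon and example comment are invented for illustration only.
import re
from collections import defaultdict

ASPECTS = {
    "staff attitude": {"kind", "rude", "caring", "dismissive"},
    "waiting times": {"wait", "delay", "queue", "hours"},
    "cleanliness": {"clean", "dirty", "hygiene"},
}

def extract_aspects(comment):
    """Group a comment's sentences under the aspects whose keywords they contain."""
    found = defaultdict(list)
    for sentence in re.split(r"(?<=[.!?])\s+", comment):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        for aspect, lexicon in ASPECTS.items():
            if words & lexicon:
                found[aspect].append(sentence.strip())
    return dict(found)

comment = ("Reception staff were rude when we arrived. "
           "We then faced a three-hour delay before triage. "
           "The ward itself was spotlessly clean.")
for aspect, sentences in extract_aspects(comment).items():
    print(f"{aspect}: {sentences}")
# staff attitude: ['Reception staff were rude when we arrived.']
# waiting times: ['We then faced a three-hour delay before triage.']
# cleanliness: ['The ward itself was spotlessly clean.']
```

The design point is that each extracted fragment remains readable in the patient’s own words, so a service can act directly on ‘rude reception staff’ or ‘three-hour delay before triage’, which a single positive/negative/neutral label cannot support.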
Acknowledgements
The authors would like to thank the following:
The participants in the separate studies that made up this research programme: the doctors and nurses who completed specific questionnaires; the AHPs who attended a focus group; the members of the public who completed a questionnaire survey; and the people who were interviewed about their experiences of using and posting online feedback.
The four participating NHS trusts, and their senior teams, for agreeing to take part and supporting the research.
The staff in a variety of roles at the four participating case study sites who variously gave interviews, attended focus groups, contributed documents and welcomed Farzana Dudhwala to meetings.
Joanna Goodrich, Bev Fitzsimons (Point of Care Foundation), Eleanor Stanley (Stories for Health), Simon Fairway (The Bureau) and Emma Hyde (University of Oxford) who worked with the principal investigator to develop the online resource on using internet ratings and experiences for service improvement.
Ruth Sanders, who video-recorded interviews with senior figures from NHS trusts, policy, academia and patient involvement for use within the online resource.
Vanessa Eade, who was instrumental in obtaining the necessary ethics and research governance approvals at a time of significant change, as the Health Research Authority assumed responsibility for the governance of all health research in the UK. We are grateful to Caroline Jordan and Kristy Ravenhall, who provided unwavering support and practical help in organising two highly regarded dissemination workshops, as well as full-team, PCPRG and SSC meetings. Thanks also to Caroline Jordan and Emma Hyde for editorial assistance and practical support during the production of this report.
Colleagues working on other studies funded under the same HSDR call with whom the principal investigator and some research team members met at intervals as a ‘learning set’ to share and reflect on each other’s findings and generate further ideas.
The members of the PCPRG, who engaged with the research and acted as ‘critical friends’.
The external members of the SSC: Aileen Clarke (University of Warwick, and chairperson of the SSC), Bob Gann (NHS England) and Felix Greaves (Public Health England).
Helen Margetts, who was originally a co-investigator on the study and contributed to the original design but, owing to other commitments, handed her role on to her colleague Rebecca Eynon.
Rebecca Eynon of the Oxford Internet Institute, who was instrumental to the cross-sectional survey of the public, bringing her specialist knowledge of the OxIS and contributing to the survey design, conduct, analysis and interpretation.
Amadea Turk from the Warwick Medical School, who analysed the free-text comments from the doctors’ survey in Chapter 4.
Adam Barnett from the DIPEx charity, who produced and maintained the INQUIRE website.
We acknowledge support from the NIHR Oxford Collaboration for Leadership in Applied Health Research and Care at Oxford Health NHS Foundation Trust for salary support to John Powell, Anne-Marie Boylan and Michelle van Velthoven. Sue Ziebland is a NIHR Senior Investigator.
Contributions of authors
John Powell (https://orcid.org/0000-0002-1456-4857) (principal investigator) conceived the study, led the overall design, provided academic leadership throughout the conduct of the study, contributed to all work packages, led the development of the online resource, sat on the SSC, led the writing of the final report and gave final approval of the manuscript.
Helen Atherton (https://orcid.org/0000-0002-7072-1925) (co-investigator) contributed to the overall study design, led the survey work of both the public and professionals, contributed expertise to all other projects within the programme, led the writing of both survey chapters, contributed to other chapters of the final report and gave final approval of the manuscript.
Veronika Williams (http://orcid.org/0000-0001-5660-8224) (co-investigator) conducted the scoping review with Anne-Marie Boylan, co-ordinated meetings, contributed to the surveys of the public and professionals, contributed expertise to all other projects within the programme, contributed to writing the final report and gave final approval of the manuscript.
Fadhila Mazanderani (https://orcid.org/0000-0002-3975-3283) (co-investigator) contributed to the overall study design, co-led the interview study with patients and led the writing of Chapter 5, contributed expertise to all other projects within the programme, contributed further to the authorship of the final report and gave final approval of the manuscript.
Farzana Dudhwala (http://orcid.org/0000-0002-4847-4542) (researcher) conducted the case study ethnography and led the writing of Chapter 6, contributed expertise to all other projects within the programme, contributed to the authorship of the final report and gave final approval of the manuscript.
Steve Woolgar (https://orcid.org/0000-0003-1465-3136) (co-investigator) contributed to the overall study design, co-led the case study ethnography and contributed to Chapter 6, contributed expertise to all other projects within the programme, contributed to the final report and gave final approval of the manuscript.
Anne-Marie Boylan (http://orcid.org/0000-0001-8187-0742) (researcher) conducted the scoping review with Veronika Williams and led the writing of Chapter 2, provided the link with the related NIHR Oxford Collaboration for Leadership in Applied Health Research and Care-funded work, contributed to the authorship of the final report and gave final approval of the manuscript.
Joanna Fleming (https://orcid.org/0000-0001-5708-5349) (researcher) worked on the survey of professionals, contributed to the writing of Chapters 3 and 4, contributed to the authorship of the final report and gave final approval of the manuscript.
Susan Kirkpatrick (https://orcid.org/0000-0003-2579-9448) (researcher) conducted the interview study with patients and contributed to Chapter 5, contributed expertise to all other projects within the programme, contributed to the authorship of the final report and gave final approval of the manuscript.
Angela Martin (https://orcid.org/0000-0001-6196-0409) (co-ordinator) contributed to aspects of the study design and managed the conduct of the study, co-authored Chapter 7, co-ordinated and contributed to the authorship of the final report and gave final approval of the manuscript.
Michelle van Velthoven (https://orcid.org/0000-0003-1245-8759) (researcher) worked on the public survey, contributed expertise to all other projects within the programme, contributed to the authorship of the final report and gave final approval of the manuscript.
Anya de Iongh (https://orcid.org/0000-0002-5431-5281) (co-investigator and lay representative) contributed to the design of the overall study, led the PPI activities and co-authored Chapter 7, contributed to the authorship of the final report and gave final approval of the manuscript.
Douglas Findlay (https://orcid.org/0000-0001-8239-2287) (SSC member and lay chairperson of the PCPRG) provided advice across all aspects of the work, contributed to the authorship of the final report and gave final approval of the manuscript.
Louise Locock (https://orcid.org/0000-0002-8109-1930) (co-investigator) contributed to the overall study design, provided expertise to all aspects of the programme, contributed to the final report and gave final approval of the manuscript.
Sue Ziebland (https://orcid.org/0000-0002-6496-4859) (co-investigator and Director of the Health Experiences Research Group) contributed to the overall study design, co-led the interview study with patients, contributed expertise to all other projects within the programme, contributed to the final report and gave final approval of the manuscript.
Publications
Dudhwala F, Boylan A-M, Williams V, Powell J. What counts as online patient feedback, and for whom? Digital Health 2017;3:1–3.
van Velthoven MH, Atherton H, Powell J. A cross sectional survey of the UK public to understand use of online ratings and reviews of health services. Patient Educ Couns 2018;101:1690–6.
Atherton H, Fleming J, Williams V, Powell J. Online patient feedback: a cross-sectional survey of the attitudes and experiences of United Kingdom health care professionals [published online ahead of print June 2 2019]. J Health Serv Res Policy 2019.
Boylan AM, Williams V, Powell J. Online patient feedback: a scoping review and stakeholder consultation to guide health policy [published online ahead of print September 7 2019]. J Health Serv Res Policy 2019.
The Point of Care Foundation. Using Online Patient Feedback to Improve Care. London: The Point of Care Foundation; 2019. URL: www.pointofcarefoundation.org.uk/resource/using-online-patient-feedback/ (accessed 1 October 2019).
Data-sharing statement
All data requests should be submitted to the corresponding author for consideration. Access to available anonymised data may be granted following review.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health and Social Care.
References
- Powell J, Deetjen U. Characterizing the digital health citizen: mixed-methods study deriving a new typology. J Med Internet Res 2019;21. https://doi.org/10.2196/11279.
- Darzi A. High Quality Care For All: NHS Next Stage Review Final Report. London: Department of Health and Social Care; 2008.
- Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
- Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open 2013;3. https://doi.org/10.1136/bmjopen-2012-001570.
- Llanwarne NR, Abel GA, Elliott MN, Paddison CA, Lyratzopoulos G, Campbell JL, et al. Relationship between clinical quality and patient experience: analysis of data from the English quality and outcomes framework and the National GP Patient Survey. Ann Fam Med 2013;11:467-72. https://doi.org/10.1370/afm.1514.
- Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med 2013;368:201-3. https://doi.org/10.1056/NEJMp1211775.
- Francis R. Independent Inquiry into Care Provided by Mid Staffordshire NHS Foundation Trust January 2005–March 2009. London: The Stationery Office; 2010.
- Keogh B. Review into the Quality of Care and Treatment Provided by 14 Hospital Trusts in England: Overview Report. London: NHS England; 2013.
- Berwick DM. A Promise to Learn – A Commitment to Act: Improving the Safety of Patients in England. London: Department of Health and Social Care; 2013.
- Adams SA. Sourcing the crowd for health services improvement: the reflexive patient and ‘share-your-experience’ websites. Soc Sci Med 2011;72:1069-76. https://doi.org/10.1016/j.socscimed.2011.02.001.
- Emmert M, Sander U, Esslinger AS, Maryschok M, Schöffski O. Public reporting in Germany: the content of physician rating websites. Methods Inf Med 2012;51:112-20. https://doi.org/10.3414/ME11-01-0045.
- Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res 2013;15. https://doi.org/10.2196/jmir.2360.
- Fox S. After Dr Google: peer-to-peer health care. Pediatrics 2013;131:224-5. https://doi.org/10.1542/peds.2012-3786K.
- Fox S, Duggan M. Health Online 2013. Washington, DC: Pew Internet & American Life Project; 2013.
- Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients’ evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med 2010;25:942-6. https://doi.org/10.1007/s11606-010-1383-0.
- Trigg L. Patients’ opinions of health care providers for supporting choice and quality improvement. J Health Serv Res Policy 2011;16:102-7. https://doi.org/10.1258/jhsrp.2010.010010.
- Ziebland S, Wyke S. Health and illness in a connected world: how might sharing experiences on the internet affect people’s health?. Milbank Q 2012;90:219-49. https://doi.org/10.1111/j.1468-0009.2012.00662.x.
- Anderson C. The Impact of Social Media on Lodging Performance. New York, NY: Cornell University School of Hotel Administration; 2012.
- Melián-González S, Bulchand-Gidumal J, González López-Valcárcel B. Online customer reviews of hotels: as participation increases, better evaluation is obtained. Cornell Hosp Q 2013;54:274-83. https://doi.org/10.1177/1938965513481498.
- Competition & Markets Authority . Online Reviews and Endorsements. Report on the CMA’s Call For Information 2015 2015. www.gov.uk/cma-cases/online-reviews-and-endorsements (accessed 6 September 2019).
- Adams S. Post-panoptic surveillance through healthcare rating sites: who’s watching whom?. Inform Commun Soc 2013;16:215-35. https://doi.org/10.1080/1369118X.2012.701657.
- Hardey M. Public health and web 2.0. J R Soc Promot Health 2008;128:181-9. https://doi.org/10.1177/1466424008092228.
- Mazanderani F, O’Neill B, Powell J. ‘People power’ or ‘pester power’? YouTube as a forum for the generation of evidence and patient advocacy. Patient Educ Couns 2013;93:420-5. https://doi.org/10.1016/j.pec.2013.06.006.
- Powell JA, Boden S. Greater choice and control? Health policy in England and the online health consumer. Policy Internet 2012;4:1-23. https://doi.org/10.1515/1944-2866.1180.
- Ziebland S. The importance of being expert: the quest for cancer information on the Internet. Soc Sci Med 2004;59:1783-93. https://doi.org/10.1016/j.socscimed.2004.02.019.
- Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients’ online ratings of their physicians over a 5-year period. J Med Internet Res 2012;14. https://doi.org/10.2196/jmir.2003.
- Greaves F, Millett C. Consistently increasing numbers of online ratings of healthcare in England. J Med Internet Res 2012;14. https://doi.org/10.2196/jmir.2157.
- Emmert M, Meier F, Pisch F, Sander U. Physician choice making and characteristics associated with using physician-rating websites: cross-sectional study. J Med Internet Res 2013;15. https://doi.org/10.2196/jmir.2702.
- López A, Detz A, Ratanawongsa N, Sarkar U. What patients say about their doctors online: a qualitative content analysis. J Gen Intern Med 2012;27:685-92. https://doi.org/10.1007/s11606-011-1958-4.
- Allsop J, Jones K. Withering the citizen, managing the consumer: complaints in healthcare settings. Soc Policy Soc 2008;7:233-43. https://doi.org/10.1017/S1474746407004186.
- McCartney M. Will doctor rating sites improve the quality of care? No. BMJ 2009;338. https://doi.org/10.1136/bmj.b1033.
- Mays N, Roberts E, Popay J, Fulop N, Allen P, Clarke A, et al. Methods for Studying the Delivery and Organisation of Health Services. London: Routledge; 2001.
- Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005;8:19-32. https://doi.org/10.1080/1364557032000119616.
- Bardach NS, Asteria-Peñaloza R, Boscardin WJ, Dudley RA. The relationship between commercial website ratings and traditional hospital performance measures in the USA. BMJ Qual Saf 2013;22:194-202. https://doi.org/10.1136/bmjqs-2012-001360.
- Black EW, Thompson LA, Saliba H, Dawson K, Black NM. An analysis of healthcare providers’ online ratings. Inform Prim Care 2009;17:249-53. https://doi.org/10.14236/jhi.v17i4.744.
- Burkle CM, Keegan MT. Popularity of internet physician rating sites and their apparent influence on patients’ choices of physicians. BMC Health Serv Res 2015;15. https://doi.org/10.1186/s12913-015-1099-2.
- Frost C, Mesfin A. Online reviews of orthopedic surgeons: an emerging trend. Orthopedics 2015;38:e257-62. https://doi.org/10.3928/01477447-20150402-52.
- Gao GD, Greenwood BN, Agarwal R, McCullough JS. Vocal minority and silent majority: how do online ratings reflect population perceptions of quality. MIS Q 2015;39. https://doi.org/10.25300/MISQ/2015/39.3.03.
- Gilbert K, Hawkins CM, Hughes DR, Patel K, Gogia N, Sekhar A, et al. Physician rating websites: do radiologists have an online presence?. J Am Coll Radiol 2015;12:867-71. https://doi.org/10.1016/j.jacr.2015.03.039.
- Glover M, Khalilzadeh O, Choy G, Prabhakar AM, Pandharipande PV, Gazelle GS. Hospital evaluations by social media: a comparative analysis of Facebook ratings among performance outliers. J Gen Intern Med 2015;30:1440-6. https://doi.org/10.1007/s11606-015-3236-3.
- Gray BM, Vandergrift JL, Gao GG, McCullough JS, Lipner RS. Website ratings of physicians and their quality of care. JAMA Intern Med 2015;175:291-3. https://doi.org/10.1001/jamainternmed.2014.6291.
- Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. JAMA 2014;311:734-5. https://doi.org/10.1001/jama.2013.283194.
- Johnson C. Survey finds physicians very wary of doctor ratings. Physician Exec 2013;39:6-8.
- Kadry B, Chu LF, Kadry B, Gammas D, Macario A. Analysis of 4999 online physician ratings indicates that most patients give physicians a favorable rating. J Med Internet Res 2011;13. https://doi.org/10.2196/jmir.1960.
- Kinast RM, Barker GT, Day SH, Gardiner SK, Mansberger SL. Factors related to online patient satisfaction with ophthalmologists. Ophthalmology 2014;121:1843-5.e1. https://doi.org/10.1016/j.ophtha.2014.04.009.
- McCaughey D, Baumgardner C, Gaudes A, LaRochelle D, Wu KJ, Raichura T. Best practices in social media: utilizing a value matrix to assess social media’s impact on health care. Soc Sci Comput Rev 2014;32:575-89. https://doi.org/10.1177/0894439314525332.
- Merrell JG, Levy BH, Johnson DA. Patient assessments and online ratings of quality care: a ‘wake-up call’ for providers. Am J Gastroenterol 2013;108:1676-85. https://doi.org/10.1038/ajg.2013.112.
- Riemer C, Doctor M, Dellavalle RP. Analysis of online ratings of dermatologists. JAMA Dermatol 2016;152:218-19. https://doi.org/10.1001/jamadermatol.2015.4991.
- Samora JB, Lifchez SD, Blazar PE. American Society for Surgery of the Hand Ethics and Professionalism Committee . Physician-rating web sites: ethical implications. J Hand Surg Am 2016;41:104-10.e1. https://doi.org/10.1016/j.jhsa.2015.05.034.
- Segal J, Sacopulos M, Sheets V, Thurston I, Brooks K, Puccia R. Online doctor reviews: do they track surgeon volume, a proxy for quality of care?. J Med Internet Res 2012;14. https://doi.org/10.2196/jmir.2005.
- Sobin L, Goyal P. Trends of online ratings of otolaryngologists: what do your patients really think of you?. JAMA Otolaryngol Head Neck Surg 2014;140:635-8. https://doi.org/10.1001/jamaoto.2014.818.
- Thackeray R, Crookston BT, West JH. Correlates of health-related social media use among adults. J Med Internet Res 2013;15. https://doi.org/10.2196/jmir.2297.
- Trehan SK, DeFrancesco CJ, Nguyen JT, Charalel RA, Daluiski A. Online patient ratings of hand surgeons. J Hand Surg Am 2016;41:98-103. https://doi.org/10.1016/j.jhsa.2015.10.006.
- Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Parental awareness and use of online physician rating sites. Pediatrics 2014;134:e966-75. https://doi.org/10.1542/peds.2014-0681.
- Kanouse DE, Schlesinger M, Shaller D, Martino SC, Rybowski L. How patient comments affect consumers’ use of physician performance measures. Med Care 2016;54:24-31. https://doi.org/10.1097/MLR.0000000000000443.
- Li S, Feng B, Chen M, Bell RA. Physician review websites: effects of the proportion and position of negative reviews on readers’ willingness to choose the doctor. J Health Commun 2015;20:453-61. https://doi.org/10.1080/10810730.2014.977467.
- Yaraghi N, Wang W, Gao GG, Agarwal R. How online quality ratings influence patients’ choice of medical providers: controlled experimental survey study. J Med Internet Res 2018;20. https://doi.org/10.2196/jmir.8986.
- Brody S, Elhadad N. An Unsupervised Aspect-Sentiment Model for Online Reviews. Human Language Technologies 2010.
- Hawkins JB, Brownstein JS, Tuli G, Runels T, Broecker K, Nsoesie EO, et al. Measuring patient-perceived quality of care in US hospitals using Twitter. BMJ Qual Saf 2016;25:404-13. https://doi.org/10.1136/bmjqs-2015-004309.
- Hopper AM, Uriyo M. Using sentiment analysis to review patient satisfaction data located on the internet. J Health Organ Manag 2015;29:221-33. https://doi.org/10.1108/JHOM-12-2011-0129.
- Paul MJ, Wallace BC, Dredze M. What Affects Patient (Dis) Satisfaction? Analyzing Online Doctor Ratings With a Joint Topic–Sentiment Model 2013. www.aaai.org/ocs/index.php/WS/AAAIW13/paper/viewPaper/7120 (accessed 6 September 2019).
- Ranard BL, Werner RM, Antanavicius T, Schwartz HA, Smith RJ, Meisel ZF, et al. Yelp reviews of hospital care can supplement and inform traditional surveys of the patient experience of care. Health Aff 2016;35:697-705. https://doi.org/10.1377/hlthaff.2015.1030.
- Rastegar-Mojarad M, Ye Z, Wall D, Murali N, Lin S. Collecting and analyzing patient experiences of health care from social media. JMIR Res Protoc 2015;4. https://doi.org/10.2196/resprot.3433.
- Wallace BC, Paul MJ, Sarkar U, Trikalinos TA, Dredze M. A large-scale quantitative analysis of latent factors and sentiment in online doctor reviews. J Am Med Inform Assoc 2014;21:1098-103. https://doi.org/10.1136/amiajnl-2014-002711.
- Ellimoottil C, Leichtle SW, Wright CJ, Fakhro A, Arrington AK, Chirichella TJ, et al. Online physician reviews: the good, the bad and the ugly. Bull Am Coll Surg 2013;98:34-9.
- Lagu T, Goff SL, Craft B, Calcasola S, Benjamin EM, Priya A, et al. Can social media be used as a hospital quality improvement tool?. J Hosp Med 2016;11:52-5. https://doi.org/10.1002/jhm.2486.
- Smith RJ, Lipoff JB. Evaluation of dermatology practice online reviews: lessons from qualitative analysis. JAMA Dermatol 2016;152:153-7. https://doi.org/10.1001/jamadermatol.2015.3950.
- Bardach NS, Hibbard JH, Greaves F, Dudley RA. Sources of traffic and visitors’ preferences regarding online public reports of quality: web analytics and online survey results. J Med Internet Res 2015;17. https://doi.org/10.2196/jmir.3637.
- Detz A, López A, Sarkar U. Long-term doctor-patient relationships: patient perspective from online reviews. J Med Internet Res 2013;15. https://doi.org/10.2196/jmir.2552.
- Kilaru AS, Meisel ZF, Paciotti B, Ha YP, Smith RJ, Ranard BL, et al. What do patients say about emergency departments in online reviews? A qualitative study. BMJ Qual Saf 2016;25:14-21. https://doi.org/10.1136/bmjqs-2015-004035.
- Nakhasi A, Shen AX, Passarella RJ, Appel LJ, Anderson CA. Online social networks that connect users to physical activity partners: a review and descriptive analysis. J Med Internet Res 2014;16. https://doi.org/10.2196/jmir.2674.
- Sundstrom B, Meier SJ, Anderson M, Booth KE, Cooper L, Flock E, et al. Voices of the ‘99 percent’: the role of online narrative to improve health care. Perm J 2016;20:15-224. https://doi.org/10.7812/TPP/15-224.
- Lewis P, Kobayashi E, Gupta S. An online review of plastic surgeons in southern California. Ann Plast Surg 2015;74:66-70. https://doi.org/10.1097/SAP.0000000000000517.
- Timian A, Rupcic S, Kachnowski S, Luisi P. Do patients ‘like’ good care? Measuring hospital quality via Facebook. Am J Med Qual 2013;28:374-82. https://doi.org/10.1177/1062860612474839.
- Galizzi MM, Miraldo M, Stavropoulou C, Desai M, Jayatunga W, Joshi M, et al. Who is more likely to use doctor-rating websites, and why? A cross-sectional study in London. BMJ Open 2012;2. https://doi.org/10.1136/bmjopen-2012-001493.
- Greaves F, Pape UJ, King D, Darzi A, Majeed A, Wachter RM, et al. Associations between Internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Qual Saf 2012;21:600-5. https://doi.org/10.1136/bmjqs-2012-000906.
- van Velthoven MH, Atherton H, Powell J. A cross sectional survey of the UK public to understand use of online ratings and reviews of health services. Patient Educ Couns 2018;101:1690-6. https://doi.org/10.1016/j.pec.2018.04.001.
- Brookes G, Baker P. What does patient feedback reveal about the NHS? A mixed methods study of comments posted to the NHS Choices online service. BMJ Open 2017;7. https://doi.org/10.1136/bmjopen-2016-013821.
- Greaves F, Ramirez-Cano D, Millett C, Darzi A, Donaldson L. Harnessing the cloud of patient experience: using social media to detect poor quality healthcare. BMJ Qual Saf 2013;22:251-5. https://doi.org/10.1136/bmjqs-2012-001527.
- Greaves F, Laverty AA, Cano DR, Moilanen K, Pulman S, Darzi A, et al. Tweets about hospital quality: a mixed methods study. BMJ Qual Saf 2014;23:838-46. https://doi.org/10.1136/bmjqs-2014-002875.
- Patel S, Cain R, Neailey K, Hooberman L. General practitioners’ concerns about online patient feedback: findings from a descriptive exploratory qualitative study in England. J Med Internet Res 2015;17. https://doi.org/10.2196/jmir.4989.
- Patel S, Cain R, Neailey K, Hooberman L. Exploring patients’ views toward giving web-based feedback and ratings to general practitioners in England: a qualitative descriptive study. J Med Internet Res 2016;18. https://doi.org/10.2196/jmir.5865.
- Shepherd A, Sanders C, Doyle M, Shaw J. Using social media for support and feedback by mental health service users: thematic analysis of a Twitter conversation. BMC Psychiatry 2015;15. https://doi.org/10.1186/s12888-015-0408-y.
- Speed E, Davison C, Gunnell C. The anonymity paradox in patient engagement: reputation, risk and web-based public feedback. Med Humanit 2016;42:135-40. https://doi.org/10.1136/medhum-2015-010823.
- Greaves F, Pape UJ, King D, Darzi A, Majeed A, Wachter RM, et al. Associations between web-based patient ratings and objective measures of hospital quality. Arch Intern Med 2012;172:435-6. https://doi.org/10.1001/archinternmed.2011.1675.
- Bidmon S, Terlutter R, Röttl J. What explains usage of mobile physician-rating apps? Results from a web-based questionnaire. J Med Internet Res 2014;16. https://doi.org/10.2196/jmir.3122.
- Emmert M, Meier F. An analysis of online evaluations on a physician rating website: evidence from a German public reporting instrument. J Med Internet Res 2013;15. https://doi.org/10.2196/jmir.2655.
- Emmert M, Meszmer N, Sander U. Do health care providers use online patient ratings to improve the quality of care? Results from an online-based cross-sectional study. J Med Internet Res 2016;18. https://doi.org/10.2196/jmir.5889.
- Terlutter R, Bidmon S, Röttl J. Who uses physician-rating websites? Differences in sociodemographic variables, psychographic variables, and health status of users and nonusers of physician-rating websites. J Med Internet Res 2014;16. https://doi.org/10.2196/jmir.3145.
- Emmert M, Meier F, Heider AK, Dürr C, Sander U. What do patients say about their physicians? An analysis of 3000 narrative comments posted on a German physician rating website. Health Policy 2014;118:66-73. https://doi.org/10.1016/j.healthpol.2014.04.015.
- Emmert M, Halling F, Meier F. Evaluations of dentists on a German physician rating website: an analysis of the ratings. J Med Internet Res 2015;17. https://doi.org/10.2196/jmir.3830.
- Jans LC, Kranzbühler AM. The influence of rating volume in the effects of expert versus patient online ratings. Acta Orthop Belg 2015;81:662-7.
- van de Belt TH, Engelen LJ, Verhoef LM, van der Weide MJ, Schoonhoven L, Kool RB. Using patient experiences on Dutch social media to supervise health care services: exploratory study. J Med Internet Res 2015;17. https://doi.org/10.2196/jmir.3906.
- Hao H. The development of online doctor reviews in China: an analysis of the largest online doctor review website in China. J Med Internet Res 2015;17. https://doi.org/10.2196/jmir.4365.
- Zhang W, Deng Z, Hong Z, Evans R, Ma J, Zhang H. Unhappy patients are not alike: content analysis of the negative comments from China’s good doctor website. J Med Internet Res 2018;20. https://doi.org/10.2196/jmir.8223.
- Hao H, Zhang K. The voice of Chinese health consumers: a text mining approach to web-based physician reviews. J Med Internet Res 2016;18. https://doi.org/10.2196/jmir.4430.
- Grabner-Kräuter S, Waiguny MK. Insights into the impact of online physician reviews on patients’ decision making: randomized experiment. J Med Internet Res 2015;17. https://doi.org/10.2196/jmir.3991.
- Macdonald ME, Beaudin A, Pineda C. What do patients think about dental services in Quebec? Analysis of a dentist rating website. J Can Dent Assoc 2015;81.
- Rothenfluh F, Germeni E, Schulz PJ. Consumer decision-making based on review websites: are there differences between choosing a hotel and choosing a physician?. J Med Internet Res 2016;18. https://doi.org/10.2196/jmir.5580.
- Lagu T, Goff SL, Hannon NS, Shatz A, Lindenauer PK. A mixed-methods analysis of patient reviews of hospital care in England: implications for public reporting of health care quality data in the United States. Jt Comm J Qual Patient Saf 2013;39:7-15. https://doi.org/10.1016/S1553-7250(13)39003-5.
- Reimann S, Strech D. The representation of patient experience and satisfaction in physician rating sites. A criteria-based analysis of English- and German-language sites. BMC Health Serv Res 2010;10. https://doi.org/10.1186/1472-6963-10-332.
- Brown-Johnson CG, Sanders-Jackson A, Prochaska JJ. Online comments on smoking bans in psychiatric hospitals units. J Dual Diagn 2014;10:204-11. https://doi.org/10.1080/15504263.2014.961883.
- Emmert M, Adelhardt T, Sander U, Wambach V, Lindenthal J. A cross-sectional study assessing the association between online ratings and structural and quality of care measures: results from two German physician rating websites. BMC Health Serv Res 2015;15. https://doi.org/10.1186/s12913-015-1051-5.
- Greaves F, Pape UJ, Lee H, Smith DM, Darzi A, Majeed A, et al. Patients’ ratings of family physician practices on the internet: usage and associations with conventional measures of quality in the English National Health Service. J Med Internet Res 2012;14. https://doi.org/10.2196/jmir.2280.
- Lewis S. Qualitative inquiry and research design: choosing among five approaches, 3rd edition. Health Promot Pract 2015;16:473-5. https://doi.org/10.1177/1524839915580941.
- Kleefstra SM, Zandbelt LC, Borghans I, de Haes HJ, Kool RB. Investigating the potential contribution of patient rating sites to hospital supervision: exploratory results from an interview study in the Netherlands. J Med Internet Res 2016;18. https://doi.org/10.2196/jmir.5552.
- Verhoef LM, Van de Belt TH, Engelen LJ, Schoonhoven L, Kool RB. Social media and rating sites as tools to understanding quality of care: a scoping review. J Med Internet Res 2014;16. https://doi.org/10.2196/jmir.3024.
- Patel S, Cain R, Neailey K, Hooberman L. Public Awareness, usage, and predictors for the use of doctor rating websites: cross-sectional study in England. J Med Internet Res 2018;20. https://doi.org/10.2196/jmir.9523.
- Dutton W, Blank G, Groselj D. Cultures of the Internet: The Internet in Britain 2013. http://oxis.oii.ox.ac.uk/reports/ (accessed 12 April 2019).
- Atherton H, Fleming J, Williams V, Powell J. Online patient feedback: a cross-sectional survey of the attitudes and experiences of United Kingdom health care professionals [published online ahead of print June 2 2019]. J Health Serv Res Policy 2019. https://doi.org/10.1177/1355819619844540.
- Eynon R. Mapping the digital divide in Britain: implications for learning and education. Learn Media Technol 2009;34:277-90. https://doi.org/10.1080/17439880903345874.
- Ofcom. Adults’ Media Use and Attitudes 2017. www.ofcom.org.uk/__data/assets/pdf_file/0020/102755/adults-media-use-attitudes-2017.pdf (accessed 12 April 2019).
- Greenhalgh T, Stramer K, Bratan T, Byrne E, Mohammad Y, Russell J. Introduction of shared electronic records: multi-site case study using diffusion of innovation theory. BMJ 2008;337. https://doi.org/10.1136/bmj.a1786.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581-629. https://doi.org/10.1111/j.0887-378X.2004.00325.x.
- Picker Institute. Using Patient Feedback. 2009. www.nhssurveys.org/Filestore/documents/QIFull.pdf (accessed 6 September 2019).
- Patient Opinion. The Power of Connection: How Networked Citizen Voice Is Changing Health and Social Care. 10th Anniversary Report. 2015.
- Mandeville KL, Satherley RM, Hall JA, Sutaria S, Willott C, Yarrow K, et al. Political views of doctors in the UK: a cross-sectional study. J Epidemiol Community Health 2018;72:880-7. https://doi.org/10.1136/jech-2018-210801.
- Chatterjee R, Chapman T, Brannan MG, Varney J. GPs’ knowledge, use, and confidence in national physical activity and health guidelines and tools: a questionnaire-based survey of general practice in England. Br J Gen Pract 2017;67:e668-e675. https://doi.org/10.3399/bjgp17X692513.
- Lavrakas PJ. Encyclopedia of Survey Research Methods. Thousand Oaks, CA: Sage Publications; 2008.
- NHS Digital. NHS Hospital & Community Health Service (HCHS) Workforce Statistics. 2017. http://content.digital.nhs.uk/searchcatalogue?productid=25273&returnid=1907.
- Burt J, Campbell J, Abel G, Aboulghate A, Ahmed F, Asprey A, et al. Improving patient experience in primary care: a multimethod programme of research on the measurement and improvement of patient experience. Programme Grants Appl Res 2017;5. https://doi.org/10.3310/pgfar05090.
- Farrington C, Burt J, Boiko O, Campbell J, Roland M. Doctors’ engagements with patient experience surveys in primary and secondary care: a qualitative study. Health Expect 2017;20:385-94. https://doi.org/10.1111/hex.12465.
- Asprey A, Campbell JL, Newbould J, Cohn S, Carter M, Davey A, et al. Challenges to the credibility of patient feedback in primary healthcare settings: a qualitative study. Br J Gen Pract 2013;63:e200-8. https://doi.org/10.3399/bjgp13X664252.
- Emmert M, Sauter L, Jablonski L, Sander U, Taheri-Zadeh F. Do physicians respond to web-based patient ratings? An analysis of physicians’ responses to more than one million web-based ratings over a six-year period. J Med Internet Res 2017;19. https://doi.org/10.2196/jmir.7538.
- Dudhwala F, Boylan AM, Williams V, Powell J. VIEWPOINT: what counts as online patient feedback, and for whom?. Digit Health 2017;3. https://doi.org/10.1177/2055207617728186.
- Lupton D. The commodification of patient opinion: the digital patient experience economy in the age of big data. Sociol Health Illn 2014;36:856-69. https://doi.org/10.1111/1467-9566.12109.
- Coyne IT. Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries?. J Adv Nurs 1997;26:623-30. https://doi.org/10.1046/j.1365-2648.1997.t01-25-00999.x.
- Morse JM. ‘Data were saturated . . . ’. Qual Health Res 2015;25:587-8. https://doi.org/10.1177/1049732315576699.
- Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol 2013;13. https://doi.org/10.1186/1471-2288-13-117.
- Ziebland S, McPherson A. Making sense of qualitative data analysis: an introduction with illustrations from DIPEx (personal experiences of health and illness). Med Educ 2006;40:405-14. https://doi.org/10.1111/j.1365-2929.2006.02467.x.
- Ziewitz M. Experience in action: moderating care in web-based patient feedback. Soc Sci Med 2017;175:99-108. https://doi.org/10.1016/j.socscimed.2016.12.028.
- Baines R, Donovan J, Regan de Bere S, Archer J, Jones R. Responding effectively to adult mental health patient feedback in an online environment: a coproduced framework. Health Expect 2018;21:887-98. https://doi.org/10.1111/hex.12682.
- Menon AV. Do online reviews diminish physician authority? The case of cosmetic surgery in the US. Soc Sci Med 2017;181:1-8. https://doi.org/10.1016/j.socscimed.2017.03.046.
- Mol A. The Logic of Care. Oxon: Routledge; 2008.
- Pols J. Care at a Distance: on the Closeness of Technology. Amsterdam: Amsterdam University Press; 2012.
- Armstrong N, Powell J. Patient perspectives on health advice posted on internet discussion boards: a qualitative study. Health Expect 2009;12:313-20. https://doi.org/10.1111/j.1369-7625.2009.00543.x.
- Lowe P, Powell J, Griffiths F, Thorogood M, Locock L. Making it all normal: the role of the internet in problematic pregnancy. Qual Health Res 2009;19:1476-84. https://doi.org/10.1177/1049732309348368.
- Garfinkel H. Studies in Ethnomethodology. Cambridge: Polity Press; 1967.
- Button G, Sharrock W. The organizational accountability of technological work. Soc Stud Sci 1998;28:73-102. https://doi.org/10.1177/030631298028001003.
- Lockyer S, Lewis-Beck MS, Bryman A, Futing Liao T. The Sage Encyclopedia of Social Science Research Methods. Vol. 1. Thousand Oaks, CA: Sage; 2004.
- Moerman M. Ethnic identification in a complex civilization: who are the Lue?. Am Anthropol 1965;67:1215-30. https://doi.org/10.1525/aa.1965.67.5.02a00070.
- Coulter A, Locock L, Ziebland S, Calabrese J. Collecting data on patient experience is not enough: they must be used to improve care. BMJ 2014;348. https://doi.org/10.1136/bmj.g2225.
- Sharp CA, Boaden R, Dixon WG, Sanders C. The Means Not the End: Stakeholder Views of Toolkits Developed from Healthcare Research. n.d.
- Locock L, Graham C, King J, Parkin S, Chisholm A, Montgomery C, et al. Understanding how frontline staff use patient experience data for service improvement – an exploratory case study evaluation. Health Serv Deliv Res 2019.
- Morgan DL. Paradigms lost and pragmatism regained: methodological implications of combining qualitative and quantitative methods. J Mix Methods Res 2007;1:48-76. https://doi.org/10.1177/2345678906292462.
- Dixon-Woods M, Baker R, Charles K, Dawson J, Jerzembek G, Martin G, et al. Culture and behaviour in the English National Health Service: overview of lessons from a large multimethod study. BMJ Qual Saf 2014;23:106-15. https://doi.org/10.1136/bmjqs-2013-001947.
- Martin GP, McKee L, Dixon-Woods M. Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety. Soc Sci Med 2015;142:19-26. https://doi.org/10.1016/j.socscimed.2015.07.027.
- Martin GP, Aveling EL, Campbell A, Tarrant C, Pronovost PJ, Mitchell I, et al. Making soft intelligence hard: a multi-site qualitative study of challenges relating to voice about safety concerns. BMJ Qual Saf 2018;27:710-17. https://doi.org/10.1136/bmjqs-2017-007579.
- Wise J. BMJ awards 2019: digital innovation team of the year. BMJ 2019;365. https://doi.org/10.1136/bmj.l1519.
- Schlesinger M, Grob R, Shaller D, Martino SC, Parker AM, Finucane ML, et al. Taking patients’ narratives about clinicians from anecdote to science. N Engl J Med 2015;373:675-9. https://doi.org/10.1056/NEJMsb1502361.
- Duschinsky R, Paddison C. ‘The final arbiter of everything’: a genealogy of concern with patient experience in Britain. Soc Theory Health 2018;16:94-110. https://doi.org/10.1057/s41285-017-0045-2.
- Gkeredakis E, Swan J, Powell J, Nicolini D, Scarbrough H, Roginski C, et al. Mind the gap: understanding utilisation of evidence and policy in health care management practice. J Health Organ Manag 2011;25:298-314. https://doi.org/10.1108/14777261111143545.
- Sheard L, Peacock R, Marsh C, Lawton R. What’s the problem with patient experience feedback? A macro and micro understanding, based on findings from a three-site UK qualitative study. Health Expect 2019;22:46-53. https://doi.org/10.1111/hex.12829.
- Sheard L, Marsh C, O’Hara J, Armitage G, Wright J, Lawton R. The patient feedback response framework – understanding why UK hospital staff find it difficult to make improvements based on patient feedback: a qualitative study. Soc Sci Med 2017;178:19-27. https://doi.org/10.1016/j.socscimed.2017.02.005.
- Fisher KA, Smith KM, Gallagher TH, Huang JC, Borton JC, Mazor KM. We want to know: patient comfort speaking up about breakdowns in care and patient experience. BMJ Qual Saf 2019;28:190-7. https://doi.org/10.1136/bmjqs-2018-008159.
- Lawton R, O’Hara JK, Sheard L, Armitage G, Cocks K, Buckley H, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf 2017;26:622-31. https://doi.org/10.1136/bmjqs-2016-005570.
- Bell SK, Martinez W. Every patient should be enabled to stop the line. BMJ Qual Saf 2019;28:172-6. https://doi.org/10.1136/bmjqs-2018-008714.
- Carter W, Bick D, Mackintosh N, Sandall J. A narrative synthesis of factors that affect women speaking up about early warning signs and symptoms of pre-eclampsia and responses of healthcare staff. BMC Pregnancy Childbirth 2017;17. https://doi.org/10.1186/s12884-017-1245-4.
- Rance S, McCourt C, Rayment J, Mackintosh N, Carter W, Watson K, et al. Women’s safety alerts in maternity care: is speaking up enough? BMJ Qual Saf 2013;22:348-55. https://doi.org/10.1136/bmjqs-2012-001295.
- Dixon-Woods M, Pronovost PJ. Patient safety and the problem of many hands. BMJ Qual Saf 2016;25:485-8. https://doi.org/10.1136/bmjqs-2016-005232.
- Henwood F, Wyatt S, Hart A, Smith J. ‘Ignorance is bliss sometimes’: constraints on the emergence of the ‘informed patient’ in the changing landscapes of health information. Sociol Health Illn 2003;25:589-607. https://doi.org/10.1111/1467-9566.00360.
- Broom A, Tovey P. The role of the internet in cancer patients’ engagement with complementary and alternative treatments. Health 2008;12:139-55. https://doi.org/10.1177/1363459307086841.
- Fotaki M. Towards developing new partnerships in public services: users as consumers, citizens and/or co-producers in health and social care in England and Sweden. Public Adm 2011;89:933-55. https://doi.org/10.1111/j.1467-9299.2010.01879.x.
- Powell J, Newhouse N, Boylan AM, Williams V. Digital health citizens and the future of the NHS. Digit Health 2016;2. https://doi.org/10.1177/2055207616672033.
- Lupton D. The Digitised Healthy Citizen. Digital Health Critical and Cross-Disciplinary Perspectives. London: Routledge; 2018.
- Donetto S, Tsianakas V, Robert G. Using Experience-based Co-design (EBCD) to Improve the Quality of Healthcare: Mapping Where We Are Now and Establishing Future Directions. London: King’s College London; 2014.
- Locock L, Robert G, Boaz A, Vougioukalou S, Shuldham C, Fielden J, et al. Testing accelerated experience-based co-design: a qualitative study of using a national archive of patient experience narrative interviews to promote rapid patient-centred service improvement. Health Serv Deliv Res 2014;2. https://doi.org/10.3310/hsdr02040.
Appendix 1 Study Steering Committee
The SSC comprised:
- Aileen Clarke: SSC chairperson, Chair of Faculty of Medicine and Director of Warwick Evidence, University of Warwick, Coventry.
- Felix Greaves: deputy director, Science and Strategic Information, Public Health England and honorary clinical senior lecturer, Department of Primary Care and Public Health, Imperial College London.
- Bob Gann: programme director, Widening Digital Participation, NHS England.
- Douglas Findlay: lay advisor and chairperson of the PCPRG.
- John Powell: principal investigator.
Appendix 2 Survey of the UK public: questionnaire logic and text
Question number | Respondents | Logic |
---|---|---|
1 | All | N/A |
2 | All | N/A |
3 | All | N/A |
4 | All | N/A |
5 | All | N/A |
6 | All | N/A |
7 | All | If the response to any of questions 7–9 is 1–4 (they read a review), then answer questions 10A, 11A and 12A; if the response to all of questions 7–9 is 5 (they never read a review), then answer questions 10B, 11B and 12B |
8 | All | |
9 | All | |
10A | Read review/rating | N/A |
10B | Never read review/rating | N/A |
11A | Read review/rating | N/A |
11B | Never read review/rating | N/A |
12A | Read review/rating | N/A |
12B | Never read review/rating | N/A |
13 | All | If the response to any of questions 13–15 is 1–4 (they gave a review/rating), then answer questions 16A and 17A; if the response to all of questions 13–15 is 5 (they never gave a review/rating), then answer questions 16B and 17B |
14 | All | |
15 | All | |
16A | Gave review/rating | N/A |
16B | Never gave review/rating | N/A |
17A | Gave review/rating | N/A |
17B | Never gave review/rating | N/A |
18 | All | N/A |
19 | All | N/A |
20 | All | N/A |
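To make the branching concrete, the routing rule for questions 7–9 (and, analogously, 13–15) can be sketched in code. This is an illustrative sketch only: the function name and the 1–5 answer coding are assumptions for the example, not the survey software used in the study.

```python
# Illustrative sketch of the Appendix 2 routing rule for questions 7-9.
# The function name and the 1-5 answer coding are assumed for this example.

def route_after_reading_questions(responses):
    """Return the follow-up questions for a respondent.

    `responses` maps question numbers 7-9 to a coded answer, where
    codes 1-4 mean the respondent read a review at some frequency
    and code 5 means they never did.
    """
    read_any = any(responses[q] in (1, 2, 3, 4) for q in (7, 8, 9))
    return ["10A", "11A", "12A"] if read_any else ["10B", "11B", "12B"]

# A respondent who answered 'never' (code 5) to all three questions:
print(route_after_reading_questions({7: 5, 8: 5, 9: 5}))  # ['10B', '11B', '12B']
```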
Questionnaire text
Appendix 3 Survey of the UK public: internet access demographics
Characteristic | Use the internet: n | % | Never use the internet but have access: n | % | Do not have access: n | %
---|---|---|---|---|---|---
All | 1824 | 90 | 75 | 4 | 137 | 7 |
Age (years) | ||||||
16–17 | 18 | 100 | 0 | 0 | 0 | 0 |
18–24 | 265 | 99 | 0 | 0 | 2 | 1 |
25–34 | 333 | 98 | 1 | 0.3 | 5 | 2 |
35–44 | 310 | 98 | 4 | 1 | 3 | 1 |
45–54 | 329 | 94 | 15 | 4 | 8 | 2 |
55–59 | 128 | 92 | 7 | 5 | 4 | 3 |
60–64 | 128 | 85 | 8 | 5 | 14 | 9 |
≥ 65 | 313 | 69 | 41 | 9 | 100 | 22 |
Sex | ||||||
Male | 904 | 91 | 33 | 3 | 57 | 6 |
Female | 920 | 88 | 42 | 4 | 80 | 8 |
Education | ||||||
No formal qualifications | 177 | 60 | 36 | 12 | 80 | 27 |
GCSE/O level/CSE | 307 | 90 | 12 | 4 | 24 | 7 |
Vocational qualifications | 157 | 92 | 6 | 4 | 8 | 5 |
A level or equivalent | 400 | 97 | 7 | 2 | 5 | 1 |
Bachelor's degree or equivalent | 461 | 98 | 4 | 1 | 7 | 2
MSc/PhD or equivalent | 176 | 99 | 1 | 1 | 0 | 0 |
Still studying | 14 | 100 | 0 | 0 | 0 | 0 |
Other | 119 | 86 | 9 | 7 | 10 | 7 |
Do not know | 13 | 68 | 2 | 11 | 4 | 21 |
Ethnic origin | ||||||
White | 1563 | 89 | 62 | 4 | 127 | 7 |
Mixed | 29 | 91 | 0 | 0 | 3 | 9 |
Asian | 149 | 93 | 8 | 5 | 4 | 3 |
Black | 52 | 90 | 3 | 5 | 3 | 5 |
Arab | 6 | 100 | 0 | 0 | 0 | 0 |
Other | 15 | 94 | 1 | 6 | 0 | 0 |
Do not know | 3 | 100 | 0 | 0 | 0 | 0 |
Refused | 6 | 100 | 0 | 0 | 0 | 0 |
Long-term illness, health problem or disability | ||||||
Yes | 373 | 81 | 37 | 8 | 49 | 11 |
No | 1449 | 92 | 38 | 2 | 87 | 6 |
Do not know | 1 | 50 | 0 | 0 | 1 | 50 |
Refused | 1 | 100 | 0 | 0 | 0 | 0 |
Area | ||||||
Urban | 499 | 91 | 18 | 3 | 32 | 6 |
Suburban | 1057 | 89 | 43 | 4 | 76 | 7 |
Rural | 251 | 85 | 14 | 5 | 29 | 10 |
Refused | 17 | 100 | 0 | 0 | 0 | 0 |
Working status | ||||||
Paid job: full time, ≥ 30 hours | 742 | 98 | 9 | 1 | 6 | 1 |
Paid job: part time, 8–29 hours | 221 | 97 | 5 | 2 | 1 | 0.4 |
Paid job: part time, < 8 hours | 13 | 100 | 0 | 0 | 0 | 0 |
Self-employed | 142 | 99 | 1 | 1 | 1 | 1 |
Full-time student | 126 | 100 | 0 | 0 | 0 | 0 |
Still at school | 10 | 100 | 0 | 0 | 0 | 0 |
Unemployed and seeking work | 61 | 91 | 2 | 3 | 4 | 6 |
Retired | 337 | 69 | 43 | 9 | 107 | 22 |
Not in paid job for other reason | 42 | 91 | 1 | 2 | 3 | 7 |
Not in paid job because of long-term illness | 42 | 71 | 9 | 15 | 8 | 14 |
Housewife | 87 | 89 | 5 | 5 | 6 | 6 |
Refused | 1 | 100 | 0 | 0 | 0 | 0 |
Income (£) | ||||||
< 4499 | 29 | 85 | 0 | 0 | 5 | 15 |
4500–6499 | 28 | 80 | 0 | 0 | 7 | 20 |
6500–7499 | 16 | 70 | 1 | 4 | 6 | 26 |
7500–9499 | 45 | 82 | 1 | 2 | 9 | 16 |
9500–11,499 | 44 | 72 | 6 | 10 | 11 | 18 |
11,500–13,499 | 62 | 82 | 4 | 5 | 10 | 13 |
13,500–15,499 | 54 | 93 | 3 | 5 | 1 | 2 |
15,500–17,499 | 53 | 83 | 5 | 8 | 6 | 9 |
17,500–24,999 | 138 | 89 | 5 | 3 | 12 | 8 |
25,000–29,999 | 139 | 95 | 2 | 1 | 6 | 4 |
30,000–39,999 | 168 | 97 | 3 | 2 | 2 | 1 |
40,000–49,999 | 123 | 98 | 1 | 1 | 1 | 1 |
50,000–74,999 | 141 | 99 | 2 | 1 | 0 | 0 |
75,000–99,999 | 72 | 97 | 1 | 1 | 1 | 1 |
≥ 100,000 | 76 | 100 | 0 | 0 | 0 | 0 |
No response/do not know/missing | 634 | 86 | 41 | 6 | 61 | 8 |
Social grade | ||||||
A | 58 | 94 | 2 | 3 | 2 | 3 |
B | 465 | 97 | 8 | 2 | 9 | 2 |
C1 | 537 | 95 | 15 | 3 | 16 | 3 |
C2 | 365 | 87 | 23 | 6 | 33 | 8 |
D | 254 | 83 | 15 | 5 | 38 | 12 |
E | 145 | 74 | 13 | 7 | 39 | 20 |
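The percentages in this table are row percentages: each count is divided by its subgroup's total across the three access categories. A minimal sketch of the calculation (using pandas, with the 'All' row as example data; column names are illustrative):

```python
import pandas as pd

# Row percentages: each count divided by its subgroup's total across
# the three internet access categories. Counts are the 'All' row above.
counts = pd.DataFrame(
    {"use": [1824], "never_use_but_have_access": [75], "no_access": [137]},
    index=["All"],
)
row_pct = counts.div(counts.sum(axis=1), axis=0) * 100
print(row_pct.round())  # use 90.0, never_use_but_have_access 4.0, no_access 7.0
```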
Appendix 4 Survey of the UK public: internet access frequency
Internet access frequency | Total (N = 2036; 100%): n | % of total sample | Readers (N = 768; 38%): n | % within demographic subgroup | Writers (N = 148; 7%): n | % within demographic subgroup
---|---|---|---|---|---|---
Several times a day | 1490 | 73 | 669 | 45 | 132 | 9 |
Around once a day | 185 | 9 | 56 | 30 | 10 | 5 |
Less than once a day | 148 | 7 | 35 | 24 | 5 | 3 |
Never, but I have access | 75 | 4 | 6 | 8 | 1 | 1 |
Never, but I do not have access | 137 | 7 | 2 | 2 | 0 | 0 |
Appendix 5 Survey of the UK public: general characteristics of participants in detail
Characteristic | Total (N = 1824; 100%): n | % of total sample | Read (N = 760; 42%): n | % within subgroup | Written (N = 147; 8%): n | % within subgroup
---|---|---|---|---|---|---
Age (years) | ||||||
16–17 | 18 | 1 | 9 | 50 | 0 | 0 |
18–24 | 265 | 15 | 118 | 45 | 26 | 10 |
25–34 | 333 | 18 | 162 | 49 | 32 | 10 |
35–44 | 310 | 17 | 140 | 45 | 27 | 9 |
45–54 | 329 | 18 | 113 | 34 | 22 | 7 |
55–59 | 128 | 7 | 52 | 41 | 9 | 7 |
60–64 | 128 | 7 | 58 | 45 | 11 | 9 |
≥ 65 | 313 | 17 | 107 | 34 | 20 | 6 |
Sex | ||||||
Male | 904 | 50 | 344 | 38 | 65 | 7 |
Female | 920 | 50 | 416 | 45 | 82 | 9 |
Education | ||||||
GCSE/O level/CSE | 307 | 17 | 106 | 35 | 19 | 6 |
Vocational qualifications | 157 | 9 | 69 | 44 | 15 | 10 |
A level or equivalent | 400 | 22 | 173 | 43 | 32 | 8 |
Bachelor's degree or equivalent | 461 | 25 | 223 | 48 | 40 | 9
MSc/PhD or equivalent | 176 | 10 | 85 | 48 | 18 | 10 |
Other | 119 | 7 | 37 | 31 | 12 | 10 |
No formal qualifications | 177 | 10 | 61 | 35 | 11 | 6 |
Still studying | 14 | 1 | 7 | 47 | 0 | 0 |
Do not know | 13 | 1 | 1 | 7 | 0 | 0 |
Ethnic origin | ||||||
White | 1563 | 86 | 635 | 41 | 120 | 8 |
Mixed | 29 | 2 | 16 | 55 | 2 | 7 |
Asian | 149 | 8 | 71 | 48 | 16 | 11 |
Black | 52 | 3 | 24 | 46 | 3 | 6 |
Arab | 6 | 0.3 | 2 | 29 | 1 | 17 |
Other | 15 | 1 | 8 | 53 | 3 | 20 |
Do not know | 3 | 0.2 | 2 | 67 | 0 | 0 |
Refused | 6 | 0.3 | 4 | 57 | 1 | 17 |
Area | ||||||
Urban | 499 | 27 | 240 | 48 | 52 | 10 |
Suburban | 1057 | 58 | 424 | 40 | 75 | 7 |
Rural | 251 | 14 | 89 | 36 | 19 | 8 |
Refused | 17 | 1 | 7 | 41 | 1 | 6 |
Internet access frequency | ||||||
Several times a day | 1490 | 82 | 669 | 45 | 132 | 9 |
Around once a day | 185 | 10 | 56 | 30 | 10 | 5 |
Four or five times per week | 37 | 2 | 9 | 24 | 1 | 3 |
Two or three times per week | 54 | 3 | 12 | 22 | 1 | 2 |
Around once per week | 29 | 2 | 12 | 40 | 2 | 7 |
Two or three times a month | 10 | 1 | 1 | 9 | 1 | 9 |
Around once a month | 9 | 1 | 0 | 0 | 0 | 0 |
Less than around once a month | 8 | 1 | 1 | 13 | 0 | 0 |
Appendix 6 Survey of the UK public: social and health characteristics of participants in detail
Characteristic | Total (N = 1824; 100%): n | % of total | Read (N = 760; 42%): n | % within subgroup | Written (N = 147; 8%): n | % within subgroup
---|---|---|---|---|---|---
Working status | ||||||
Paid job: full time, ≥ 30 hours | 742 | 41 | 312 | 42 | 61 | 8 |
Paid job: part time, 8–29 hours | 221 | 12 | 93 | 42 | 14 | 6 |
Paid job: part time, < 8 hours | 13 | 1 | 6 | 46 | 0 | 0 |
Self-employed | 142 | 8 | 68 | 48 | 10 | 7 |
Full-time student | 126 | 7 | 56 | 44 | 11 | 9 |
Still at school | 10 | 1 | 4 | 36 | 0 | 0 |
Unemployed and seeking work | 61 | 3 | 22 | 36 | 8 | 13 |
Retired | 337 | 19 | 121 | 36 | 23 | 7 |
Not in paid job for other reason | 42 | 2 | 20 | 49 | 7 | 17 |
Not in paid job because of long-term illness | 42 | 2 | 19 | 45 | 4 | 10 |
Housewife | 87 | 5 | 39 | 45 | 10 | 12 |
Refused | 1 | 0.1 | 0 | 0 | 0 | 0 |
Income (£) | ||||||
< 4499 | 29 | 2 | 18 | 62 | 6 | 21 |
4500–6499 | 28 | 2 | 10 | 36 | 2 | 7 |
6500–7499 | 16 | 1 | 8 | 47 | 2 | 13 |
7500–9499 | 45 | 3 | 22 | 49 | 3 | 7 |
9500–11,499 | 44 | 2 | 19 | 43 | 4 | 9 |
11,500–13,499 | 62 | 3 | 26 | 42 | 5 | 8 |
13,500–15,499 | 54 | 3 | 25 | 46 | 6 | 11 |
15,500–17,499 | 53 | 3 | 24 | 45 | 5 | 9 |
17,500–24,999 | 138 | 8 | 61 | 44 | 12 | 9 |
25,000–29,999 | 139 | 8 | 62 | 44 | 17 | 12 |
30,000–39,999 | 168 | 9 | 70 | 41 | 16 | 10 |
40,000–49,999 | 123 | 7 | 47 | 38 | 7 | 6 |
50,000–74,999 | 141 | 8 | 62 | 44 | 9 | 6 |
75,000–99,999 | 72 | 4 | 37 | 51 | 3 | 4 |
≥ 100,000 | 76 | 4 | 45 | 60 | 8 | 11 |
No response/do not know/missing | 634 | 35 | 224 | 35 | 42 | 7 |
Social grade | ||||||
A | 58 | 3 | 23 | 39 | 5 | 9 |
B | 465 | 26 | 221 | 48 | 37 | 8 |
C1 | 537 | 29 | 225 | 42 | 49 | 9 |
C2 | 365 | 20 | 150 | 41 | 25 | 7 |
D | 254 | 14 | 87 | 34 | 17 | 7 |
E | 145 | 8 | 54 | 37 | 13 | 9 |
Health status | ||||||
Very good | 697 | 38 | 284 | 41 | 55 | 8 |
Good | 769 | 42 | 306 | 40 | 54 | 7 |
Fair | 269 | 15 | 117 | 44 | 23 | 9 |
Bad | 67 | 4 | 41 | 62 | 12 | 18 |
Very bad | 21 | 1 | 11 | 55 | 3 | 14 |
Refused | 1 | 0.1 | 1 | 100 | 0 | 0 |
Long-term illness, health problem or disability | ||||||
Yes | 373 | 21 | 183 | 49 | 39 | 10 |
No | 1449 | 80 | 576 | 40 | 108 | 8 |
Do not know | 1 | 0.1 | 0 | 0 | 0 | 0 |
Refused | 1 | 0.1 | 1 | 100 | 0 | 0 |
Appendix 7 Survey of the UK public: reading online feedback
Appendix 8 Survey of the UK public: read or gave online ratings or reviews about the NHS, individuals or drugs
Number of health services | n | % |
---|---|---|
Health services about which feedback was read | ||
0 or do not know | 1063 | 58 |
≥ 1 | 760 | 42 |
NHS total | 507 | 28 |
Individual total | 331 | 18 |
Drug/treatment/test total | 579 | 32 |
1 | 320 | 42 |
NHS only | 95 | 30 |
Individual only | 33 | 10 |
Drug/treatment/test only | 193 | 60 |
2 | 223 | 29 |
NHS and individual | 54 | 24 |
NHS and drug/treatment/test | 141 | 63 |
Individual and drug/treatment/test | 27 | 12 |
3 | 217 | 29 |
Health services about which feedback was written | ||
0 or do not know | 1677 | 92 |
≥ 1 | 147 | 8 |
NHS total | 105 | 6 |
Individual total | 69 | 4 |
Drug/treatment/test total | 69 | 4 |
1 | 79 | 53 |
NHS only | 38 | 48 |
Individual only | 17 | 21 |
Drug/treatment/test only | 24 | 30 |
2 | 39 | 26 |
NHS and individual | 22 | 57 |
NHS and drug/treatment/test | 16 | 40 |
Individual and drug/treatment/test | 1 | 2 |
3 | 29 | 20 |
Appendix 9 Survey of the UK public: wrote online feedback
Appendix 10 Survey of the UK public: frequency of reading and writing online feedback
Frequency | NHS organisations, read: n | % | NHS organisations, written: n | % | Individual people, read: n | % | Individual people, written: n | % | Drugs/treatments/tests, read: n | % | Drugs/treatments/tests, written: n | %
---|---|---|---|---|---|---|---|---|---|---|---|---
Daily | 7 | 0.4 | 1 | 0 | 4 | 0.2 | 1 | 0 | 4 | 0.2 | 1 | 0.1 |
Every couple of days | 7 | 0.4 | 3 | 0.1 | 5 | 0.3 | 3 | 0.1 | 7 | 0.4 | 5 | 0.3 |
Weekly | 23 | 1 | 5 | 0.3 | 25 | 1 | 5 | 0.3 | 25 | 1 | 1 | 0.1 |
Fortnightly | 21 | 1 | 1 | 0.1 | 17 | 1 | 1 | 0.1 | 25 | 1 | 9 | 1 |
Monthly | 57 | 3 | 8 | 1 | 45 | 3 | 8 | 1 | 97 | 5 | 21 | 1 |
Every few months | 172 | 10 | 13 | 1 | 104 | 6 | 13 | 1 | 238 | 13 | 32 | 2 |
Once in the last year | 220 | 12 | 37 | 2 | 131 | 7 | 37 | 2 | 183 | 10 | 1 | 0.1 |
Never | 1315 | 72 | 1719 | 94 | 1492 | 82 | 1755 | 96 | 1245 | 68 | 1755 | 96 |
Do not know | 2 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Appendix 11 Survey of the UK public: reading versus writing feedback
 | Never written feedback: n | % | Wrote feedback: n | % | Total: n | %
---|---|---|---|---|---|---
Never read feedback | 1044 | 57 | 19 | 1 | 1063 | 58 |
Read feedback | 633 | 35 | 128 | 7 | 761 | 42 |
Total | 1677 | 92 | 147 | 8 | 1824 | 100 |
Appendix 12 Survey of the UK public: websites on which online feedback was read and written
Websites | n | %a |
---|---|---|
Health review and ratings websites | ||
Readers (N = 760) | ||
NHS Choices | 373 | 49 |
WebMD | 114 | 15 |
Care Opinionb | 42 | 6 |
Drugs.com | 39 | 5 |
iWantGreatCare | 9 | 1 |
NetDoctor | 2 | 0.1 |
Writers (N = 147) | ||
NHS Choices | 51 | 35 |
Care Opinionb | 13 | 9 |
WebMD | 7 | 5 |
iWantGreatCare | 1 | 1 |
Drugs.com | 1 | 1 |
Social media and other websites | ||
Readers (N = 760) | ||
Google Reviews | 233 | 31 |
188 | 25 | |
Online forum(s) | 90 | 12 |
79 | 10 | |
Charity website | 3 | 0.3 |
Mumsnet | 5 | 1 |
Online news page | 3 | 1 |
Other | 53 | 7 |
Do not know | 68 | 9 |
Writers (N = 147) | ||
34 | 23 | |
Google Reviews | 20 | 14 |
Online forum(s) | 9 | 6 |
5 | 3 | |
Other | 13 | 9 |
Do not know | 8 | 6 |
Appendix 13 Attitudes and experiences of UK health-care professionals: questionnaire text
Appendix 14 Attitudes and experiences of UK health-care professionals: focus group topic guide
The focus groups will explore similar issues to the survey (i.e. use of online commentary and the opportunities, concerns and cautions that it generates).
In addition, the focus group facilitator will use the flexibility of the groups:
- to examine whether (and why) they think that there are particular issues facing online feedback for their professional group
- to share their ideas about which professions might be more or less enthusiastic about online feedback
- to reflect on what (if anything) they would want to know before acting on feedback from patients, and why.
Online patient feedback on experiences of NHS care, which is captured on internet reviews and ratings sites, is useful to help the NHS improve services?
Online patient feedback in social media (such as in Tweets on Twitter, or in posts on Facebook or a discussion forum like Mumsnet) is useful to help the NHS improve services?
Online patient feedback on experiences of NHS care which is captured on internet reviews and ratings sites is generally negative?
Online patient feedback in social media (such as in Tweets on Twitter, or in posts on Facebook or a discussion forum like Mumsnet) is generally negative?
You encourage your patients/their carers to leave feedback on internet reviews and ratings sites?
Your organisation feeds back internet reviews and comments left by patients/carers to you or your team?
You make a change to your practice because of feedback from internet reviews and ratings sites?
How representative of patient views do you think online patient/carer feedback is?
Have patients/carers ever left online patient feedback on an internet review or ratings site about an episode of care in which you were involved?
Have patients/carers ever left online patient feedback on an internet review or ratings site about you as an individual practitioner?
Appendix 15 Attitudes and experiences of UK health-care professionals: online patient feedback on experiences of NHS care is generally negative
Predictor variable | Internet reviews and ratings: OR | 95% CI | p-value | Social media: OR | 95% CI | p-value
---|---|---|---|---|---|---
Health professional type (doctor vs. nursea) | 1.887 | 1.324 to 2.689 | < 0.001 | 3.645 | 2.463 to 5.394 | < 0.001
Health professional setting (community vs. hospitala) | 2.835 | 2.142 to 3.753 | < 0.001 | 2.450 | 1.792 to 3.348 | < 0.001
Sex (male vs. femalea) | 1.040 | 0.742 to 1.459 | 0.819 | 0.881 | 0.598 to 1.300 | 0.525 |
Age (years) | ||||||
< 30 | 1.263 | 0.565 to 2.824 | 0.570 | 1.225 | 0.520 to 2.885 | 0.643 |
30–39 | 1.568 | 0.873 to 2.815 | 0.132 | 1.364 | 0.720 to 2.583 | 0.341 |
40–49 | 1.457 | 0.821 to 2.588 | 0.199 | 1.548 | 0.826 to 2.902 | 0.173 |
50–59 | 1.513 | 0.848 to 2.699 | 0.161 | 1.574 | 0.837 to 2.958 | 0.159 |
≥ 60a |
a Reference category.
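For readers who want to sanity-check rows like these, the odds ratios and confidence intervals follow the standard logistic regression relationships OR = exp(β) and 95% CI = exp(β ± 1.96 × SE). The sketch below is illustrative only; it recovers the SE from the published CI rather than refitting the model, using the doctor-versus-nurse row for internet reviews:

```python
import math

# Illustrative consistency check: OR = exp(beta), 95% CI = exp(beta +/- 1.96*SE).
# The SE is recovered from the published CI rather than by refitting the model.
or_point, ci_low, ci_high = 1.887, 1.324, 2.689  # doctor vs. nurse, internet reviews
beta = math.log(or_point)
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
print(round(math.exp(beta - 1.96 * se), 3))  # 1.324
print(round(math.exp(beta + 1.96 * se), 3))  # 2.689
```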
Appendix 16 Attitudes and experiences of UK health-care professionals: behaviours in relation to online feedback on internet reviews and ratings sites
Predictor variable | Encouraged patients/carers to leave feedback: OR | 95% CI | p-value | Made a change to practice: OR | 95% CI | p-value
---|---|---|---|---|---|---
Health professional type (doctor vs. nursea) | 0.537 | 0.359 to 0.803 | 0.002 | 0.328 | 0.229 to 0.470 | < 0.001
Health professional setting (community vs. hospitala) | 0.559 | 0.405 to 0.771 | < 0.001 | 0.550 | 0.414 to 0.730 | < 0.001
Sex (male vs. femalea) | 1.073 | 0.719 to 1.601 | 0.731 | 0.837 | 0.578 to 1.211 | 0.345 |
Age (years) | ||||||
< 30 | 0.957 | 0.407 to 2.252 | 0.921 | 1.125 | 0.532 to 2.377 | 0.758
30–39 | 0.745 | 0.384 to 1.445 | 0.384 | 0.811 | 0.442 to 1.488 | 0.499 |
40–49 | 0.900 | 0.480 to 1.688 | 0.742 | 0.917 | 0.514 to 1.636 | 0.770 |
50–59 | 1.032 | 0.556 to 1.916 | 0.921 | 1.283 | 0.733 to 2.248 | 0.383 |
≥ 60a |
a Reference category.
Appendix 17 Interview study with patients and their family members: patient interview topic guide
Interview protocol
The interview protocol will need to differ slightly depending on how prolific an online commentator the interviewee is. The protocol below is to be adapted accordingly. Centre the interviews on the participants’ experiences of creating and/or using online feedback (i.e. start with their experience and then prompt as and when necessary, and develop the protocol as the interviews progress).
Adapt the wording and order as appropriate.
Interviewees’ experiences of reading online feedback about health-care services and practitioners provided by other patients
Have you ever read comments posted online by other patients about NHS services or practitioners?
For interviewees who DO read online feedback
Can you tell me a little more about your experiences of reading feedback about health-care services or practitioners on the internet?
If the interviewee has read feedback online more than once you may like to ask them to focus on any experiences they feel are particularly important for them or that they think are specifically relevant to the research project.
For prolific online commenters, it would be useful to get them to talk both about the first time they read online feedback and the most recent time.
Below are some topics that it would be good to cover. Interviewee to be prompted if necessary:
- How did you find these comments online (e.g. search engine, a trusted website, etc.)?
- What platform or service were they on (e.g. NHS Choices, Patient Opinion, etc.)?
- Did you trust these comments? Why?
- Did they influence your decision(s) or health-care practices in any way? If yes, how and why? If not, why not?
- Were they useful or not? If so, in what ways?
- How would you improve on the online feedback currently available about the NHS and health-care practitioners?
For interviewees who DO NOT read online feedback
Can you tell me a little more about why you do not read online feedback on health-care services?
Some possible reasons that can be used as prompts:
- I do not trust other people’s experiences as a source of information because they are subjective.
- I think the feedback is not genuine.
- It makes me anxious.
- I have not got a choice about the service/practitioner, so why bother?
Interviewees’ experiences of providing feedback about health-care services and practitioners on the internet
As you know, we are interested in your experiences of creating and using internet technologies to provide feedback about health-care services and practitioners. Could you please start by telling me about your experience(s) of commenting about health-care services or practitioners on the internet?
For interviewees who DO comment online
If the interviewee has commented multiple times then ask them to focus on any experiences that they feel are particularly important for them or that they think are specifically relevant to the research project.
For prolific online commenters, it might be useful as a memory aid to get them to talk both about the first time they commented online and the most recent time.
As the interviewee talks about their experiences, we need to get information on the points listed below. Hopefully, much of this emerges organically, but if not then the interviewer may need to prompt accordingly:
How did you share your experience online (exploring actions)?
Below are some topics that it would be good to cover. Interviewee to be prompted if necessary:
- Do you post online regularly, or did you comment in response to a specific incident or experience?
- What device did you use (e.g. telephone, tablet, laptop, etc.)?
- What platform or service did you use (e.g. NHS Choices, Patient Opinion, etc.)?
- What did you think about this platform? Was it easy to use? What were its strengths and weaknesses?
- Did you provide free-text comments and/or fill in check boxes? What do you feel works better, and why?
- Was your feedback moderated or altered in any way? If yes, who moderated the feedback? How did they do this? How did you feel about this being done?
- Did you get a response? What was it, and were you satisfied with this response?
- Were you satisfied with your overall experience of providing online feedback? If yes, what aspects were particularly beneficial and effective? If not, what would you improve?
Why did you decide to provide online feedback about your experience (exploring motivations)?
Some possible reasons that you may wish to prompt them on, to:
- express their emotions
- improve the service
- help other patients
- thank a practitioner or service provider
- complain about a poor experience.
What were the consequences, if any, of you providing this feedback?
Was this what you had hoped for? If not, what would you have liked to have happened?
For interviewees who have provided online feedback multiple times, it would be useful to get them to reflect on how their experiences of commenting on health care have changed over time.
Some things you may want to prompt them on:
- How have the technologies changed?
- How have the responses from the NHS, health-care practitioners, other patients and the media changed?
For interviewees who DO NOT comment online:
Could you please tell me a little more about why you have never commented about health-care services or practitioners online?
Some possible reasons to prompt interviewees on if needed:
- Do not think it would be useful for others?
- Do not think it will make a difference?
- Any privacy concerns?
- Do not have the technical skills?
- Concerned it will have a negative effect on their care?
Recommendations for how the NHS can better respond to and use online feedback:
Based on your experiences do you have any recommendations for how the NHS can better respond to and use online feedback about health-care services and practitioners?
Some possible prompts:
- Improved technology (e.g. better search facilities, layout and formatting, etc.).
- Increased transparency about who is posting and how the information is used.
- Faster response rate.
- Improved integration of the feedback into the NHS.
Anything we haven’t asked about!
Is there anything we haven’t asked that you think is important, either in terms of your own experiences or recommendations you have for how the NHS can best use internet technologies for patient feedback?
In addition to the above, it would be useful to know the following. Some of this may emerge in the previous sections, but we may want to find this out before or after the interview.
- Information on technology use:
  - How long have they been using internet technologies?
  - Are they comfortable using internet technologies?
  - How do they usually access the internet (e.g. PC, laptop, telephone)?
  - How regularly do they go online?
  - Do they have any problems in terms of access to or use of the internet?
  - If they get support to use the internet, who provides this?
- Demographic information:
  - sex
  - age
  - health status and any health conditions
  - geographic location
  - employment
  - education.
Appendix 18 Attitudes and experiences of UK health-care professionals: online patient feedback on experiences of NHS care is useful to help the NHS improve services
Predictor variable | Internet reviews and ratings: OR | 95% CI | p-value | Social media: OR | 95% CI | p-value
---|---|---|---|---|---|---
Health professional type (doctor vs. nursea) | 0.101 | 0.070 to 0.146 | < 0.001 | 0.162 | 0.119 to 0.220 | < 0.001
Health professional setting (community vs. hospitala) | 0.315 | 0.242 to 0.410 | < 0.001 | 0.448 | 0.351 to 0.572 | < 0.001
Sex (male vs. femalea) | 1.057 | 0.783 to 1.426 | 0.718 | 1.018 | 0.760 to 1.364 | 0.904 |
Age (years) | ||||||
< 30 | 1.049 | 0.341 to 3.232 | 0.933 | 1.112 | 0.507 to 2.440 | 0.791 |
30–39 | 0.691 | 0.390 to 1.223 | 0.205 | 1.042 | 0.617 to 1.758 | 0.879 |
40–49 | 0.770 | 0.439 to 1.350 | 0.362 | 1.041 | 0.624 to 1.738 | 0.877 |
50–59 | 0.635 | 0.360 to 1.121 | 0.117 | 0.988 | 0.591 to 1.653 | 0.964 |
≥ 60a |
a Reference category.
Appendix 19 The INQUIRE publications and dissemination activities
Publications
Dudhwala F, Boylan A-M, Williams V, Powell J. What counts as online patient feedback, and for whom? Digit Health 2017;3:1–3. https://doi.org/10.1177/2055207617728186
van Velthoven MH, Atherton H, Powell J. A cross sectional survey of the UK public to understand use of online ratings and reviews of health services. Patient Educ Couns 2018;101:1690–6. https://doi.org/10.1016/j.pec.2018.04.001
Atherton H, Fleming J, Williams V, Powell J. Online patient feedback: a cross-sectional survey of the attitudes and experiences of United Kingdom health care professionals [published online ahead of print June 2 2019]. J Health Serv Res Policy 2019.
Dissemination activities
Project 1: stakeholder consultation and evidence synthesis
Boylan A-M. The Qualitative Challenge: Making Use of New Forms of Patient Experience Feedback. Oral presentation given at the CQC, London, UK, February 2017.
Boylan A-M, Powell J. Charting the Landscape of Online Patient Feedback: Initial Findings From Our Research. Oral presentation given at the NHS England Experience of Care Week, Webinar, March 2017.
Project 2: public survey
Atherton H. Survey of the General Public to Understand Use of Online Feedback on Health Services. Oral presentation given at the South West Society for Academic Primary Care Conference, University of Oxford, Oxford, UK, March 2017.
Project 3: qualitative study, patients’ experiences of creating/using online comment
Mazanderani F, Kirkpatrick S, Ziebland S. Caring About Care: Patient Perspectives on Providing Online Feedback About the NHS. Oral presentation given at the British Sociological Association Medical Sociology Annual Conference, York, UK, September 2017.
Mazanderani F, Kirkpatrick S, Ziebland S. Conversations About Care. Using Online Comments and Feedback to Improve NHS Services. Oral presentation given at NHS Scotland, Glasgow, UK, September 2017.
Mazanderani F, Kirkpatrick S, Ziebland S, Powell J. Conversations About Care; Patients’ and Their Family Members’ Perspectives of Ratings, Reviews and Feedback about NHS Healthcare Services. Oral presentation given at the Health Services Research UK Conference, Nottingham, UK, July 2018.
Project 4: survey and focus groups with health-care professionals
Atherton H. What are the Attitudes and Behaviours of Frontline NHS Staff to Online Feedback? Survey of Health Professionals to Understand Practice, Attitudes and Use of Online Feedback. Oral presentation given at the West Midlands Informatics Network, University of Warwick, Coventry, UK, January 2017.
Fleming J, Atherton H, Williams V, Powell J. Attitudes and Behaviours of Frontline NHS Staff to Online Feedback. A Survey of Health Professionals to Understand Practice, Attitudes and Use of Online Feedback. Oral presentation given at South West Society for Academic Primary Care Conference, Oxford, UK, March 2017.
Project 5: organisational case studies/ethnography
Dudhwala F. ‘Would You Recommend This Shoulder Surgery to Your Friends and Family?’ The Effect of Online Feedback and Ratings on Health Care Service Provision and Perception. Oral presentation given at the Society for Social Studies of Science, Boston, MA, USA, August 2017.
Dudhwala F, Woolgar S, Powell J. Whose Feedback is it Anyway? Enacting Agency in Online Health Experience Reports. Oral presentation given at the European Association for the Study of Science and Technology Conference, Lancaster, UK, July 2018.
Overarching
Online resource hosted by the Point of Care Foundation. URL: www.pointofcarefoundation.org.uk/resource/using-online-patient-feedback/ (accessed 6 September 2019).
First INQUIRE Symposium, Oxford, UK, December 2016.
Powell J. Health Inequality: TB, Trauma and Technology. Radio 4, Start the Week. June 2017. URL: www.bbc.co.uk/programmes/b08tvj71 (accessed 6 September 2019).
Powell J. INQUIRE: Improving NHS Quality Using Internet Ratings and Experiences. Briefing paper distributed to policy-makers. October 2017.
Powell J. INQUIRE: Improving NHS Quality Using Internet Ratings and Experiences. Oral presentation given to the NHS Choices Clinical Information Advisory Group hosted by NHS Digital, London, UK, April 2018.
Powell J. INQUIRE: Improving NHS Quality Using Internet Ratings and Experiences. Oral presentation given to the NIHR HSDR Commissioning Board, London, UK, May 2018.
Powell J, Dudhwala F. INQUIRE: Improving NHS Quality Using Internet Ratings and Experiences. Oral presentation given to the NIHR Dissemination centre at The King’s Fund, London, UK, June 2018.
Second INQUIRE Symposium, Oxford, UK, June 2018.
List of abbreviations
- AHP: allied health professional
- CI: confidence interval
- CQC: Care Quality Commission
- GP: general practitioner
- HCAHPS: Hospital Consumer Assessment of Healthcare Providers and Systems
- HSDR: Health Services and Delivery Research
- INQUIRE: Improving NHS Quality Using Internet Ratings and Experiences
- NIHR: National Institute for Health Research
- OR: odds ratio
- OxIS: Oxford Internet Surveys
- PALS: Patient Advice and Liaison Service
- PCPRG: Patients, Carers and Public Reference Group
- PPI: public and patient involvement
- PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
- RCN: Royal College of Nursing
- SSC: Study Steering Committee
- SSS: sanctioned, solicited and sought
- UUU: unsanctioned, unsolicited and unsought
Notes
- Summary of papers included in scoping review
Supplementary material can be found on the NIHR Journals Library report project page (www.journalslibrary.nihr.ac.uk/programmes/hsdr/140448/#/documentation).
Supplementary material has been provided by the authors to support the report and any files provided at submission will have been seen by peer reviewers, but not extensively reviewed. Any supplementary material provided at a later stage in the process may not have been peer reviewed.