Notes
Article history
The research reported in this issue of the journal was funded by PGfAR as project number RP-PG-0108-10023. The contractual start date was in April 2010. The final report began editorial review in October 2015 and was accepted for publication in August 2016. As the funder, the PGfAR programme agreed the research questions and study designs in advance with the investigators. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The PGfAR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
None.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2017. This work was produced by Priebe et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Introduction
Background
Approximately 1% of the population is affected by schizophrenia and related disorders, with particularly high rates in urban areas. 1 These disorders are associated with a range of disruptive symptoms such as disordered thoughts, delusions, apathy and hallucinations. They result in significant distress for patients and carers and account for a substantial societal burden. They also generate high costs to the NHS, through the need for ongoing intensive care and frequent hospitalisation, and to society at large, through loss of employment among patients and, frequently, their carers. 2,3 Currently, established pharmacological and psychological treatments have only limited effect sizes in the long-term treatment of schizophrenia and are associated with substantial rates of non-adherence. 4–6
Patients with severe forms of schizophrenia are now regularly cared for in the community. As a result of major reforms of and substantial additional investment in mental health care since the 1970s, multidisciplinary community mental health teams (CMHTs) have been set up throughout the UK and provide ongoing care. 7 More than 100,000 patients with these diagnoses are in the care of CMHTs (or other secondary care teams with similar functions) in England at any time. 8 Every patient has a designated clinician or care co-ordinator (usually a nurse or social worker by background) who has regular meetings (at least once per month) with the patient to assess their needs, engage them in treatment, discuss different treatment options and co-ordinate their care. Currently, the interaction in these meetings is based more on common sense than on evidence-based methods.
Although evidence suggests that a more positive patient–clinician relationship is associated with more favourable outcomes, there is no evidence-based intervention to achieve a better therapeutic relationship in community mental health care. 9–13 In addition, until relatively recently, there was no evidence-based method to structure the communication between patient and clinician in a way that would eventually lead to more favourable clinical outcomes. 14 A trial in the Netherlands15 found that asking patients what they wanted to discuss with their psychiatrists improved clinical decisions; however, this intervention did not impact on clinical outcomes. The FOCUS (Function and Overall Cognition in Ultra-high-risk States) trial16 in London involved patient- and clinician-rated outcomes being collected monthly and fed back to both staff and patients every 3 months. The intervention led to a reduction of treatment costs through reduced bed use, but did not improve subjective and other clinical outcomes. It is possible that these interventions did not have an effect on patient outcomes because they failed to influence clinician behaviour within routine clinical meetings. Any attempt to improve patient–clinician interactions in community care should structure action and behaviour change, instead of merely providing information.
The DIALOG intervention
To address this issue, DIALOG was developed as the first intervention to directly structure the patient–clinician interaction in community mental health care. In this technology-supported intervention, clinicians regularly presented patients with 11 fixed questions regarding their satisfaction with their (1) mental health, (2) physical health, (3) job situation, (4) accommodation, (5) leisure activities, (6) friendships, (7) relationship with their partner/family, (8) personal safety, (9) medication, (10) practical help received and (11) meetings with mental health professionals. Patients gave their answers by using a rating scale (1 = couldn’t be worse, 2 = displeased, 3 = mostly dissatisfied, 4 = mixed, 5 = mostly satisfied, 6 = pleased and 7 = couldn’t be better) and also indicated their needs for additional help in each area. Subsequently, their ratings were displayed graphically and could be compared with ratings from previous meetings.
This assessment provided a structure to the meetings and aimed to make them patient-centred and focused on change. In a cluster randomised controlled trial in six European countries, DIALOG was tested in the community treatment of patients with psychosis compared with treatment as usual. At the end of the 1-year study period, the intervention was associated with significantly better subjective quality of life, fewer unmet treatment needs and higher treatment satisfaction. 17
The effectiveness of the intervention may have been caused by three mechanisms. First, patients and clinicians were required to talk about eight life domains and three treatment aspects, which automatically provided a structured and comprehensive assessment of the patient’s situation and needs. There is widespread evidence in medicine that more comprehensive assessments can lead to more effective treatments. 18–20 Second, clinicians asked patients to indicate their satisfaction and wishes. This focused the communication on the patient’s views and made it patient-centred. Patient-centredness is widely seen as an indicator of positive clinical communication. 21 Third, patients were asked about their wishes for different treatment. This could facilitate a negotiation of those wishes and lead to shared decision-making, which has been shown to be associated with more positive outcomes across medicine. 22
The DIALOG intervention is not a specialist programme for a small number of patients, but a generic method that can be utilised in routine care throughout the NHS. It does not require the setting up of new services or restructuring of organisations. It can be implemented at relatively low cost, particularly as it does not require extensive training of clinical staff, and can benefit tens of thousands of patients at the same time. Thus, even small health and social gains for individual patients could add up to substantial public health effects. This also applies to potential cost savings. The FOCUS study16 mentioned previously suggested that the annual cost savings from regular outcome data feedback (which is also provided in DIALOG) were equivalent to £5172 per patient through reduced bed use. If replicated for only 20% of patients with schizophrenia and related disorders in community care in the NHS, the savings would exceed £100M every year.
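As a rough check of this projection, using the figure of more than 100,000 patients in the care of CMHTs cited above, the arithmetic is:

\[ 0.20 \times 100\,000 \ \text{patients} \times £5172 \ \text{per patient} \approx £103\text{M per year} \]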
The regular outcome data generated through DIALOG (i.e. patients’ ratings of satisfaction with life and treatment and requests for further care) can be used to evaluate services on a local, regional and national level. So far, attempts to establish outcome assessment in routine community mental health care have largely failed, partly because it is difficult to motivate clinicians and patients to rate and enter outcome data on a regular basis. 23–25 The DIALOG intervention provided a method to generate such data in a way that is meaningful to clinicians and patients, and was likely to facilitate routine outcome assessment in mental health care.
Next steps: the DIALOG+ intervention
Although the DIALOG intervention implemented a structured patient assessment within routine meetings, it did not provide any guide for clinicians on how to respond to patients’ ratings and requests for additional help. The psychotherapy literature shows that it is important for clinicians to have a clear model for their interventions. 13 We proposed that the DIALOG intervention could be extended to ‘DIALOG+’ by equipping clinicians with a manual to respond to patients’ statements. Elements of cognitive–behavioural therapy (CBT) and solution-focused therapy (SFT) could be used to develop such a guide, including defined responses to help patients explore their beliefs, feelings and behaviours, and facilitate self-management skills. The CBT model has a strong evidence base supporting its effectiveness,26–28 is a widely accepted model among health professionals, includes patient-based outcome monitoring and can be generically applied. SFT has some overlap with CBT; it appears particularly suited to the very brief intervention that would be required to expand DIALOG into DIALOG+; it is forward-looking, which is in line with the DIALOG approach; and, like CBT, it can be generically applied.
Before DIALOG and/or DIALOG+ could be rolled out, it was also necessary to ensure that the practical procedure of implementing the technology-supported intervention was as user-friendly as possible. DIALOG was designed using common sense, but with very limited systematic research on patients’ and clinicians’ experiences. We noted that certain features of the DIALOG intervention could be improved, for example how the questions are posed to patients, the labelling of the response options, how results are displayed and compared with previous ratings, and the usability of the software and hardware. Mobile technologies, such as tablet computers, have developed significantly since the original DIALOG trial, representing an opportunity to produce a more user-friendly application (app) that can be widely rolled out at low cost.
Objectives
Against this background, the overall aim of this programme was to make community mental health care of patients with schizophrenia more beneficial by structuring part of routine meetings with a manualised, technology-supported intervention. The specific objectives of the programme were:
-
to optimise the practical procedure and technology of DIALOG/DIALOG+ to make it more user-friendly, so that it is widely acceptable and sustainable in routine care in the NHS
-
to manualise elements of CBT and SFT that equip clinicians to respond effectively to the information provided by patients in the DIALOG intervention, to develop a corresponding training programme for this new ‘DIALOG+’ intervention, and to test the effectiveness of DIALOG+ in an exploratory randomised controlled trial
-
to test the cost-effectiveness of the DIALOG+ intervention and develop a protocol for a definitive trial.
To achieve these objectives, a mixed-method approach throughout the wider programme was taken. To inform the updated procedure and technology of DIALOG, video data from DIALOG sessions in the original trial were qualitatively analysed. Findings from this analysis informed a topic guide for focus groups with patients from the target population, to ensure that the views of the end users of the technology were taken into consideration in further developing the software. Recommendations based on the video analysis, together with feedback from the focus groups, informed the final specification for the new DIALOG software.
Concurrently, consultations were held with a network of three groups of expert consultants: three experts conducting research on the use of CBT with patients with psychosis; four experts in delivering training in SFT with private clients with mental health issues; and six leading community-based practitioners in the UK. These consultants informed the development of an extended, manualised intervention – ‘DIALOG+’ – to accompany the new software. The manual and a corresponding training programme were tested and refined in a small internal pilot involving two clinicians, who would later train others in the use of the new approach.
Having developed the new software and extended the intervention to DIALOG+, an exploratory cluster randomised controlled trial was selected as the method through which to gain evidence on the effectiveness and cost-effectiveness of the DIALOG+ intervention in improving outcomes for patients with schizophrenia or a related disorder. In order to gain a broader understanding that was not purely quantitative, focus groups with clinicians and patients were also conducted, to learn about the unique experiences of the participants with DIALOG+. Video data of DIALOG+ sessions were also analysed to provide a further qualitative perspective.
This mixed-methods approach ensured a comprehensive and detailed understanding of DIALOG+, which would inform a protocol for a definitive trial of the new intervention.
Chapter 2 Developing the DIALOG software
Study A1: analysis of video-recorded DIALOG sessions
Introduction
The use of software on hand-held computers during meetings between patients and clinicians is a new approach to communication in the community mental health setting, on which little research has been conducted. Therefore, it is important to consider what effect, if any, the use of these devices may have on patient–clinician interaction, particularly with respect to their therapeutic relationship. Although difficult to define, the therapeutic relationship has been conceptualised as consisting of three components in a psychometrically validated assessment scale. 29 The first is ‘positive collaboration’, referring to how well the patient and the clinician get on together. The second is ‘positive clinician input’, referring to the extent to which the patient perceives the clinician to be encouraging, understanding and supportive. The third is ‘non-positive clinician input’ or ‘emotional difficulties’, referring to the presence of problems in the relationship, for example a perceived lack of empathy.
The importance of the therapeutic relationship is well established. It has been documented that the quality of the relationship between therapist and patient is a consistent and strong predictor of outcome in many different forms of psychotherapy;29 furthermore, the quality of the relationship has been found to predict treatment adherence and outcomes for patients with a range of diagnoses and in different treatment settings. 30–33 Qualitative interviews and surveys have found that patients recognise and value the importance of the therapeutic relationship,34,35 and some research suggests that the relationship itself has the potential to be curative. 36 Findings on the moderators of the effect seen in the DIALOG intervention further emphasise the importance of the therapeutic relationship. 37 In their study, Hansson et al. 37 found that the effect of receiving DIALOG versus treatment as usual was greatest for dyads with a better baseline therapeutic relationship.
We have no reason to assume that introducing technology to patient–clinician meetings impacts negatively on the patient’s care, given the higher treatment satisfaction seen in patients who used DIALOG in the original trial. 17 Indeed, some authors propose that the use of both visual and auditory techniques may facilitate communication by improving patient attention and information assimilation and by reducing interference from psychiatric symptoms such as delusions in patients with psychosis. 38
The aim of the current study was to consider the impact of the original DIALOG procedure on the therapeutic relationship, using exploratory methods. Specifically, we aimed to investigate any problems with the version of DIALOG implemented in the original trial and to consider how a new version of DIALOG might be designed to safeguard this relationship.
Methods
Data
Video-recorded sessions of patient–clinician dyads using DIALOG were analysed for the purpose of this study. These were recorded as part of the original DIALOG study,17 which received a favourable ethical opinion from the National Research Ethics Service (NRES). A total of 13 video clips were analysed, involving 10 patients with schizophrenia or a related disorder and four clinicians in CMHTs in the East London NHS Foundation Trust (ELFT). The participants were diverse with respect to age, gender and ethnicity.
Analysis
Two researchers independently viewed the video clips and noted instances where they considered that the DIALOG procedure was potentially problematic in its impact on the therapeutic relationship, based on concepts in the literature. 39,40 The researchers compared their findings, noted discrepancies in their interpretations and, subsequently, reviewed their findings and discussed the discrepancies until consensus was reached. This was in accordance with the principle of intersubjectivity,41 in order to lend greater reliability to the study. In addition, the researchers presented their findings at regular intervals in an iterative process to senior members of the DIALOG research team, which included two experts in the therapeutic relationship [Stefan Priebe (SP) and Rosemarie McCabe (RMC)], who provided feedback and guidance.
Results
The findings of this exploratory analysis are presented in three categories. These categories are (1) technology issues, (2) content and procedure issues and (3) training issues.
Technology issues
These were issues surrounding the technology through which DIALOG was implemented, as distinct from the content of the DIALOG intervention.
Use of styluses
As part of the original DIALOG procedure, the users viewed the DIALOG software on a palmtop computer and input data by means of a stylus. The person holding the stylus was generally seen to dictate how the DIALOG procedure was conducted, in that they determined the pace of the meeting, deciding when to proceed from one domain to the next. In the majority of cases, one member of the dyad was seen to hold the stylus for the duration of the meeting, effectively maintaining control of the meeting, with little opportunity for working collaboratively.
When it was the patient who held the stylus, this could lead to them proceeding very quickly from one domain to the next, with little opportunity for the clinician to discuss the patient’s reports and raise issues that they felt were important and relevant. In one video clip, the clinician was seen to ask the patient holding the stylus to slow down. In another, the patient was seen to use DIALOG in silence, with the role of the clinician in the procedure rendered redundant.
When it was the clinician who held the stylus, this could lead to them proceeding from one domain to the next without allowing the patient to elaborate on their situation when they felt inclined. There were instances of the clinician proceeding while the patient was still discussing the previous domain, effectively shutting down the patient’s communication, impacting negatively on the therapeutic relationship.
The DIALOG intervention was designed to be a collaborative process between the patient and the clinician. The use of a stylus created an undesirable situation of a leader and a follower, and appeared to be a barrier to the therapeutic relationship.
Restrictive palmtop keyboard
Patients were asked if they would like additional or different help a total of 11 times per session as part of the original DIALOG procedure. When patients selected ‘yes’, the patient or the clinician (usually the latter) typed a description of the help requested into a text box in the DIALOG software. This was seen to be problematic, as it required the clinician to put down the stylus and start typing on a keyboard that was considerably smaller than most laptop keyboards in wide use today. As a result, the clinician often typed using only their two index fingers, causing delays. In addition, they tended to hunch over and stare at this small keyboard in order to type in the request, with the result that the clinician’s attention was focused fully on the device. This was seen to impact negatively on the therapeutic relationship.
Content and procedure issues
These were issues surrounding elements of the DIALOG intervention itself, rather than the technology through which it was administered.
Problems with Likert scale labels
In the original software, the labels of the Likert scale were 1 = couldn’t be worse, 2 = displeased, 3 = mostly dissatisfied, 4 = mixed, 5 = mostly satisfied, 6 = pleased and 7 = couldn’t be better. These labels appeared to be neither sufficiently distinct from one another nor sensitive in capturing information: in most cases, patients were seen to choose one positive-oriented rating and one negative-oriented rating and then use these two ratings alone to indicate, more generally, whether they were satisfied or dissatisfied. In particular, the differences between ‘mostly satisfied’ and ‘pleased’, and between ‘mostly dissatisfied’ and ‘displeased’, may have been too subtle for patients to distinguish. As the language used to label the scale was not consistent (‘satisfied’ vs. ‘pleased’ vs. ‘couldn’t be better’), patients did not memorise the labels easily and appeared to forget which label corresponded with which numerical rating. The grammatical structure of the labels ‘couldn’t be better’ and ‘couldn’t be worse’ was notably complex and appeared to confuse patients. Thus, on the whole, the scale was not very meaningful or accessible to them. This rendered the provision of ratings somewhat redundant, and patients’ resulting inertia in completing the ratings had the potential to impact negatively on the therapeutic relationship.
Documentation of requests for additional help
Linked to the technology problem of the restrictive palmtop keyboard described previously was the procedural expectation for clinicians to constantly document patients’ needs for additional help throughout the meeting. Clinicians were seen to type extensive descriptions of patients’ requests for help, causing them to be distracted with documentation for lengthy periods. In many cases, clinicians appeared to be more focused on getting the documentation right than on paying attention to the patient. Eye contact was frequently lost, as were the pace and general momentum of the meeting. Long pauses were common and, in some cases, there was complete silence for extended periods of time.
In one clip, the patient was seen to yawn repeatedly as the clinician typed into the computer. However, the patient had not made any explicit request for help beforehand, suggesting that the clinician was engaged in some other form of note-taking, not prescribed by the intended DIALOG intervention, which distracted him from engaging with the patient. In another clip, the patient displayed bored and restless body language (fidgeting, looking around, shuffling in his seat) as the care co-ordinator typed into the device, again to an extent that was deemed excessive.
Lengthy documentation by the clinician of needs for additional help should not be part of the updated DIALOG procedure. Rather, any such documentation should take place at the end, after the administration of the assessment. Otherwise, the potentially negative impact on the therapeutic relationship of typing at length during meetings is significant. Clinicians should not use available text boxes in software for other routine documentation.
Review of ratings
Although the DIALOG software featured a function whereby clinicians could review the ratings submitted by the patient and compare these ratings with previous sessions – including an overall picture across domains as well as the option to examine domains individually – this function was not automatic in the software. As a result, it was rarely used. Clinicians failed to initiate a review of the domains, despite instructions from the original research team to conduct such reviews routinely as part of the procedure, in the company of the patient, for the patient’s benefit. In the few cases in which such reviews were conducted, clinicians appeared to be at a loss as to how to comment on ratings, particularly when ratings had not changed much. As a result, such reviews were not particularly informative for patients, which may have undermined the exercise of undertaking the DIALOG intervention. If patients believe that such activities are not worthwhile, they could potentially be damaging to the therapeutic relationship.
Training issues
These were matters that were not related to the technology or procedure of the intervention, but related to the specific instructions and training provided to clinicians regarding how they should use DIALOG.
Perception of DIALOG as a task for completion
In many of the video clips, the clinician was seen to place undue emphasis on acquiring a rating of satisfaction, rather than facilitating discussion. That is, the clinician was focused on identifying where the patient lay on the seven-point Likert scale rather than considering the information arising from such a rating. Often, the interaction seemed as though the clinician was administering a questionnaire, rather than conducting a structured conversation.
In some cases, the patient appeared to have more to say beyond simply providing a rating but did not get the opportunity, or found difficulty in expressing themselves in terms of a rating without elaborating on their situation first to provide context. However, clinicians repeated questions when the original question did not result in the patient providing a rating. In most cases, the clinicians restricted themselves to the two core questions (i.e. satisfaction and help needed) for each of the 11 domains, without probing or encouraging elaboration. The consequence of this was that some of the domains were simply assessed, but not addressed in any meaningful way. In a few cases, the patient was seen to describe problematic situations, warranting further discussion between the two of them, only for the clinician to proceed directly to the next question. It appeared as though clinicians were uncomfortable deviating from the framework of the DIALOG tool.
Positioning of the device
In some of the video clips, the patient was seen holding the device on their lap. In others, the clinician was seen to have placed the device on a desk that was outside the line of vision of the patient. Neither of these scenarios was desirable. Patients holding the device on their lap were often seen to stare down at the device, without directing their gaze and attention to the clinician. The clinician frequently showed difficulty in engaging the patient and pacing the meeting, without having control of the device. This is likely to impact negatively on the therapeutic relationship. In the case of the clinician having the device on a desk, they were often seen to turn away from the patient while tending to the device. In a few instances, the position of the device on the desk enabled the clinician to avoid certain topics raised by the patient, by maintaining their gaze on the device and not engaging with the patient’s concerns.
Clinicians should be trained to keep the device in a shared space in order to facilitate a collaborative process in which both clinician and patient can be active partners.
Reading of response options aloud by the clinician
In some cases, particularly when the device was in the clinician’s possession and outside the gaze of the patient, the clinician was seen to read out the various response options on the seven-point Likert scale to the patient. This caused long delays and, generally, appeared to be monotonous for both the patient and the clinician, seemingly impacting negatively on their therapeutic relationship.
Clinicians omitting elements of the intervention
In some cases, the clinician was seen to omit important aspects of the intervention. This ranged from neglecting to ask patients whether or not they needed additional help and what help they would like, to not indicating that the satisfaction questions were to be answered with a prescribed set of response options, to omitting the review of ratings, as previously discussed. Any future redesign must consider how omission of key processes in the DIALOG procedure can be prevented.
Discussion
In the original DIALOG study, the DIALOG intervention was found to lead to improvements in subjective quality of life, treatment satisfaction and level of unmet needs in patients with psychosis. 17 These benefits were achieved despite the problems identified by the current study with respect to the therapeutic relationship, which suggests that any redesign of the DIALOG procedure should be conservative. Nevertheless, the data suggested that the practical procedure of DIALOG can be further optimised.
The use of a touch screen that eliminates the need for a stylus may be helpful in maintaining the therapeutic relationship. This technology would allow multiple parties to interact and engage with the DIALOG software, thereby reducing the issue of the balance of control between the patient and the clinician. The technology should be sufficiently large for both the patient and the clinician to view the screen. Ideally, it should be portable and easily shared, so that it is not strictly in the possession of one or the other.
Revisions to the Likert scale are warranted in DIALOG. The seven-point scale might be retained, but with simpler labels that are clearly hierarchical and quantitatively distinct from one another, in logical increments. Potential labels can be generated and explored in focus groups with patients. The Likert scale should use consistent language throughout (i.e. subtle distinctions between markers such as ‘mostly satisfied’, ‘pleased’ and ‘couldn’t be better’ should be avoided). This will be more intuitive for patients with psychosis and may help them to complete ratings with less difficulty, thereby removing the barrier to communication between patient and clinician that sometimes accompanies defined Likert scales.
Training issues are less straightforward. Although it is possible to train clinicians in the use of the DIALOG approach, this creates a considerable expense for an otherwise inexpensive intervention. Although a manual is clearly needed to accompany the DIALOG software, describing explicitly where the clinician should position the device, how they should address each domain, etc., it can never be guaranteed that this manual will be read or adhered to. It may be necessary to design the software in such a way that the behaviour of the clinician is prescribed by the software. For example, a review of the ratings, which was often omitted by clinicians, should be presented automatically. The software should also prescribe that each question on need for additional help (another element that was frequently omitted by clinicians) is answered before proceeding to the next question. Importantly, the software should remove the expectation (and the opportunity) for clinicians to record lengthy descriptions of patients’ reports and requests.
In summary, the DIALOG tool facilitates structured interaction between clinicians and patients with psychosis in the community mental health setting. However, adjustments to the procedure are warranted to safeguard a positive interaction, and maintain the therapeutic relationship. Further research involving focus groups with patients treated in CMHTs will assist in taking key decisions, as described in the subsequent study A2.
Study A2: focus groups exploring preferences for updated DIALOG software
Introduction
The previous substudy involved analysis of video data collected in the original DIALOG trial, leading to a preliminary brief for the updated DIALOG software (see Study A1: analysis of video-recorded DIALOG sessions). Following this analysis, a software developer joined the research team and produced options for an updated version of the DIALOG software based on the findings. The present study sought to gain feedback from the end users (i.e. patients in CMHTs) on mock-ups of the DIALOG software, to inform the final specification. Specifically, we explored how user-friendly patients found the software, which version of the software interface they thought was best and what changes to the software – particularly in terms of content – they felt were necessary. A full description of the development of the finalised technology is available in the subsequent study A3 (see Study A3: technology development).
Method
Study design
This study was a qualitative study involving focus groups. This methodology was selected in order to facilitate a permissive, non-threatening environment among peers;41 synergy and spontaneity between interacting participants;42 and, as a consequence, greater elaboration relative to individual interviews. 43 The study received a favourable ethical opinion from the NRES (Chelsea, London; reference number 11/LO/1779).
Participants
Participants were patients who were care co-ordinated in CMHTs in ELFT. Consultation with patients in these focus groups ensured that there was patient and public involvement in the development of the software, a tool with the potential to be used widely across the UK. Participants learned of the study through an information sheet provided by their care team. This information sheet had been reviewed for user-friendly language by a service user reference group comprising three service users with experience of psychosis and treatment in a CMHT. Interested participants subsequently met a researcher to learn more about the study. The inclusion criteria were (1) experience with a designated care co-ordinator as part of routine care in the community, (2) having a primary diagnosis of schizophrenia or a related disorder [International Statistical Classification of Diseases and Related Health Problems, Tenth Edition (ICD-10): category F20–29],44 (3) aged between 18 and 65 years, (4) having a sufficient command of English to understand the instructions and questions and (5) having the capacity to provide written informed consent. A total of 17 participants were recruited to six focus groups, comprising three participants in five of the groups and two participants in the sixth group. Participants were diverse with respect to age, gender and ethnicity across groups. Five patients had verbally consented to each group prior to the event; however, not all of them arrived on the day of the focus group. The 17 participants provided written informed consent on the day of the group, with no participants withdrawing their consent during the group.
Procedure and data collection
A semistructured interview schedule (see Appendix 1) was developed by the research team, referring directly to the content of the DIALOG intervention (i.e. the questions on satisfaction, the response options, the additional help question, etc.), along with three mock-ups of the DIALOG software, demonstrated on iPads. The schedule was then piloted with the service user reference group and subsequently refined. Groups were conducted between May and August 2011. All groups were conducted by an experienced facilitator and a co-facilitator, audio-recorded and later transcribed verbatim. The three software interface options were presented in a different order in each focus group, to mitigate order effects. Participants were observed using the software in addition to being interviewed in a group. Once transcripts were produced, they were anonymised and the recordings were destroyed. Participants received £15 for participation and their travel expenses were reimbursed. Following the sixth focus group, the research team agreed that data saturation had been reached, given that an initial review of the transcripts demonstrated that similar data had arisen across the six sessions.
Analysis
The transcripts were independently reviewed by the facilitator and a co-facilitator. For each aspect of the DIALOG intervention discussed throughout the groups (e.g. the seven response options, the preferred layout of the software), the researchers summarised the opinions of the participants using verbatim quotes, representing consistency across groups, as well as differences of opinion. These summaries were presented to the wider research team and the service user reference group and implications for the software were discussed. A final decision on the content, features and preferred interface for the updated DIALOG software was then reached. The software was then developed for the iPad (see Study A3: technology development) in preparation for using the software in the implementation of the new, extended DIALOG+ intervention (see Chapter 3), to be tested in a randomised controlled trial (see Chapter 4).
Results
Participants offered their opinions from a patient’s perspective on how best to deliver the DIALOG and DIALOG+ interventions using the new DIALOG software. Their views regarding specific features of the software are presented below, accompanied, where helpful, by verbatim quotes illustrating the design issue under consideration.
Core interface
Of the three design layouts presented at the focus group meetings, a strong majority of participants favoured the design depicted in Figure 1 (interface A). This design presents one ‘active’ question pertaining to satisfaction at any one time, with a corresponding Likert scale to be rated, as well as a question pertaining to additional help. ‘Mental health’ is the active domain in Figure 1. The other 10 domains of the questionnaire are visible on the screen, but in truncated form and inactive. In order to proceed to any one of these, the user presses on the desired domain to activate it, whereupon it appears in large form on the screen, along with the accompanying question on satisfaction, the Likert scale to be rated and the additional help question. Mental health then becomes inactive, as do the other nine domains. The values from any previously rated domains remain visible in truncated form, with a checkbox indicating when there has been a request for additional help.
Participants favoured this design over two alternatives. The first of these was interface B, which was similar to interface A; however, all domains were presented in large form, with the Likert scale and additional help question also visible in large form for each, meaning that there was room for only a few domains on the screen at a time. The user made a hand gesture on the touch-screen interface to proceed further down the list of domains or back towards the top. Interface C featured each domain appearing individually, one at a time on the screen, with considerably large text presented for the satisfaction question, the domain, the Likert scale and the help question. A hand gesture allowed the user to proceed to the subsequent domain or return to the previous one.
Participants favoured interface A because it presented the opportunity to tend to only one domain at any given time. Yet it was possible to see the other domains and the expectation that they were to be rated shortly was therefore understood. This gave participants a sense of structure to the procedure:
This one [interface A] is better because I know which questions are coming up. I like to know what’s coming, it helps me concentrate.
Participant (P)3, focus group (FG)3
This one, this one is easier . . . With the last one [interface B] you were moving all over the place and all the questions were all together . . . I got a bit lost . . . This way, it’s one at a time . . . I can see what I’m doing.
P1, FG2
I like how this builds up the answers from top to bottom. I know what’s going on, it’s clear, it doesn’t disappear . . . I know what I’m looking at . . . It’s good.
P2, FG6
Participants appreciated that interface C was very user-friendly in having the largest text, but preferred to be able to see the upcoming domains to be rated as well, rather than one domain at a time. They felt that interface B was ‘too busy’ in its design and found interface A far clearer.
User-friendliness
All of the participants responded positively to the iPad platform used for demonstration purposes. The majority of participants managed to operate the touch screen and enter ratings intuitively, without instruction. Participants said of using DIALOG:
It was pretty self-explanatory . . . Nothing complicated.
P3, FG5
Even though I’ve never used one . . . It’s straightforward.
P1, FG4
Participants offered feedback on two major features of the interface. The first was regarding the design of the ratings comparison feature. Current ratings were represented in dark blue, with previous ratings represented in light blue. Participants pointed out that it was difficult to distinguish between the two, and that clashing colours should be used instead. Some participants could not tell which rating was the current rating and which was the previous one.
The second was regarding a ‘reorder’ function in the software. The research team had proposed an idea whereby, with the push of a button, patients’ ratings could be reordered subsequent to submission, so that the lowest-scoring ratings appeared on top and the highest-scoring ratings appeared on the bottom. This was intended to help the patient and the clinician identify priorities for further discussion. However, when this was piloted with participants, it transpired that reordering was confusing to them. It was also noted that reordering seemed to undermine explicit requests for additional help.
Posing of the satisfaction questions
Participants had mixed views on how the 11 satisfaction questions should be posed to the patient. They considered the options of the software presenting each domain as (1) a stand-alone topic such as ‘mental health’; (2) a statement such as ‘I am ______ with my [mental health]’; (3) the question ‘How satisfied are you with your [mental health]?’; (4) the question ‘How happy are you with your [mental health]?’ and (5) the question ‘How do you rate your [mental health]?’.
A minority favoured presenting the domains as stand-alone topics:
That way the care co-ordinator has to ask you the question . . . That means they’re talking to you. Not just reading off the screen. I think that might be better personally.
P2, FG2
It means there’s less stuff on the page. Less words . . . Might be easier.
P3, FG2
However, the majority of participants favoured presenting each domain as a question:
[Not having a question] would be an issue for me because I have problems with short-term memory. It helps me to focus on what the question actually is.
P1, FG1
I want to be sure about the question. I wouldn’t understand otherwise . . . Mental health . . . Mental health what? Do you know what I mean?
P2, FG1
None of the participants favoured presenting the domain as a statement to be completed. One participant said:
That way it doesn’t seem like a conversation. It’s not my care co-ordinator asking me a question.
P3, FG4
The majority opposed formulating the question as ‘How do you rate’, with one participant describing it as ‘too scientific’ and another noting:
That’s something completely different . . . I wouldn’t rate any of those things high . . . Accommodation and that . . . But am I satisfied? Yeah, they’ll do for now. I’m happy with them. It’s more about me. I don’t rate it as 5-star accommodation . . . Do you see!
P1, FG4
Similarly, few participants favoured formulating the question as ‘How happy are you with . . .’ The same participant noted a distinction between ‘satisfied’ and ‘happy’:
Again, it’s an issue. How happy are you, how sad are you . . . That’s different . . . I am satisfied with my physical health. I might have a funny leg and I’m not happy about it. But I’m satisfied . . . I know the situation. I know what I can do and what I can’t. I know how to manage it, it’s fine, I’m satisfied about it.
P1, FG4
The majority favoured the formulation ‘How satisfied are you with your mental health?’:
Makes sense. It’s about how satisfied am I . . . Do I need to work on it. Do I need to improve it, for myself.
P3, FG4
Additional help question
Participants were asked about how the additional help question of the DIALOG tool should be posed. A small majority said that the phrasing of this question should be amended to ‘Do you need more help in this area?’.
When invited to consider the phrasing ‘would you like’, a small minority of participants said that they preferred this option and noted that it was ‘friendlier’, which would make them more likely to ask for help if they needed it. However, the majority said that the word ‘need’ was better, in that it more accurately reflected the patient’s situation of requiring help, rather than simply desiring it. One participant said:
People are motivated by different words. I personally am more motivated by the word ‘need’. It’s very functional. I’m a very functional type of a person.
P3, FG1
Participants did not view one alternative option, ‘extra’, favourably:
Simpler . . . But, I mean, ‘extra’ help . . . It sounds like ‘all the extras’ – the stuff you don’t need. It sounds like you’re getting extra but you don’t necessarily need it. I don’t like that.
P2, FG2
I don’t like the connotation of ‘extra’ . . . sounds greedy.
P3, FG1
Order of the domains
Participants had differing views on how the domains should be ordered. One participant, when presented with the current order of the domains, remarked:
I can see what you’ve done there . . . Start off with mental health, then all the daily life, then all the stuff that happens at the team . . . I can see the logic to it. I don’t think it needs changing.
P3, FG4
Although a small majority held a similar view, there were alternative suggestions:
Maybe medication should be after mental health.
P1, FG5
I don’t think so. Mental and physical health, keep them together. Medication could be a longer discussion . . . Leave it ‘til later.
P2, FG5
Maybe not start off with mental health? It’s a big one.
P1, FG6
It is . . . But it influences everything else.
P2, FG6
With such diverse opinions, there was no strong consensus in any of the groups offering an alternative to the existing order. However, one participant noted:
If you make it (the software) so that you can answer whichever one you like [first] . . . You’ve not got a problem.
P3, FG4
Labelling of the response options
Having been asked to consider the best response options for the seven points of the DIALOG scale, participants had different individual preferences and did not reach consensus in the groups. However, all of the participants agreed that the current labels needed to be revised. These labels were 1 = couldn’t be worse, 2 = displeased, 3 = mostly dissatisfied, 4 = mixed, 5 = mostly satisfied, 6 = pleased, 7 = couldn’t be better.
‘Mixed’ is a problem. I don’t like that because of the idea of mixed diagnosis. It doesn’t belong . . . I don’t like the connotation.
P3, FG1
‘Couldn’t be better’ is just confusing. The negative in there . . . It doesn’t help. It’s complicated.
P1, FG1
I think that ‘mostly’ sounds too strong coming straight after ‘mixed’. That’s a huge jump to my mind.
P1, FG3
Mostly satisfied, then pleased, then couldn’t be better . . . You’d be better off just using the same name for all of them, but have a different amount, you know like . . . A bit satisfied. Then whatever else . . . The next strongest sounding word . . . really dissatisfied.
P2, FG6
In keeping with this, ‘satisfied’ and ‘dissatisfied’ were chosen as the consistent terms for the positive and negative sides of the scale, respectively. Of the various suggestions of the group, ‘fairly’, ‘very’ and ‘totally’ were decided on by the research team. More colloquial terms suggested by the focus groups (e.g. ‘a bit’ and ‘really’) were ruled out on the basis that they were difficult to quantify specifically.
One participant suggested ‘in the middle’ for the centre point of the scale. The research team agreed that this was the most direct way of describing the centre point, more so than the alternative suggestions – ‘neutral’, ‘good and bad’ and ‘neither’.
Discussion
The current study sought to gain the views of patients in CMHTs on mock-ups of the DIALOG software, and arrive at a user-friendly interface that would be suitable for routine use in community mental health care. The resulting specification for the software was as follows.
On initiating a session with a patient, the clinician and the patient are presented with the first of the 11 domains, mental health. The domain is presented in the context of a full question – ‘How satisfied are you with your mental health?’. A Likert scale is visible underneath, with the following labels: 1 = totally dissatisfied, 2 = very dissatisfied, 3 = fairly dissatisfied, 4 = in the middle, 5 = fairly satisfied, 6 = very satisfied and 7 = totally satisfied. There is a further question underneath the domain – ‘Do you need more help?’ – which is to be answered with ‘yes’ or ‘no’. The remaining 10 domains are visible underneath, in truncated form. After (1) mental health, the remaining domains are presented in the following order: (2) physical health, (3) job situation, (4) accommodation, (5) leisure activities, (6) relationship with partner/family, (7) friendships, (8) personal safety, (9) medication, (10) practical help received and (11) consultations with mental health professionals.
In order to proceed to a different domain, the clinician presses on that domain from the list of domains appearing underneath the currently active domain. The newly activated domain (in this case, physical health) is now large on the screen, with an accompanying question on satisfaction, a Likert scale and an additional question on needs for more help, with all other domains truncated. Responses to all previously completed domains, including requests for additional help, are still visible and gradually build up a general overview of the assessment.
From the second use of DIALOG+ onwards, the clinician and the patient may compare the current ratings with those of any previous session. On pressing the ‘compare’ button, a timeline appears at the top of the screen, showing the dates of all the previous meetings. When the clinician presses on any one of these dates, the ratings from that date appear, in orange, next to the ratings from the current session, in blue.
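To make this specification concrete, the following is a minimal sketch, in Swift, of the session data implied by the interface described above: the 11 domains in their agreed order, the agreed seven-point scale, the yes/no additional help question and the comparison with a previous session. The type and property names are illustrative assumptions only; the software actually delivered was written in Objective-C (see Study A3: technology development).

import Foundation

// Illustrative sketch only: names are assumptions, not the delivered DIALOG+ code.

/// The 11 life and treatment domains, in the order agreed in the focus groups.
enum Domain: String, CaseIterable {
    case mentalHealth = "mental health"
    case physicalHealth = "physical health"
    case jobSituation = "job situation"
    case accommodation
    case leisureActivities = "leisure activities"
    case partnerFamily = "relationship with partner/family"
    case friendships
    case personalSafety = "personal safety"
    case medication
    case practicalHelp = "practical help received"
    case consultations = "consultations with mental health professionals"
}

/// The agreed seven-point scale (1 = totally dissatisfied ... 7 = totally satisfied).
enum Satisfaction: Int {
    case totallyDissatisfied = 1, veryDissatisfied, fairlyDissatisfied,
         inTheMiddle, fairlySatisfied, verySatisfied, totallySatisfied
}

/// One domain as shown on screen: 'How satisfied are you with your ...?'
/// plus 'Do you need more help?' (yes/no); nil until answered.
struct DomainResponse {
    let domain: Domain
    var satisfaction: Satisfaction? = nil
    var needsMoreHelp: Bool? = nil
}

/// One DIALOG session; earlier sessions can be selected from the timeline for comparison.
struct Session {
    let date: Date
    var responses = Domain.allCases.map { DomainResponse(domain: $0) }

    /// Pairs current and previous ratings per domain, as shown by the 'compare' view.
    func compared(with previous: Session) -> [(domain: Domain, current: Satisfaction?, previous: Satisfaction?)] {
        zip(responses, previous.responses).map { pair in
            (domain: pair.0.domain, current: pair.0.satisfaction, previous: pair.1.satisfaction)
        }
    }
}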
Following the results of concurrent studies in this programme concerned with the development of a manualised guide for a new ‘DIALOG+’ intervention, it was agreed that the software should support not only the original DIALOG intervention but also DIALOG+. This decision was strongly favoured by the Programme Steering Committee. As a result, the software was further developed (see Study A3: technology development).
Study A3: technology development
Introduction
Following substudy A1 (see Study A1: analysis of video-recorded DIALOG sessions), an initial brief for the specification of the new DIALOG software was developed. This brief stipulated the following requirements:
-
The software should run on a portable device, which clinicians can easily bring to home visits with patients, and can be shared and passed back and forth.
-
The software should run on a device with a large screen to facilitate the patient’s engagement with the tool.
-
The software should run on a touch-screen-operated device to facilitate the patient’s engagement with the tool.
In anticipation of a trial involving the use of this software in real-world mental health services, further requirements were added to the brief by the research team (an illustrative sketch of these requirements follows the list):
-
The software should be fully operational even when an internet connection is not available.
-
Data should be stored safely and encrypted inside the device.
-
When an internet connection is available, the data should be uploaded to a server and erased from the device’s memory.
-
Updates and security fixes should be easily distributed.
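The following is a minimal sketch, in Swift, of how the storage and synchronisation requirements above might be met on an iOS device. The class name, file layout and server endpoint (example.org) are assumptions for illustration only and do not describe the software that was actually built; local encryption here relies on iOS file protection.

import Foundation

// Illustrative sketch of the offline-first requirements above; the server URL,
// type names and JSON payload are assumptions, not the actual DIALOG implementation.
final class SessionStore {
    private let directory: URL
    private let serverURL = URL(string: "https://example.org/dialog/sessions")! // hypothetical endpoint

    init() throws {
        let docs = try FileManager.default.url(for: .documentDirectory,
                                               in: .userDomainMask,
                                               appropriateFor: nil,
                                               create: true)
        directory = docs.appendingPathComponent("pending-sessions", isDirectory: true)
        try FileManager.default.createDirectory(at: directory, withIntermediateDirectories: true)
    }

    /// Works with no internet connection: the completed session is written to the
    /// device's storage with iOS file protection, so it is encrypted at rest.
    func save(_ payload: Data, sessionID: UUID) throws {
        let file = directory.appendingPathComponent("\(sessionID).json")
        try payload.write(to: file, options: [.atomic, .completeFileProtection])
    }

    /// When a connection is available, each pending session is posted to the server
    /// and the local copy is erased only after the upload succeeds.
    func uploadPendingSessions() {
        let files = (try? FileManager.default.contentsOfDirectory(at: directory,
                                                                  includingPropertiesForKeys: nil)) ?? []
        for file in files {
            guard let payload = try? Data(contentsOf: file) else { continue }
            var request = URLRequest(url: serverURL)
            request.httpMethod = "POST"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")
            URLSession.shared.uploadTask(with: request, from: payload) { _, response, error in
                let ok = (response as? HTTPURLResponse).map { (200..<300).contains($0.statusCode) } ?? false
                if error == nil && ok {
                    try? FileManager.default.removeItem(at: file) // erase the local copy once safely uploaded
                }
            }.resume()
        }
    }
}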
We sought to develop the new DIALOG software and a corresponding server for storage of data in accordance with this brief. This was an iterative process that took place concurrently with substudy A2 (see Study A2: focus groups exploring preferences for updated DIALOG software). Thus, study A2 informed the development described in this substudy also.
Method
We formed a DIALOG software development team comprising the principal investigator (SP); RMC; Pat Healey (PH), Professor of Human Interaction at the School of Electronic Engineering and Computer Science, Queen Mary University of London (QMUL), London, UK; a research assistant on the programme; and a software developer. The team met weekly between October 2011 and September 2012 to discuss issues surrounding development of the new software.
Informed by the findings of studies A1 and A2 and the resulting specification, it was agreed that the most suitable platform for the new DIALOG software would be a ‘tablet’ device. Larger than a mobile phone, and more portable and lighter than a laptop, a tablet would allow the DIALOG software to be easily shared between the patient and the clinician and would enable clinicians to use it during home visits.
Following this decision, we researched the three most popular mobile development platforms: iOS 6 (Apple Inc., Cupertino, CA, USA) and Android versions 4.1–4.3.1 (Google Inc., Mountain View, CA, USA), as well as a cross-platform solution using web development in hypertext markup language 5 (HTML5). We considered the advantages and disadvantages of each, both in terms of implementing a trial and in terms of distributing the software more widely following the conclusion of the programme.
Once the platform had been selected, the software developer commenced development of the DIALOG software and the corresponding server, overseen by PH. The iOS app was implemented in the Objective-C 2.0 (Apple Inc., Cupertino, CA, USA) programming language, an object-oriented extension of the C language. The design was based on the model–view–controller (MVC) design pattern, a classic pattern that separates the system design into three distinct entities: the model is the layer that holds and manages all the data of the app; the view is the only layer visible to the user, allowing the user to interact with the app; and the controller is the layer between the model and the view, which manages the data from the model and presents them to the view. MVC is important for a clean and well-defined design. 45 One of the greatest advantages of applying the MVC design pattern is that the app can easily be ported to different devices; in that case, only the view and some parts of the controller layer have to be redefined and implemented.
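As an illustration of how this pattern separates responsibilities, the following is a minimal MVC sketch; it is written in Swift for brevity rather than the Objective-C actually used, and the class and method names are assumptions rather than the real DIALOG classes.

import Foundation

// Model: holds and manages the app's data.
struct RatingModel {
    private(set) var ratings: [String: Int] = [:]   // domain name -> 1...7 rating
    mutating func setRating(_ value: Int, for domain: String) {
        ratings[domain] = value
    }
}

// View: the only layer visible to the user; it knows nothing about where data come from.
protocol RatingView: AnyObject {
    func display(rating: Int, for domain: String)
}

// Controller: sits between model and view, taking data from the model and presenting
// them to the view. Porting to another device means rewriting the view (and parts of
// the controller) while the model is reused unchanged.
final class RatingController {
    private var model = RatingModel()
    private weak var view: RatingView?

    init(view: RatingView) { self.view = view }

    func userDidSelect(rating: Int, for domain: String) {
        model.setRating(rating, for: domain)        // update the model
        view?.display(rating: rating, for: domain)  // refresh the view
    }
}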
Results
Evaluation of operating systems
Apple iOS
Apple iOS is the operating system that runs on all portable devices made by Apple Inc.: iPhones, the iPod Touch and iPads (Apple Inc., Cupertino, CA, USA). It was first released in June 2007 with the original iPhone and was based on the core of Mac OS X (Apple Inc., Cupertino, CA, USA), Apple Inc.’s desktop computer operating system. Based on reports from Strategy Analytics, iOS held 66.6% of the market share for tablets at the end of 2011,46 making it the most popular mobile operating system in the world at the time.
Selecting this environment for developing DIALOG presented the following matters for consideration:
-
Deployment: apps are easily distributed using the App Store (Apple Inc., Cupertino, CA, USA) – users can download apps instantaneously with the push of a button. Submission of apps to the App Store requires approval from Apple Inc., which delays the distribution by 5–10 days.
-
Cross-platform functionality: apps developed in iOS can only be operated on Apple Inc. platforms, that is, iPhones and iPads. Apps need to be specifically optimised to operate on both platforms and are often developed for either one or the other.
-
System access: native apps run securely in an isolated environment (they are ‘sandboxed’), meaning that an app cannot access the data of other apps installed on the device. Furthermore, all system resources are protected by the operating system, making the platform more secure and stable.
-
Security: as a result of the restrictions applied by Apple Inc.’s approval process prior to distribution, no viruses affecting iOS devices have been detected to date. Thus, iOS devices offer superior security.
-
App updates: updates are automated, but require reapproval from Apple Inc., which delays the update distribution by 5–10 days.
-
Operating system updates: updates to the operating system are automated and very fast.
-
Data storage: all data are secure and encrypted inside the device.
-
Pricing: Apple Inc. hardware products are considered premium products, and tend to be more costly than alternative platforms.
Google Android
Android is the newest of the mobile platforms considered, its first stable version having been released in October 2008. It is based on the Linux kernel and was developed by Google as an open-source project, available for free to all smartphone manufacturers. According to Strategy Analytics, Android held 26.9% of the tablet market share at the end of 2011,46 making it the second most widely used tablet operating system in the world at the time.
Selecting this environment for developing DIALOG presented the following matters for consideration:
-
Deployment: apps are easily distributed – using the Google Play Store (Google Inc., Mountain View, CA, USA), users can download apps instantaneously with the push of a button. This does not require approval from Google, meaning that an app can be made available on the Play Store within 1–2 hours.
-
Cross-platform functionality: Android offers cross-platform functionality across a range of mobile and tablet devices. This means that Android software is not restricted to one company and is compatible with devices offered by a range of manufacturers, such as Samsung and Sony.
-
System access: the latest Android versions also run apps securely in an isolated environment (they are ‘sandboxed’), but some apps can acquire special permissions in order to access the device’s data storage, which makes the platform more vulnerable.
-
Security: because Google’s approval of new apps and of all corresponding updates is automated, infected apps have reached the store: 80 had been discovered as of January 2011, a number that increased by 400% to 400 infected apps as of June 2011. 47
-
App updates: updates are automatically approved by Google and take only 1–2 hours to be distributed.
-
Operating system updates: updates to the operating system are usually delayed because of the inconsistencies across devices of different hardware providers, and the necessity to make updates compatible with all of them. This is an especially important consideration when a vulnerability to the system is discovered.
-
Data storage: all data are secure, but not encrypted by default inside the device.
-
Pricing: cost depends on the hardware selected to support the Android app. Prices vary, but devices are generally less costly than Apple Inc. products.
HTML5
Hypertext markup language is the open standard used to create web pages. The latest version at the time of software development, HTML5, provides several useful features such as web storage for saving data locally to the device’s internal memory.
Selecting this environment for developing DIALOG presented the following matters for consideration:
-
Deployment: apps are not distributed, but run on a web server. Users need to open a web browser on their device and enter the server’s URL to use the app.
-
Cross-platform functionality: as apps are not distributed but run on a web server, an implementation in HTML5 allows access to virtually any user on any device via the device’s web browser.
-
System access: apps have limited access to the device’s data storage.
-
Security: the security of this method depends on the device used to access the app.
-
App updates: as the app runs inside the server, updates are automated and instant.
-
Operating system updates: updates to the operating system depend on the device used to access the app.
-
Data storage: when saved locally on the device, data are not secured and not encrypted. Otherwise, a constant internet connection is required to use the app and save data to the server.
-
Pricing: HTML5 apps can be operated in web browsers, whether the browsers are on tablets or on desktops; therefore, the cost of hardware to support the app varies hugely (e.g. if choosing a laptop over a tablet).
Platform decision
Although all of the platforms described above were deemed suitable for the implementation of DIALOG, Apple Inc.’s iOS was selected as the superior option for the following reasons:
-
iPads are widely used and have the largest market share, at 66.6%.
-
It is the most secure option, with no virus threats or critical vulnerabilities detected to date. 48
-
Although software updates can take 5–10 days to be approved by Apple Inc., this approval process makes the system more secure than the alternatives. 48
-
Apps built natively for iOS run quickly, even when an internet connection is not available.
-
The platform provides strong hardware encryption on the device’s data.
Model design
The model is the layer that holds and manages all data of the app. It usually takes the form of a database or a JavaScript Object Notation (JSON) file. In this project, we used the Apple Inc. Core Data persistence framework, which uses a database to store the data internally. 49 Figure 2 shows the model design of DIALOG; the entities in this model are described below, followed by a schematic sketch of their relationships.
Clinician
A clinician entity represents the current user of the software. In this implementation only one clinician can exist; the entity was introduced in case multiple-user support is required in the future.
Client
A client entity represents a patient registered to the existing clinician.
Appointment
An appointment entity represents an actual appointment registered for a client and is connected to the related client entity. It has several properties relating to the appointment (e.g. date, duration), as shown in Figure 2.
Questionnaire
When an appointment takes place, it needs to be associated with a new questionnaire object. This represents an actual session that takes place between a clinician and a client.
Answer
Every questionnaire is associated with 11 entities of type answer; each answer represents one answered question.
Question
This entity contains the list of all available questions; each answer object is associated with one question of this type. The available questions are:
-
How satisfied are you with your mental health?
-
How satisfied are you with your physical health?
-
How satisfied are you with your job situation?
-
How satisfied are you with your accommodation?
-
How satisfied are you with your leisure activities?
-
How satisfied are you with your relationship with your partner/family?
-
How satisfied are you with your friendships?
-
How satisfied are you with your personal safety?
-
How satisfied are you with your medication?
-
How satisfied are you with the practical help you receive?
-
How satisfied are you with your meetings with mental health professionals?
Label
This entity contains the list of all available rating labels; an answer is rated using the seven labels of this type. The available labels are:
-
totally dissatisfied
-
very dissatisfied
-
fairly dissatisfied
-
in the middle
-
fairly satisfied
-
very satisfied
-
totally satisfied.
Action item
Every answer is also associated with a number of action items. These represent plans or action points agreed by the clinician and the client, which should be carried out before the next meeting.
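The relationships between these entities can be summarised schematically as below. Python dataclasses are used purely as an illustration of the data structure; the app itself stored these entities with the Core Data framework, and the attribute names shown are assumptions rather than the actual entity properties depicted in Figure 2.

```python
# Schematic sketch of the DIALOG data model (attribute names are hypothetical;
# the app itself used Core Data entities rather than Python classes).
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class Question:
    """One of the 11 available DIALOG questions."""
    text: str


@dataclass
class Label:
    """One of the seven rating labels, from 'totally dissatisfied' to 'totally satisfied'."""
    text: str


@dataclass
class ActionItem:
    """A plan or action point agreed by the clinician and the client for one answer."""
    description: str


@dataclass
class Answer:
    """One answered question within a questionnaire."""
    question: Question
    rating: int                       # 1-7, corresponding to one of the labels
    needs_more_help: bool = False
    action_items: List[ActionItem] = field(default_factory=list)


@dataclass
class Questionnaire:
    """One completed session; holds the 11 answers."""
    answers: List[Answer] = field(default_factory=list)


@dataclass
class Appointment:
    """A meeting with a client, linked to the questionnaire completed in it."""
    date: datetime
    duration_minutes: int
    questionnaire: Optional[Questionnaire] = None


@dataclass
class Client:
    """A patient registered to the clinician."""
    name: str
    appointments: List[Appointment] = field(default_factory=list)


@dataclass
class Clinician:
    """The single current user of the software."""
    name: str
    clients: List[Client] = field(default_factory=list)
```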
Security and data protection
Security and data protection were essential to this project. We used the built-in hardware encryption that all iOS devices provide, which is active provided that the user sets a passcode to unlock the device. This was implemented by adding the data protection entitlement to the app’s provisioning profile and setting it to the level of complete protection. According to Apple Inc., use of this level of protection ensures that all files are encrypted and inaccessible while the device is locked. This passcode was requested every time the user attempted to unlock the iPad.
In order to increase the strength of this encryption, all devices were configured using the Configurator tool (Apple Inc., Cupertino, CA, USA), with the following requirements for setting a passcode:
-
A simple passcode of four digits was not allowed.
-
A minimum passcode length of six alphanumeric characters was required.
-
The passcode required at least one digit and one complex character.
-
The device automatically erased its data after 10 failed passcode attempts.
Finally, we added an additional security layer to the app itself. Every time DIALOG was launched, a separate passcode, similar to the one described above, was required. This mechanism protected the app data independently. We used the iOS keychain feature to securely store the Secure Hash Algorithm 1 [SHA-1; National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA] hash of the passcode on the device, making it practically impossible to recover and read the original passcode if the device were stolen.
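The principle of storing only a hash of the app passcode, rather than the passcode itself, is illustrated in the minimal sketch below. Python’s hashlib is used purely for illustration; the app itself computed an SHA-1 hash in Objective-C and stored it in the iOS keychain, and the example passcode is hypothetical.

```python
# Illustration of checking an app passcode against a stored hash: only the
# hash is persisted, never the passcode itself (the DIALOG app stored an
# SHA-1 hash in the iOS keychain; hashlib is used here to show the principle).
import hashlib


def hash_passcode(passcode: str) -> str:
    return hashlib.sha1(passcode.encode("utf-8")).hexdigest()


# At set-up time the hash is stored; the original passcode is discarded.
stored_hash = hash_passcode("Dialog#2012")       # hypothetical passcode


def verify(entered: str) -> bool:
    # Hash the entered passcode and compare with the stored value; the
    # original passcode cannot be read back from the stored hash.
    return hash_passcode(entered) == stored_hash


assert verify("Dialog#2012")
assert not verify("wrong-code")
```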
Server communication
One of the main requirements of this project was the ability to synchronise the data with a server when an internet connection is available. The server was developed in the Python 2.7 (Python Software Foundation, Wilmington, DE, USA) programming language using the Django 1.4 (Django Software Foundation, Lawrence, KS, USA) web framework. Data were transmitted over a hypertext transfer protocol secure (HTTPS) connection using secure sockets layer (SSL) encryption, and stored in a MySQL database (Oracle Corporation, Redwood City, CA, USA) connected to the server and located in the School of Electronic Engineering and Computer Science at QMUL. Every time a clinician’s device was synchronised with the server, all new data were pushed to the server and erased from the device. If the clinician had a planned future appointment with a patient, the data of that specific patient were synchronised back to the device so that they would be available for the session. All data were available to the researchers via a web interface, protected by an administrator user account.
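The synchronisation logic described above can be sketched roughly as follows. This is a hypothetical, client-side outline in Python using only the standard library; the real client was the Objective-C app, and the endpoint URL, payload fields and response format shown here are assumptions rather than the project’s actual interface.

```python
# Hypothetical sketch of the device-server sync described above: push all new
# data, erase it locally, then keep only the patients with planned future
# appointments. The URL and payload fields are illustrative, not the real API.
import json
import urllib.request

SERVER_URL = "https://dialog.example.org/sync"   # placeholder, not the real server


def sync(new_sessions, upcoming_patient_ids, local_store):
    # 1. Push all new session data to the server over HTTPS.
    payload = json.dumps({"sessions": new_sessions}).encode("utf-8")
    request = urllib.request.Request(
        SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        server_data = json.loads(response.read().decode("utf-8"))

    # 2. Erase the pushed data from the device.
    local_store.clear()

    # 3. Keep on the device only patients with a planned future appointment,
    #    so that their history is available in the next session.
    for patient in server_data.get("patients", []):
        if patient["id"] in upcoming_patient_ids:
            local_store[patient["id"]] = patient
```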
Discussion
The technology arising from this substudy was implemented in the subsequent study on the effectiveness of DIALOG+, described in Chapter 4. The app is now available to download for free from the App Store by searching for ‘DIALOG’ or following this link: www.itunes.apple.com/us/app/dialog/id914252327?ls=1&mt=8 (iPad only). Any person or service can set up a server and configure the app to post the data to it. Any server with a running web service that accepts JSON files can be used. To learn about an alternative version of the DIALOG app for a different platform developed at a later stage of the programme, see Chapter 9.
Chapter 3 Developing the DIALOG+ intervention
Study B1: development of the DIALOG+ manual
Introduction
The original DIALOG trial demonstrated that structuring the communication between patients and clinicians in CMHTs, by routinely assessing key topics regarding patients’ satisfaction with life and treatment, was sufficiently powerful to improve patient outcomes. However, the video clip study described in Chapter 2 showed that clinicians did not intuitively know how best to utilise the DIALOG tool and that meetings did not always appear well structured and therapeutically helpful. There was a need for a manualised guide for clinicians on how to respond to patients who presented with low levels of satisfaction and a need for more help. The addition of this manual could potentially make the DIALOG tool considerably more effective. Evidence suggests that clinicians’ interactions are more therapeutic when they are guided by a clear model. 13
Patients treated in the community see the clinicians (nurses or social workers) who act as their care co-ordinators at least once per month – more regularly than a psychiatrist (approximately every 3 months) or a psychologist (if such a referral is made; the patient must be deemed eligible, waiting times can be long, and sessions are for a finite duration). Considering the regularity with which patients meet their care co-ordinators, these meetings may represent a missed opportunity for treating patients’ problems with a more psychological, evidence-based approach. Two such approaches are CBT and SFT. There is a strong evidence base supporting the effectiveness of CBT,26–28 and it plays a major role in governmental plans to roll out psychological treatment in the UK. 50 It is commonly taught in training programmes and is widely accepted as a beneficial model among health professionals. SFT is becoming increasingly popular in the UK because of the ‘brief’ nature of its training and its generic approach, which can be widely applied to different contexts; it has a modest body of evidence for its effectiveness. 51–53
The aim of the current study was to develop a new and more extensive version of the DIALOG intervention – the ‘DIALOG+’ intervention – through the addition of a simple and robust manual for clinicians on how to respond to information gauged through the DIALOG assessment, informed by principles of CBT and SFT. This manual would guide clinicians in using the data arising from the DIALOG assessment to inform further discussion during the meeting, with a focus on tackling problems and identifying solutions. The approach described in the manual would be used in conjunction with the new software developed, as described in Chapter 2.
Methods
Study design
We established and worked with a network of three groups of expert consultants: three experts conducting research on the use of CBT with patients with psychosis; four experts in delivering training in SFT with private clients with mental health issues, ranging from mild anxiety to severe mental illness; and six leading community-based practitioners in the UK. These experts used the DIALOG tool with patients on their caseloads or private clients, and regularly reported to the research team on their experiences, with suggestions on guiding principles to inform the development of the DIALOG+ intervention. (They used the original software, as this study took place concurrently with studies A1 and A2, prior to the finalisation of the specification for the new software.) In an iterative process, these guidelines were fed back to the various members, discussed and refined, and specific recommendations for incorporating different components of CBT and SFT were made.
The core research team of this programme subsequently drafted the manual under the supervision of the principal investigator. The manual was repeatedly presented to a multidisciplinary research team of researchers, psychologists and psychiatrists based at the Unit for Social and Community Psychiatry, QMUL, with a wide range of experience in delivering novel, manualised interventions, including communication training for psychiatrists54 and group body psychotherapy. 55 In addition, the core questions in the manual to be asked of patients were presented to the service user reference group, and to patients in the focus groups detailed in Chapter 2, Study A2.
Following the feedback of the research team and the patient groups, the manual underwent further refinement, before being presented to the Steering Group of the programme, who provided further input. Lastly, the manual was piloted with the service user reference group. This ensured that key concepts in the manual could be understood, and that the terminology used with patients as part of the four-step approach was appropriate and easily understood. The final version of the manual was then further discussed, amended and agreed by the research team.
Results
Guiding principles informing the DIALOG+ intervention
Reviewing
All consultants placed importance on patients having the opportunity to review their ratings subsequent to submitting them. This would help the patient to take stock of their situation and ownership of their own data, developing an understanding that the ratings were for their benefit as much as for the clinician’s. This would also allow them to revise the ratings if necessary, an option that was notably absent from the original DIALOG software. Consultants noted that, according to the existing procedure, such reviews were on the initiative of the clinician, rather than being incorporated into the procedure dictated by the software. They proposed that the new software could automatically present a review immediately subsequent to the DIALOG assessment.
There were mixed opinions about the utility of the comparisons feature of DIALOG, whereby ratings from the current session could be compared with ratings from a previous session, and how this should be incorporated into the intervention. CBT experts valued being able to review progress from session to session, whereas SFT experts preferred that all sessions be future focused. Ultimately, it was agreed that the feature was useful and should be available as an option, but should not form an essential component of the DIALOG+ intervention (i.e. such comparisons would be on the initiative of the clinician or patient).
Agenda-setting
Consultants, in particular CBT experts, emphasised the importance of ‘agenda-setting’ in therapeutic meetings. They proposed that, following the initial assessment of a range of topics as part of the DIALOG assessment, it would be useful to choose priorities for further discussion. These could then be discussed individually in a more extensive and thorough manner, using a structured approach to problematic situations informed by CBT and SFT.
Which topics were to be chosen as priorities would be informed by the ratings and expressions of need for more help captured through the DIALOG assessment. Consultants proposed that it would be feasible to choose three or four topics to discuss at length during routine meetings.
Setting the agenda in this manner was intended to guide the patient in adhering to a new, structured approach to meetings from the outset, to help the patient to identify which topics were most important to them and to ensure that the time available was used productively.
Shared decision-making
All consultants emphasised the importance of shared decision-making in agenda-setting. They maintained that the patient should have the final say on which topics to discuss at length and that the clinician should explicitly invite the patient to take the lead with selecting priorities. However, they acknowledged that negotiation between clinician and patient was important and proposed that clinicians make suggestions on topics for discussion when appropriate.
Algorithm
While emphasising the importance of shared decision-making, the consultants also asserted that there needed to be a defined algorithm governing agenda-setting. The clinician delivering DIALOG+ would be responsible for explaining the algorithm to the patient and identifying topics that met the criteria for selection where necessary and appropriate, while ensuring that the final decision remained with the patient.
Some felt that any explicit request from the patient for additional help in a given area warranted that topic being put on the agenda. Others maintained that any topics that were explicitly low rated (i.e. rated 1, 2 or 3) should be put on the agenda. It was suggested that both of these aspects could inform decisions on which topics to discuss during agenda-setting.
Positive commentary
Solution-focused therapy experts warned against the intervention being overly problem focused and noted that a wholly solution-focused intervention rarely involves extensive discussion of problems. They emphasised the importance of tending to domains where ratings had improved as well as problematic areas. They recommended that, subsequent to the initial DIALOG assessment, clinicians be instructed to comment on areas in which patients had improved on a previously negatively rated domain or maintained a positively rated domain and explore with the patient how they had achieved this.
Special attention to mental health
Practitioners from CMHTs asserted that, given its central importance, mental health should always be chosen as a topic for discussion during agenda-setting. Although CBT and SFT experts agreed that mental health was important given the context of the intervention, they did not wish to undermine the shared decision-making process between patient and clinician, or to further complicate the algorithm for selection of topics. To reconcile this, the principal investigator proposed that special attention be given to mental health following the assessment. If the clinician had reason to believe that the patient was experiencing distress, the clinician would suggest that they discuss this in more detail, whether through DIALOG+ or as a separate conversation.
Link to software
There were mixed views on the extent to which DIALOG+ should be incorporated into the software. The principal investigator envisaged the procedure as using the software for the initial DIALOG assessment only and subsequently implementing DIALOG+ independently of the software, so as not to make DIALOG+ overly prescriptive and to reduce reliance on the technology. The research team were also mindful of the potential of the software to distract the clinician and impact negatively on the therapeutic relationship, based on the findings from the video clip study (see Chapter 2). However, consultants from CMHTs felt that the software should serve as a tool to aid clinicians in implementing DIALOG+ as far as possible. The Steering Group maintained that incorporating a visual depiction of the psychological model inherent to DIALOG+ (i.e. the solution-focused four-step approach) would be of benefit to patients, who would be more likely to learn the approach to problems if they could follow an explicit model depicted in the software.
Use of rating scales
Solution-focused therapy experts noted that DIALOG lent itself well to SFT, in that SFT typically asks clients to position themselves on a scale and conceptualise real-life improvements as incremental increases on the scale. They felt that DIALOG+ should build on the ratings provided by patients as part of the DIALOG assessment in a solution-focused way. This should include solution-focused approaches to problems such as asking patients to describe how their situation would be if they rated their satisfaction as 7 out of 7, or what in their lives would need to have changed if their satisfaction was just one point higher on the scale.
In-depth descriptions of desired outcomes
In accordance with the principles of SFT, experts recommended that patients be encouraged to spend time not only discussing a problem, but also describing how their situation would be if the problem were removed. This was based on the observation that many patients can elaborate extensively on problems, but are not always aware of how things would be in the absence of the problem and what they are working towards. They proposed that, as part of DIALOG+, patients should avoid vagueness and describe their desired outcomes in as much detail as possible. These descriptions should be characterised by the patient behaving differently rather than feeling better, and defined by the presence of something tangible that is new, rather than the absence of a problematic situation.
Stepwise approach
Consultants recommended that patients be guided through problem-solving via a specific and linear stepwise approach to discussing problematic situations. This was intended to help patients to internalise the model of the intervention and recognise opportunities to apply a generic problem-solving approach to matters arising both inside and outside the meeting. Consultants noted that a stepwise approach would help to prevent a common tendency among practitioners of identifying a problem and immediately searching for an available solution, without wholly discussing the situation from the patient’s perspective.
Identifying resources
Consultants recommended that patients be encouraged to consider their strengths as part of DIALOG+. This would include identifying positive aspects of a situation even when the patient has explicitly evaluated it as negative, recognising coping strategies that the patient has employed in times of distress, and considering who in the patient’s life might be a source of help, aside from the clinician and the patient themselves.
Cognitive–behavioural therapy worksheets
Cognitive–behavioural therapy experts explored the possibility of incorporating CBT worksheets into DIALOG+, designed to help patients to challenge symptoms such as voices and delusions, as well as negative symptoms of schizophrenia. Although it was agreed that these worksheets were a valuable tool in challenging patients’ symptoms, there were concerns that their addition would make the intervention too complex and that they could constitute an intervention in themselves, which would be difficult to separate from the core features of DIALOG+. In addition, the worksheets were not compatible with the intention to make the intervention flexible and non-prescriptive. It was decided that the worksheets would not be a feature of the core intervention, but that they and other resources and treatments could be offered to patients as an outcome of discussions between clinicians and patients during care co-ordination.
Action planning
Experts agreed that discussions of domains should end with an explicit agreement on what action should be taken and who should be responsible for taking action: the patient, the clinician or someone else. Agreement on action should be based on shared decision-making. All actions should be documented within the software and reviewed prior to ending the meeting.
Continuity between sessions
Many noted that, according to the procedure implemented in the previous trial, each DIALOG session appeared to be carried out in isolation, with little or no continuity between one session and the next. Participants emphasised the importance of reviewing progress and working towards long-term goals through DIALOG+. They recommended that action items from the previous session be revisited at the beginning of a new session, with both the patient and the clinician informing the other of any progress that had been made since the previous session.
Manual outline
In the light of the above guiding principles, the research team developed an outline for a manualised DIALOG+ intervention. The procedure was defined, as detailed in the following sections.
Assessment
The clinician invites the patient to rate their satisfaction with the 11 topics of DIALOG.
-
The patient should provide a rating on a scale of 1–7 for each topic.
-
The patient should indicate whether or not they need more help in that area.
Review
The patient and the clinician review the ratings provided by the patient.
-
They may wish to compare the ratings from the current session with any one previous session, to monitor progress.
-
The clinician should comment on positive aspects of the assessment, such as a high rating or an improved rating.
Selection of priorities
The patient and the clinician make a joint decision on which domains to select for further discussion.
-
The patient should take the lead with choosing the domain, although the clinician may negotiate where necessary and appropriate.
-
The pair should choose no more than three domains for discussion in the first instance.
-
They should select a domain for discussion when its satisfaction rating is below 4 (the centre point of the scale) or when the patient has requested more help in a given area.
-
If there are many domains meeting these criteria, they should choose the three that are the most important to the patient.
-
If there are no domains meeting the above criteria, they should select domains with a rating of 4 or with deteriorated ratings since the last meeting.
-
Special attention is given to mental health. If the patient has rated his/her mental health at 5 or above and has not requested additional help, the clinician should ask if the patient feels distressed or concerned by any of the symptoms or experiences associated with their mental health problem. If the answer is yes, the clinician should negotiate the inclusion of mental health as a domain for further discussion, to ensure a thorough assessment of mental health.
Discussion of domains
The patient and the clinician discuss each of the three topics via a ‘four-step approach’ to problems. These four steps are as follows.
-
Understanding:
-
Elaborating on the reasons behind the low rating of satisfaction or request for more help.
-
Elaborating on why the rating is not any lower, that is, how the patient copes with the situation and, by extension, understanding the patient’s strengths.
-
Looking forward:
-
Describing the patient’s ideal scenario, that is, what 7 out of 7 would mean to them.
-
Describing one small improvement in the direction of the desired outcome, which would represent climbing one point higher on the scale.
-
Exploring options:
-
Considering one or two options, however small, that the patient could pursue to improve their own situation.
-
Considering one or two options that the clinician could pursue to help the patient.
-
Considering how others in the patient’s life – friends, family members, neighbours or other supporters – could help the patient.
-
Agreeing on action:
-
Shared decision-making led by the patient on what action should be taken by the patient themselves, the clinician or anyone else.
-
Documentation of the actions.
Summary
The patient and the clinician summarise the action plans across all three domains and finish the session.
Subsequent sessions
Action items from the previous session are reviewed and progress is discussed. The new session then proceeds as described above.
The corresponding manual is available in Appendix 2.
Discussion
A manualised approach to DIALOG+ was intended to apply consistency to the implementation of the intervention among practising clinicians in CMHTs and to expand the therapeutic potential of the tool by introducing basic principles of CBT and SFT. The resulting DIALOG+ intervention may better reflect the principles of SFT than CBT, although CBT experts agreed that it was compatible with CBT. As such, it has been described as a solution-focused intervention in the implementation of the trial and throughout the remainder of this report. Subsequent chapters report the development of a corresponding training programme for DIALOG+ and systematic evaluation of the new intervention.
Study B2: development of the DIALOG+ training programme
Introduction
In the original DIALOG trial, clinicians were not formally trained in administering DIALOG. They were provided with the software on hand-held palmtop computers and it was assumed that the procedure would be intuitive and self-explanatory, that is, that the clinician would guide the patient through the questionnaire in a collaborative process, that they would review the DIALOG ratings and compare with previous sessions and that this information would be used to inform treatment. However, the video clip study described in Chapter 2 (see Study A1: analysis of video-recorded DIALOG sessions) revealed that the tool was implemented in various ways that deviated from this assumption. Study B1 specifically manualised the intervention, incorporating a solution-focused approach to problems when responding to patients’ ratings, now referred to as ‘DIALOG+’. To better implement the new DIALOG+ intervention in a randomised controlled trial, a brief training programme was devised, described in this section.
Methods
Conception
Manualisation of DIALOG+ and prescriptiveness of software
The previous substudy formally manualised DIALOG+ for clinicians (see Study B1: development of the DIALOG+ manual). This detailed the procedure for implementing DIALOG+: administering the assessment of satisfaction with life and treatment and of needs for more help; reviewing the assessment; selecting topics for further discussion; discussing the topics via a four-step solution-focused approach to problems; agreeing and documenting action items; summarising action plans to take place between the current session and the next meeting; and revisiting agreed action plans at the beginning of each subsequent meeting. The research team agreed that training should be designed to closely resemble the sequence of instructions provided in the manual. The intention was as much to structure behaviour as it was to introduce principles of SFT. Correspondingly, the new DIALOG software was designed to guide the clinician through the series of actions prescribed by the manual in an intuitive, sequential and semi-automated manner. In addition, the solution-focused questions for the clinicians to pose to patients were incorporated directly in the software. The clinician could also press buttons in the software to view the manualised description of the four-step approach as and when needed.
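As a rough illustration of this ‘sequential and semi-automated’ guidance, the order of steps that the software walks the clinician through could be represented as in the sketch below. The step wording paraphrases the manual, and the code is hypothetical; it does not reproduce the app’s actual implementation.

```python
# Hypothetical sketch of the DIALOG+ session sequence the software steps through
# (step names paraphrase the manual; this is not the app's actual code).
DIALOG_PLUS_STEPS = [
    "Review action items agreed in the previous session",
    "Assess satisfaction and need for help on the 11 DIALOG domains",
    "Review the ratings (optionally comparing with a previous session)",
    "Select up to three domains for further discussion",
    "Discuss each domain using the four-step solution-focused approach",
    "Agree and document actions",
    "Summarise the action plans and close the session",
]


def run_session(perform_step):
    """Present each step in order; the clinician completes one step before
    the next is offered."""
    for step in DIALOG_PLUS_STEPS:
        perform_step(step)


# Example: print each prompt in turn.
run_session(print)
```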
Accommodating the practical implementation of a randomised controlled trial
It was noted that the training would need to reflect the design of the cluster randomised controlled trial testing the effectiveness of DIALOG+ (see Chapter 4). This dictated that individual clinicians could be randomised, and would therefore learn of their allocation to either the intervention group or the control group, only once the final patient from their caseload had been recruited. Thus, it would not be possible to train clinicians en bloc, as recruitment from different caseloads would be complete at different time points. Instead, clinicians would be trained individually. It was agreed that a brief training programme of 2 hours would be appropriate to reflect the ‘brief’ nature of SFT and its generic and practical application to a variety of situations. This brief training would also lend itself well to the development of self-directed training on DIALOG+ subsequent to the trial, for clinicians seeking to learn DIALOG+ when a trainer is not available.
Pilot
The principal investigator and a second member of the research team conducted a pilot of the training with two clinicians practising in ELFT: one, a psychosocial intervention worker working in the community with patients with psychosis, who had a role in staff supervision; the other, a qualified cognitive therapist and specialist in delivering training in manualised dialectical behaviour therapy. These two clinicians were to provide feedback on the training and suggest refinements, with a view to themselves ultimately delivering the training programme for clinicians allocated to DIALOG+ in the trial. Their experiences were discussed and details of the training programme were amended accordingly. They also piloted DIALOG+ with a selection of patients on their caseloads as part of their routine practice, ensuring that the experiences of patients informed the eventual delivery of the training.
The research team delivered a Microsoft PowerPoint® (Microsoft Corporation, Redmond, WA, USA) presentation describing the background to DIALOG+, including the results of the previous trial, and an introduction to CBT and SFT. Next, the DIALOG+ procedure was introduced and presented via Microsoft PowerPoint in accordance with the sequence described in the DIALOG+ manual. A selection of video clips of an actor clinician conducting DIALOG+ with an actor patient was then played to illustrate the procedure. After a short break, trainees were given an iPad on which the DIALOG+ software was installed and then guided through the software on a one-to-one basis. The research team then modelled the DIALOG+ procedure, acting in the roles of patient and clinician. The trainees then took turns to act as both patient and clinician in the procedure using case vignettes (see Appendix 3), with the research team providing constructive feedback. On closing, the trainees were given a copy of the DIALOG+ manual and instructed to use DIALOG+ in their routine practice in the subsequent weeks where possible and return to the research team with any questions or issues flagged.
Results
Trainees agreed that DIALOG+ training was useful and closely followed the logic outlined in the DIALOG+ manual. They had a number of suggestions for optimising the training:
-
Trainees noted that, although an introduction to the previous trial was helpful, an extensive discussion of its implementation and results was not needed, as clinicians were more interested in practical implications for their practice. They proposed that discussion of the previous trial should be reduced to a shorter ‘take-home’ message, that is, that regular use of DIALOG had improved patients’ subjective quality of life, treatment satisfaction and level of unmet needs over the course of 1 year.
-
They noted that an introduction to both CBT and SFT was somewhat overwhelming and did not provide a clear orientation on the focus of the training. They suggested that this introduction be shortened and that the focus be placed on SFT. They also proposed that supplementary reading materials on SFT be provided so that clinicians could pursue self-directed learning. As a result, clinicians would be given reading materials from BRIEF (The Centre for Solution Focused Practice, London, UK; www.brief.org.uk), the UK’s leading provider of solution-focused training, and a link to the BRIEF website.
-
Trainees reported that the Microsoft PowerPoint presentation was at times too text-heavy. They recommended that DIALOG+ be presented schematically, with an outline of the procedure and the four-step approach to problems presented at all times, to which trainees could refer throughout the training.
-
Trainees felt that there was no need for the team to present video clips as well as model the training. They felt that in-person simulations were preferable to the videos because they were more interactive. They noted that the time spent watching the video would be better spent practising the intervention.
-
Trainees agreed that the manual facilitated self-guided learning on the DIALOG+ intervention in a simple and effective manner. However, they felt that technical instructions on the operation of the DIALOG software were lacking from the manual. They recommended that further instructions be provided describing the operation of the technology. These should be separate from the DIALOG+ manual, which should focus on the psychotherapeutic intervention rather than the technical aspects.
-
Following the use of DIALOG+ with patients, trainees felt that it was appropriate to give special attention to mental health irrespective of the rating given by the patient and that, in this respect, there was no need for an algorithm governing its discussion. As a result, the manual was amended to instruct clinicians to give special attention to mental health as a rule following the initial DIALOG assessment.
Overall, the experiences with the first trainees showed that the one-to-one training, compared with the usual group setting, had an impact on how the training could be implemented. The sessions were rather intense. Although the pace varied depending on the response of the trainee, the required ground was often covered quickly and after about 1 hour there was a sense of fatigue. As a result, it was decided that in the trial, training would be divided into two 1-hour sessions: one to take place prior to implementing DIALOG+ with patients, and another following real experience and practice with patients. In the second refresher training, clinicians would be invited to bring an audio-recording of a DIALOG+ session with a patient (provided that the patient consented), so that any difficulties with delivering DIALOG+ could be identified and discussed with the trainers.
Discussion
The development of the training programme differed from what was originally envisaged, mainly because the design of the trial required quick individual training. This came with the advantages that each training session was individualised and delivered in a direct interaction with an experienced trainer and that the trainer was familiar with the individual abilities of each trainee, which facilitated later supervision.
However, we did not systematically develop a training programme at this stage that could be easily and widely rolled out in the NHS in the more common format of group training. To address this, we decided at the end of the programme to develop a web-based training module for self-directed learning. This takes about 90 minutes to complete and can be flexibly used throughout the NHS and beyond. Instructions for accessing the training module are available on the DIALOG website (details of which can be found in Chapter 9).
Chapter 4 Randomised controlled trial testing the effectiveness of the DIALOG+ intervention
Introduction
Previous interventions to structure the patient–clinician communication in community mental health care, by simply feeding back outcome data to patients and clinicians, have failed to produce positive effects on patient outcomes in randomised controlled trials. 15,16 The DIALOG intervention, however, integrated an assessment as part of these routine meetings to structure the communication, influence clinician behaviour and make the meetings patient-centred and focused on change. In a cluster randomised controlled trial, the intervention was associated with significantly better subjective quality of life, fewer unmet treatment needs and higher treatment satisfaction after 1 year. 17
The current programme of research sought to build on this encouraging finding. We set out to develop DIALOG+, a more extensive, manualised version of the previous intervention. This involved creating new technology, as described in Chapter 2, and defining a procedure for clinicians to administer the intervention in a solution-focused way, formalised in a manual and training programme, as described in Chapter 3. The current study sought to gather evidence on the effectiveness of this new intervention.
The DIALOG+ intervention is described in detail in the DIALOG+ manual (see Appendix 2) and is briefly summarised in Methods.
We conducted an exploratory, pragmatic, cluster randomised controlled trial to investigate whether or not the DIALOG+ intervention is associated with better subjective quality of life and other improved outcomes in patients with psychosis, compared with an active control condition. The active component was added to control for the effect of using an electronic device during clinical meetings and the effect of patients completing regular assessments of their satisfaction without further discussion. Whereas the previous DIALOG trial included only patients with persistent disorders, who at the start of the intervention were assumed to stay in treatment for at least another year,17 this pragmatic trial included a wider range of patients with psychosis in CMHTs.
Objectives
The objectives of the trial were to:
-
test whether or not the regular use of the DIALOG+ intervention over a 6-month period improves patients’ subjective quality of life compared with an active control
-
test whether or not DIALOG+ leads to patient improvements in objective social outcomes, unmet needs, treatment satisfaction, self-efficacy, well-being, recovery, psychopathological symptoms and the therapeutic relationship
-
test the cost-effectiveness of the DIALOG+ intervention
-
assess the quality of the implementation of the DIALOG+ intervention
-
assess the experiences and opinions of clinicians and patients in using the DIALOG+ intervention.
In this chapter we report the results of the trial with regard to the primary and secondary outcomes (objectives 1 and 2). Results regarding the cost-effectiveness, the quality of the implementation, and the experiences of patients and of clinicians are reported in Chapters 5, 6, 7 and 8, respectively.
Methods
Trial design
The trial protocol is accessible in the public domain56 [International Standard Randomised Controlled Trial Number (ISRCTN) 34757603]. This was a pragmatic, parallel-group, cluster randomised controlled trial. The clusters were individual clinicians working as care co-ordinators in CMHTs, who were randomly assigned to either DIALOG+ or the control condition with an allocation ratio of 1 : 1. The use of a cluster randomisation design prevented potential contamination in clinicians’ practice when treating individual patients.
After trial commencement, a change to the number of clusters was made. The originally proposed sample size of 36 clinicians was increased to 49. This is reported in more detail in Sample size.
Ethical opinion and research governance
The study received a favourable opinion from the NRES (Stanmore; 12/LO/1145). The Steering Committee was involved in the design and implementation of the trial. A Data Monitoring Committee was also established for the trial.
Participants and setting
Clinicians and their patients were recruited from seven CMHTs across the three ELFT boroughs of Newham, Hackney and Tower Hamlets. The CMHTs consist of multidisciplinary teams that provide secondary care for working-age adults with severe mental illnesses. Each patient has a designated clinician within this team, usually a social worker or psychiatric nurse, often referred to as their ‘care co-ordinator’. These clinicians are the primary contact for patients, and they typically meet at least once per month for routine assessment and co-ordination of care.
The management of the provider organisation (ELFT) identified the CMHTs and participating clinicians within the teams. This reflected the pragmatic nature of the trial. Eligible clinicians met the following criteria:
-
had a professional qualification as a clinician (nurse, social worker, psychologist, occupational therapist, doctor)
-
had > 6 months of experience in working in a CMHT
-
were working as a care co-ordinator.
Clinicians were excluded if they planned to leave their post for at least 2 weeks within the study period.
The eligible patients were then identified by screening the caseloads of participating clinicians. Patients were eligible if they met the following criteria:
-
were aged 18–65 years
-
had received treatment in a CMHT for at least 1 month
-
had no planned discharge for the next 6 months
-
had a clinical diagnosis of schizophrenia or a related disorder (ICD-10 F20–29). 44
Patients were excluded if they met any of the following criteria:
-
had a mean score of ≥ 5 on the Manchester Short Assessment of Quality of Life (MANSA),57 reflecting an average rating of at least ‘mostly satisfied’ with all life domains, thereby preventing ceiling effects
-
had insufficient command of the English language for conducting meetings in English and completing outcome assessments
-
lacked capacity to give informed consent
-
had a diagnosis of learning difficulties.
All participating clinicians and patients provided written informed consent.
Interventions
DIALOG+
The DIALOG+ intervention is intended to provide a simple, solution-focused approach to identifying concerns, initiating change and monitoring progress. Clinicians and patients in the experimental group were instructed to use DIALOG+ once per month for 6 months. This frequency was chosen as clinicians are expected to meet patients at least once per month in these services, although it was acknowledged that this could vary in practice. Clinicians and patients decided whether or not they wished to continue using DIALOG+ at the end of the 6-month intervention period and this was recorded. The intervention was delivered using a tablet computer (iPad), which could be easily shared between the clinician and patient. Clinicians were required to administer DIALOG+ in line with the accompanying manual developed (see Appendix 2), as described in Chapter 3.
Each DIALOG+ session begins with an assessment of eight life domains (mental health, physical health, job situation, accommodation, leisure activities, family/partner, friendships and personal safety) and three treatment aspects (medication, practical help and meetings with professionals). The patient is asked to rate their satisfaction with each domain on a scale from 1 to 7 (from ‘totally dissatisfied’ to ‘totally satisfied’) and also to indicate whether or not they would like additional help in the given domain. Following the assessment, the computer displays a summary screen of the ratings, allowing the clinician and the patient to compare the current ratings with those from any previous DIALOG+ session. At this stage, clinicians are expected to provide positive feedback on high-scoring or improved domains. The patient and clinician then come to a joint decision on which domains they would like to discuss further in the meeting.
The subsequent discussion of each chosen domain is structured using a four-step approach informed by principles of SFT: (1) understanding the patient’s concerns and previous effective coping strategies; (2) looking forward by identifying best-case scenarios and smaller increments for improvement; (3) exploring options available to the patient, including the patient’s own resources, the clinician’s and those of others in the patient’s life; and (4) agreeing on actions to address the identified concerns. These agreed actions are then briefly reviewed at the start of the next session.
All clinicians in the experimental group received one training session prior to beginning the intervention. As previously stated, the training was provided by two experienced clinicians who were independent of the research study team (both qualified in psychological therapies). For practical reasons, the training was carried out one to one. In line with the developed programme, the training session consisted of a brief background to the original DIALOG trial and to SFT, a schematic Microsoft PowerPoint presentation of the procedure of DIALOG+, guidance in operating the DIALOG software, a demonstration of the DIALOG+ procedure, role-play exercises and the provision of the manual and further learning materials for self-directed learning. For each clinician, where patient consent was provided, their first DIALOG+ session was audio-recorded and feedback was provided by their trainer. Clinicians were made aware that they could contact the trainers for further support and advice throughout the duration of the study.
For assessing the feasibility of the intervention, adherence to the manual and the impact of DIALOG+ in the patient–clinician meetings, we aimed to video-record and analyse one meeting per patient–clinician pair in the experimental condition. In the control group, we aimed to do the same for 20% of the sample. Adherence to the manual in the experimental condition was assessed against the DIALOG+ manual using a specifically developed scale. Further details of this are reported in Chapter 6.
Control
Clinicians and patients in the control group continued with treatment as usual, with patients also undergoing a brief procedure using similar technology. Clinicians were informed that they should ask their patients to rate their satisfaction on the same 11 domains of DIALOG using the same tablet computer (iPad), once per month for 6 months. The key difference between this and the intervention was that patients independently completed the assessment, at the end of their routine meetings, without any subsequent discussion with the clinician. This was to control for any effect of using an electronic device in clinical meetings and of completing regular assessments of satisfaction which could potentially influence patient-reported outcomes, especially subjective quality of life. All clinicians in the control group received a demonstration of the procedure and were provided with instructions on administering it correctly (see Appendix 4).
Outcomes
All outcomes were prespecified and measured at the level of individual patients at baseline and at 3, 6 and 12 months following the start of the intervention.
Primary outcome
The primary outcome was subjective quality of life, measured using the mean score on the MANSA. 57 The MANSA requires the patient to self-rate their satisfaction with 12 life domains on a Likert-type scale from 1 (couldn’t be worse) to 7 (couldn’t be better). The outcome at 6 months was taken as the primary point of interest.
Secondary outcomes
The following secondary outcomes were also assessed:
-
The total number of unmet needs was measured using the Camberwell Assessment of Need Short Appraisal Schedule (CANSAS), patient-rated version,58 which consists of 22 health and social needs domains rated as no need, met need or unmet need.
-
Treatment satisfaction was measured with the total score on the Client Satisfaction Questionnaire (CSQ-8),59 which consists of eight items self-rated from 1 to 4 (higher scores indicating greater satisfaction).
-
Self-efficacy was measured with the total score on the General Self-efficacy Scale (GSS),60 which consists of 10 items self-rated from 1 to 4 (higher scores indicating greater self-efficacy).
-
Mental well-being was measured with the total score on the Warwick–Edinburgh Mental Well-Being Scale (WEMWBS),61 which consists of 14 items self-rated from 1 to 5 (higher scores indicating better well-being).
-
Recovery was measured using the ‘severity’ dimension of each of the 24 items of the CHoice of Outcome In CBT for psychosEs (CHOICE) scale,62 which are self-rated from 1 to 10 (higher scores indicating better recovery).
-
Psychopathological symptoms were measured using the Positive and Negative Syndrome Scale (PANSS). 63 The PANSS is a 30-item observer-rated scale that assesses symptoms from 1 to 7 (higher scores indicating more severe symptom levels). It provides total scores for positive symptoms (ranging from 7 to 49), negative symptoms (ranging from 7 to 49), and general symptoms (ranging from 16 to 112).
-
The clinician–patient therapeutic relationship was measured using the total scores on the Scale for Assessing Therapeutic Relationships in Community Mental Health Care, patient version (STAR-P) and clinician version (STAR-C). 29 Both scales consist of 12 items self-rated from 0 to 5 (higher scores indicating a stronger therapeutic relationship).
-
Social outcomes were measured using the Objective Social Outcomes Index (SIX),64 which measures objective data on employment, accommodation and living situation, and provides a total score from 0 to 6 (higher scores indicating more positive social outcomes).
All outcome data were collected by researchers through one-to-one interviews with patients, either at the CMHT or at patients’ homes. The only exception was the STAR-C, which was self-rated by clinicians. In cases when patients did not wish to meet with a researcher, they were invited to complete the primary outcome assessment via telephone. The baseline assessments were completed by one of six researchers and follow-up assessments were completed by one of four researchers. The inter-rater reliability between all six researchers for the PANSS was good (intraclass correlation coefficient = 0.828).
Sample size
We initially aimed for a sample of 36 clinicians and 180 patients, with 18 clinicians and 90 patients in each condition (five patients per clinician). This was based on an anticipated loss of six clinicians caused by unexpected job changes and 30 of their patients, and a dropout rate of < 10% of the remaining patients. This would provide a sample of 136 patients. As in the original DIALOG trial, we assumed a practically negligible cluster effect and calculated that the target sample size would be sufficient to detect a medium effect size of 0.5 (Cohen’s d) with 80% power at the 5% significance level. On the MANSA, this effect size would reflect a rating that was higher by one point on at least 5 out of 12 life domains.
However, during the initial recruitment phase, fewer patients per clinician were being recruited than expected. To account for this, the sample size, as described in the protocol, was increased from 36 clinicians to 49 in order to reach a target of 180 patients.
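As a back-of-the-envelope check of the original calculation, the conventional two-sample normal approximation gives approximately 63 patients per group (around 126 in total) for an effect size of 0.5 with 80% power at the two-sided 5% level, which is broadly consistent with the 136 analysable patients anticipated. The sketch below reproduces only this standard approximation and is not necessarily the exact method used by the trial statisticians.

```python
# Back-of-the-envelope check of the sample size quoted above, using the
# standard two-sample normal approximation (not necessarily the exact
# calculation performed by the trial statisticians).
from math import ceil
from statistics import NormalDist

alpha, power, d = 0.05, 0.80, 0.5             # two-sided 5% level, 80% power, Cohen's d

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
z_beta = NormalDist().inv_cdf(power)           # ~0.84

n_per_group = ceil(2 * ((z_alpha + z_beta) / d) ** 2)
print(n_per_group, 2 * n_per_group)            # ~63 per group, ~126 in total
```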
Randomisation
Sequence generation
An independent statistician based at the Pragmatic Clinical Trials Unit (PCTU), QMUL, allocated clinicians according to a computer-generated randomisation sequence created using random block sizes of four and six. As all clusters (clinicians) had similar patient caseloads and experience, randomisation was not stratified.
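For illustration, a 1 : 1 permuted-block allocation with random block sizes of four and six can be generated as in the sketch below. This is a generic illustration of the technique; the trial’s actual sequence was produced by the independent statistician using their own software, and the seed and labels shown are arbitrary.

```python
# Hypothetical illustration of 1:1 permuted-block randomisation with random
# block sizes of four and six (the trial's actual sequence was produced by an
# independent statistician; this only sketches the general technique).
import random


def blocked_allocation(n_clusters, block_sizes=(4, 6), seed=None):
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_clusters:
        size = rng.choice(block_sizes)
        block = ["DIALOG+"] * (size // 2) + ["control"] * (size // 2)
        rng.shuffle(block)             # balanced 1:1 allocation within each block
        sequence.extend(block)
    return sequence[:n_clusters]       # truncate the final block if necessary


print(blocked_allocation(49, seed=2012))
```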
Allocation concealment mechanism
All clusters were identified and recruited prior to randomisation to minimise selection bias. Randomisation of clusters only took place once all participating patients from each clinician’s caseload had been recruited and all baseline assessments had been completed. The allocation of clusters and patients was concealed from outcome assessors, and clinicians were asked to keep their treatment allocation concealed from their colleagues at the CMHTs.
Implementation
Clinicians were initially nominated by the service directors of ELFT and asked to participate in the study, in order to reach the target sample of 36. As previously stated, additional clinicians were later recruited into the trial to reach a recalculated sample size of 49. These additional clinicians were recruited by contacting team managers and asking that they invite additional members of their team to volunteer. Researchers met with all clinicians to determine eligibility and obtain written informed consent to participate.
To identify eligible patients, researchers screened clinicians’ caseloads using electronic records and discussed the eligibility of each patient with the clinician. Suitable patients were first informed about the study by their clinicians and, if they agreed, met with a researcher for further discussion. During this meeting, the researcher provided information about the study, giving the patient an information sheet that had been checked for user-friendly language by the service user reference group. The researcher also confirmed that all inclusion criteria were met and obtained written informed consent. Initially, the researcher approached seven eligible patients from each clinician’s caseload according to a random (computer-generated) sample. If fewer than five of these patients consented to take part, the remaining eligible patients were approached according to a predefined random (computer-generated) order, until either five patients had consented or no more eligible patients remained.
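The consent-driven stopping rule described above can be expressed compactly; the sketch below is a hypothetical illustration (a single random approach order with a cap of five recruits), not the study’s own procedure or code.

```python
# Illustrative sketch of the patient sampling procedure described above: eligible
# patients are approached in a random order until five have consented or the
# eligible list is exhausted. The `consents` callback is hypothetical.
import random

def recruit(eligible_ids: list, consents, target: int = 5, seed: int = 1) -> list:
    rng = random.Random(seed)
    order = list(eligible_ids)
    rng.shuffle(order)                 # random approach order (first seven form the initial sample)
    recruited = []
    for patient in order:
        if consents(patient):
            recruited.append(patient)
        if len(recruited) == target:
            break
    return recruited
```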
Clinicians were then randomised once all baseline assessments had been completed with their recruited patients. A member of the research team who was not involved in outcome assessment e-mailed randomisation requests to the independent statistician, who then informed the researcher of the allocation via e-mail.
Blinding
The principal investigator, the data analysts and the four researchers assessing follow-up outcomes were blinded to the clinicians’ and patients’ allocations. It was not possible to blind patients to their treatment allocation; however, patients were not explicitly told whether or not they were receiving the experimental intervention. Patients in both treatment groups completed the same assessment of satisfaction on a tablet computer, and neither group was provided with specific details about the four-step approach prior to the start of the study. If blinding was compromised during an interview, a different researcher conducted the next follow-up assessment.
Statistical methods
The statistical analysis of the primary and secondary outcomes was conducted by statisticians at the PCTU, QMUL. An analysis plan was drafted and signed off by the principal investigator (SP) and the senior trial statistician (SE) prior to the analysis of results. All analyses were conducted as two-sided, with significance interpreted at the 5% level. Analyses were based on available cases and conducted at the level of the individual patient. The software used was Stata, version 12.1 (StataCorp LP, College Station, TX, USA).
Where items were missing on individual scales, total or average scores were calculated only if at least 75% of the scale had been completed (an illustrative sketch of this rule follows the list below). It was decided that multiple imputation of the data would not take place for a number of reasons:
-
The levels of missing data were low; had imputation been conducted at the level of individual items, the imputation model would have been extremely large and its performance questionable.
-
The 6- and 12-month analyses took place at separate time points.
-
Some items were missing not at random; therefore, straightforward multiple imputation with a missing-at-random assumption was not appropriate.
-
Some items were rated ‘not applicable’ and, therefore, it was not sensible for them to be imputed.
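The 75% completion rule can be illustrated as follows; this is an assumed sketch of the prorating logic, not the authors’ code.

```python
# Illustrative sketch of the 75% completion rule described above: a scale total is
# prorated from the completed items only when at least 75% of items have been
# answered; otherwise the score is left missing.
from typing import Optional, Sequence

def prorated_total(item_scores: Sequence[Optional[float]], n_items: int) -> Optional[float]:
    completed = [s for s in item_scores if s is not None]
    if len(completed) < 0.75 * n_items:
        return None                                    # too much missing: score not calculated
    return sum(completed) * n_items / len(completed)   # scale the completed-item sum up to full length

# Example: a 12-item scale with 2 missing items (83% complete) is prorated;
# with 4 missing items (67% complete) the total would be set to missing.
print(prorated_total([5, 4, 4, 5, 3, 4, 5, 4, 4, 5, None, None], 12))
```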
Continuous outcomes were analysed using a generalised linear model with fixed effects for treatment and the baseline value of the outcome, and a random effect for clinician to account for clustering. For significant results, Cohen’s d was calculated from the raw data as a standardised effect size measure. The number of unmet needs (CANSAS) was analysed using Poisson regression, with treatment and baseline unmet needs fitted as fixed effects and clinician fitted as a random effect. The analysis of the SIX was conducted using a proportional odds model with a random intercept, with treatment fitted as a fixed effect and clinician fitted as a random effect.
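The model for continuous outcomes can be sketched as follows, written in Python with statsmodels rather than the Stata used by the trial team; the data file, variable names and group labels are hypothetical, and the Poisson and proportional odds models for the CANSAS and SIX are not shown.

```python
# Minimal sketch of the analysis of a continuous outcome: fixed effects for treatment
# and the baseline value of the outcome, and a random intercept for clinician to
# account for clustering. File and variable names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dialog_outcomes.csv")   # one row per patient (hypothetical analysis file)

model = smf.mixedlm("mansa_12m ~ treatment + mansa_baseline",
                    data=df, groups=df["clinician_id"])
print(model.fit().summary())

# Cohen's d from the raw follow-up data, as a standardised effect size for significant results.
g = df.groupby("treatment")["mansa_12m"]
mean, sd, n = g.mean(), g.std(), g.count()
pooled_sd = (((n - 1) * sd ** 2).sum() / (n.sum() - 2)) ** 0.5
print("Cohen's d:", (mean["DIALOG+"] - mean["Control"]) / pooled_sd)
```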
Results
Participant flow
The flow of participants throughout the trial, with reasons for excluding clinicians and patients, is summarised in the Consolidated Standards of Reporting Trials (CONSORT) diagram (Figure 3). Recruitment took place between October 2012 and September 2013. Follow-up assessments (at 3, 6 and 12 months) took place between February 2013 and October 2014.
We screened 59 clinicians and 709 of their patients for eligibility. Ten of the clinicians were excluded because they had too few eligible patients in their caseloads or they had plans to leave their post within the study period. Of the patients, 521 were excluded for a variety of reasons, including being ineligible on the MANSA, refusing to meet a researcher or their clinician being excluded. The remaining 49 clinicians and 188 patients completed the baseline assessments and were randomised between 30 October 2012 and 27 September 2013. Nine of the 188 participating patients were either discharged from their clinician’s caseload or withdrew from the study prior to randomisation. However, the research team were not made aware of this information until after the randomisation had taken place. Consequently, they were labelled as ‘randomised in error’ and excluded from the analysis. Thus, 49 clinicians and 179 patients were correctly allocated to DIALOG+ (clinicians, n = 25; patients, n = 94) or the active control (clinicians, n = 24; patients, n = 85).
Following randomisation, four clinicians (DIALOG+, n = 3; control, n = 1) withdrew from the trial and, as a result, their patients (DIALOG+, n = 11; control, n = 3) did not receive the allocated intervention. A further clinician in the DIALOG+ group withdrew, but their patients (n = 2) were transferred to another participating clinician and continued the allocated intervention; these patients’ data were analysed according to the clinician with whom they were randomised. The primary outcome was assessed in 120 out of 179 patients (67.0%) at 3 months (DIALOG+, n = 61; control, n = 59), 147 out of 179 patients (82.1%) at 6 months (DIALOG+, n = 73; control, n = 74) and 129 out of 179 patients (72.1%) at 12 months (DIALOG+, n = 61; control, n = 68). The follow-up rate at 3 months was lower because the research team was stretched between conducting follow-up assessments and continuing recruitment at the same time; this problem persisted, to a lesser extent, at the 6-month follow-up.
Implementation
The total number of DIALOG+ sessions was recorded for 80 patients in the experimental group (data were missing for 14 patients because of technical problems in synchronising the tablet computer with the server). These data show that the implementation of the intervention was variable. Twenty-four patients (30%) did not complete any DIALOG+ session because either their clinician withdrew (n = 11) or the patient withdrew (n = 7), or simply because no sessions took place (n = 6). The mean number of sessions per patient in the DIALOG+ group overall was 1.8 [standard deviation (SD) 1.6] in the first 3 months and 1.1 (SD 1.2) in the second 3 months. However, when considering only those patients who received at least one session, the mean number of sessions was 2.6 (SD 1.3) in the first 3 months and 1.5 (SD 1.1) in the second 3 months, with a total of 4.1 sessions (SD 1.9 sessions). Only six patients continued to receive the DIALOG+ intervention after the 6-month follow-up, with an average of 1.2 (SD 0.4) additional sessions.
In comparison, only seven patients (8%) in the control group did not complete any ratings on the tablet computer, either because the clinician went on sabbatical leave (n = 3) or failed to administer the control intervention (n = 3), or because the patient withdrew (n = 1). For those patients in the control group who completed at least one rating, the mean number of ratings was 4.2 (SD 1.8) overall, with 2.6 (SD 1.1) in the first 3 months and 1.7 (SD 1.3) in the second 3 months.
The details of the quality of implementation of DIALOG+, according to the specially developed adherence scale, are provided in Chapter 6.
Baseline characteristics of participants
Table 1 summarises the clinical and sociodemographic characteristics of the patients. Patients were predominantly male, single and from a wide range of ethnicities. The primary diagnosis of eight patients changed after eligibility screening; however, this was not identified until after randomisation, so these patients remained in the trial and were included in the analysis.
Characteristics | Missing data, n (%) | Intervention (N = 94) | Control (N = 85) | Total (N = 179)
---|---|---|---|---
Gender, n (%) | 0 (0) | | |
Female | | 28 (30) | 28 (33) | 56 (31)
Male | | 66 (70) | 57 (67) | 123 (68)
Ethnicity, n (%) | 0 (0) | | |
White | | 23 (24) | 23 (27) | 46 (26)
Black | | 38 (40) | 32 (38) | 70 (39)
Asian | | 28 (30) | 21 (25) | 49 (27)
Mixed/other | | 5 (5) | 9 (11) | 14 (8)
Age (years) | 0 (0) | | |
Mean (SD) | | 41.5 (10.7) | 41.7 (9.3) | 41.6 (10.1)
Median (IQR) | | 40.5 (33–49) | 43.0 (36–48) | 42.0 (34–49)
Min., max. | | 22, 64 | 21, 64 | 21, 64
Birth country, n (%) | 0 (0) | | |
UK | | 60 (64) | 47 (55) | 107 (60)
Other countries | | 34 (36) | 38 (45) | 72 (40)
First language, n (%) | 0 (0) | | |
English | | 69 (73) | 56 (66) | 125 (70)
Other | | 25 (27) | 29 (34) | 54 (30)
Marital status, n (%) | 0 (0) | | |
Single | | 63 (67) | 61 (72) | 124 (69)
Married | | 19 (20) | 14 (16) | 33 (18)
Cohabiting | | 0 (0) | 0 (0) | 0 (0)
Widowed | | 3 (3) | 3 (4) | 6 (3)
Divorced | | 9 (10) | 7 (8) | 16 (9)
Has children, n (%) | 0 (0) | | |
No | | 49 (52) | 45 (53) | 94 (53)
Yes | | 45 (48) | 40 (47) | 85 (47)
Highest level of education, n (%) | 1 (0) | | |
Left education prior to GCSE | | 26 (28) | 22 (26) | 48 (27)
GCSE or equivalent | | 27 (29) | 25 (29) | 52 (29)
A level or equivalent | | 9 (10) | 13 (15) | 22 (12)
NVQ or equivalent | | 11 (12) | 8 (9) | 19 (11)
Diploma | | 14 (15) | 3 (4) | 17 (9)
Degree | | 6 (6) | 14 (16) | 20 (11)
Number of previous psychiatric admissions over 24 hours | 18 (10) | | |
Median (IQR) | | 2.0 (1–4) | 2.0 (1–5) | 2.0 (1–4)
Number of compulsory psychiatric admissions | 41 (23) | | |
Median (IQR) | | 1.0 (0–2) | 1.0 (1–2) | 1.0 (0–2)
Number of weeks in total under psychiatric admission | 50 (28) | | |
Median (IQR) | | 13.0 (6–40) | 16.5 (6–39) | 14.0 (6–39)
Length of relationship with care co-ordinator (years) | 0 (0) | | |
Median (IQR) | | 1.0 (0–2) | 1.5 (1–2) | 1.3 (1–2)
Length of contact with services (years) | 0 (0) | | |
Median (IQR) | | 11.5 (7.0–19.0) | 12.0 (7.0–19.0) | 12.0 (7.0–19.0)
Primary diagnosis (ICD-10),44 n (%) | 0 (0) | | |
Schizophrenia (F20) | | 76 (81) | 65 (76) | 141 (79)
Delusional disorders (F22) | | 0 (0) | 2 (2) | 2 (1)
Schizoaffective disorders (F25) | | 14 (15) | 10 (12) | 24 (13)
Unspecified non-organic psychosis (F29) | | 4 (4) | 0 (0) | 4 (2)
Bipolar disorder (F31)a | | 0 (0) | 4 (5) | 4 (2)
Major depressive episode (F32, F33)a | | 0 (0) | 4 (5) | 4 (2)
Blinding
There was a total of three cases in which the outcome assessors identified the allocation of a patient and blinding was compromised. This included one patient at the 3-month follow-up and two at the 6-month follow-up. If unblinding occurred, outcome assessors were changed. There were no cases where blinding was compromised at the 12-month follow-up.
Primary outcome
Table 2 summarises the results for all outcomes.
Outcomea | Intervention, n | Intervention, mean (SD) | Control, n | Control, mean (SD) | β-coefficient | 95% CI | p-value | ICC
---|---|---|---|---|---|---|---|---
Quality of life (MANSA) | ||||||||
Baseline | 94 | 4.0 (0.9) | 85 | 3.8 (0.9) | ||||
3 months | 61 | 4.4 (0.9) | 59 | 4.1 (0.9) | 0.299 | 0.021 to 0.578 | 0.035 | < 0.001 |
6 months | 73 | 4.3 (1.0) | 74 | 4.0 (1.0) | 0.257 | –0.009 to 0.524 | 0.058 | < 0.001 |
12 months | 61 | 4.4 (0.9) | 68 | 4.1 (0.9) | 0.319 | 0.063 to 0.575 | 0.014 | < 0.001 |
Treatment satisfaction (CSQ-8) | ||||||||
Baseline | 93 | 24.0 (4.8) | 85 | 23.9 (5.5) | ||||
3 months | 61 | 23.4 (5.1) | 58 | 24.2 (5.3) | –0.860 | –2.325 to 0.606 | 0.25 | 0.037 |
6 months | 73 | 24.2 (5.4) | 71 | 23.8 (6.0) | 0.436 | –0.956 to 1.827 | 0.54 | < 0.001 |
12 months | 61 | 24.4 (5.0) | 65 | 23.6 (6.0) | 0.730 | –0.800 to 2.260 | 0.350 | < 0.001 |
Self-efficacy (GSS) | ||||||||
Baseline | 91 | 25.6 (7.4) | 85 | 25.9 (6.6) | ||||
3 months | 61 | 25.8 (6.9) | 57 | 26.9 (6.6) | –1.458 | –3.347 to 0.432 | 0.131 | < 0.001 |
6 months | 73 | 26.5 (6.6) | 73 | 26.5 (6.2) | –0.162 | –1.848 to 1.525 | 0.851 | < 0.001 |
12 months | 60 | 27.0 (7.0) | 65 | 27.1 (6.1) | 0.231 | –1.604 to 2.065 | 0.805 | < 0.001 |
Well-being (WEMWBS) | ||||||||
Baseline | 93 | 43.0 (10.6) | 84 | 41.7 (9.9) | ||||
3 months | 61 | 43.4 (10.9) | 59 | 42.5 (10.6) | 0.025 | –0.165 to 0.214 | 0.799 | < 0.001 |
6 months | 73 | 43.8 (11.5) | 73 | 42.8 (10.0) | 0.030 | –0.157 to 0.217 | 0.753 | < 0.001 |
12 months | 61 | 45.8 (11.2) | 65 | 43.8 (10.4) | 2.005 | –0.802 to 4.811 | 0.162 | < 0.001 |
Recovery (CHOICE) | ||||||||
Baseline | 92 | 5.6 (1.9) | 85 | 5.4 (1.8) | ||||
3 months | 61 | 5.7 (2.1) | 56 | 5.6 (2.0) | 0.120 | –0.386 to 0.627 | 0.641 | < 0.001 |
6 months | 73 | 5.9 (2.1) | 72 | 5.4 (1.9) | 0.292 | –0.180 to 0.764 | 0.225 | < 0.001 |
12 months | 60 | 6.0 (1.9) | 65 | 5.7 (1.9) | 0.417 | –0.080 to 0.915 | 0.100 | < 0.001 |
Positive symptoms (PANSS positive) | ||||||||
Baseline | 93 | 14.8 (5.7) | 84 | 15.1 (6.4) | ||||
3 months | 61 | 14.1 (5.6) | 58 | 14.0 (5.3) | 0.206 | –1.102 to 1.514 | 0.757 | < 0.001 |
6 months | 73 | 13.2 (5.2) | 73 | 14.4 (5.7) | –0.927 | –2.432 to 0.579 | 0.228 | 0.065 |
12 months | 60 | 12.8 (5.3) | 65 | 14.3 (5.3) | –1.459 | –3.003 to 0.086 | 0.064 | < 0.001 |
Negative symptoms (PANSS negative) | ||||||||
Baseline | 94 | 17.1 (6.4) | 84 | 18.0 (7.8) | ||||
3 months | 61 | 15.2 (5.7) | 58 | 16.9 (6.6) | –0.923 | –2.692 to 0.846 | 0.306 | < 0.001 |
6 months | 73 | 15.1 (5.8) | 73 | 15.7 (6.1) | 0.037 | –1.591 to 1.665 | 0.965 | < 0.001 |
12 months | 60 | 13.3 (5.1) | 65 | 15.3 (6.3) | –1.470 | –3.364 to 0.423 | 0.128 | 0.208 |
General symptoms (PANSS-general) | ||||||||
Baseline | 93 | 32.9 (8.3) | 84 | 34.6 (10.1) | ||||
3 months | 61 | 29.2 (8.8) | 58 | 34.2 (9.2) | –3.415 | –6.335 to –0.495 | 0.022 | 0.189 |
6 months | 73 | 28.0 (9.2) | 73 | 32.8 (8.9) | –4.041 | –6.82 to –1.263 | 0.004 | 0.079 |
12 months | 60 | 26.4 (7.7) | 65 | 31.3 (7.3) | –4.271 | –6.712 to –1.829 | 0.001 | 0.067 |
Therapeutic relationship (STAR-P) | ||||||||
Baseline | 93 | 33.2 (8.0) | 85 | 34.4 (8.2) | ||||
3 months | 56 | 34.2 (8.0) | 59 | 33.3 (8.8) | 1.219 | –1.102 to 3.539 | 0.303 | < 0.001 |
6 months | 66 | 33.9 (8.7) | 70 | 32.2 (10.4) | 2.114 | –0.67 to 4.897 | 0.137 | 0.113 |
12 months | 48 | 33.0 (9.7) | 56 | 32.8 (9.3) | 0.448 | –2.712 to 3.607 | 0.781 | < 0.001 |
Therapeutic relationship (STAR-C) | ||||||||
Baseline | 77 | 40.8 (5.2) | 83 | 41.3 (4.4) | ||||
3 months | 52 | 39.8 (4.8) | 54 | 40.6 (4.7) | –0.033 | –2.341 to 2.276 | 0.978 | 0.557 |
6 months | 66 | 40.8 (4.7) | 66 | 41.7 (5.2) | –0.971 | –3.203 to 1.262 | 0.394 | 0.41 |
12 months | 39 | 40.2 (4.4) | 47 | 41.8 (4.9) | –2.454 | –5.108 to 0.200 | 0.070 | 0.536 |
Unmet needs (CANSAS)b | | | | | IRR | 95% CI | p-value | ICC
Baseline | 94 | 3.5 (3.1) | 85 | 3.9 (2.9) | ||||
3 months | 61 | 2.2 (2.2) | 59 | 3.2 (3.1) | 0.679 | 0.485 to 0.951 | 0.024 | 0.020 |
6 months | 73 | 2.0 (2.9) | 73 | 3.1 (3.0) | 0.607 | 0.412 to 0.895 | 0.012 | 0.032 |
12 months | 61 | 2.0 (2.3) | 66 | 2.7 (2.9) | 0.732 | 0.480 to 1.115 | 0.146 | 0.045 |
Social outcomes (SIX)c | | | | | POR | 95% CI | p-value | ICC
Baseline | 93 | 2.8 (1.0) | 85 | 2.6 (0.9) | ||||
3 months | 61 | 2.9 (1.3) | 59 | 2.7 (0.8) | 0.97 | 0.39 to 2.44 | 0.95 | < 0.001 |
6 months | 73 | 2.9 (1.1) | 74 | 2.6 (0.9) | 1.34 | 0.72 to 2.49 | 0.358 | < 0.001 |
12 months | 61 | 3.1 (1.0) | 68 | 2.6 (0.9) | 2.91 | 1.225 to 6.911 | 0.016 | 0.149 |
Subjective quality of life (MANSA) was significantly higher in the DIALOG+ group at the 3-month follow-up (effect size: Cohen’s d = 0.34) and, as a trend, at the 6-month follow-up (Cohen’s d = 0.29). It was also significantly higher in the DIALOG+ group at the 12-month follow-up (Cohen’s d = 0.34), 6 months after the intervention had ended for the majority of participants (only six patients continued to receive DIALOG+ after the initial 6-month intervention period).
Secondary outcomes
The number of unmet needs (CANSAS) was significantly lower in the DIALOG+ group at 3 and 6 months, reflecting a reduction of needs by 32% at 3 months and by 39% at 6 months. There were also significantly lower levels of general psychopathological symptoms in the DIALOG+ group at 3 months (effect size: Cohen’s d = 0.55) and 6 months (Cohen’s d = 0.54). There were no significant differences between treatment groups on any of the other secondary outcomes at the 3- and 6-month follow-up.
At the 12-month follow-up there were still significantly lower levels of general psychopathological symptoms in the DIALOG+ group (Cohen’s d = 0.65). Objective social outcomes were also significantly higher in the DIALOG+ group at the 12-month follow-up.
Discussion
Main findings
This pragmatic trial found that DIALOG+ had a positive effect in the community treatment of patients with psychosis. Patients in the DIALOG+ group had significantly higher subjective quality of life after 3 months, with a difference of borderline significance after 6 months. They also had significantly fewer unmet needs and lower levels of general psychopathological symptoms after 3 and 6 months. At 12 months, 6 months after most patients had ceased involvement with the intervention, subjective quality of life remained significantly higher and general psychopathological symptoms significantly lower in the DIALOG+ group, and patients in the DIALOG+ group also had significantly better objective social outcomes.
Strengths and limitations
The pragmatic nature of the trial is a key strength regarding the generalisability of the findings. DIALOG+ was implemented within services as it would be if rolled out into routine practice. Although the trial focused on one NHS trust, it was conducted across several different services and may therefore be seen as a multisite trial. The trial also had a number of additional strengths, including the use of an active control to account for the effect of repeated self-assessments and of using novel computer technology in clinical meetings; the similar frequencies with which the control and experimental interventions were delivered; the wide inclusion criteria for patients; and the blinding of outcome assessors.
However, there were also a number of limitations. The pragmatic nature of the trial may account for the variable implementation of the intervention, with 30% of patients allocated to the DIALOG+ group not receiving the intervention at all. The study team could not address this issue, as the principal investigator was blinded to all post-randomisation information; had the problem been known, it should have been possible to reduce this figure. Although this is a limitation of the trial, it arguably underlines the effectiveness of the intervention, as the benefit was substantial for the whole intervention group even though only 70% of patients in that group actually received the intervention.
Additionally, it was not possible to blind clinicians and patients to their own allocation, which raises the possibility of performance bias. Patients were also excluded if they already had a high subjective quality of life at baseline, had an insufficient command of English or were too unwell to provide informed consent; it is unclear to what extent the findings would apply to these excluded groups if the intervention were rolled out in practice. There was also a higher than expected dropout rate. Finally, the sample size was insufficient to detect small effect sizes with adequate power. Detecting such effects would be important, as even small individual effects could translate into a relevant public health benefit on a large scale.
Interpretation of the results
The positive effect of the DIALOG+ intervention is consistent with the findings of the original DIALOG trial. 17 Both interventions provide a comprehensive, structured assessment in routine community mental health meetings that is patient-centred and explores wishes for change. Adopting a patient-centred assessment in routine meetings is important, as a change to a patient’s self-reported needs has been found to be a stronger predictor of improved subjective quality of life than the clinician’s own perception of those needs. 65
However, the effect size on subjective quality of life is larger than in the original DIALOG trial (adjusted mean difference on the MANSA of 0.30 vs. 0.12). This is equivalent to an improvement in at least 3 out of 12 life domains on the MANSA, which can be regarded as a clinically significant improvement. There are a number of possible explanations for this increased effect. DIALOG+ extends the original intervention by going beyond facilitating a mere assessment of patients’ needs and wishes. It provides a guide for clinicians on how to respond to patients’ concerns and reach shared agreements on actions. This approach is based on principles of SFT, which previous findings suggest is effective in treating a wide range of disorders,51,52 requires a short time span51 and focuses on the personal and social resources of patients rather than their deficits. 53,66 The original DIALOG trial also differed methodologically in that there was no active control, it assessed outcomes after 1 year only, and there was no blinding of outcome assessors.
The effect size found is also similar to those of more time-consuming and cost-intensive psychological treatments such as CBT. 26–28,67 There are a number of possible explanations as to why the DIALOG+ intervention was found to be so effective. DIALOG+ is used within the existing patient–clinician relationship and does not require referral to another service or clinician. This may support mutual trust and credibility, and facilitate actions. Additionally, the intervention addresses psychological as well as practical issues, which could have a tangible impact on the patient’s life. This may subsequently alleviate general psychopathological symptoms, on which a medium-sized beneficial effect was found. The significant effect on general psychopathological symptoms, but not on positive or negative symptoms, might reflect that general symptoms are more closely associated with immediate problems and concerns than positive or negative symptoms.
The DIALOG+ intervention had an effect on subjective quality of life after just 3 months and, on average, two or three sessions, with no additional improvement thereafter. This suggests that the intervention produces a visible positive benefit after a short period of time, after which subsequent repetitions have limited benefit. Rather than applying DIALOG+ frequently at regular intervals, it might best be used once or twice in the first instance, rested once solutions have been identified, and then revisited when the patient’s situation has changed. The effect also remained at the 12-month follow-up, 6 months after most patients had ended the intervention, suggesting that DIALOG+ may lead to long-term benefits for patients. Most notably, patients in the DIALOG+ group had better objective social outcomes at the 12-month follow-up, whereas there was no effect at 3 and 6 months. These outcomes, including accommodation status and employment, would probably take a longer period of time to show meaningful change.
Interestingly, this trial found no positive impact on a number of secondary patient-reported outcomes such as treatment satisfaction, mental well-being and self-efficacy. Rather than being disappointing, this finding suggests that the positive effect on subjective quality of life and self-rated unmet needs is not merely a generalised positive appraisal across all patient-reported outcomes. 68 The effect appears to be specific rather than primarily driven by non-specific factors such as the therapeutic relationship; that the intervention did not have any significant effect on the therapeutic relationship also supports this interpretation.
Implications and future directions
This pragmatic trial suggests that the DIALOG+ intervention may be used widely in community care. It provides structured patient–clinician communication that collaboratively identifies a patient’s concerns, analyses them, explores solutions and initiates change. It is a low-cost, generic intervention that requires limited training and no reorganisation of services. It could, therefore, be rolled out easily across services and even small effect sizes in individual patients would translate into substantial public health gains. The DIALOG assessment itself has also been shown to be a valid indicator of subjective quality of life and treatment satisfaction. 69 The intervention could, therefore, provide regular patient-reported outcomes in a clinically meaningful procedure. This would be more economical and provide better response rates than separate assessments purely for evaluation purposes.
The findings encourage further attempts to turn routine clinical meetings into therapeutically effective interventions. 13 Modern computer technology could support such attempts, but this is not the essence of the intervention. Future research should further refine and test the DIALOG+ intervention in different clinical settings, geographical regions and patient groups.
Chapter 5 Cost-effectiveness of the DIALOG+ intervention
Introduction
The previous chapter of this report provided evidence on the clinical effectiveness of the DIALOG+ intervention. This chapter reports findings from a cost-effectiveness analysis of the intervention. Such evidence is crucial if scarce health-care resources are to be used optimally. Introducing new services and interventions clearly requires investment and, in the short term, this means that funds are not spent on other services (whether in the mental health-care sector or elsewhere). Such investment represents good value for money if savings occur through reduced costs elsewhere, if the intervention replaces a more expensive one, or if it does not decrease costs but produces sufficiently improved outcomes to justify the increased spending. Economic evaluation is a way of combining cost and outcome data in order to determine the extra cost (if any) incurred in order to achieve improved outcomes. This study adopted a health-care perspective, which is most relevant for bodies such as the National Institute for Health and Care Excellence (NICE), although the use of unpaid care from family and friends is also reported.
Methods
Service use and costs
The retrospective use of health and social care services in the 3-month periods prior to baseline and 3-, 6- and 12-month follow-ups was recorded using an adapted version of the Client Service Receipt Inventory (CSRI), which has been used extensively in mental health-care research. 70 Patients reported whether they had used specific services in hospital (psychiatric and general medical inpatient and outpatient) or the community [general practitioners (GPs), community mental health nurses, social workers, psychiatrists, practice nurses, dentists and other services]. Use of medication and the number of hours per week of care provided in specific areas by family and/or friends because of the participant’s health problems were also reported.
The extra cost associated with the intervention comprised specific staff training and the use of the computer equipment, calculated at £109 per participant in the intervention arm. The treatment-as-usual group also made some use of the computer equipment but, because this was intended to control for the non-specific effect of the technology, its cost was not included in the analyses.
Service use measured with the CSRI was combined with relevant unit costs2,3 and summed to derive total costs. Informal carers are not paid for their support to patients, but there is still a value to this time; the cost of this unpaid care was estimated using the average national wage of £15.11 per hour. Total health costs over the follow-up period were estimated by summing the costs at the 3-, 6- and 12-month follow-ups and adding the average of the 6- and 12-month costs to represent the unmeasured 9-month period (no 9-month follow-up was conducted).
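Given the assumptions spelled out above (each CSRI recall covers the preceding 3 months, and the unmeasured 9-month period is represented by the average of the 6- and 12-month costs), the follow-up total reduces to simple arithmetic; the figures below are placeholders, not trial data.

```python
# Illustrative arithmetic only (placeholder figures, not trial data): total follow-up
# cost is the sum of the 3-, 6- and 12-month period costs plus the average of the
# 6- and 12-month costs, standing in for the unmeasured 9-month period.
def total_followup_cost(cost_3m: float, cost_6m: float, cost_12m: float) -> float:
    cost_9m_proxy = (cost_6m + cost_12m) / 2
    return cost_3m + cost_6m + cost_12m + cost_9m_proxy

# Unpaid informal care valued at the average national wage of £15.11 per hour.
def informal_care_cost(hours_per_week: float, weeks: int = 13, wage: float = 15.11) -> float:
    return hours_per_week * weeks * wage

print(total_followup_cost(800.0, 1300.0, 600.0))   # 3650.0
print(informal_care_cost(20))                      # 3928.6
```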
Analyses
Data were analysed using Stata, version 11. Comparisons of total costs were made between the two groups using a bootstrapped regression model to account for non-normality in the data distribution, which is usually observed in studies such as this one (as a result of most participants having relatively low costs, and a small number having stays in hospital, which skew the cost distribution). Baseline costs were controlled for in the model and percentile confidence intervals (CIs) reported. To assess cost-effectiveness, we combined costs with the change in the MANSA score. If costs were lower for one group and outcome better, then that option was ‘dominant’. To address uncertainty around these point estimates, we generated 1000 incremental cost–outcome combinations using bootstrap methods and plotted these onto a cost-effectiveness plane. This allowed us to calculate the probability that, when compared with treatment as usual, the intervention was (1) cost-saving and outcome-improving, (2) cost-saving and outcome-worsening, (3) cost-increasing and outcome-worsening or (4) cost-increasing and outcome-improving. We did not produce a cost-effectiveness acceptability curve because we had no information about the threshold values to attach to a unit improvement in the MANSA.
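The quadrant probabilities described above can be approximated with a simple paired bootstrap; the sketch below is illustrative only, with hypothetical array names, and omits the baseline-cost adjustment used in the actual bootstrapped regression.

```python
# Minimal sketch of the bootstrap cost-effectiveness plane: 1000 resampled incremental
# cost / incremental MANSA-change pairs are classified by quadrant of the plane.
import numpy as np

def ce_plane_probabilities(cost_int, cost_con, effect_int, effect_con,
                           n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    d_cost, d_effect = [], []
    for _ in range(n_boot):
        # resample each arm with replacement and take the difference in means
        d_cost.append(rng.choice(cost_int, size=len(cost_int)).mean()
                      - rng.choice(cost_con, size=len(cost_con)).mean())
        d_effect.append(rng.choice(effect_int, size=len(effect_int)).mean()
                        - rng.choice(effect_con, size=len(effect_con)).mean())
    d_cost, d_effect = np.array(d_cost), np.array(d_effect)
    return {
        "cost-saving, outcome-improving":     np.mean((d_cost < 0) & (d_effect > 0)),
        "cost-increasing, outcome-improving": np.mean((d_cost >= 0) & (d_effect > 0)),
        "cost-saving, outcome-worsening":     np.mean((d_cost < 0) & (d_effect <= 0)),
        "cost-increasing, outcome-worsening": np.mean((d_cost >= 0) & (d_effect <= 0)),
    }
```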
Results
Service use and costs
Table 3 shows the number and percentage of each group using specific services prior to baseline and each follow-up interview. At baseline, the groups were reasonably balanced. Around two-thirds of participants had contact with GPs and with community mental health nurses, and around half had social worker contact. Overall, about half had psychiatrist contact. Nearly all were in receipt of psychotropic medication. Few participants had been inpatients. Slightly less than half had received informal care from family members or friends during the preceding 3 months. These service patterns were largely maintained at each follow-up. However, there were some noticeable differences. In the period prior to the 3-month follow-up, the intervention group participants were less likely to have primary care nurse contact than those in the control group. However, they were more likely to have contact with dentists and to have non-psychiatric outpatient care. In the period prior to the 6-month follow-up, the control group participants were more likely to have contact with ‘other’ professionals. In the final 3-month period, the main difference was that the control group participants were more likely to have community mental health nurse contact.
Service, n (%) | Baseline: intervention (n = 94) | Baseline: control (n = 85) | 3-month follow-up: intervention (n = 61) | 3-month follow-up: control (n = 59) | 6-month follow-up: intervention (n = 73) | 6-month follow-up: control (n = 74) | 12-month follow-up: intervention (n = 61) | 12-month follow-up: control (n = 68)
---|---|---|---|---|---|---|---|---
GP | 66 (70) | 55 (65) | 48 (79) | 43 (73) | 46 (63) | 49 (66) | 38 (62) | 41 (60) |
Community mental health nurse | 68 (72) | 54 (64) | 43 (70) | 41 (69) | 55 (75) | 48 (65) | 38 (62) | 48 (71) |
Social worker | 53 (56) | 43 (51) | 28 (46) | 27 (46) | 36 (49) | 37 (50) | 32 (52) | 30 (44) |
Psychiatrist | 40 (43) | 45 (53) | 35 (57) | 33 (56) | 34 (47) | 38 (51) | 29 (48) | 36 (53) |
Primary care nurse | 17 (18) | 23 (27) | 15 (25) | 24 (41) | 16 (22) | 20 (27) | 20 (33) | 23 (34) |
Dentist | 6 (6) | 4 (5) | 12 (20) | 4 (7) | 13 (18) | 8 (11) | 6 (10) | 7 (10) |
Other professionals | 30 (32) | 37 (44) | 23 (38) | 23 (39) | 22 (30) | 30 (41) | 17 (28) | 20 (29) |
Psychiatric inpatient | 6 (6) | 6 (7) | 0 (0) | 3 (5) | 4 (5) | 4 (5) | 1 (2) | 3 (4) |
Other inpatient | 0 (0) | 1 (1) | 1 (2) | 2 (3) | 1 (1) | 3 (4) | 0 (0) | 3 (4) |
Psychiatric outpatient | 1 (1) | 0 (0) | 1 (2) | 0 (0) | 0 (0) | 3 (4) | 0 (0) | 1 (1) |
Other outpatient | 9 (10) | 10 (12) | 14 (23) | 8 (14) | 21 (29) | 21 (28) | 11 (18) | 14 (21) |
Medication | 86 (91) | 83 (98) | 58 (95) | 58 (98) | 67 (92) | 73 (99) | 57 (93) | 67 (99) |
Informal care | 42 (45) | 42 (49) | 28 (46) | 33 (56) | 37 (51) | 34 (46) | 27 (44) | 25 (37) |
For those patients using specific services, the quantity of service use is described in Table 4. Again, there were few differences between the groups. At baseline, the number of days spent as a psychiatric inpatient, for those who were admitted, was slightly higher for the intervention group. Contact with ‘other’ professionals was more frequent for the control group, and this continued during the following two periods. Psychiatric inpatient days were again higher for the intervention group in the period prior to the 6-month follow-up, but in the final period only one intervention group participant was admitted and they had a shorter stay than the average for the control group. For those who received informal care from family or friends, the average number of hours per week ranged from 16 to 39.
Service, mean (SD) contacts (days/hours where indicated) | Baseline: intervention (n = 94) | Baseline: control (n = 85) | 3-month follow-up: intervention (n = 61) | 3-month follow-up: control (n = 59) | 6-month follow-up: intervention (n = 73) | 6-month follow-up: control (n = 74) | 12-month follow-up: intervention (n = 61) | 12-month follow-up: control (n = 68)
---|---|---|---|---|---|---|---|---
GP | 2.7 (2.9) | 2.8 (2.1) | 2.2 (2.6) | 2.5 (1.7) | 2.3 (2.3) | 2.8 (1.8) | 1.8 (1.0) | 2.2 (1.6) |
Community mental health nurse | 4.0 (1.9) | 4.2 (3.7) | 4.5 (2.7) | 3.6 (2.0) | 4.6 (2.0) | 4.4 (1.7) | 4.2 (2.2) | 3.6 (2.2) |
Social worker | 3.8 (2.1) | 3.8 (1.9) | 3.8 (1.7) | 3.6 (2.1) | 3.1 (1.7) | 4.1 (2.5) | 2.9 (1.5) | 3.4 (2.2) |
Psychiatrist | 1.2 (0.5) | 1.2 (0.4) | 1.1 (0.2) | 1.3 (0.8) | 1.2 (0.6) | 1.2 (0.5) | 1.2 (0.5) | 1.2 (0.4) |
Primary care nurse | 1.6 (0.9) | 2.7 (2.5) | 1.9 (1.3) | 2.5 (1.6) | 1.3 (1.0) | 1.8 (1.2) | 1.2 (0.6) | 2.5 (3.0) |
Dentist | 1.3 (0.5) | 1.5 (0.6) | 1.3 (0.5) | 1.8 (1.0) | 1.8 (1.4) | 1.6 (0.7) | 1.7 (1.6) | 2.3 (1.7) |
Other professionals | 3.8 (3.1) | 8.1 (14.1) | 4.6 (5.8) | 7.8 (12.9) | 5.0 (6.7) | 11.5 (20.8) | 5.0 (6.4) | 5.1 (7.4) |
Psychiatric inpatient (days) | 48.7 (23.4) | 40.3 (29.3) | – | 34.7 (24.7) | 36.8 (13.7) | 29.5 (13.8) | 7.0 (-) | 46.7 (42.5) |
Other inpatient (days) | – | 1.0 (–) | 13.0 (–) | 7.5 (6.4) | 5.0 (–) | 1.3 (0.6) | – | 3.0 (3.5) |
Psychiatric outpatient | 1.0 (–) | – | 1.0 (–) | – | – | 3.3 (2.1) | – | 1.0 (–) |
Other outpatient | 1.2 (0.4) | 2.1 (1.3) | 1.8 (1.4) | 4.0 (7.0) | 1.6 (1.0) | 1.8 (1.0) | 2.1 (1.6) | 1.8 (1.2) |
Informal care (hours) | 39.4 (38.6) | 30.4 (34.8) | 24.1 (31.6) | 20.8 (27.4) | 15.9 (22.7) | 22.8 (30.2) | 26.3 (37.5) | 29.4 (40.5) |
When psychiatric inpatient care was used, it accounted for a disproportionate amount of the total cost (Table 5). Costs of GP care were between £40 and £100 at each time point and costs of psychiatrist care were around £100. Medication costs were relatively stable over time. Informal care costs were not included in the totals but, as can be seen, they exceeded health costs for each group at each time point. At baseline, the total health-care costs were very similar between the groups. At the 3-month follow-up, costs were substantially higher for the control group, but the costs were again very similar at the 6-month follow-up. In the final period, the costs were again far higher for the control group.
Service, mean (SD) cost (£) | Baseline: intervention (n = 94) | Baseline: control (n = 85) | 3-month follow-up: intervention (n = 61) | 3-month follow-up: control (n = 59) | 6-month follow-up: intervention (n = 73) | 6-month follow-up: control (n = 74) | 12-month follow-up: intervention (n = 61) | 12-month follow-up: control (n = 68)
---|---|---|---|---|---|---|---|---
GP | 89 (163) | 90 (112) | 76 (103) | 95 (119) | 75 (138) | 100 (156) | 40 (43) | 70 (108) |
Community mental health nurse | 63 (67) | 72 (125) | 94 (128) | 68 (81) | 84 (91) | 65 (68) | 40 (49) | 56 (66) |
Social worker | 63 (76) | 65 (93) | 67 (99) | 68 (103) | 62 (104) | 76 (111) | 51 (64) | 51 (75) |
Psychiatrist | 84 (118) | 100 (115) | 99 (103) | 105 (142) | 89 (133) | 90 (114) | 91 (127) | 101 (115) |
Primary care nurse | 3 (8) | 6 (16) | 6 (16) | 12 (25) | 3 (8) | 7 (15) | 4 (9) | 8 (17) |
Dentist | 3 (11) | 2 (11) | 8 (17) | 4 (16) | 10 (29) | 6 (18) | 5 (22) | 8 (28) |
Other professionals | 70 (177) | 216 (422) | 127 (350) | 165 (426) | 86 (256) | 199 (511) | 86 (314) | 64 (178) |
Psychiatric inpatient | 1084 (4585) | 994 (4404) | 0 (0) | 615 (3122) | 703 (3097) | 557 (2538) | 40 (313) | 719 (4234) |
Other inpatient | 0 (0) | 7 (63) | 124 (970) | 148 (935) | 40 (341) | 32 (164) | 0 (0) | 77 (503) |
Psychiatric outpatient | 1 (11) | 0 (0) | 2 (14) | 0 (0) | 0 (0) | 15 (81) | 0 (0) | 2 (13) |
Other outpatient | 13 (42) | 27 (87) | 45 (110) | 59 (305) | 49 (98) | 56 (107) | 41 (115) | 40 (98) |
Medication | 150 (237) | 114 (187) | 167 (283) | 183 (284) | 123 (220) | 146 (279) | 165 (269) | 140 (215) |
Total health care excluding therapy | 1622 (4543) | 1693 (4417) | 814 (1121) | 1522 (3540) | 1324 (3116) | 1348 (2764) | 562 (540) | 1335 (4301) |
Informal care | 3461 (6355) | 2946 (5641) | 2174 (4795) | 2291 (4494) | 1585 (3521) | 2060 (4583) | 2290 (5492) | 2123 (5531) |
The total mean costs, including the intervention training and equipment over the follow-up period, were £3279 for the intervention group and £4624 for the control group. Adjusting for baseline, the savings for the intervention group were £1288, but this difference was not statistically significant (bootstrapped 95% CI –£1318 to £5633).
Chapter 4 reported that the intervention group had a higher MANSA score at follow-up. Therefore, the intervention was, in a technical sense, dominant (i.e. less expensive and more effective). Despite this, the uncertainty around each of these estimates is substantial. The cost-effectiveness plane shown in Figure 4 indicates that the most likely outcome (with a probability of 0.724) is that the intervention does indeed save resources and produce better outcomes. There is, however, a 0.265 probability that the intervention is more effective at a higher cost, and whether or not the extra benefit is sufficient to justify the extra cost is uncertain. What does seem clear from these analyses is that the intervention is very unlikely to result in worse outcomes (i.e. to fall in the quadrants of Figure 4 to the left of the vertical axis).
Discussion
The findings from the cost-effectiveness analysis suggest that although there are some differences in service use at different time points, these are not large and costs do not differ substantially. Inpatient costs do appear to show large differences, but the numbers of patients admitted are small; despite this, the durations in hospital are often prolonged. The inpatient costs do appear rather volatile and it could be argued that their inclusion in a study of this size may cause problems of interpretation. We adopted a health-care perspective in the main analyses. However, we also estimated the costs of informal care. Measuring and valuing such care is challenging and there is a lack of consensus about how this should be done, or indeed, if it is relevant to do so. It is apparent, however, that patients were in receipt of much care from family members and friends and the costs of this care as estimated here are vast.
It is of interest that the intervention does not seem to increase costs. In fact, the analyses demonstrated a high likelihood (72%) that the intervention both improves outcomes and saves costs. It is noteworthy that this figure reflects not simply the probability that a new intervention saves costs, but the probability that this intervention saves costs while simultaneously improving patient outcomes. Although there was a 27% likelihood of increased costs alongside improved outcomes, such a scenario could still indicate cost-effectiveness if the improved outcomes are valued sufficiently highly to justify the extra costs. Unfortunately, there is no commonly accepted threshold at which improvements on the MANSA should be valued.
The use of the MANSA in the economic analyses is a limitation. Although it is of relevance for this patient group, it does not greatly help commissioners who have to decide between the intervention evaluated here and those in other health-care areas (cancer, diabetes, etc.). In future trials of such an intervention, it would be important to consider the use of a measure of quality-adjusted life-years such as the EuroQoL-5 Dimensions (EQ-5D). Quality-adjusted life-years are generic measures of health-related quality of life and, in principle, allow comparisons to be made across diverse health-care areas. However, the use of the EQ-5D in patients with severe mental illness has been criticised, as it may not reflect the domains that are relevant and may not be sensitive to change. 71 That being said, there would be scope for examining changes in a measure such as the EQ-5D in relation to clinically specific measures.
A further limitation of these analyses is that the service-use data relied partially on participant self-reporting (participants were interviewed about service use by researchers, and attempts were made to verify their reports against electronic records). Although the accuracy of this may be questioned, it is in reality the main method by which the breadth of service use required for a comprehensive costing can be acquired. Furthermore, a number of studies have found that this approach is acceptable. 72,73 Although routine data sources can be used for some service-use measures, it is not always apparent that these are in themselves wholly accurate measures of resource use. We measured service use over a relatively short period prior to each follow-up and there is no reason to expect that under- or (maybe less likely) over-reporting would affect one group more than another.
The sample size was relatively small. For most services this may not be crucial, but the costs of inpatient care did fluctuate dramatically and differed between the groups. This may have been a result of the care provided, or may have been a chance finding. Future studies should be large enough to ensure that estimates of inpatient costs are robust. Finally, the duration of the study is also a limitation. Assessing costs and outcomes over a 1-year period is informative and is not unusual in mental health-care research. However, these are chronic conditions and producing cost-effectiveness findings over a number of years would be appropriate. This is rarely feasible within a trial, and so modelling methods can be used whereby transitions between health states, and the impact that an intervention can have on those transitions, can be estimated.
In conclusion, this chapter has provided preliminary evidence that the intervention is inexpensive and cost-effective, and this should be verified in future work.
Chapter 6 Analysis of videos of DIALOG+ sessions
Introduction
This chapter reports findings from video-recorded sessions of patients in both the intervention and control arms of the trial. The aims were to:
-
assess the adherence of clinicians allocated to the intervention arm to the DIALOG+ manual
-
assess the adherence of clinicians allocated to the control group to the instructions provided (i.e. that they should facilitate patients in independently rating the DIALOG scale, without discussing ratings)
-
investigate qualitatively how the DIALOG+ intervention was implemented in practice.
Methods
Study design
This was an observational study whereby patient–clinician meetings in both arms of the trial were video-recorded and analysed. Video recordings were chosen as a reliable way to gather information about the implementation of the intervention and control conditions in practice, without adding undue influence from a researcher observing sessions. 74 The study received a favourable ethical opinion from the NRES (Stanmore; reference number 12/LO/1145). Video data were collected only for those patients who consented to being recorded on enrolment in the trial.
Participants
Participants were patients and clinicians from both arms of the trial as described in Chapter 4.
Intervention sample
The aim was to record one DIALOG+ session for each patient–clinician pair in the intervention arm, covering a range of the six sessions expected to take place over the 6-month trial period. Eighteen videos of DIALOG+ sessions were collected, but the final sample analysed comprised 16 videos, as two had to be excluded: one because the session was conducted in the patient’s native language rather than English, and one because of poor video quality. The videos were from 12 clinicians and were mostly of later sessions of the DIALOG+ intervention (session 2, n = 1; session 4, n = 4; session 5, n = 5; session 6, n = 6). This sample was considerably smaller than expected because a number of patients did not receive the intervention (owing to patient or clinician withdrawals, or clinicians simply not delivering the intervention) and a number of patients withdrew their consent for video recording.
Control sample
We were successful in video-recording 20% of the control sample as planned, collecting 13 videos from nine clinicians.
Procedure
The DIALOG+ adherence scale (see Appendix 5) was developed by the research team to assess clinician behaviours specific to the administration of the DIALOG+ procedure, as outlined in the manual and training programme. It is composed of 16 items, scored using either a two-point scale (0, 1) to indicate either the absence or implementation of a specific behaviour, or a three-point scale (0, 1, 2) to indicate the absence, partial implementation or full implementation of a behaviour. There are two subscales, corresponding to the initial DIALOG assessment (nine items; a total score of 15) and the four-step approach (seven items; a total score of 13). An example item is ‘satisfaction’, which asks ‘For how many domains does the patient rate his/her satisfaction?’ and can be rated as ‘0 – no items are rated’, ‘1 – more than three and less than eight items are rated’, or ‘2 – more than nine items are rated’. Scores are added together to give a total score between 0 and 28. As more than one domain can be discussed per session, each domain is rated on the adherence scale, and the highest-scoring domain is recorded.
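The scoring structure described above can be illustrated with a short, hypothetical sketch; the item groupings and values are placeholders rather than the actual scale content.

```python
# Hypothetical illustration of the scoring logic described above (not the actual scale):
# the two subscale totals (maxima 15 and 13) sum to an overall score out of 28, and where
# several domains are discussed in one session the highest-scoring domain is recorded.
def session_adherence(domain_scores: list) -> dict:
    # Each element of domain_scores is a (dialog_assessment, four_step) subtotal pair for one domain.
    best = max(domain_scores, key=lambda pair: sum(pair))
    assessment, four_step = best
    return {"DIALOG assessment (0-15)": assessment,
            "Four-step approach (0-13)": four_step,
            "Total (0-28)": assessment + four_step}

print(session_adherence([(7, 9), (5, 6)]))   # highest-scoring domain recorded: total 16
```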
Analysis
Adherence
Each clinician was rated using the DIALOG+ adherence scale. Where more than one video was collected per clinician, each video was rated and an average score calculated. One researcher rated all of the clinicians to assess adherence to the intervention, and a second researcher independently rated a random 25% of the videos. Control videos were assessed for adherence on two criteria: (1) whether or not the clinician facilitated the patient in independently completing the DIALOG scale; and (2) whether or not the clinician ensured that there was no discussion of the ratings with the patient.
Implementation of DIALOG+ in practice
The qualitative analysis of the videos was conducted by the researchers using an inductive approach, whereby the direction of the analysis was decided on the basis of the data. 75 This followed the principles of thematic analysis. 76 Both researchers independently familiarised themselves with the data by viewing six videos and generated a list of variables of interest. These were entered into a Microsoft Excel® (version 2010) (Microsoft Corporation, Redmond, WA, USA) database as codes with related memos, and were subsequently refined and organised into coherent themes. The themes were discussed with a senior qualitative researcher (RMC), and rating sheets with themes of interest for the intervention and control videos were developed (see Appendices 6 and 7). Each researcher then rated videos that they had not previously viewed. Once all ratings were complete, the information was collated using frequency counts and a narrative summary was used to integrate the findings.
Results
The results are reported in the following order: (1) adherence of clinicians allocated to the intervention arm of the trial to the DIALOG+ manual; (2) adherence of clinicians allocated to the control group to the instructions provided; and (3) implementation of the DIALOG+ intervention in practice and comparisons with control sessions.
Adherence of clinicians to the DIALOG+ manual
Adherence scores for the 12 clinicians are presented in Table 6. Overall adherence to the intervention was 16 out of a possible 28, a little over half the possible score. Adherence to the prescribed procedure of the initial DIALOG assessment was less than half the possible score, whereas adherence to the four-step approach was higher (i.e. 8.18 out of a possible 13).
Component of DIALOG+ intervention | Mean | Median | Min. | Max. | Max. possible score |
---|---|---|---|---|---
Satisfaction | 1.67 | 2 | 0 | 2 | 2 |
Additional help | 0.64 | 0.33 | 0 | 2 | 2 |
Use of iPad | 0.97 | 1 | 0 | 2 | 2 |
Comparison | 0.17 | 0 | 0 | 1 | 1 |
Positive | 0.25 | 0 | 0 | 2 | 2 |
Special attention to mental health | 0.25 | 0 | 0 | 1 | 1 |
Number of domains | 0.72 | 1 | 0 | 1 | 1 |
Patient involvement | 1.13 | 1.5 | 0 | 2 | 2 |
Selection of domains | 0.61 | 1 | 0 | 1 | 2 |
Step one: understanding | 1.54 | 2 | 0 | 2 | 2 |
Step two: looking forward | 1.26 | 1 | 0 | 2 | 2 |
Step three: exploring options | 1.35 | 1.5 | 0 | 2 | 2 |
Step four: agreeing on actions | 1.67 | 2 | 0 | 2 | 2 |
Four-step approach order | 0.21 | 0 | 0 | 1 | 1 |
Recording of action items | 0.78 | 1 | 0 | 2 | 2 |
Appropriate action items | 1.38 | 1.75 | 0 | 2 | 2 |
DIALOG assessment | 6.40 | 7 | 0a | 10 | 15 |
Four-step approach | 8.18 | 8.33 | 0a | 12 | 13 |
Total score | 14.58 | 16 | 0a | 21 | 28 |
High-scoring items included ‘satisfaction’, showing that most or all items were rated, and ‘patient involvement’, showing that patients were involved in selection of domains. Low-scoring items on the DIALOG assessment subscale were ‘comparison’, showing that few clinicians compared current and previous ratings; ‘positive’, suggesting that positive feedback on high ratings of satisfaction was rarely given during the review of the ratings; and ‘special attention to mental health’, showing that such attention was not particularly applied to that domain.
High-scoring items on the four-step approach were ‘step one: understanding’ and ‘step four: agreeing on actions’, demonstrating that clinicians were adherent in exploring both positive and negative aspects of the patient’s chosen domain and in agreeing and documenting action plans. Low-scoring items included ‘four-step approach order’, showing that few clinicians completed the four-step approach in the specified order, and ‘recording of action items’, suggesting that clinicians often did not wait until the end of the four-step discussion before recording items, as instructed.
Adherence of the control group to instructions
All 13 videos demonstrated that clinicians were adherent to the instructions provided to them (i.e. clinicians facilitated the patients in completing the DIALOG scale independently, without any discussion of the ratings). Clinicians usually introduced the iPad and gave a brief procedural reminder to the patient such as ‘remember to choose a point on the scale by pressing the number’. The majority of clients then completed DIALOG unaided while the clinician sat quietly in the room (n = 6) or waited outside (n = 4). Two patients required procedural help; one sought help with proceeding from one item to the next, while the other required language assistance.
Implementation of the DIALOG+ intervention in practice
The DIALOG+ intervention was delivered in 15 of the 16 videos recorded; thus, these 15 videos were analysed to identify important variables regarding how the DIALOG+ intervention was implemented in practice. In the 16th video, the clinician failed to deliver the DIALOG+ intervention.
Review of previous ratings
Clinicians were advised to begin sessions by reviewing previous actions. However, the majority of clinicians did not do this. Where this review was attempted (n = 4), it was not deemed to facilitate the DIALOG+ session, as clinicians did not involve the patient actively in the review process (n = 2), were unable to operate the review function of the app (n = 1) or started to review but stopped when the patient reported that they had not completed actions (n = 1).
Procedural reminders
In virtually all of the videos (n = 14), the clinician reminded the patient of the procedure, except for one patient who completed the scale independently and needed no reminder. Clinicians were not advised how to remind the patient of the session and, therefore, the styles of reminders varied from extensive procedural reminders to brief prompts during the rating. Almost all reminders were useful, although in some cases reminders were too brief, too extensive or were not well-facilitated as the patient could not see the iPad.
Initial DIALOG assessment: time taken and shared viewing
Across the 15 sessions, approximately 11 minutes, on average, were spent completing the initial DIALOG assessment (range 4–23 minutes). In almost all (n = 14), the clinician and patient sat in close enough proximity to share the iPad during the assessment and choosing of domains. Shared viewing was attempted in most cases (n = 13), but was considered to facilitate only around half of the sessions (n = 8). Factors that facilitated shared viewing included the use of the iPad cover to prop up the iPad so that the screen could be seen by both clinician and patient. When shared viewing was unhelpful (n = 5), this tended to be when the iPad was primarily located in the clinician’s line of sight and turned towards the patient only occasionally. Shared use of the iPad (i.e. the patient partaking in the operation of the iPad) was observed in a minority of sessions (n = 3), with patients completing the ratings independently, semi-independently or touching the screen briefly.
Patient discussion of domain items
In almost all of the sessions (n = 14), there was at least one instance where patients responded to a question on satisfaction by elaborating on their situation, rather than providing a rating. For example, one clinician asked ‘how satisfied are you with your accommodation?’ and the patient responded ‘they are coming to fix my kitchen and bathroom’. In some cases (n = 5), clinicians were able to deal with this effectively by listening to the patient’s response and then asking some variation of ‘so how would you rate it?’. However, in other sessions (n = 6), clinicians did not deal well with patients’ elaborations: they dismissed concerns, interrupted patients or let patients talk at length, resulting in repetition of information later on, during the four-step discussion.
Clinician rephrasing and personalising the language of the DIALOG+ assessment
In all videos in which the clinician posed the questions on satisfaction aloud to the patient, there was some rephrasing of the questions. Most frequently, the alteration was ‘how do you rate [domain]?’ rather than ‘how satisfied are you with [domain]?’. This change was not viewed as especially problematic, as it had little impact on patients’ ability to complete the scale (although it is a conceptually different question). However, in around half of the sessions, clinicians’ rephrasing led patients into a premature discussion of the domain (e.g. ‘how are things with [domain]?’) or involved leading questions such as ‘are you satisfied or not?’.
In many videos (n = 7), clinicians used personalised information in facilitating patients’ ratings, for example, on the partner/family domain (‘I know you do not have a partner but with the arrangement with your family?’) and job situation (‘I know you’re not in work at the moment’). This personalisation was appraised as useful on the whole (n = 4), as it reflected a good therapeutic relationship between the two parties. However, it was unhelpful when clinicians imposed their own judgement on patients’ ratings; for example, one patient rated 6 (very satisfied) for job situation and the clinician responded ‘But you said you do not have a job’ and then encouraged the patient to choose another score.
Additional help item
After each rating of satisfaction, clinicians are instructed to ask ‘do you need more help in this area?’ with ‘yes’ or ‘no’ response options. If additional help is requested, this becomes a criterion for selecting the domain for further discussion. The item is purposefully worded as ‘more’ to set the occasion for patients to ask for help if needed, rather than ‘any’, which does not invite a ‘yes’ response. 77
The precise phrasing of the help question was used in only one-third of videos (n = 5), and in only one of these videos was it used consistently across domains. Most commonly (n = 9), the phrasing was altered to ‘do you want to talk about that?’. Occasionally (n = 3), this phrasing encouraged the patient to begin discussing the domain and the clinician had to interrupt and explain that the discussion would take place once the rating scale was complete. Another common finding was that clinicians assumed the answer to the help item based on the patient’s score and made statements such as ‘so you do not need any more help with that?’ or ‘let’s discuss that’. ‘Any’ was often used in place of ‘more’. In two videos with different patients, one clinician was observed not asking the help item at all. As the item was mandatory in the software for any completed rating of satisfaction, this clinician must have responded to the item without consulting the patient.
Special attention to mental health
Given its central importance, clinicians were advised to give special attention to mental health, checking for patient distress or concern and proposing the selection of mental health for further discussion, if necessary. However, clinicians gave special attention to mental health in only a small number of videos (n = 4) and, where this was done, they misunderstood the instructions, insisting that mental health be discussed rather than negotiating this with the patient.
Number of domains rated
In most sessions, all 11 domains were rated (in one session, one domain was missed). In two sessions, the same clinician asked patients to rate only the items they wished to discuss (as a result, only 4 of the 11 items were rated). This is non-adherent to the manual and fails to generate the patient’s perspective on all 11 domains.
Comparing current and previous ratings and using positive feedback
After completing the assessment of domains, the clinician and patient can see an overview of all ratings, which, from the second use of DIALOG+ onwards, may be used to make comparisons with a previous session. Clinicians were encouraged to briefly comment on positive ratings (≥ 5) or improved ratings, to ensure that positive thoughts, feelings and behaviours were reinforced. However, the use of this software feature was observed in only three videos. One clinician used pen and paper to write down and compare scores, and another recalled a patient’s previous score from memory and challenged the patient on why this had deteriorated, causing the patient to instead give a higher rating.
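For illustration only, the feedback guidance described above can be expressed as a short rule. The sketch below is hypothetical and does not describe the DIALOG+ software; it simply restates the instruction that positive (≥ 5) or improved ratings merit a brief positive comment.

```python
from typing import Dict, List, Optional

def ratings_to_praise(current: Dict[str, int],
                      previous: Optional[Dict[str, int]] = None) -> List[str]:
    """Return the domains with a positive (>= 5) or improved satisfaction rating.

    Hypothetical sketch of the feedback guidance; not taken from the DIALOG+ software.
    """
    praised = []
    for domain, score in current.items():
        improved = previous is not None and domain in previous and score > previous[domain]
        if score >= 5 or improved:
            praised.append(domain)
    return praised
```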
Selection of domains
The following criteria were given to guide the selection of domains (an illustrative sketch of how these criteria might combine is given after the list):
-
Select no more than three domains for further discussion.
-
Focus on domains where satisfaction is < 4.
-
Focus on domains where additional help is requested.
-
Focus on mental health if distress is reported.
-
If no domains meet these criteria, select domains with a score of 4 or a deteriorated score.
-
Selection is based on clinicians’ and patients’ discretion and should be a joint decision.
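For illustration only, the criteria above can be read as a simple filtering rule. The sketch below is hypothetical (the names and data structure are not taken from the DIALOG+ software), and the final choice of domains remains a joint decision between clinician and patient:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DomainRating:
    name: str                              # e.g. 'mental health', 'job situation'
    satisfaction: int                      # rating on the 1-7 DIALOG scale
    more_help_requested: bool              # answer to 'do you need more help in this area?'
    previous_satisfaction: Optional[int] = None

def candidate_domains(ratings: List[DomainRating],
                      distress_reported: bool = False,
                      max_domains: int = 3) -> List[DomainRating]:
    """Apply the manual's selection criteria; the result is a suggestion to negotiate, not a decision."""
    candidates = [r for r in ratings if r.satisfaction < 4 or r.more_help_requested]
    if distress_reported:
        # Mental health takes priority if the patient reports distress
        mental_health = [r for r in ratings if r.name == 'mental health']
        candidates = mental_health + [r for r in candidates if r.name != 'mental health']
    if not candidates:
        # Fallback: a rating of exactly 4, or a rating that has deteriorated since the last session
        candidates = [r for r in ratings
                      if r.satisfaction == 4
                      or (r.previous_satisfaction is not None
                          and r.satisfaction < r.previous_satisfaction)]
    return candidates[:max_domains]
```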
The average number of domains discussed across the data set was three (range 1–5), as suggested. It was most common (n = 8) for domains to be chosen if additional help was requested (including if the clinician used a rephrasing such as ‘do you want to talk about it?’). Occasionally, domains were chosen based on low scores (n = 3) or a combination of requests for help and low scores (n = 2).
Selection of domains usually arose from a joint decision (n = 9). Occasionally the clinician chose, but verbalised the selected topics to the patient (n = 2), for example ‘I can see that you scored job situation as 1 so we are going to talk about it’. However, in four cases the clinician chose the domains independently without verbalising the reasons for the choice, for example ‘so the first one we are going to select is job situation’, and patients appeared to find this more difficult to follow than the collaborative discussions.
The most commonly discussed domain was job situation (n = 11), followed by physical health (n = 7) and then mental health (n = 6). Each of the domains was discussed at least once, although some (friendships, practical help, meetings with professionals) were selected once only.
Four-step approach: understanding
The goal of step 1 was to understand the reasons for the negative rating or wish for more help in the given domain. The patient was also encouraged to consider his/her existing strengths or coping strategies within the situation.
Clinicians always (n = 15) explored reasons for dissatisfaction with the patient, although in many sessions (n = 10) existing strengths or coping strategies were not discussed. Clinicians often used the exact wording of the questions from the manual, which was visible in the software. They also used amendments, personalisation or prior knowledge to explore satisfaction levels with patients, for example, ‘so what makes you dissatisfied we’ve discussed before . . . so what works, can you see any positives about the accommodation at the moment?’.
Clinicians were sometimes unable to facilitate a discussion in step 1. This was often because of double questions, for example asking ‘what makes you dissatisfied?’ followed immediately by ‘what works?’. It also resulted from clinicians asking questions belonging to another step of the approach, for example ‘so what do you think would work?’, a variation of the step 3 question that explores options for improving the situation. In other cases, patients were unable to answer the question and the clinician moved on without rephrasing it for them.
Four-step approach: looking forward
During step 2, the patient was asked to imagine what changes he/she would like to see replace the current undesirable situation. This could focus on long-term preferred outcomes, through the eliciting of the patient’s ‘best-case scenario’, and more short-term changes, through the eliciting of ‘small improvements’.
Clinicians attempted to explore patients’ best-case scenario, that is, their ideal outcome, in the majority of sessions (n = 12), although only half of clinicians asked this for every domain discussed. Only two patients described their desired improvement as the presence rather than the absence of something (e.g. completing a higher education course, rather than a negatively expressed goal such as experiencing less pain). Other patients identified unrealistic scenarios such as owning a mansion. Some patients struggled to identify best-case scenarios, and clinicians therefore rephrased the question, for example ‘what do you look forward to?’ and ‘what would be the best outcome?’. However, some patients were not able to answer this question despite the clinician’s rephrasing.
In the rest of the DIALOG+ sessions, clinicians either did not ask about the best-case scenario (n = 3), used prior knowledge to summarise what works on behalf of the patient (n = 1) or used complex or double questions (n = 3). For example, when discussing physical health, a clinician asked two questions from step 2 and one from step 3 about patient action:
What would be the best scenario?
[Begins a response.]
Or the smallest improvement?
I am improving every day.
I understand that, but what I am asking is, you are still waiting for physiotherapy, what would be the best scenario, what would be the best way for you to get this appointment?
Just over half of the clinicians (n = 8) asked patients about the small improvements they would like to achieve, whereas the rest (n = 7) omitted this completely across all domains discussed. Discussion of small improvements was rarely used in the intended way (n = 3), that is, for the patient to describe a specific sign or behaviour that would indicate an improvement. Instead, clinicians sought options or immediate actions from the patient, for example, ‘what would you say would be a small change that would be positive for you with the progression, what other things would you need to happen?’. Another example was a clinician asking, ‘can you identify the smallest change you could take towards this issue to not fainting?’.
Four-step approach: exploring options
The goal of step 3 was to explore a number of options that could help to bring about the desired changes, and think about what the patient, the clinician and others could do.
In all videos there was some exploration of options, and in six videos patients were asked about options for all three parties. However, in most sessions (n = 9) clinicians did not ask about options for all three parties; for example, they asked questions such as ‘what do you think we could do practically?’ without distinguishing between what the patient, the clinician and others could do.
Four-step approach: agreeing on actions
The goal of step 4 was to reach an agreement on what action(s) should be taken and by whom. Either the clinician or patient could take the lead in this discussion and once an agreement had been reached, clinicians were instructed to briefly and precisely document the action in the text box provided in the software.
In all DIALOG+ sessions clinicians were observed verbally summarising and/or documenting actions, and most did both (n = 12). However, only four clinicians waited until the end of the discussion to document action items. For the most part, actions were documented during step 3 (n = 5) or step 2 (n = 1), or non-action items were documented during the conversation (n = 5), for example a description of why the patient was dissatisfied.
Non-sequential four-step discussion
In the majority of the DIALOG+ sessions (n = 12), clinicians were observed completing the four-step approach in a non-sequential order. In half of these sessions (n = 6), the merging of steps did not follow a clear logic and did not seem to facilitate the session, usually because the clinician posed multiple questions from separate steps simultaneously. For example, a patient explained dissatisfaction with their medication (step 1), so the clinician began discussing options and suggested that the patient could speak to their psychiatrist about it (step 3), with the clinician then exploring the reason the patient would like to stop medication and eliciting their desire to regain their driving licence (step 2). It was also common for clinicians to merge steps 3 and 4 (n = 5).
Note-taking/interruptions to the therapeutic relationship
The action item text box was often (n = 6) used to document general statements during the four-step discussion. This hindered effective implementation of the four-step approach, because it disrupted eye contact and meant that the clinician was not fully listening to the patient. In one example, nearly 6 minutes were spent inappropriately documenting what the patient was saying.
Some clinicians (n = 4) used pen and paper to take occasional notes during the sessions, which was considered appropriate and non-disruptive. In one case, the clinician made notes of the patient scores from each domain, despite these being recorded on the iPad.
Collaborative discussion and premature suggestions
The DIALOG+ approach was intended as a patient-centred intervention to equip both the patient and the clinician with a model for dealing with the patient’s concerns. Mostly, clinicians and patients collaboratively discussed domains (n = 9) and, occasionally, patients were able to propose possible actions without the help of the clinician (n = 3). However, in some sessions (n = 4), clinicians did not give patients the opportunity to identify practical actions that might bring about a desired change. For example, in a discussion of physical health, one clinician suggested that the patient should wait to see if their pain improved, without asking the patient to generate an option.
Clinicians were frequently (n = 10) observed making premature suggestions to the patient (i.e. before the patient had the opportunity to consider a response). Although clinicians were encouraged to help the patient generate options (in step 3), often premature suggestions were not collaborative, for example ‘it could be what you are eating that is causing the vomiting, what do you think you have eaten recently that you never used to eat?’
Shared viewing of the iPad during the four-step approach
Shared viewing of the iPad did not occur during the four-step discussion in the majority of sessions (n = 12): clinicians kept the iPad on their lap or on a table, facing only themselves, and used it to document non-action items or, occasionally, as a prompt. Three clinicians made a conscious attempt to share the iPad with the patient during the four-step discussion by propping it up between the two parties on the table or their lap.
Completion of session and close
In the majority of sessions (n = 13), patients and clinicians discussed all of the chosen domains. However, most clinicians did not summarise the session or the agreed action items prior to ending the session (n = 10). Two clinicians were observed giving patients reminders of the actions that had been agreed; for example, one clinician accessed the ‘review action items’ screen and asked the patient whether or not he was happy with all of the agreed actions, referring to them as the patient’s ‘shopping list’. Other clinicians were observed simply reading summaries aloud, which sometimes included non-action items. This made the session seem repetitive and one patient appeared visibly bored by this.
In the 15 sessions, an average of 32 minutes was spent completing the four-step procedure (range 15–54 minutes). The total session length was 40 minutes on average (range 24–59 minutes).
Discussion
Although the trial resulted in improved outcomes for patients over a 1-year period, the findings from the current data set (albeit a small one) suggest that the implementation of DIALOG+ was inconsistent and that further refinements of the manual are warranted. Interestingly, many of the findings reflect similar themes from the video data in the original trial (see Chapter 2, Study A1: analysis of video-recorded DIALOG sessions). This underscores the central importance of training. A number of recommendations follow from the findings of this video analysis, which can further inform the DIALOG+ manual and training programme.
-
It is helpful for the clinician to remind the patient of the procedure on an ongoing basis throughout the session, commenting on what is happening and what is going to happen next; for example, ‘so we’re going to select three items to discuss from the topics you have already rated. We’ll focus on them for the rest of the meeting. Is that all right?’.
-
Constant sharing of the tablet throughout the session is essential in focusing patients and maximising their engagement, so that DIALOG+ is meaningful to them. Clinicians should place the tablet in a position where it can be viewed by both parties throughout the session. Sharing is good practice as part of both the initial DIALOG assessment and the subsequent four-step approach. Propping up the tablet by using its supporting case as a stand helps to facilitate this. Having the tablet in such a position will also facilitate patients’ operation of the tablet, if they feel comfortable and able. This could also aid in encouraging the patient to internalise the four-step approach.
-
If a patient struggles to answer a question on satisfaction, the clinician should reword it as necessary, although without changing the concept of what is being asked. It is reasonable to reformulate ‘how satisfied are you’ as ‘how pleased are you’, for instance, despite the conceptual difference between these two questions; however, it is important not to deviate from the concept of satisfaction entirely with statements such as ‘how are things with your mental health at the moment?’.
-
If the patient is inclined to elaborate on their current situation when asked questions rather than providing a rating, it may help for the clinician to orientate the patient to the questionnaire and guide them to the specific question being asked. It is good practice to listen to the patient’s elaboration in the first instance, and then remind them that a more detailed discussion will take place after the initial DIALOG assessment is complete, in which they will have the opportunity to discuss what is important to them.
-
It may be helpful to use knowledge of the patient’s personal situation to facilitate their rating of topics, but clinicians should be mindful not to bias patients’ ratings in doing this.
-
Clinicians should note the wording of the question ‘do you need more help in this area?’ and make efforts not to deviate from this wording. It is specifically intended to set the occasion for patients to report needs for more help where necessary.
-
Comparing current ratings with those of previous sessions is a valuable function of DIALOG+. Patients are keen to regularly review progress to evaluate their current situation (see Chapter 7). Not only should clinicians compare and comment on improved scores, and subsequently encourage patients to consider the reasons for any improvements, but they should also offer their clinical opinion on the patient’s current situation, and invite the patient to comment on how well they feel they are doing. If a change in a patient’s rating(s) is surprising or alarming to the clinician, they should refrain from offering a judgement either during the rating process or during selection of domains. The patient should be allowed to raise what they feel is important during the selection of topics for further discussion. Clinicians should bear in mind that the aim of the rating exercise is not to generate high scores across all domains.
-
Clinicians appear to find it helpful and intuitive to use patients’ requests for more help as the basis for selecting priorities for discussion for the remainder of the meeting. This may be superior to the somewhat complex criteria outlined in the original manual. Therefore, explicit requests for more help should guide the selection of priorities in future implementations of DIALOG+. However, this should not be automatic or implicit. During the review, clinicians should note the topics with which patients have requested more help, suggesting that it might be helpful to discuss these in more depth, but giving the patient the opportunity to negotiate this; for example, ‘we can see that you wanted more help with job situation, accommodation and medication. Shall we agree to talk about these three today, or would you like to cover any of the other topics?’. The final three topics for discussion should be verbalised nonetheless; for example, ‘OK, let’s go with job situation, accommodation and medication, in that order’ and the patient should be invited to view the corresponding three domains that become highlighted on the screen following selection.
-
Clinicians may find it helpful to take notes during the meeting; however, the text box provided as part of step 4 was intended for the documentation of action items only. Use of this text box for routine documentation was distracting and appeared to impact negatively on the therapeutic relationship. Yet it is understandable that clinicians would want to use the technology available to them to take notes, rather than tending separately to a physical notepad. That being said, it was noted that clinicians did not make notes in the control group. Although DIALOG+ could go further in supporting the clinician to carry out aspects of their day-to-day job, it is beyond the scope of DIALOG+ to do this, and any such attempts might dilute the active ingredient(s) of the DIALOG+ intervention. One solution might be to revise the DIALOG software so that the text box is not available until steps 1, 2 and 3 have been explicitly completed (an illustrative sketch of this idea is given after this list). This warrants further consideration.
-
More extensive training is needed to help clinicians implement the four-step approach effectively. The video data revealed that this was implemented rather inconsistently, which may be partially attributable to the limited training programme provided in advance of the trial. An online e-learning package may facilitate more extensive training, whereby trainees can access a wider range of materials and undertake refresher training. Although the training should strive to remain brief, time constraints will be less of a concern than was the case in the trial, as clinicians may access the training module as and when needed. Thus, a more in-depth illustration of the four-step approach can be provided.
-
It may be good practice for clinicians to provide action items as a printed summary at the end of sessions, to facilitate patients’ completion of action items. Clinicians require further training on reviewing actions at the subsequent meeting, including guidance on how to respond when a patient has not completed previously agreed action items, and how to go about the discussion, so that the potential monotony of the clinician listing action items while the patient passively listens can be avoided. Future technological implementation might facilitate patients having access to their own ratings and action plans online, for them to review between sessions with the clinician.
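As flagged in the note-taking recommendation above, one possible software revision is to withhold the action-item text box until steps 1, 2 and 3 have been explicitly marked as complete. The sketch below is purely illustrative of that idea; the class and method names are hypothetical and do not describe the existing DIALOG+ software.

```python
from enum import Enum, auto
from typing import List, Set

class Step(Enum):
    UNDERSTANDING = auto()       # step 1: understanding the rating
    LOOKING_FORWARD = auto()     # step 2: best-case scenario and small improvements
    EXPLORING_OPTIONS = auto()   # step 3: what the patient, clinician and others could do
    AGREEING_ACTIONS = auto()    # step 4: agreeing and documenting actions

class FourStepDiscussion:
    """Hypothetical model of one domain's four-step discussion."""

    def __init__(self) -> None:
        self.completed: Set[Step] = set()
        self.action_items: List[str] = []

    def mark_complete(self, step: Step) -> None:
        self.completed.add(step)

    def action_box_enabled(self) -> bool:
        # The text box only becomes available once steps 1-3 are explicitly completed
        return {Step.UNDERSTANDING, Step.LOOKING_FORWARD,
                Step.EXPLORING_OPTIONS} <= self.completed

    def add_action_item(self, text: str) -> None:
        if not self.action_box_enabled():
            raise RuntimeError('Complete steps 1-3 before documenting action items')
        self.action_items.append(text)
```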
In the light of the recommendations described above, the DIALOG+ manual has undergone further refinement. The updated version is available to download for free on the DIALOG website (see Chapter 9).
Chapter 7 Focus groups with patients who experienced the DIALOG+ intervention
Introduction
A randomised controlled trial tested the effectiveness of DIALOG+ (see Chapter 4). The current study sought to explore the views of patients who experienced the DIALOG+ intervention in the trial.
Methods
Study design
This was a qualitative study involving focus groups. Advantages of qualitative studies in complementing quantitative research include their narrative, open-ended and holistic nature. 78 Focus groups facilitate a permissive, non-threatening environment among peers;41 synergy and spontaneity between interacting participants;42 and, as a consequence, greater elaboration relative to individual interviews. 43 The study received a favourable ethical opinion from the same NRES committee as the trial (London Stanmore; reference number 12/LO/1145).
Participants
Participants were patients allocated to the intervention group in the randomised controlled trial described in Chapter 4, who had passed the point of 6-month follow-up. All patients who had consented to being invited to a focus group on enrolment in the trial were approached, with 19 patients agreeing to a total of five focus groups. Patients were allocated to groups based on their availability and were diverse with respect to age, gender and ethnicity across groups. Four of the groups had four participants each, with the fifth group having three participants. Written informed consent was obtained on their enrolment in the trial, with no participants withdrawing their consent during the group.
Procedure and data collection
A semistructured interview schedule was developed by the research team (see Appendix 8). This was piloted with a service user reference group consisting of three service users with experience of psychosis and treatment in a CMHT, and subsequently refined. The group sessions were conducted throughout 2013. Each group session lasted between 1 hour and 1.5 hours. All group sessions were conducted by an experienced facilitator and co-facilitator, audio-recorded and later transcribed verbatim. After the transcripts were completed, audio-recordings were destroyed. Patient participants received £20 for participation and their travel expenses were reimbursed. Following the fifth patient focus group, the research team agreed that data saturation had been reached given that an initial review of the transcripts showed that similar data had arisen across the five sessions.
Analysis
Transcripts were independently reviewed by the facilitator and the co-facilitator. Both analysts independently coded the transcripts and categorised the themes, which supported the reliability of the analyses and reduced potential researcher bias.
All interviews were coded line by line. An inductive approach was used to identify themes that were strongly linked to the data. Following the guidelines of Braun and Clarke,76 the researchers scrutinised each transcript, highlighting relevant passages to identify recurring patterns of meaning or ‘themes’. Related passages were then grouped under the same theme, and related themes were organised into broader categories. All themes and categories were entered into a database, which was used for ongoing comparisons and referencing across interviews. The analysis was an iterative process that involved regularly revisiting the data set and revising themes and categories as often as required.
The researchers summarised the opinions of the participants using verbatim quotes, representing difference of opinions, consensus and consistency across groups, where applicable. An initial summary was presented to the wider research team and themes and categories were subsequently finalised.
Results
Thematic analysis yielded three main themes: (1) self-reflection through DIALOG+, (2) therapeutic self-expression through DIALOG+ and (3) the role of the clinician in DIALOG+.
Self-reflection
Many participants reported that DIALOG+ helped them to identify how they were feeling in the present moment and evaluate their current situation objectively:
Sometimes it’s nice to be able to . . . sometimes you get so caught up dealing with things on a daily basis that you don’t really check yourself, and when you’re asked these questions on a scale of 1–10 [sic] or whatever it kind of gives you more of an insight into how you are actually feeling.
P3, FG3
The fact that you have to score some of the answers in the sense that 1 is poor or 4 is good, it makes you self-evaluate yourself.
P4, FG4
It helps you to think.
P4, FG3
It makes you, makes you think . . . Think about your, your answers and the questions.
P3, FG4
The DIALOG+ intervention also helped them to monitor how they were doing from month to month:
It helped me track my progress . . . On a monthly basis . . . My lifestyle, my accommodation, my safety, medication, all those things.
P3, FG4
It kind of showed me . . . You kind of get to see what you said before and if you kind of made any improvements.
P1, FG2
With this increased insight, they were able to reflect more widely on how they were doing overall:
The questions that they put to you . . .
P3, FG5
They make you reflect on your life . . . makes you, makes you, what would you say, it makes you rationalise what you need to be thinking about because you’ve got mental health issues.
P2, FG5
The questions . . . made me look and reflect on my life . . . as the situation that I’m in right now. You know, it asks you about, are you happy with your life, um, how’s your progress, how’s your safety, how’s your property and what not . . . I’d never addressed some of the issues that I came across in the questionnaire, like, they’d never actually crossed my mind before.
P4, FG4
It helped me to identify my needs . . . like what I have to get better with . . . you know, my mental health, if I was in the middle or the top or the low category.
P4, FG2
I could see that I was making progress. It was good.
P3, FG4
Regular self-reflection incited them to consider the changes they needed to make in their lives:
It made me really stop and look at my life, and basically, like, the progress I need to make, or um, things that I need to stop doing . . . You watch your progress and you watch where you need to make improvements in your life . . . You make good use of the knowledge, that you need to improve yourself . . . Maybe prior to using [DIALOG+] it never actually crossed your mind. But the questions are so in your face that now you realise that, yeah, this is something that I need to work on.
P4, FG4
It can show you a bit about yourself once you’re answering the questions and you’re looking at what’s been ticked . . . I think you can get a fair idea of how you feel or what you want or what you’d like to happen.
P4, FG3
Improved insight arising from self-reflection seemed to have a positive effect on patients:
Hmmm, it’s quite interesting to know what mental health is about and how it analyses me and I how I analyse it, and how I take that on board and use it as a tool and a skill to learn my own mind, thoughts, to know that I am the same as everybody else, I’m not any different, that helps.
P2, FG5
In addition, noticing and reflecting on improvements from month to month had the potential to give patients hope:
Sometimes you get so caught up in life you’re just praying for the good days, so it’s nice to know that . . . you can reflect and say, ‘well, last month I was feeling [low] but this month I’m all right’, and it gives you a bit more hope for the future, so in that way it’s good.
P3, FG3
The solution-focused approach to problems in particular seemed to have an empowering effect:
It was like mind stimulating that you could build upon something that might have started off as negative and no options.
P2, FG5
It does help you because you can find positive things about a negative thing even.
P2, FG2
This increased self-reflection arising from the DIALOG+ intervention appeared to foster a sense of autonomy in some participants, and encouraged them to take the lead with their own treatment:
It can make you think about what you’re going to do for your life, obviously when you’re asked questions it can make you think about what you can do for yourself as well.
P4, FG3
You start improving yourself because you’re aware of it now . . . It made me realise what I needed to do. And then if I needed that assistance, I would approach my care coordinator and let him know that, ‘you know what, I’m lacking in this department’, or ‘I’m doing well in this department, so, what can we do to improve myself’.
P4, FG4
However, not all participants reported better self-reflection after using DIALOG+:
To me it’s just another way of putting down thoughts isn’t it, just another way of getting information and that was that . . . It got down my thoughts in the same way . . . I just went through with it and got it over and done with and that was it, and my day just went on and I did what I had to do, and all that. That’s it really.
P2, FG3
Some patients expressed a desire for more information on the ratings they were amassing over time:
I think that what would be nice is if, erm, all of the information you are collecting, you was able to do like, I don’t know, maybe like a short profile on what you think the individual’s mental state is.
P3, FG3
I can understand what she’s saying about feedback, like, maybe a little summary or something like that, yes, that could be quite helpful . . .
P2, FG3
Therapeutic self-expression
Many participants mentioned that DIALOG+ helped them to express themselves and almost invariably linked this self-expression to improved affect. It appeared to be patient-centred questioning and exploration of a range of relevant topics that helped patients to be expressive and which made them feel better as a result:
Sometimes you’re . . . not feeling good inside and you’re holding things inside and . . . once you come in front of the computer, she would ask you questions and . . . that would . . . help you to express your feelings and things . . . I think it was the best way to . . . get yourself to express the feelings, held inside.
P3, FG1
Like what this gentleman said earlier [P3, FG1]. It brought out different issues and it was nice to express yourself. And talk about things . . . Sometimes you keep it bottled up and you’re not expressing yourself. It was quite therapeutic in a way . . . [I] was pretty low at the time, and it helped to just talk about things . . . It was more of a holistic sort of approach, like, discussing all areas of mental health.
P1, FG1
Yeah. Felt better afterwards . . . Just eh, talking about things. So, very many complex subjects . . .
P4, FG1
It’s like an offload, isn’t it, dust yourself off, you’re up to speed . . . It’s like you’ve been rebooted with [DIALOG+], it’s like a different kind of therapeutic feeling . . . This cheers me up . . . I feel happy to do it.
P2, FG5
I just felt better after the end of the session, than I did without the iPad.
P2, FG2
The DIALOG+ intervention elicited patients to talk more about different topics that were important to them:
Every month, once a month. Taking about my, my medication . . . Talking about my accommodation . . . Talking about my family . . . They were talking about so many things in my life. I’ve been enjoying it . . . My family. My future. How my future will be. Talking about so many things.
P1, FG4
It did bring up new things as I said, every day is different isn’t it, so if you go there every 3 weeks . . . the assessment is going to be different, your levels are going to be different, you’re going to have different situations happening in your life.
P2, FG5
Yeah. More sort of detailed. About how you’re feeling. Everything with your medication, housing, activities.
P3, FG4
[When using DIALOG+] We talked about [things] a bit more.
P5, FG5
I felt like it covered quite a lot of things . . . It just gives us more to talk about.
P2, FG3
Often, it was the rating element of DIALOG+, in particular, that helped them to express themselves:
It means that you can talk about your feelings and talk about your mood and rate it . . . so you can express yourself better.
P5, FG5
It kind of made it more simple, instead of going on and on trying to describe how you’re feeling it was like with one number you could tell everything, isn’t it.
P1, FG2
Yeah it does, I would agree with that.
P2, FG2
I find it very helpful to use the scale, you know, because like 1–7 makes you answer the questions more.
P4, FG2
Everything was there for you. Um, in, in the palm of your hand, basically . . . It’s practical, very logical . . . ‘Cause the format’s there . . . As the questions are asked, you just answer . . . It was multiple choice as well.
P4, FG4
Patients’ self-reported ratings did not appear to be vulnerable to the clinician’s influence:
My care co-ordinator, he would say to me, ‘Oh [patient name], don’t you think you should score yourself a bit higher or a bit lower on that?’ I goes ‘No, that’s what I think the scale is’ [laughs], you know what I mean?
So [the clinician] would make suggestions to you?
Yes, but I’d stick with what I was thinking.
One participant described how DIALOG+ facilitated her being honest with the clinician:
The questions were going so fast that I had no choice but to be honest . . . I had no time to think and lie and think like ‘Oh, should I tell her I’m depressed, just in case I get incarcerated.’
P2, FG5
Although therapeutic self-expression was a dominant theme, it was not universal. Not all participants found DIALOG+ wholly therapeutic:
He just bombarded me with questions and I’m like ‘Yes, no, maybe, 1, 2, 3’.
P3, FG3
I’m just taking it as it comes, just getting it over and done with . . . Last time . . . it went on for an hour and it was a little kind of stressing me out for a little bit for about a minute or two, that’s because I was hearing voices at the same time . . .
P1, FG3
It’s just questions and you can’t concentrate, it’s like you’re in your own world and even looking at things take on a different meaning, so when you’re actually in psychosis it’s very hard to concentrate and sometimes it can even be annoying.
P3, FG3
I felt good after the first one but the second, this was draining me, I’m bored . . . To me it was like a bit confrontational . . . Like, why you asking me why why why, 1 to 7, why I’m happy with 5, you know, insisting on how my emotion was set . . . It was more intrusive . . . The police don’t even question you like that!
P2, FG5
The initial assessment of the 11 topics had the potential to restrict self-expression, according to a minority of participants. Some found the core questions difficult to understand, although this was helped by input from the clinician:
Choosing the answers, and some questions, were not too clear to me. So I’d have to ask my care co-ordinator like to explain it to me, break it down.
P4, FG4
A lot of my time was spent on . . . The care co-ordinator repeating the question. ‘Cause I didn’t understand it.
P3, FG4
It can be bad sometimes . . . It’s easier not to use it [DIALOG+] . . . Sometimes I don’t understand it.
P2, FG4
When I originally started I didn’t know what it’s all about but when [the clinician] explained to me . . . Because most of the answers, I didn’t know the answers to most of the questions . . . Then I started enjoying it.
P1, FG5
Others encountered difficulty with providing ratings:
Sometimes when they give you the questionnaires, you can think of . . . other answers. Than the ones you’re supposed to tick . . . Sometimes the answers they give you . . . don’t fit the bill . . . Compared to other answers that you thought of when you read the questions out.
P4, FG1
It was sometimes difficult, ‘cause, like, my mental health . . . it fluctuates. I could rate myself . . . pretty low one minute, and then after an hour, it could be . . . quite high. So it’s difficult to pinpoint exactly where you would be. [Also] because it was repetitive, sometimes I needed to change the answers to make it a little different? So, one month if I remember giving a score of 2, and I’m thinking, do I always say 2, 2, 2, every single month, so some month if I could remember me scoring a bit lower I would give like a 3 or a 4. Just to change things a bit [laughs] but actually I felt the same.
P1, FG1
[Discussing the choosing of ratings] I did find it hard sometimes.
P4, FG5
I found it hard at times.
P2, FG5
Some of it was helpful.
P4, FG5
Some of them was hard for me.
P3, FG5
I just answer the question to the best of my ability, just answer it to the best of my ability, that’s all I can do, but sometimes some of the questions can be tricky, very tricky, so you don’t know what score to rate it at.
P4, FG5
One participant felt that the 11 topics of DIALOG+ were not entirely comprehensive and expressed a desire to add further topics:
Sometimes the 11 topics that were chosen did not . . . address all of the issues. So there could be certain other issues that needed to be brought up . . . I think . . . we should be allowed to design some of our own topics . . . Two topics that we could bring up, and we could talk about every month . . . I think it would be helpful.
P1, FG1
The role of the clinician in DIALOG+
In describing their experiences with DIALOG+, many participants emphasised how DIALOG+ was for the benefit of the clinician as much as for themselves. Often they reported that DIALOG+ helped the clinician to do a better job:
It stopped your key worker forgetting everything. I thought it was more thorough than usual with my key worker.
P2, FG2
My care co-ordinator kind of had several topics with numbers on so and then it made a graph or something so he could tell where we was going wrong or something . . . It helps with the survey, they kind of know where you’re going wrong.
P1, FG2
For me with my care co-ordinator I think it helped him, he gets to know a bit more about you and it’s helping him with his job.
P2, FG3
I think it gives your care co-ordinator more of an understanding about you than it will do yourself because . . . your care worker is doing it with you, he can focus on the answers and look at different forms of helping you.
P4, FG3
[DIALOG+ was] more structured, more professional . . . more focused . . . She would make actions where she would say, ‘What could be done about it?’. So, she would make notes in, like, say ‘Contact so and so for this’ . . . And I think that was better, because things got done, in that way . . . Issues got addressed . . . Constructive things were being done about certain issues. So I think more and more was being done, with [DIALOG+] in place.
P1, FG1
Rightly or wrongly, many perceived the clinician’s use of a computerised tool to be automatically more professional:
This was 100% advantage . . . Because the question is there, the topics are there, you just choose a few of them and answer it, I mean and it goes on [the electronic patient records system of ELFT] . . . and you can write the answer there instead of on paper, this will never get lost like paper.
P4, FG2
It’s filed up, filed, it’s in the system, it won’t easily get lost, you can easily get it back or whatever and have a look.
P1, FG2
All about using this software . . . The software I think was very professional. Rather than using on papers or . . . or, diaries, or things like that. This way of using the computer was a quite professional approach.
P3, FG1
It was more monitored. So I think it helped the performance of like the care co-ordinators. When things are monitored or supervised, I think people perform better. You know someone’s gonna [sic] check up on the . . . progress, and everything’s gonna [sic] be addressed, so it makes the care co-ordinator perform better . . . So I think, in that sense, I thought it was more professional.
P1, FG1
Some patients described how DIALOG+ facilitated a better therapeutic relationship with clinicians:
What I found with my care co-ordinator that he’s like in, he gives me injection, like ‘Hi, how you know are you’, and he’s out and he gives you the impression that he just doesn’t give a shit and he’s there to do his job, whereas when you get asked questions about how you’re feeling, what your well-being is, what are your hopes and aspirations, it makes you feel more like somebody [cares] . . . Sometimes it’s like when you’ve got mental illness it’s like you can feel isolated, so just having someone asking you questions, it’s interaction isn’t it, and at the very basic of it is company at the end of the day.
P3, FG3
I was close enough to see the tablet so I was interacting with her and the tablet . . . It is quite exciting because even if you’re sat beside someone playing the computer and you’re not actually playing yourself but you’re watching the screen, you get into it, like ‘Ooh, watch, no, jump jump’, that kind of thing, so it was an emotional connection to the care co-ordinator as well as the tablet itself, it created feelings.
P2, FG5
In some cases, patients took a more passive view of the tool, seemingly regarding the clinician’s use of a tablet primarily as a checklist to follow or as a means of routine documentation of notes, rather than something for them to be concerned with:
I just know that the guy was writing down and ticking off, ticking off things . . . I didn’t have too much [sic] dealings with it . . . My care co-ordinator asked me, did the iPad thing with me, I believe maybe once or twice.
P3, FG3
So I was sort of asked questions, so what would I like, or what would I want to do, or what would I like to happen? And I mean he took all those notes down and on the iPad as well . . . It goes onto the records so that’s for them to deal with.
P4, FG3
It was OK . . . It was just like, saving some conversations and things. It was OK.
P3, FG1
It was OK. . . . It was just electrical equipment and I seen it used before, so . . . It wasn’t . . . It wasn’t difficult.
P1, FG1
Whatever, whatever whatever I answered to there, to the questions, he put down on the iPad.
P2, FG3
He used it for me, I don’t know how to use it. He asked me questions . . . Because I don’t know how to use the iPad, I don’t know how to work it. He use [sic] it . . . Anything I say he write [sic] it down.
P1, FG4
I felt like it was just using a phone, basically, yeah, using a phone, and I’m not really fast at good at the latest phones, if you get what I mean, and when I saw him tapping into stuff I wondered what he was doing at first . . .
[Laughs] Yeah, if he’s sending a message or something on the tablet.
Yeah.
This detachment may have been related to individual clinicians’ implementation of DIALOG+, which did not always seem to be patient-centred according to patients’ accounts:
When I asked my care co-ordinator, I think the way he explained to me I didn’t really get it, even until now I didn’t really know the real meaning of the iPad work, I don’t know the meaning, what we are using it for.
P1, FG5
No, I wasn’t too sure either where it was leading either, I did it because he told me to . . .
P2, FG3
My care co-ordinator has discussed with me and I tried to get it over and done with as soon as possible.
P2, FG5
Clinicians’ implementation of the tool was inconsistent, with some core elements of the intervention reported to have been omitted in some cases. One such element was shared decision-making between patient and clinician regarding which topics would be discussed during the meeting:
It was me, I am choosing but she has given me the topics, which one to choose from.
P4, FG2
I would’ve liked to have been more involved and be specific about which ones I would like to work on.
P1, FG2
You would choose them from the list, yeah, your care co-ordinator doesn’t choose anything for you, she gives you options, how do you feel.
P2, FG5
For me he’d bring them up on the list and say ‘we’re going to talk about this one today and this one and this one’ and he’d pick them out and we’d talk about them.
P2, FG3
I don’t remember being given options of what I wanted to talk about, he just asked me questions and I answered them.
P3, FG3
So you didn’t pick out formally any two or three topics for discussion at the end of the initial assessment?
No, not to my knowledge.
Other elements included comparing ratings from the current session with ratings from a previous session, and special attention to mental health:
Did you ever look back to previous ratings to compare?
No, I didn’t get the chance.
Was there any special attention given to mental health?
Not that I remember.
No, that I can’t remember.
Participants were particularly unfamiliar with the specific steps of the SFT approach to problems. However, it is unclear whether clinicians omitted these parts of the intervention or whether patients simply did not notice or remember them, as some patients had problems with recall:
I don’t remember it. I remember some of it but not all of it, even though, even though it went on for about an hour or something like that, you know what I mean, but I don’t remember all of it.
P1, FG3
I can’t remember actually, I remember the iPad, she asks the questions, but I can’t remember.
P3, FG5
I don’t have that strong a memories of it. I remember one time using it but I don’t remember using it more than twice.
P2, FG2
Some patients reported that there had been limited progress with respect to action plans that had been agreed:
We did [make action plans] but I can’t remember the action plans . . . We’re a bit sceptic about where it’s going, it’s a bit mystic.
[Laughs]
I can’t remember everything . . . We spoke about certain things . . . But I haven’t had much as I say feedback to where we’re going with it. I’ve answered the questions that’s all I can do, about my physical health, my mental health, I’ve answered all that . . . I did make action plans but I haven’t got anything back on top of that like I said.
I was working bits and pieces myself and she said she was going to help me [find a job] as well but it didn’t go any further . . . I was like same place, still there, from day one, I was like still the same place, I couldn’t see much difference . . . It wasn’t moving, it wasn’t going nowhere, it was like we were talking but we weren’t going nowhere . . . She said someone from [name of local back-to-work scheme] was going to see me but then that didn’t happen so I kind of lost interest.
P1, FG2
My main problem was my legs, that I couldn’t walk properly. She said ‘what would make it any easier’. I said getting physiotherapy. So I had to see my GP and he gave me the referral to the physio[therapist]. But I haven’t got the appointment back yet.
And did you talk about what your care co-ordinator could do to help?
Yeah . . . It was just the chasing up the physio[therapist] . . .
[later on]
For me, nothing really changed.
Yeah.
It was just the same thing personally. The only added bit was the iPad. Which I found was useful.
Likewise.
One participant articulated that DIALOG+ is only as good as the clinician implementing it:
[Discussing DIALOG+ versus usual meetings] I would opt for any of the two, because my key worker . . . is the main factor . . . not the iPad programme.
P4, FG4
Discussion
This study sought to explore the experiences of patients who received the DIALOG+ intervention routinely as part of their care in the community while participating in the trial. Following five focus groups with patients, thematic analysis yielded three themes: (1) self-reflection through DIALOG+; (2) therapeutic self-expression through DIALOG+; and (3) the role of the clinician in DIALOG+.
The first theme, self-reflection, refers to many patients’ reports of learning more about themselves and their current state through DIALOG+. The intervention helped them to identify how they were feeling about their current situation and evaluate it objectively, to track their progress from month to month, and to reflect more widely on their lives. Providing ratings on key topics and regularly reviewing them was central to this. Some patients, in turn, managed to consider changes they needed to make in their lives and to become more positive in their evaluations of their situation, with the result that they took the lead with their treatment and became more autonomous. Patients expressed a desire to receive more information about their own ratings as these accumulated over time, in the form of a report or summary.
The second theme, therapeutic self-expression, refers to patients’ reports of being better able to communicate their current situation to their clinicians with DIALOG+, to ‘offload’ how they were feeling, and to cover a range of different topics, with the result that they felt better afterwards. Again, addressing key topics and providing ratings appeared to be the essence of this.
Interestingly, patients’ memories of DIALOG+ and positive accounts of the intervention largely pertained to the process of rating their satisfaction and considering their own reports, with less emphasis on problem-solving or solution-focused strategies. These aspects were at the core of the original, more basic DIALOG intervention, but were considered only part of the intervention of DIALOG+. Yet DIALOG+ may have optimised this process. Patients undertook the assessment on a monthly basis in DIALOG+ as part of their regular contact with their clinician (as opposed to 2-monthly or quarterly, as in the original DIALOG trial). Regularly undertaking a fixed assessment as part of routine meetings may have provided a more structured approach and made this part of the intervention very accessible to them. Furthermore, clinicians explicitly conducted reviews of the ratings with patients, commenting on positive aspects and providing feedback. This in itself may be unique to DIALOG+, given the evidence from study A2 that clinicians often neglected to discuss patients’ ratings with them after they had been submitted. This discussion of the ratings seemed to be perceived as a generally more thorough conversation with the clinician, although it was not necessarily identified by patients as a solution-focused approach to problems.
It is prudent to note that not all patients shared the view that DIALOG+ helped them to self-reflect or become more expressive, or that the experience was therapeutic. Some participants were indifferent to the intervention, whereas for others the experience of answering questions was intense and at times stressful. Patients mentioned the difficulty of concentrating on a questionnaire when experiencing interference from positive symptoms of schizophrenia. One patient stated that he preferred not to use DIALOG+ as he found it too difficult to understand. Difficulties with understanding the questions and the response options were echoed by other participants, although for some these were overcome by repeated use of DIALOG+ and input from the clinician. We might conclude that the intervention may not always be suitable for every patient, but that problems with understanding should be addressed through effective collaboration and interaction between patient and clinician. Indeed, the crucial role of the care co-ordinator in the effective implementation of DIALOG+ constitutes the third and final theme.
Overall, there were mixed accounts of clinicians’ implementation of DIALOG+. Although many patients reported that clinicians’ use of the tool was useful in helping them to do their job and be more professional, and that it facilitated a better therapeutic relationship, there were accounts of clinicians neglecting to share the device, with some patients not understanding why the clinician had a tablet computer in their possession or assuming that it was there for the clinician to make notes. In addition, patients did not recall certain elements of the intervention being delivered, in particular the specific steps of the four-step approach. Some reported action plans being made, but progress between meetings was limited. Although DIALOG+ achieved modest success in setting the occasion for more patient-centred meetings than normal, it was seemingly less successful in influencing patients’ and clinicians’ behaviour outside meetings, based on these reports.
Whether patients had no recollection of the intervention or the intervention was not properly implemented, similar conclusions can be drawn as to how the intervention can be improved. Clinicians should be trained not only to follow the instructions of the manual as it is set out, but also to involve the patient in following this structure. They should explicitly inform patients that the conversations they are having around topics flagged as problematic are based on SFT and that they are working in a novel, systematic way. Patients should be invited to view and understand the model of the four-step approach, and sharing of the device is crucial to this.
However, it is noteworthy that sharing the device during the four-step approach to problems may be less intuitive for clinicians than during the initial DIALOG assessment. As the SFT probes appearing on the screen are fixed and do not change, there is less opportunity for patients to use the touch-screen interface and provide input into the tool, compared with the initial DIALOG assessment, in which they can actively touch the screen to provide ratings or, at the very least, see the screen in order to give their answers. Patients’ central involvement during this assessment may account for their strong recollection of, and engagement with, this part of the intervention. Revisions to the software may be warranted to make the four-step approach more interactive.
That being said, the intention is for the patient to interact with the clinician, not the tablet device. An alternative solution may lie in patients’ request for regular reports or summaries. Good practice in DIALOG+ might be for the clinician to print a summary of the four-step approach at the end of each session, complete with the action items agreed. This would give patients the opportunity to reflect on the four-step approach between sessions and, with repeated sessions, might help them to internalise the procedure more effectively than was seen in the trial. Accordingly, a print function has been implemented in the DIALOG software subsequent to the trial, allowing clinicians to print a screenshot for patients easily by connecting DIALOG to any wireless printer. Where printing is not available, for instance when clinicians conduct home visits with patients, the clinician might instead provide a paper summary of the four-step approach for the patient to keep. A summary page explaining the four-step approach has therefore been developed for patients and checked by service users, and will be available to download for free from the DIALOG website (see Chapter 9). As part of their web-based training, clinicians will be encouraged to print this and pass it on to their patients when conducting DIALOG+.
Chapter 8 Focus groups with clinicians who delivered the DIALOG+ intervention
Introduction
The previous chapter reported patients’ experiences with the DIALOG+ intervention. The current study sought to explore the views of clinicians who implemented the DIALOG+ intervention with patients in the trial as part of their routine care.
Methods
Study design
This was a qualitative study involving focus groups and individual interviews. Participants were invited to take part in a focus group in the first instance; when this could not be arranged practically, an individual interview was conducted instead. Focus groups were preferred, as they generate rich data arising from interaction between group members. 79 Clinician focus groups were arranged separately from patient groups, to reflect their differing experiences. The study received a favourable ethical opinion from the same NRES committee as the trial (London Stanmore; reference number 12/LO/1145).
Participants
Participants were clinicians allocated to the intervention group in the randomised controlled trial described in Chapter 4. Nineteen clinicians were recruited to a total of four focus groups (two groups of four and two groups of three) and five individual interviews. Clinicians were allocated to groups based on their availability and were diverse with respect to age, gender and ethnicity across groups. Written informed consent was obtained on their enrolment in the trial, with no participants withdrawing their consent during the group. Clinicians were not paid for their participation in focus groups, as the groups took place within their regular working hours.
Procedure and data collection
A semistructured interview schedule for the clinician groups/interviews was developed by the research team, informed by the procedure outlined in the DIALOG+ manual (see Appendix 9). It was checked by the service user reference group for user-friendly language and refined, then piloted with the trainers who delivered the DIALOG+ training programme and refined further. Groups were conducted throughout 2013 and each lasted between 1 and 1.5 hours. All groups were conducted by an experienced facilitator and a co-facilitator, audio-recorded and later transcribed verbatim. Once the transcripts were completed, the recordings were destroyed. Following the fourth focus group and the fifth interview, the research team agreed that data saturation had been reached, as an initial review of the transcripts showed that similar data had arisen across the nine sessions.
Analysis
Transcripts were reviewed independently by the facilitator and the co-facilitator, who each coded the transcripts and categorised the themes. This helped to ensure the reliability of the analyses and to reduce potential researcher bias.
Using the same method as described in Chapter 7, all interviews were coded line by line. An inductive approach was used to identify themes that were strongly linked to the data themselves. The researchers scrutinised each transcript, highlighting relevant passages to identify recurring patterns of meaning, or ‘themes’. Related passages were then grouped under the same theme, and themes were organised into broader categories. All themes and categories were entered into a database, which was used for ongoing comparison and referencing across interviews. This iterative process of analysis involved regularly revisiting the data set and revising themes and categories as often as required.
The researchers summarised participants’ opinions using verbatim quotations, representing differences of opinion as well as consistency and consensus across groups, where applicable. An initial summary was presented to the wider research team, after which the themes and categories were finalised.
Results
Thematic analysis yielded four main themes: (1) efficiency of DIALOG+; (2) empowerment; (3) the role of technology; and (4) optimising use of DIALOG+.
Efficiency of DIALOG+
Many participants reported that DIALOG+ helped to make their routine meetings more efficient. One aspect of this was how DIALOG+ lent a comprehensive structure to meetings that was helpful to clinicians:
It is actually very beneficial because you have your questionnaire that has been structured in such a way that you can cover almost everything that you need to cover with a client, from the psychosocial intervention to other things like their physical health, their mental health, their social life, their relationships . . .
P1, FG2
I find it, you know, very thorough . . . All the questions asked, they are the things we should be asking our clients and what we should be doing daily with them, but sometimes we don’t get down to the nitty gritty, grading . . . their mental health . . . It’s just more comprehensive in such a way that you can touch everything, because if you are not really organised you miss a few things that we need to ask them and that gives you an edge.
P3, FG1
I found that you don’t forget anything, you go through everything.
P4, FG1
This structure helped clinicians to broaden the scope of their conversations with patients and cover areas outside mental health:
[Prior to DIALOG+] Mostly we talked about, when I do a review, we sort of talk about mental health, and I get them to rate their mood and that, but with this one it gave them the advantage of being able to rate all the other aspects of it.
P3, FG2
I think that before it was our focus . . . more sort of mental health, and the positive and negative symptoms, whereas [DIALOG+] explores more sort of their physical health, their social background, and other aspects of their life.
P2, FG2
It’s kind of given me or us professionals working with our service users a wider scope of areas to look at ranging from A–Z of what people go through.
P1, FG2
Sometimes you just focus on mental health and you forget about all the little social things that contribute to making that person feeling unwell . . . And I suppose that was the good thing about it, because it kind of captured a great deal, so you could . . . form a holistic understanding of that individual.
INT4
Many reported that their meetings with patients tended to be more focused as a result of the DIALOG+ intervention:
[DIALOG+ was] a mechanism for formulating an agenda and, you know, letting them know that ‘we’ve got an hour together, let’s make the most out of what you need to talk about’ and this enables us to do that . . . It just was a catalyst, it was a useful way to focus on relevant topics rather than irrelevant topics.
P1, FG3
[Typically] I ask them, I say ‘How are you, are you hearing voices, how are you sleeping, how many hours you sleep at night’ and so forth, it’s haphazard isn’t it? But this one, because it was structured, it was easy for them also to follow what we are talking about. And I find the structure much easier and much more beneficial than just haphazard a couple of questions.
P4, FG1
Reflecting the theme of ‘self-expression’ identified in the patient focus groups, many clinicians reported that DIALOG+ helped their patients to open up more and clinicians learnt more about them as a result:
I thought it was really good . . . you got a lot of information. One client actually, I got so much more information, background information out of him because he really responded to this structure actually very well . . . A client who usually when you ask him ‘How are you feeling’ usually answers ‘I’m fine’ . . . he was giving away so much more about his past history.
P2, FG1
I have this person, one of my service users, he doesn’t really sort of answer, you know, answer anything, and if you sit with him it’s really difficult to have a conversation about anything, but something like that [DIALOG+], you might be able to sort of get them to focus a little bit certainly more on different components, and get a bit more.
P1, FG1
‘Cause of the headings it makes you want to explore all of those . . . you can give that person an opportunity to disclose areas where they are finding it difficult or where they may need help, so yeah, you do gain more information from that.
INT4
Being able to disclose . . . I felt that at the end of it, some of the information that they wouldn’t have given me normally in my review, most of them did open up and really try, and it did help.
P3, FG2
[It] does give an insight into the problem . . . That was the best part of it, because some of the aspects . . . We found the questions, sometimes we don’t even think of it.
P2, FG4
Sometimes we might think that this is an important area for them and they will choose they will choose something else altogether.
P1, FG3
However, although the efficiency and helpfulness of the tool in structuring conversations was a dominant theme, some participants felt that DIALOG+ had the potential to detract from the efficiency of meetings at times. They found adhering to a structured approach limiting in some cases:
I found it sometimes a bit limiting actually as well . . . because you always follow the structure and I had this quite often actually, that, erm, what happened is that you were sort of, you know, leaving this path and ending up in longer discussions about something . . . and then it sort of went off to somewhere else. And then I find myself sometimes to bring the clients back to [DIALOG+]. [Others ‘mmm’ in agreement.] And erm, sometimes it was good, sometimes it was disruptive actually . . . almost putting something upon them because they were talking about things anyway.
P2, FG1
For that individual some days it was very difficult because he came in with things that were preoccupied on his mind so it was difficult to get ratings . . . and for him to identify his problem ‘cause he was so fixed on the problem he brought.
INT4
[My patient] literally walks in and straight away she wants to discuss things and the computer seems like erm, you know, like a hindrance really . . . So it was really hard sometimes to focus her, you know . . . I think for some of our service users it doesn’t quite work . . . Also with [a second patient], similar story . . . That’s the kind of client who just loves talking and doesn’t like being constricted . . . I found it quite hard to focus her . . . She disappeared at some point and then there was a chaotic event of domestic violence . . . Yeah, so those two events sort of disrupted the process . . . She reported domestic violence so obviously it wasn’t appropriate to come up with DIALOG . . . They were just, just too chaotic in their presentation to have a more structured approach I think.
INT5
Some participants described how DIALOG consumed valuable time without being fruitful:
In the early days it sort of took up the whole session going through the whole thing about scoring, where are you at, and justifying which of these three are we going to talk about today, and by the time you got there you know looking at your watch and there ain’t [sic] a lot of time left . . . When you really know that you’ve got to get on and do some work with somebody but you’re spending sort of 40 minutes applying time to something that is almost irrelevant . . . you’d be better off spending the time filling out the housing benefit form that you know is going to come at the end of the session . . . So it was almost an intrusion into limited time . . . It would just completely mitigate against anything else that I’ve got to do with them . . . Because when I see my client once a month I’ve got so many things I’ve got to achieve and do with them.
P1, FG3
However, others specifically stated that the intervention was not time-consuming:
It was not time-consuming because it is the amount of time you use anyway.
P1, FG2
One participant held that DIALOG+ did not make routine meetings any more efficient than they would be without the tool:
[My patient] would’ve had all of those things addressed regardless . . . I’m not the sort of care co-ordinator that would ever not enable somebody to tell me what was troubling them . . . I didn’t really need . . . an iPad and a structure to get that out of anybody . . . This isn’t rewriting the wheel . . . in truth this is what good care co-ordinators have been doing is to develop ways of encouraging clients at their own pace.
P1, FG3
Some participants described how patients repeatedly choosing to discuss matters over which the clinician had no influence was a source of frustration for both patient and clinician:
You do tick those boxes and then you ask them what they want to discuss, so most of the time . . . they seem to be choosing the same thing . . . So it’s like so after about 3 months . . . seems like you revolving around the same thing all the time, and you’re not helping with it . . . They become fed up with you repeating yourself most of the time and they repeating the same answers to you.
Yeah, especially if you haven’t got the answers to that questions. Then it’s like [from the patient’s perspective] ‘I’m saying things, you’re asking me all this, it’s not as if you are gonna [sic] do anything to it anyway’ . . . For instance . . . accommodation, which our hands are tied on because it’s with [the] council and then they say ‘I am not happy with my accommodation, I want to move.’ . . . It’s gonna [sic] be a problem to them because it’s not going to be solved with us, so it’s like you know you’re asking them, ‘what have we got to do about it’ – nothing. But it is there, it is a problem . . . It’s still the same: ‘I still want a place, I want you to move me, but you can’t do it, yeah.’
One of the big things for [one of my patients] is her housing . . . There is nothing I can do about that issue for her . . . I ended up with a lot of time on something that I felt was using up valuable clinical time . . . I thought it raised expectations in her that I couldn’t fulfil . . . It can give the illusion that you’re saying you’re going to be able to deliver something . . . Sometimes what people identify as something that would help . . . isn’t available or isn’t possible for us to deliver.
INT2
A related, but opposite, problem arose when patients chose to discontinue discussion of the topics from the previous month, which undermined the work that had been done and created a barrier to making sustained progress with a particular problem:
I think the difficulty was he changed the areas every time we’d meet . . . We’d set out some kind of interventions . . . for smoking for example . . . and then although one month he’d say he’d want to look at one thing and even though I’d remind him ‘well, last month we looked at this let’s look at that again’ he would say ‘no, that’s absolutely fine now, I’ve got no issues with that area’. . . I’d want to bring him back to areas that we’d looked at before and take a slower process with him . . . but focus on those areas . . . stay more with those areas that we previously discussed . . . One month was looking at physical health and I mean that’s long-term process . . . so for him to say ‘well you know I don’t want to think or talk about it, my physical health, it’s fine’ when he spent a good half an hour talking about it last month, it’s really not that helpful . . . You know just typing in these ideas or interventions and then just losing them because next month he wouldn’t talk about them.
INT1
Another clinician described how their patient would forget that action plans had been made the previous month:
I think very often people sort of forget because the idea is that we go back to what was discussed the last month, and very often they forget what’s been discussed and that we set up some goals.
INT5
Empowerment
Similar to the findings from the patient focus groups, clinicians reported that their patients demonstrated better self-reflection and insight arising from the rating component of DIALOG+ and that this could be therapeutic:
[One of my patients] is quite interested in pointing at his own [ratings] and averaging his [ratings] . . . He says ‘oh, that’s moderate’ and ‘no, it’s not moderate, it’s actually severe’ and he will change it.
They would rate it, say, 2, and then when you try to go, when you try to explore number 2, they say ‘oh, I should’ve been on 4′ because they look at the negative aspect rather than the positive aspect of their lives . . . So it actually help them to think about the positive aspect rather than the negative aspects . . . You can actually have a look over the length of time where they are and they can actually see the positive aspect of what they’ve done . . . It shows them they’ve improved . . . It increases their self-esteem.
It gives them the opportunity to think, what makes it 3, how can it be 5, or 7, or even when it was 3 and now it’s 7, what has happened for it to go from 3 to 7 . . . It gives them thought to what has happened in the weeks or so since you’ve not met.
Some of them don’t even believe in their diagnosis of mental illness and sort of how to manage it and that, and they find it difficult, that ‘I don’t think I have mental illness’, but going through this . . . they are able to understand that ‘this is sort of something that I have to live with’ for probably the rest of the rest their life, and so ignoring medication is not going is not going to work, but actually understanding that this will help them to prevent hospitalisation.
Clinicians reported that the tool helped patients to be much more involved in their own treatment, through patient-centred conversations in which the patient was empowered to play an active role in meetings:
I think it’s a really good approach actually, to involve them much more . . . Definitely empowering.
P2, FG1
It was empowering because you are . . . shifting the care co-ordination responsibility and letting them make decisions . . . it was positive.
P2, FG3
I saw that as very good giving the on them to determine which area of the spectrum they want to talk about.
P1, FG2
The client has been a part of a process that is good and there’s an output . . . That’s kind of engaging with services instead of just using the service . . . They liked the idea of seeing what they were doing and what they were achieving . . . Having quite clear ideas of what we are working on that feels much more recovery-orientated and more belonging to the client.
INT3
Making the individual like he was in a position of power . . . whereby he can feel the service is meeting his needs, I’m not . . . dictating how this service should be led . . . I’d say the approach to this has enabled [my patients] to see mental health services in a different light.
INT4
Usually you are saying to the service user ‘OK, well this is what I think we should do’ sort of thing but it’s not like that any more here, sort of trying to work out what they should do sort of and they’re sort of leading . . . That was really beneficial and helpful.
P1, FG1
I felt it gave her more of a voice. I really felt for her it enabled her to communicate with me . . . I thought there was a sense in which she really felt more empowered and participating because, she scored it, she decided we were going to talk about it and it did feel very led from her and our and our relationship was you know, that that really helped our rapport.
INT2
I found it the most empowering tool in the 10 years I have been qualified as a psychiatric nurse. By far. It helped me to empower my client . . . By the time we had got to the end, he had taken the reins into his own hands . . . It definitely changed our therapeutic relationship . . . By the end really he was very very much in control of his own care.
P4, FG1
Many pointed to elements of the solution-focused approach to problems in particular as an empowering aspect of the DIALOG+ intervention:
Normally some of them will want you to do everything for them but with this solution-focused therapy it did help them to see how to actually help themselves rather than being spoon-fed by the care co-ordinator . . . It did help them to understand how to actually come up with a solution rather than me saying ‘this is how to do it’ . . . They come to you thinking ‘how can I do this’ but . . . it’s like an eye-opener to some of them . . . They felt empowered to say that actually ‘I can do this by doing this or doing that, I can actually find a solution to my problem’.
It also gave the clients something to do, to not becoming over-reliant on one person but look at other sources that they can get inspiration and support from outside the professional they were working with . . . Letting them know that they have a role to play and what role can they do play to solve the situation and before what others can do for them or what you yourself can do as a professional.
A few of them recognised that actually they’re more independent in themselves . . . It’s just that sometimes they have this sort of . . . sudden anxiety, erases all the things that they could’ve done easily . . . [My patient] is able to change his overall insight into things that will benefit him, his strengths really, because he’s got a lot of strengths really which he didn’t recognise, but that helped him to recognise a lot of strengths.
I thought it was quite good for them to break it down, I think too often it’s too big . . . I know one of them was wanting to go to work, but the idea of benefits ending and then just going to work was too big . . . and to just have a conversation about . . . ‘what do you like doing’, let’s start from the very basics . . . I think it was quite helpful because when it’s broken down it doesn’t feel so big.
INT3
One of my clients it gave him the opportunity to test his own hypotheses . . . He went out and did whatever the plan we set to do and he did it and he found that it actually helped him, so actually in a way, thinking and testing your own hypothesis, if we do this they will be rewarded.
P2, FG2
Many participants emphasised how their own behaviour changed as a result of DIALOG+, and how this helped them to empower patients:
It also helped me as a professional . . . to think of it as leading from a different place . . . Because you didn’t have to you know, think of the solutions to the problem all the time . . . Before, you’d be racking your head in how you can be helping them and I’ve quite a number of service users who never take the lead on anything . . . They’d come and say ‘Oh I have a problem with housing, I have a problem with benefits’ . . . Now they will sort of say ‘OK, I’ll go to community links, I’ll go and seek adv-’, you know, so I think it was a bit different . . . and challenged them in some ways to do something about [their problems].
P1, FG1
This approach already was, you know, it also was a teaching for me. Yes. It actually taught me to empower them. To, to help them to problem-solve. They’ve got the problem, what are you going to do . . . I will come into it as well with what I will do as well but first you what are you going to do.
P4, FG1
The whole aspect of getting the service user to be the one that comes up with what do they think and what do they want to talk about was quite sort of new, it was almost a revelation . . . I’d never realised that [my patient] was so capable of having a view and an opinion and a position . . . I could watch her grow through the process, it was good . . . I was impressed to see how one individual actually embraced having that sense of power and autonomy and enjoyment about, emm, being asked her opinion.
P1, FG3
Some participants relayed anecdotes of noticeable and lasting changes in their patients’ behaviour, which they attributed to the DIALOG+ intervention:
The other positive of the project is another client who actually was quite socially isolated [and who has] chronic schizophrenia . . . Using the positives in terms of the focus when we look at the item, he always picked up about the making friends, friendships, and he’s always talked about that, and now he’s going to the pub and sort of slowly socialising with other people . . . You can see sort of a gradual improvement . . . he’s never done that for say the last 20 years . . .
P2, FG2
Although most participants were largely positive about the solution-focused approach to problems and maintained it had an empowering effect, some did encounter barriers when applying this approach to patients, particularly when patients struggled to come up with solutions to their own problems:
You seem to be prompting them . . . kind of speaking for them and doing the tasks for them . . . so it kind of defeats the purpose.
P3, FG4
I think most of the time they get at a loss, if someone is really lost now then ‘what would I do, if I knew what to do I’d be doing it isn’t it’, yeah, so that’s when we would start talking and see together, what steps could be done.
P4, FG1
Sometimes they found it difficult and sometimes that is basically where you have to help them, you know give them some examples, you know, like, and then make them choose.
P2, FG1
It was almost like rubbing salt into the wound forcing him to identify areas where he was failing and then trying to come up with ways of him helping him to recover and find solutions to things . . . It’s all very well setting outcome measures and action plans but he was never going to meet them . . . Actually it become a source of irritation between us . . . To tell them ‘what can you do,’ I mean very few of them really are capable of coming up with a strategy of helping themselves, and although I can appreciate this whole concept of ‘well, at least it’s good to get them to think about it, you know, to contribute in some way to what they can do, you know, what is the smallest thing they can do to make a difference’ . . . it’s only the intelligent ones that want to then take it on and say ‘oh right, okay, what can I do then’ . . . It’s like, ‘well, you tell me, you tell me’ . . . And once they sort of get into that belligerence . . . with some of them it’s like trying to get blood out of a stone.
P1, FG3
Their situations are so dire . . . Most of the people that we work with are the most vulnerable of society . . . They’ve been in a situation like that for many years . . . And for a person like me to ask them ‘what is the best-case scenario for you?’, it’s kind of difficult . . . A different word for wishful thinking . . . So it’s kind of patronising. ‘Cause a lot of them won’t get to the position like all of us in this room are in . . . You know what I mean.
P3, FG4
However, some of the same participants acknowledged that it was important nonetheless to challenge patients’ thinking:
It’s difficult you know.
And it’s just quietness you know [whistles].
You know trying to get out of the service user and all you get is ‘I don’t know’, nodding head, ‘yes yes’, but you really want solutions, so you end up giving clues.
Well you have to, don’t you, have to reframe it.
Mmm hmmm.
So you say, ‘have you thought of this, have you thought of that’ but the feedback came back [during follow-up training] ‘no, you are already giving the solutions, let the service user . . .’ you know, and then sometimes all you get from them out of the sessions is ‘yes yes, OK, I don’t know, I don’t know’ so that compels you to end up . . .
Absolutely, that’s why it becomes so necessary to give that individual time to, even if it’s something really pathetically small that they come up with that’s almost inappropriate to what they’ve been asked, but it’s a start, and it’s almost like building on a start and letting them feel okay about saying nothing or giving a tiny little snippet.
The role of technology
A theme that was far more prominent in the clinician focus groups than in the patient groups was the technology used to deliver DIALOG+. Independent of the psychotherapeutic aspect of the intervention, there was considerable apprehension among clinicians about using novel technology during meetings:
I was actually really apprehensive as well because I’m very new to gadgets and all that . . . I was thinking about ‘will I be able to do it correctly’ because I won’t like to mess up and all that.
I think I was, yeah, apprehensive as well I think. There were also some anxieties, I mean I wanted to make sure I sort of do it correctly and, you know, not mess up so to speak.
Small oversights in how best to operate the technology had the potential to be surprisingly disruptive:
Yes, in terms of the screen, because you have to put the code again if you were talking about for more than 3 minutes, the screen locks, so it interrupts the flow of the meeting, so you have to put it in one way and you put it in again.
P2, FG2
However, initial apprehension seemed to dissipate after repeated use:
What I would do is be annoyed if I . . . couldn’t remember how to pull the menu structure apart to go back or lose something . . . In the early days when we couldn’t get that stuff up it was like, it must be me, I must apologise, and all the anxiety that goes with that, you know, you feel like throwing it out of the window, but once again once all that’s settled down and the problems are resolved . . . I was enjoying it.
P1, FG3
Aside from anxiety about being responsible for operating the tool, participants were concerned about a potentially negative impact on their therapeutic relationship with patients:
I like to sort of have a face to face you know . . . eye contact with my service user . . . But just having this device in between, you know, it kind of you know made it a bit . . . erm, impersonal really.
P1, FG1
During the whole DIALOG+ session it felt quite robotic, it felt quite centred around the iPad . . . I found that focusing on what they were saying and trying to type it and get it correct and all that a burden, really being attentive or being present with the service user, so I feel it was a mixed bag for me.
P3, FG4
They prefer talking to you rather than talking to you and you going typing.
P2, FG4
[It] did make me feel distracted . . . I wasn’t having eye-to-eye contact. Speaking with the individual, listening to him because I was . . . fiddling with this . . . It gives them the impression that I’m not listening to their concerns . . . It’s like I haven’t got their attention . . . To an extent it comes across like you’re being rude.
INT4
There was some evidence that not all patients were fully engaged with the tool:
What I thought was interesting one of my clients that dropped out told his support worker, you know, he referred to what I was doing as a computer course, you know as far as he was concerned it was a computer course . . . He just never got over the fact that me getting out a computer . . . It had no meaning to him that there was anything else going on apart from using the iPad.
P1, FG3
[Patients were] ambivalent to it being there.
INT3
Some clinicians believed that patients were paranoid about the technology:
Yeah they are paranoid sometimes. They’re paranoid thinking . . . what are you typing about . . . depending on their state of mind at that point in time.
P1, FG4
Quite a lot of them . . . will always believe that the piece of tech[nology] being bugged and they being monitored and you with the iPad sort of thing.
P1, FG1
You want to focus more on your client and once they see you with that gadget some of them just tend to change their mind about one or two things that they would have disclosed, so it took a bit of encouragement for them to understand that, because they always very, you know, paranoid and suspicious about anything electrical.
P3, FG2
However, not all participants found the technology disruptive:
I didn’t find actually having that piece of equipment with me intrusive and the clients didn’t seem to be off-put by having that in the room. Everybody, erm, you know is aware of technology, erm, and and was quite comfortable with that so that was okay . . . The physical presence of it is not intrusive. I thought before ‘it’s going to impinge on my rapport’ and it didn’t happen, didn’t happen.
INT2
As with clinicians’ initial apprehension, patients’ apprehension appeared to subside with time:
After a period of time people were settled, they felt comfortable, it was like there was nothing on the table which I was doing.
INT4
I’m sure that with time and the evolution of technology they would get used to what is happening and I’m sure those who have been in the system for quite some time have evolved in our approach to the service.
P1, FG3
Many participants reported that sharing the iPad and/or passing it over to the patient helped to ease any such concerns:
I tried all the way through with all three of them to not make, it was not my iPad, it was ours, it was the piece of kit given to both of us for this session, you know what I mean, and I made sure that it was facing them and that they could touch the screen and ‘do you want to do the typing, do you want to write this in’ . . . this wasn’t my notepad, this was ours.
P1, FG3
I handed it over to them . . . I thought they might as well try and they did and they did it better . . . and that’s empowering for them . . . I physically gave it to her and she was quite, you know, quite surprised that I did that. Which is quite shocking in a way, that somebody should actually be surprised that you’ve just given them that little bit of autonomy.
INT3
In fact, the technology itself appeared to have a positive impact on some patients:
Also the whole fact that you are doing it on computer or iPad, is a novelty as well . . . So it’s something to, to play around with, ‘cause you can use it, or they can use it and it’s a form of interaction. So if the person is not really interacting with you and you’ve got that possibly you can break the ice.
P3, FG4
She actually got quite excited and loved the idea of learning about how that machine worked and being able to tell me and often did I’d forget how to get back onto the menu structure and whatever and she’d remember and you know she enjoyed that, that was a very nice thing for her to have that.
P2, FG3
She enjoyed the technology, I think she felt quite special . . . It provided a point of connection, a familiarity, and I think she actually enjoyed it. It was a little bit like a game, quite light-hearted.
INT3
Optimising use of DIALOG+
Overall, most participants maintained that the positive aspects of using DIALOG+ outweighed the negatives, and made suggestions for optimising the intervention.
Rather than applying DIALOG+ as a one-size-fits-all approach in community care, they proposed that willing patients be considered for DIALOG+ on an individual basis and that patients should elect to be involved and pledge to participate to their fullest:
I think you can only do it with the clients who really want to do it isn’t it, you can’t do it with clients who would basically not want to . . .
P2, FG1
I think if you were going to roll this out it may you’d have to really consider it case by case and the suitability . . . I’m just not sure whether . . . DIALOG+ itself is suitable, you know, generically across the board.
INT1
I would say roll it out and service users who are OK and happy with it, and those who are not can opt out.
P1, FG2
In the same way that we would identify who is appropriate for psychological therapies and particularly CBT, it would be individuals who had an obvious wish to talk about their problems and take responsibility for their problems and show a willingness to have a way out of the system rather than to remain in it.
P1, FG3
And I believe that they should understand that it is a contract in the, erm, DIALOG as well . . . that if we are doing it for 3 months or 6 months or whatever, they have to participate in the fullest.
P1, FG4
Our role is to help the individual to kind of like move forward with his life . . . It’s down to how you pitch to that individual . . . It’s the delivery . . . And making that people feel that he is the kind of like person who can control his destiny and move forward . . . Have the scope for his future.
INT4
Some clinicians reported that their patients had difficulty understanding the questions of DIALOG+ – both the scale and the SFT components – and suggested re-wording them for patients with schizophrenia, or applying the intervention to different patient groups:
Very often they just don’t understand . . . They would just put in the middle ‘cause they wouldn’t know what to pick or they would want your help . . . You know in understanding ‘strongly agree’, ‘agree’ . . . all those subtle differences.
INT5
It felt at times that individually he didn’t really understand what we were trying to get at even though there was lots of explanation around it.
INT1
One of my clients, she was really not . . . comfortable I felt. It really quite shocked her me talking in that solution-focused way . . . There’s something about the language, however much I tried to use the solution-focused approach but tailoring it towards her, I just felt that it was quite alien to her and I couldn’t . . . I couldn’t make it less of a shock that it was so different to how I usually worked with her.
INT2
These people have had mental illness for a long time. . . Most of the time the cognitive abilities . . . they can’t do it, they not getting anywhere with it.
P2, FG4
The questionnaire is . . . also quite intellectual. ‘Cause we’re dealing with people that have got paranoid schizophrenia, and some of them may have had relapses many times over, and you know, the more relapses you have, your cognitive ability declines . . . So sometimes the way the questions are asked are confusing to them and sometimes when you try to break it down . . . It makes it more confusing for them to be honest and sometimes you actually lose the whole point of the questions, so possibly if the wording, or wording of the questionnaire was changed and probably even simplified further it would be a benefit . . . I feel that the DIALOG+ is probably more for people . . . who may be depressed . . . and probably middle class . . . It’s not for people that are . . . well, below working class . . . When you’re trying to do the DIALOG+ with a patient that’s got negative symptoms . . . it’s not fruitful . . .
P3, FG4
I’d have to kind of like break it down in a way . . . Speak to them in a way that they can understand, so there was initial issues around understanding of how the questions were phrased . . . Sometimes they couldn’t really answer the questions . . . One person you’d ask them a question and you’d get a totally different answer, something way off the mark.
INT4
The client group that would be capable or suitable for using this DIALOG+, we’ve talked about the IQ [intelligence quotient], because a high reasonable functional IQ level to be able . . . If you look at, like, our client group, the majority of them are people who are low, very low functioning level, and very severely damaged . . .
P3, FG3
Some participants warned against DIALOG+ becoming too monotonous after repeated use, and favoured using the tool for three or four successive sessions, or less frequently over the course of 1 year:
It felt a bit repetitive and monotonous.
P3, FG4
I think sometimes you have to change it because it becomes a bit too . . . monotonous, and it puts them off.
P3, FG2
Later in the process I thought it became a bit repetitive actually, because you were always asking the same questions.
P2, FG1
All of them got bored in the end, you start to think that it was becoming too much for them, I mean, so I had to encourage them to do it . . . From the fourth to the sixth month, that’s when I start to experience that they thought it was becoming too much for them.
P2, FG4
It could actually be used three times a year . . .
P3, FG4
Yet others felt that DIALOG+ could be used on a longer-term basis, or more intensively:
I don’t think it’s long enough . . . I think given the chaotic nature certainly of two of my clients . . . I think really a year would’ve given a more accurate picture . . . So, you know, really that chronic long-term work with people in the community, 6 months is quite a small snapshot for people with, you know, ongoing mental health issues.
INT2
In terms of looking at social problems . . . some people, it takes them 2, 3 months to even come up with an idea of how to resolve it.
INT4
It would make sense to do it really regularly . . . Maybe even more than once a month.
INT5
Some clinicians expressed that they would like to see DIALOG+ used in a more task-focused way, with a specific start and end:
It could be done for something specific . . . You could set a certain time, you know so they could be task-focused. Then you know they can see the goal coming closer and closer and closer, it could be used that way.
P3, FG4
What I was missing a little bit was like wrapping it up at the end, you know, so it would be more like a therapeutic process where you really just have a number of sessions, and then wrap it up at the end . . . Have maybe the first few sessions to pick out a few, erm, problems or a few, erm, things clients want to change in their life, and then to stick with those and follow those through actually, and then bring it to the end at some point.
P2, FG1
Some participants suggested that the core DIALOG scale could be linked to another type of intervention, outside SFT:
I’m wondering about being pigeon-holed with the solution-focused therapy approach . . . I’d be really interested to use it but being able to choose my own sort of intervention.
INT2
I’ve always drawn on different therapeutic interventions as well and I think it’s very useful for clients . . . It’s more beneficial to draw on different approaches . . . But using solution-focused therapy as well . . . So integrative approach.
INT5
Many participants emphasised that delivering the intervention on a portable tablet was helpful:
It’s lighter . . . You can contain it anywhere, even to a ladies’ handbag and the laptop is heavier . . . User-friendly and transportable . . . I find it so convenient taking iPad in my bag.
P1, FG2
One participant noted that tablets smaller than the iPad might be preferable and help with implementation:
Maybe a smaller version of it, a mini or something, that I could just slip in my bag . . . I would just say a mini. I know that sounds really ridiculous . . . It’s just smaller, we use everything as smaller . . . as this [the current iPad] is slightly bulkier . . . So it’s [an iPad mini] an easier device to use, it’s lighter . . . much more compact . . . Some form of a tablet as long as is slightly smaller.
INT3
Some felt that the technology could be further optimised to support the intervention, specifically to minimise the problem of clinicians having to tend to the device and operate it by hand rather than engaging with their patients:
Voice recognition . . . To kind of just speak and do it that way . . . To command it to go into the sequence of things and then you can just tick boxes . . . That would be a lot faster, you would be able to maintain the eye contact.
INT4
Using the iPad in a way that you don’t have to type . . . You can just speak into it and kind of save it.
P3, FG4
Recognise your voice so you can speak and it will type.
P1, FG2
A small minority felt that the software was not wholly user-friendly and could be further refined:
I know that using the iPad was user-friendly after you practised for a while, but if you can look at a way to make it much easier after the training period, some of us don’t get to start it straight away.
P1, FG2
Many participants expressed a desire for more training:
I think we should have more than one [training session] definitely.
P2, FG2
I thought I needed a lot more [training] . . . I went away probably knowing how to do it but when actually starting it, it seemed like I had forgotten it.
INT4
In some cases, this was a desire for training on the technological aspect:
I felt like I could’ve done with more training . . . I was struggling with logging in and doing whatever and starting the session whilst the service user was sitting there looking at me, you know . . .
P2, FG3
For others, training needs related to the solution-focused approach to sessions:
In terms of the focused solution side, the first training didn’t go down well . . . I didn’t get it, it was in a rush . . . my first session was a bit confused until I spoke to the trainer, you know, then I got really an insight of what I need to achieve or what we need to do.
P3, FG3
As a social worker we don’t use this type of therapy like CBT or solution-focused therapy . . . Maybe if we had . . . probably three, four sessions about what SFT is and how we incorporate it.
P3, FG4
Discussion
This study sought to explore the experiences of clinicians who delivered the DIALOG+ intervention as part of routine care co-ordination for patients in CMHTs, following their participation in the trial. Four focus groups and five individual interviews were conducted and the subsequent thematic analysis yielded four themes: (1) efficiency of DIALOG+; (2) empowerment of patients; (3) the role of technology; and (4) optimising use of DIALOG+.
The first theme, efficiency of DIALOG+, referred to the potential of the intervention to make meetings either more or less efficient, according to clinicians. Aspects of DIALOG+ considered to improve efficiency included its comprehensive range of topics, which broadened the scope of meetings and helped clinicians to learn more about their patients. This is somewhat remarkable, given that the average length of the relationship between patient and clinician in the trial was 1.7 years. Many clinicians also valued the structured approach and described DIALOG+ sessions as more focused than usual meetings.
Nonetheless, participants also reported that adhering to this structure was not always easy or intuitive and that, in some cases, it seemed perfunctory and not particularly valuable. There was disagreement on whether or not DIALOG+ was more time-consuming than usual meetings. Some clinicians struggled to let patients take the lead in discussions, with the result that, for a variety of reasons, discussions around problematic situations were not always productive.
These criticisms might be overcome with further refinements to the manual and training. Consider, for example, the instances of patients repeatedly raising the topic of housing, over which clinicians believed they had no influence. When reviewing previous action items at the beginning of a new session, the patient and the clinician could agree to ‘park’ such a topic for the time being, to be revisited later when progress has been made. Alternatively, it would be appropriate to explore strategies for managing the problematic housing situation through the four-step approach. This needs to be made explicit in the manual and in training. How to negotiate with patients in instances such as the anecdote of one patient who declined to follow through with an update on agreed actions should also be covered in the manual and training.
Whether or not participants found DIALOG+ an efficient tool for facilitating their meetings with patients, the majority agreed that it had a positive impact on patients. The second theme referred to the ability of DIALOG+ to empower patients. Clinicians reported that their patients showed improved self-reflection and insight, that they became much more involved in their treatment and that they began to identify their own solutions to problems, rather than depending on mental health professionals.
Still, some clinicians struggled with encouraging their patients to think in a solution-focused way and reported that patients were at a loss as to how to come up with small improvements or ideal scenarios. Although this was a real concern, the verbatim quotes reported here might be interpreted as reflecting a certain scepticism or cynicism among some clinicians, which may have undermined their ability to implement a solution-focused intervention; they may not have fully endorsed the notion that it is just as much the patient’s evaluation of their situation as the situation itself that can be influenced by SFT. Nonetheless, the appropriateness of SFT for some of the most socially deprived members of society warrants careful consideration. There has been limited research with respect to the use of SFT with patients with psychosis and this requires further investigation.
The third theme of the role of technology in implementing DIALOG+ was a particularly strong one. This referred in part to clinicians’ concerns that the technology would impact negatively on the therapeutic relationship, although major interference arising from use of the tablet computer did not materialise; overall, initial disruption diminished with time, or could be overcome. An even stronger concern was clinicians’ own anxiety surrounding the operation of the tool. This may provide a clue as to why a significant proportion of clinicians in the intervention group did not ultimately implement DIALOG+. It is possible that they found the technology distracting or intimidating; indeed, such tablets and the accompanying software and user interface were still very new to the market when the trial was conducted. The endurance of such concerns may be traceable to the limited training programme prior to the trial. To improve future implementation, an e-learning module would allow clinicians to reacquaint themselves with the operation of the software as and when required, at their convenience.
The fourth and final theme referred to suggestions for improvement of the intervention offered by clinicians. In keeping with the previous theme, they requested more training, in both the technology and in SFT. They agreed with the platform choice of tablet computers and hoped eventually to be able to operate the technology on a hands-free basis.
In keeping with the criticisms of DIALOG+ described under the second theme, clinicians suggested revising the terminology of the SFT component to be less ‘intellectual’. Terms such as ‘best-case scenario’ and ‘ideal situation’ may need to be re-worded in the manual to make these notions more accessible to patients and to better reflect the degree of improvement that is realistic for them. The manual might also state that it is perfectly acceptable for clinicians and patients to acknowledge that far-reaching goals may be subject to limits, but that it is the small changes that will make the real difference.
Furthermore, clinicians suggested the inclusion of different patient groups for DIALOG+, a reasonable proposition given that very little of the intervention is psychosis specific. They held that patients should be nominated for DIALOG+ based on their motivation, that this should represent a ‘contract’ between the patient and the clinician and that there should be a specific beginning and end. Although this might be the best form of implementation for other, less chronic patient groups, it would seem regrettable to exclude some of the current patient group on the basis of motivation, given the powerful effect of DIALOG+ seen after just a few sessions in the trial. An alternative might be to deliver DIALOG+ just once or twice for patients in the community, which would avoid complaints of the procedure becoming monotonous. There was no strong consensus on how often the intervention should be administered or for how long.
The findings suggest that DIALOG+ presented both advantages and disadvantages. Although implementation of the tool was subject to teething issues for clinicians, the benefits for patients had the potential to be very powerful. It is important for clinicians to be open to administering the intervention in different ways with different patients. Further optimisation of the tool – particularly with respect to the manual and training – has since been undertaken, and the updated manual has been made available for free on the DIALOG website (see Chapter 9).
Chapter 9 Dissemination
We aimed to maximise the dissemination and implementation of this programme of research by delivering the findings to a diverse audience of stakeholders in the UK and internationally. We focused primarily on two stakeholder groups: clinicians who would be using the app with their patients and decision-makers in services. We held specially organised presentations at these services in order to inspire local interest and attract a high number of attendees.
Conference presentations
- DGPPN (Die Deutsche Gesellschaft für Psychiatrie und Psychotherapie, Psychosomatik und Nervenheilkunde) Kongress, Berlin, Germany, November 2011.
- Section Epidemiology and Social Psychiatry, Association of European Psychiatrists, Ulm, Germany, May 2014.
- National Psychiatric Conference, Czech Republic, June 2014.
- World Association of Social Psychiatry, London, UK, November 2014.
Other presentations
- Presentation to Trish Greenhalgh, Professor of Primary Health Care and Dean for Research Impact, Barts and the London School of Medicine and Dentistry, QMUL, May 2014.
- 12th Annual East London Mental Health Research Presentation Day, October 2014.
- Health & Technology Assessment Seminar, QMUL, January 2015.
- UKRCOM (UK Routine Clinical Outcomes Measurement Network) Meeting, January 2015.
- Centre for Primary Care and Public Health Seminar, Barts and the London School of Medicine and Dentistry, QMUL, February 2015.
- Presentation to visitors from the World Health Organization (WHO): Matt Muijen (Regional Director of Mental Health Office for Europe) and Mirella Ruggeri (Director of WHO Collaborating Centre in Verona), February 2015.
- Presentation to Martin Marshall (Professor of Healthcare Improvement, University College London), February 2015.
- Workshop of Leading UK Academics in Social Psychiatry, ELFT, March 2015.
- Presentation to SUGAR (Service User and Carer Group Advising on Research), ELFT, March 2015.
- Mental Health Project Board Meeting, South Staffordshire and Shropshire Healthcare NHS Foundation Trust, April 2015.
- Adult Mental Health Forum, Bradford District Care Trust, April 2015.
- Academic Programme, Barnet, Enfield & Haringey Mental Health Trust, April 2015.
- Clinical Advisory Group, Camden and Islington NHS Foundation Trust, April 2015.
- Consultants’ meeting, Camden and Islington NHS Foundation Trust, April 2015.
- Case conference seminar, Southern Health NHS Foundation Trust, April 2015.
- DIALOG dissemination workshop, Cornwall Partnership NHS Foundation Trust, May 2015.
- Annual Research & Development Day, North East London NHS Foundation Trust, May 2015.
- Academic Programme, Oxford Health NHS Foundation Trust, July 2015.
- Presentation to North Essex Partnership University NHS Foundation Trust, July 2015.
- Presentation to South London and Maudsley NHS Foundation Trust, July 2015.
Submission of DIALOG iPad app to App Store
- We submitted the DIALOG app to Apple Inc.’s App Store in late 2014. It is now available to download for free on any iPad device. Users can search for the app by entering the App Store on their iPad and typing ‘DIALOG’ into the search field.
- We customised the app so that users can administer the basic DIALOG intervention or the full DIALOG+ intervention as they please. Users can enter the app settings and set DIALOG+ to ‘on’ or ‘off’, as desired.
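As an illustration of how such an on/off setting might be stored within a tablet app, the minimal sketch below persists a DIALOG+ toggle in the standard iOS UserDefaults store. The key name and functions are hypothetical and are not taken from the actual DIALOG app.

```swift
import Foundation

// Hypothetical settings key; the real app's storage details are not described in this report.
private let dialogPlusEnabledKey = "dialogPlusEnabled"

/// Record whether the full DIALOG+ intervention (rather than the basic
/// DIALOG assessment alone) should be used in sessions.
func setDialogPlusEnabled(_ enabled: Bool) {
    UserDefaults.standard.set(enabled, forKey: dialogPlusEnabledKey)
}

/// Read the setting at the start of a session; defaults to false (basic DIALOG)
/// if the clinician has never changed it.
func isDialogPlusEnabled() -> Bool {
    return UserDefaults.standard.bool(forKey: dialogPlusEnabledKey)
}
```

A session flow could then check isDialogPlusEnabled() after the initial ratings and either stop there (basic DIALOG) or continue into the four-step approach (DIALOG+).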
Further development of DIALOG app for different platforms
- Following various dissemination meetings, many services expressed an interest in the DIALOG app being optimised for iPhone and for Android-compatible platforms, which required further development. With the programme nearing completion, it was agreed that the remaining development resources available to the team would be put towards delivering an Android version of the app. The Android operating system is the second most popular operating system after iOS and allows services to consider many hardware options besides those of Apple Inc. for delivering the DIALOG and DIALOG+ interventions, many of which are considerably cheaper than Apple Inc. products; furthermore, Android encompasses both phones and tablets. This greatly increases the flexibility of how the intervention can be delivered, maximising its potential uptake in the NHS. The Android version has been available on Google’s Play Store since 2016.
- As of 2016, DIALOG and DIALOG+ have also been programmed into RiO, the electronic records system at ELFT. Users of RiO in services outside ELFT should contact ELFT’s information technology department for guidance on how to implement DIALOG and DIALOG+ as part of RiO locally.
DIALOG+ web-based training
To facilitate training in the intervention in NHS mental health services, we developed a free web-based training module for DIALOG+ based on the training delivered as part of the randomised controlled trial. This self-directed training helps clinicians to learn both how to operate the DIALOG+ technology and how to apply the SFT approach to discussing problematic areas of a patient’s life and treatment. The training is hosted by ELFT on the Oracle Learning Management System. It is ‘SCORM’ (Sharable Content Object Reference Model) compliant, as is standard for e-learning, meaning that services outside ELFT can adopt the DIALOG+ training module within their own learning management systems. Interested services should visit the DIALOG website (see below) for the most recent information on how to acquire the training module.
DIALOG website
A website for DIALOG is available at dialog.elft.nhs.uk, providing ‘plain English’ information for people wanting to know more about DIALOG and DIALOG+. It covers what the interventions are and to whom they are relevant, the research evidence on their effectiveness in improving patient outcomes, downloadable materials such as the DIALOG+ manual, instructions for downloading the DIALOG app for different platforms, and information on accessing the DIALOG+ training module.
East London NHS Foundation Trust responsibility for DIALOG
East London NHS Foundation Trust holds the copyright for the DIALOG app. The research team have agreed with ELFT that, subsequent to the end of the programme, ELFT will take responsibility for routine updates to the apps as and when required.
Dissemination to participants
-
An information sheet summarising the findings of the research was sent to all participating patients following completion of the trial. This information sheet was checked by SUGAR (Service User and Carer Group Advising on Research) for user-friendly language and style. When possible, the information sheet was passed on via each patient’s care co-ordinator. When not possible, the information sheet was sent to the patient’s last known address.
-
An information sheet was also sent to the clinicians and service managers of the participating CMHTs via e-mail.
-
In addition, the Board of Directors of ELFT were sent a summary information sheet via e-mail.
Chapter 10 Conclusions/recommendations
The main objectives of the programme were to develop new software for DIALOG, to extend DIALOG to DIALOG+, to support these interventions, to test DIALOG+ in an exploratory trial, to disseminate the findings and to submit an application for a definitive trial.
The new, extended approach and accompanying software were used in an exploratory trial that demonstrated real changes in patients, with both patients and clinicians reporting mostly positive experiences. The software is freely available as an app for both iPads and Android platforms, meaning it can be widely used on tablets – and potentially also on larger smartphones – throughout the NHS. A web-based training programme is also freely available.
The DIALOG+ procedure contains a four-step approach that reflects principles of SFT, which is also in line with principles of CBT. It is manualised and intended to support the patient–clinician interaction in routine meetings in CMHTs, and to help the patient develop generalisable problem-solving skills. The experiences in the trial have shown that the implementation of DIALOG+ can be problematic. A substantial proportion of patients allocated to the intervention group never received the intervention. However, the fact that 30% of patients in the intervention group did not receive the intervention may also have been – at least in part – caused by the design and the implementation of a rigorous research trial. Most of the research team, including the principal investigator, were blinded not only to the allocation of patients, but to all post-randomisation information. This was to guarantee that they could not try to influence outcomes in one of the two arms. Yet it had the consequence that the research team did not know about the implementation problems until the trial was completed, and there was no chance to consider actions to ensure that more patients actually received the intervention as planned. This difficulty will not occur when DIALOG+ is used in routine services. Normal documentation and feedback systems will indicate when interventions are not delivered and the problem can be managed more or less immediately.
For those patients and clinicians in the intervention group who did use DIALOG+, the quality of its use varied considerably. Although the qualitative evaluations indicated a few areas for improvement, it remains unclear to what extent changes to the DIALOG+ intervention are warranted. Yet the development of DIALOG+ in this research programme was extensive and considered the views and experiences of a range of clinicians and patients. It may be worth noting that those parts of DIALOG+ that were already part of the earlier, simpler DIALOG approach hardly changed during the extensive revisions. This may suggest that the intervention has been robustly developed and is unlikely to change very much should it be reviewed and revised again in the future.
Although the intervention itself might not be altered much to improve its use in practice, training and supervision may certainly be improved based on more experiences in both routine care and future research studies. Better support through organisational processes in the given NHS services is required to improve implementation in the future.
The exploratory trial was conducted across several services; however, all of them were based in east London boroughs. The trial methodology excluded a number of patients, for example those with insufficient command of English or lack of capacity to provide informed consent to participate in a trial, who, nevertheless, can make up a substantial proportion of patients in routine care. Given the limited sample size, the economic evaluation provided findings with a degree of uncertainty. Despite these limitations, the findings of the exploratory trial suggest that the DIALOG+ intervention can be associated with both substantially improved outcomes and likely cost savings.
One may conclude that DIALOG+ is a relatively simple intervention with evidence to suggest that it might make the patient–clinician interaction in community mental health care more effective and lead to significant improvements in the treatment of patients with psychosis.
There are five major recommendations for the future:
-
Although services might consider adopting DIALOG+ at this stage based on the existing evidence, a definitive trial appears warranted. Such a trial should include sites outside east London and similar inner-city areas to test whether or not the suggested effect holds true in different settings and services.
-
Although the whole programme focused on the treatment of patients with psychosis, there is little in the DIALOG+ intervention that is clearly psychosis specific. Applying DIALOG+ to patient groups with other mental disorders may be considered. It may also be tested outside mental health services and in patients with physical health problems. Given that the approach requires repeated interventions over a period of time, DIALOG+ is likely to be more appropriate for patients with chronic conditions than for those in acute situations.
-
The findings suggest that DIALOG+ is effective even after a 1-year period, although hardly any patient–clinician pair continued using it beyond 6 months. Within the first 3 months it was used more regularly, but some patients and clinicians complained that it was too frequently administered and too repetitive. A more flexible use with variable intervals, depending on personal preference and the need in a given therapeutic situation, might help to make the intervention even more acceptable and effective. Yet, such a flexible scheme would pose additional problems for systematic implementation, and present major methodological problems for rigorous evaluation in a trial.
-
DIALOG+ reflects an attempt to develop an intervention specifically for the given setting, drawing on the principles of an elaborate therapeutic model (i.e. SFT). More process evaluation is required to identify precisely which mechanisms are involved in the improvements seen in the intervention group in the trial. Evidence from process evaluation may then be linked to further consideration of therapeutic models to improve the intervention. DIALOG+ is not intended to be the final product of research, but an important – and hopefully encouraging – step in advancing patient–clinician interaction and improving treatment outcomes.
-
Finally, among the various explanations for why a brief and inconsistently implemented intervention such as DIALOG+ can have such a substantial effect compared with other more costly and intensive treatments, one potentially important aspect is that DIALOG+ is neither a separate treatment nor a technology administered by a specialist. Rather, it changes and utilises the existing therapeutic relationship in CMHTs to initiate positive change and help patients to improve their quality of life. This may encourage future research to focus on how the potential benefits of existing therapeutic relationships can be maximised, instead of designing new specialised services.
In summary, the authors believe that the programme has achieved its objectives and hope that the overall positive findings will be followed up by further research and actual improvements in routine care.
Acknowledgements
We are grateful to Tom Burns, Julia Sinclair, Mike Slade, Paul McCrone, Len Bowers, Vanessa Pinfold, Sandra Eldridge and Tim Lambert, who were co-applicants to the grant and facilitated the implementation of all studies in the programme; to Pat Healey, who led on the development of the DIALOG software; and to Lauren Kelley and Husnara Khanom, who were part of the core research team based at the Unit for Social and Community Psychiatry, QMUL.
We are also grateful to the members of the programme’s Steering Committee, who were Thomas Jamieson-Craig, Lars Hansson, Thomas Becker, Daniel Freeman and Elizabeth Kuipers.
We thank Peter Phiri, Denzel Mitchell, Mary Adams, Gay Daley, Tom Bell, Chris Wagg, Greg Vinnicombe, David Hawkes, Chris Iveson and Harvey Ratner of BRIEF, who provided input in the development of the DIALOG+ manual.
Sahida Khan-gul and Amy Gaglia acted as participants in the pilot of the DIALOG+ training programme and, subsequently, trained all clinicians participating in the trial.
The PCTU supported the trial. Gordon Forbes, Clare Rutterford and Stephen Bremner provided input into the statistical analysis overseen by Sandra Eldridge.
Henok Getachew and Toby Prevost were part of the Data Monitoring Committee for the trial.
Iris Mosweu supported Paul McCrone in the evaluation of the cost-effectiveness of DIALOG+.
Finally, we thank all staff and patient participants who gave their time to be involved in research, the Executive Directors of ELFT for their support in implementing research in their services and members of the service user reference group for providing their advice and guidance.
Contributions of authors
The role (job title, area of specialty) of each author is as follows.
Professor Stefan Priebe (Professor, Social and Community Psychiatry) was principal investigator with overall responsibility for the programme, leading on the design and implementation of all studies and authorship of all chapters in this report.
Eoin Golden (Programme Manager) managed the programme, drafted Chapters 2–3 and 7–9 and contributed to the design of the DIALOG software, drafting of the DIALOG+ manual, recruitment of services, recruitment of patients, data collection and management and dissemination of findings in the programme.
Professor David Kingdon (Professor, Mental Health Care Delivery) facilitated and contributed to the development of the DIALOG+ manual.
Serif Omer (Researcher) drafted Chapters 1, 4 and 10, and contributed to the development of the training programme, recruitment of patients, data collection and dissemination of findings in the programme.
Sophie Walsh (Researcher) drafted Chapter 6, leading on design, data collection and analysis of the study and contributed to recruitment of patients and data collection in the programme.
Kleomenis Katevas (Software Developer) drafted substudy A3 of Chapter 2, contributed to the design of the DIALOG software and coded the software for the iOS iPad.
Professor Paul McCrone (Professor, Health Economics) led on the design and analysis of the study presented in Chapter 5 and drafted that chapter.
Professor Sandra Eldridge (Professor, Biostatistics) led on the analysis of data in the trial, as described in Chapter 4.
Professor Rose McCabe (Professor, Clinical Communication) contributed to the design and implementation of all studies in the programme.
All authors critically revised the content of this report.
Publications
Priebe S, Kelley L, Golden E, McCrone P, Kingdon D, Rutterford C, et al. Effectiveness of structured patient–clinician communication with a solution focused approach (DIALOG+) in community treatment of patients with psychosis – a cluster randomised controlled trial. BMC Psychiatry 2013;13:173.
Priebe S, Kelley L, Omer S, Golden E, Walsh S, Khanom H, et al. The effectiveness of a patient-centred assessment with a solution-focused approach (DIALOG+) for patients with psychosis: a pragmatic cluster-randomised controlled trial in community care. Psychother Psychosom 2015;84:304–13.
Omer S, Golden E, Priebe S. Exploring the mechanisms of a patient-centred assessment with a solution focused approach (DIALOG+) in the community treatment of patients with psychosis: a process evaluation within a cluster-randomised controlled trial. PLOS ONE 2016;11:e0148415.
Data sharing statement
Anonymised data can be obtained from the corresponding author on reasonable request and subject to a data sharing agreement.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, CCF, NETSCC, PGfAR or the Department of Health. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the PGfAR programme or the Department of Health.
References
- Lloyd T, Kennedy N, Fearon P, Kirkbride J, Mallett R, Leff J, et al. Incidence of bipolar affective disorder in three UK cities: results from the AESOP study. Br J Psychiatry 2005;186:126-31. http://dx.doi.org/10.1192/bjp.186.2.126.
- Mangalore R, Knapp M. Cost of schizophrenia in England. J Ment Health Policy Econ 2007;10:23-41.
- Knapp M, Mangalore R, Simon J. The global costs of schizophrenia. Schizophr Bull 2004;30:279-93. https://doi.org/10.1093/oxfordjournals.schbul.a007078.
- Lacro JP, Dunn LB, Dolder CR, Leckband SG, Jeste DV. Prevalence of and risk factors for medication nonadherence in patients with schizophrenia: a comprehensive review of recent literature. J Clin Psychiatry 2002;63:892-909. https://doi.org/10.4088/JCP.v63n1007.
- Nosé M, Barbui C, Tansella M. How often do patients with psychosis fail to adhere to treatment programmes? A systematic review. Psychol Med 2003;33:1149-60. https://doi.org/10.1017/S0033291703008328.
- O’Brien A, Fahmy R, Singh SP. Disengagement from mental health services. A literature review. Soc Psychiatry Psychiatr Epidemiol 2009;44:558-68. http://dx.doi.org/10.1007/s00127-008-0476-0.
- Priebe S, Burns T, Craig TK. The future of academic psychiatry may be social. Br J Psychiatry 2013;202:319-20. http://dx.doi.org/10.1192/bjp.bp.112.116905.
- Care Services Improvement Partnership, Mental Health Strategies. Combined Mapping Framework 2009. www.mhcombinedmap.org/reports/aspx (accessed 13 March 2015).
- Klinkenberg WD, Calsyn RJ, Morse GA. The helping alliance in case management for homeless persons with severe mental illness. Community Ment Health J 1998;34:569-78. https://doi.org/10.1023/A:1018758917277.
- McCabe R, Priebe S. The therapeutic relationship in the treatment of severe mental illness: a review of methods and findings. Int J Soc Psychiatry 2004;50:115-28. https://doi.org/10.1177/0020764004040959.
- Priebe S, Gruyters T. The role of the helping alliance in psychiatric community care. A prospective study. J Nerv Ment Dis 1993;181:552-7. https://doi.org/10.1097/00005053-199309000-00004.
- Tattan T, Tarrier N. The expressed emotion of case managers of the seriously mentally ill: the influence of expressed emotion on clinical outcomes. Psychol Med 2000;30:195-204. https://doi.org/10.1017/S0033291799001579.
- Bridle D, McCabe R, Priebe S. Incorporating psychotherapeutic methods in routine community treatment for patients with psychotic disorders. Psychosis 2013;5:154-65. https://doi.org/10.1080/17522439.2012.683036.
- Priebe S, McCabe R. The therapeutic relationship in psychiatric settings. Acta Psychiatr Scand Suppl 2006;429:69-72. http://dx.doi.org/10.1111/j.1600-0447.2005.00721.x.
- Van Os J, Altamura AC, Bobes J, Gerlach J, Hellewell JS, Kasper S, et al. Evaluation of the Two-Way Communication Checklist as a clinical intervention. Results of a multinational, randomised controlled trial. Br J Psychiatry 2004;184:79-83. https://doi.org/10.1192/bjp.184.1.79.
- Slade M, McCrone P, Kuipers E, Leese M, Cahill S, Parabiaghi A, et al. Use of standardised outcome measures in adult mental health services: randomised controlled trial. Br J Psychiatry 2006;189:330-6. https://doi.org/10.1192/bjp.bp.105.015412.
- Priebe S, McCabe R, Bullenkamp J, Hansson L, Lauber C, Martinez-Leal R, et al. Structured patient-clinician communication and 1-year outcome in community mental healthcare: cluster randomised controlled trial. Br J Psychiatry 2007;191:420-6. http://dx.doi.org/10.1192/bjp.bp.107.036939.
- Burns T, Beadsmoore A, Bhat AV, Oliver A, Mathers C. A controlled trial of home-based acute psychiatric services. I: Clinical and social outcome. Br J Psychiatry 1993;163:49-54. https://doi.org/10.1192/bjp.163.1.49.
- Burns T, Raftery J, Beadsmoore A, McGuigan S, Dickson M. A controlled trial of home-based acute psychiatric services. II: Treatment patterns and costs. Br J Psychiatry 1993;163:55-61. https://doi.org/10.1192/bjp.163.1.55.
- Epstein RM, Franks P, Shields CG, Meldrum SC, Miller KN, Campbell TL, et al. Patient-centered communication and diagnostic testing. Ann Fam Med 2005;3:415-21. http://dx.doi.org/10.1370/afm.348.
- Little P, Everitt H, Williamson I, Warner G, Moore M, Gould C, et al. Observational study of effect of patient centredness and positive approach on outcomes of general practice consultations. BMJ 2001;323:908-11. https://doi.org/10.1136/bmj.323.7318.908.
- Harrington J, Noble LM, Newman SP. Improving patients’ communication with doctors: a systematic review of intervention studies. Patient Educ Couns 2004;52:7-16. https://doi.org/10.1016/S0738-3991(03)00017-X.
- Huxley P. Outcomes management in mental health: a brief review. J Ment Health 1998;7:273-83. https://doi.org/10.1080/09638239818094.
- Marks I. Overcoming obstacles to routine outcome measurement. The nuts and bolts of implementing clinical audit. Br J Psychiatry 1998;173:281-6. https://doi.org/10.1192/bjp.173.4.281.
- Slade M, Thornicroft G, Glover G. The feasibility of routine outcome measures in mental health. Soc Psychiatry Psychiatr Epidemiol 1999;34:243-9. https://doi.org/10.1007/s001270050139.
- Jones C, Hacker D, Cormac I, Meaden A, Irving CB. Cognitive behavioural therapy versus other psychosocial treatments for schizophrenia. Cochrane Database Syst Rev 2012;4. http://dx.doi.org/10.1002/14651858.CD008712.pub2.
- Wykes T, Steel C, Everitt B, Tarrier N. Cognitive behavior therapy for schizophrenia: effect sizes, clinical models, and methodological rigor. Schizophr Bull 2008;34:523-37. http://dx.doi.org/10.1093/schbul/sbm114.
- Pfammatter M, Junghan UM, Brenner HD. Efficacy of psychological therapy in schizophrenia: conclusions from meta-analyses. Schizophr Bull 2006;32:64-80. http://dx.doi.org/10.1093/schbul/sbl030.
- McGuire-Snieckus R, McCabe R, Catty J, Hansson L, Priebe S. A new scale to assess the therapeutic relationship in community mental health care: STAR. Psychol Med 2007;37:85-9. http://dx.doi.org/10.1017/S0033291706009299.
- Castonguay LG, Beutler LE. Principles of therapeutic change: a task force on participants, relationships, and techniques factors. J Clin Psychol 2006;62:631-8. https://doi.org/10.1002/jclp.20256.
- Frank AF, Gunderson JG. The role of the therapeutic alliance in the treatment of schizophrenia. Relationship to course and outcome. Arch Gen Psychiatry 1990;47:228-36. https://doi.org/10.1001/archpsyc.1990.01810150028006.
- Hansson L, Berglund M. Stability of therapeutic alliance and its relationship to outcome in short-term inpatient psychiatric care. Scand J Soc Med 1992;20:45-50.
- Martin DJ, Garske JP, Davis MK. Relation of the therapeutic alliance with outcome and other variables: a meta-analytic review. J Consult Clin Psychol 2000;68:438-50. https://doi.org/10.1037/0022-006X.68.3.438.
- Johansson H, Eklund M. Patients’ opinion on what constitutes good psychiatric care. Scand J Caring Sci 2003;17:339-46. https://doi.org/10.1046/j.0283-9318.2003.00233.x.
- Priebe S, Watts J, Chase M, Matanov A. Processes of disengagement and engagement in assertive outreach patients: qualitative study. Br J Psychiatry 2005;187:438-43. http://dx.doi.org/10.1192/bjp.187.5.438.
- Catty J. ‘The vehicle of success’: theoretical and empirical perspectives on the therapeutic alliance in psychotherapy and psychiatry. Psychol Psychother 2004;77:255-72. http://dx.doi.org/10.1348/147608304323112528.
- Hansson L, Svensson B, Björkman T, Bullenkamp J, Lauber C, Martinez-Leal R, et al. What works for whom in a computer-mediated communication intervention in community psychiatry? Moderators of outcome in a cluster randomised trial. Acta Psychiatr Scand 2008;118:404-9. https://doi.org/10.1111/j.1600-0447.2008.01258.x.
- Ahmed M, Boisvert CM. Using computers as visual aids to enhance communication in therapy. Comput Human Behav 2006;22:847-55. https://doi.org/10.1016/j.chb.2004.03.008.
- Horvath AO, Del Re AC, Flückiger C, Symonds D. Alliance in individual psychotherapy. Psychotherapy 2011;48:9-16. http://dx.doi.org/10.1037/a0022186.
- Priebe S, McCabe R. Therapeutic relationships in psychiatry: the basis of therapy or therapy in itself? Int Rev Psychiatry 2008;20:521-6. http://dx.doi.org/10.1080/09540260802565257.
- Krueger R, Casey M. Focus Groups: A Practical Guide for Applied Research. Thousand Oaks, CA: Sage Publications; 2000.
- Hess J, King R. New Science of Planning. Chicago, IL: American Marketing Association; 1968.
- Heary C, Hennessy E. Focus groups versus individual interviews with children: a comparison of data. Ir J Psychol 2006;27:58-6. https://doi.org/10.1080/03033910.2006.10446228.
- International Statistical Classification of Diseases and Related Health Problems. Geneva: WHO; 2010.
- Apple Inc. Concepts in Objective-C Programming: Model-View-Controller 2015. https://developer.apple.com/library/ios/documentation/General/Conceptual/CocoaEncyclopedia/Model-View-Controller/Model-View-Controller.html (accessed 27 February 2015).
- Strategy Analytics. Global Tablet OS Market Share: Q4 2011 2011. www.strategyanalytics.com (accessed 11 March 2015).
- Lookout. Lookout Mobile Threat Report, August 2011 2011. www.lookout.com/static/ee_images/lookout-mobile-threat-report-2011.pdf (accessed 11 March 2015).
- Miller C. Mobile attacks and defense. IEEE Secur Privacy 2011;9:68-70. https://doi.org/10.1109/MSP.2011.85.
- Apple Inc. Core Data Programming Guide 2015. https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreData/cdProgrammingGuide.html (accessed 27 February 2015).
- Haddock G, Eisner E, Boone C, Davies G, Coogan C, Barrowclough C. An investigation of the implementation of NICE-recommended CBT interventions for people with schizophrenia. J Ment Health 2014;23:162-5. http://dx.doi.org/10.3109/09638237.2013.869571.
- Gingerich WJ, Peterson LT. Effectiveness of solution-focused brief therapy: a systematic qualitative review of controlled outcome studies. Res Soc Work Pract 2013;23:266-83. https://doi.org/10.1177/1049731512470859.
- Kim JS. Examining the effectiveness of solution-focused brief therapy: a meta-analysis. Res Soc Work Pract 2008;18:107-16. https://doi.org/10.1177/1049731507307807.
- Richmond CJ, Jordan SS, Bischof GH, Sauer EM. Effects of solution-focused versus problem-focused intake questions on pre-treatment change. J Syst Ther 2014;33:33-47. https://doi.org/10.1521/jsyt.2014.33.1.33.
- McCabe R. Training to Enhance Communication With Patients With Psychosis n.d. http://medicine.exeter.ac.uk/media/universityofexeter/medicalschool/profiles/TEMPO_full_manual.pdf (accessed 13 March 2015).
- Rohricht F. Body-oriented Psychotherapy in Mental Illness: A Manual for Research and Practice. Seattle, WA: Hogrefe; 2000.
- Priebe S, Kelley L, Golden E, McCrone P, Kingdon D, Rutterford C, et al. Effectiveness of structured patient-clinician communication with a solution focused approach (DIALOG+) in community treatment of patients with psychosis – a cluster randomised controlled trial. BMC Psychiatry 2013;13. http://dx.doi.org/10.1186/1471-244X-13-173.
- Priebe S, Huxley P, Knight S, Evans S. Application and results of the Manchester Short Assessment of Quality of Life (MANSA). Int J Soc Psychiatry 1999;45:7-12. https://doi.org/10.1177/002076409904500102.
- Slade M, Phelan M, Thornicroft G, Parkman S. The Camberwell Assessment of Need (CAN): comparison of assessments by staff and patients of the needs of the severely mentally ill. Soc Psychiatry Psychiatr Epidemiol 1996;31:109-13. https://doi.org/10.1007/BF00785756.
- Nguyen TD, Attkisson CC, Stegner BL. Assessment of patient satisfaction: development and refinement of a service evaluation questionnaire. Eval Program Plann 1983;6:299-313. https://doi.org/10.1016/0149-7189(83)90010-1.
- Schwarzer R, Jerusalem M, Weinman J, Wright S, Windsor JM. Measures in Health Psychology: A User’s Portfolio. Causal and Control Beliefs. Windsor: Nfer-Nelson; 1995.
- Tennant R, Hiller L, Fishwick R, Platt S, Joseph S, Weich S, et al. The Warwick-Edinburgh Mental Well-being Scale (WEMWBS): development and UK validation. Health Qual Life Outcomes 2007;5. https://doi.org/10.1186/1477-7525-5-63.
- Greenwood KE, Sweeney A, Williams S, Garety P, Kuipers E, Scott J, et al. CHoice of Outcome In Cbt for psychosEs (CHOICE): the development of a new service user-led outcome measure of CBT for psychosis. Schizophr Bull 2010;36:126-35. http://dx.doi.org/10.1093/schbul/sbp117.
- Kay SR, Fiszbein A, Opler LA. The positive and negative syndrome scale (PANSS) for schizophrenia. Schizophr Bull 1987;13:261-76. https://doi.org/10.1093/schbul/13.2.261.
- Priebe S, Watzke S, Hansson L, Burns T. Objective social outcomes index (SIX): a method to summarise objective indicators of social outcomes in mental health care. Acta Psychiatr Scand 2008;118:57-63. http://dx.doi.org/10.1111/j.1600-0447.2008.01217.x.
- Lasalvia A, Bonetto C, Malchiodi F, Salvi G, Parabiaghi A, Tansella M, et al. Listening to patients’ needs to improve their subjective quality of life. Psychol Med 2005;35:1655-65. http://dx.doi.org/10.1017/S0033291705005611.
- Priebe S, Omer S, Giacco D, Slade M. Resource-oriented therapeutic models in psychiatry: conceptual review. Br J Psychiatry 2014;204:256-61. http://dx.doi.org/10.1192/bjp.bp.113.135038.
- Jauhar S, McKenna PJ, Radua J, Fung E, Salvador R, Laws KR. Cognitive–behavioural therapy for the symptoms of schizophrenia: systematic review and meta-analysis with examination of potential bias. Br J Psychiatry 2014;204:20-9. http://dx.doi.org/10.1192/bjp.bp.112.116285.
- Reininghaus U, McCabe R, Burns T, Croudace T, Priebe S. Measuring patients’ views: a bifactor model of distinct patient-reported outcomes in psychosis. Psychol Med 2011;41:277-89. http://dx.doi.org/10.1017/S0033291710000784.
- Priebe S, Golden E, McCabe R, Reininghaus U. Patient-reported outcome data generated in a clinical intervention in community mental health care – psychometric properties. BMC Psychiatry 2012;12. http://dx.doi.org/10.1186/1471-244X-12-113.
- Beecham J, Knapp M. Measuring Mental Health Needs. London: Gaskell; 2001.
- McCrone P, Patel A, Knapp M, Schene A, Koeter M, Amaddeo F, et al. A comparison of SF-6D and EQ-5D utility scores in a study of patients with schizophrenia. J Ment Health Policy Econ 2009;12:27-31.
- Calsyn RJ, Allen G, Morse GA, Smith R, Tempelhoff B. Can you trust self-report data provided by homeless mentally ill individuals?. Eval Rev 1993;17:353-66. https://doi.org/10.1177/0193841X9301700306.
- Goldberg RW, Seybolt DC, Lehman A. Reliable self-report of health service use by individuals with serious mental illness. Psychiatr Serv 2002;53:879-81. http://dx.doi.org/10.1176/appi.ps.53.7.879.
- Haidet KK, Tate J, Divirgilio-Thomas D, Kolanowski A, Happ MB. Methods to improve reliability of video-recorded behavioral data. Res Nurs Health 2009;32:465-74. http://dx.doi.org/10.1002/nur.20334.
- Thomas DR. A general inductive approach for analysing qualitative evaluation data. Am J Eval 2006;27:237-46. https://doi.org/10.1177/1098214005283748.
- Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3:77-101. https://doi.org/10.1191/1478088706qp063oa.
- Heritage J, Robinson JD, Elliott MN, Beckett M, Wilkes M. Reducing patients’ unmet concerns in primary care: the difference one word can make. J Gen Intern Med 2007;22:1429-33. https://doi.org/10.1007/s11606-007-0279-0.
- Greene S, Hill M, Greene S, Hogan D. Researching Children’s Experience: Approaches and Methods. London: Sage; 2005.
- Kennedy C, Kools S, Krueger R. Methodological considerations in children’s focus groups. Nurs Res 2001;50:184-7. https://doi.org/10.1097/00006199-200105000-00010.
Appendix 1 DIALOG software development focus group schedule
Introduction: purpose of focus group, rules of group, no right or wrong answers, disclosure of personal information, confidentiality, withdrawal, audio-recording.
-
[WARM-UP] ‘We’ll start off by asking everyone to say their name and to say whether they’ve seen an iPad before.’
-
[FOCUS: IDENTIFY PREFERRED INTERFACE FOR DIALOG SOFTWARE]
Present in order 1/2/3 in the first group, 2/3/1 in the next group, 3/2/1 in the next group, and repeat.
‘I am now going to show you one of three versions of the DIALOG software. You’ll see two more versions after seeing this one. I am interested in hearing your thoughts on the software. Please touch the screen and attempt to use the software without any instructions from me – this will help me to observe how intuitive the software is for you. Then I’ll ask you a few questions.’
-
What in particular do you notice about this version of the software?
-
What do you find easy or difficult about this version of the software?
-
What would you change about this version of the software if you were in charge of it?
-
How does this version compare to the previous version(s) you’ve seen?
-
What did you think about the colours used in the app?
-
How is the font (writing)?
-
How is the size of the font?
-
Can you show me how to display ratings from previous meetings using the software?
-
Can you tell me which colour represents today’s ratings and which colour represents previous meetings?
-
Can you show me how you would reorder the topics using the software?
-
Can you show me how you would select topics for further discussion using the software?
-
-
[FOCUS: HOW THE SATISFACTION QUESTION SHOULD BE PHRASED]
‘Let’s talk about how each question is put to you repeatedly in the questionnaire – ‘how satisfied are you with . . .’ What are the pros and cons of this phrasing? Is it best as it is, or would you prefer an alternative?’
-
Can you give me your opinion on the pros and cons of phrasing the question as ‘How happy are you with your [mental health]?’
-
Can you give me your opinion on the pros and cons of phrasing the question as ‘How do you rate your [mental health]?’
-
Can you give me your opinion on the pros and cons of removing the question phrasing, and instead presenting stand-alone topics, i.e. mental health, physical health, etc. in isolation?
-
Can you give me your opinion on the pros and cons of presenting the questions in the following format – ‘I am ____ with my [mental health]’?
-
Are there any other alternatives that we should consider?
-
-
[FOCUS: HOW THE ADDITIONAL HELP QUESTION SHOULD BE PHRASED]
‘Let’s talk about how the question on additional help is put to you in the questionnaire. I am going to present you with a few different options. Can you give me your comments on each, including the pros and cons?’
-
Do you need help in this area?
-
Do you need additional help?
-
Do you need different help?
-
Do you need extra help?
-
Do you need more help?
-
Additional/different help needed (stand-alone).
-
Any of the above with ‘would you like’ replacing ‘do you need’.
-
-
‘Let’s talk about the order in which the topics appear. Should we keep it as it is or should the topics appear in a different order?’
-
Is there any topic which you think should come first?
-
Is there any topic which you think should not come first?
-
Is there any topic which you think should come last?
-
Is there any topic which you think should not come last?
-
Are there any topics which you think belong together in the sequence?
-
Are there any topics which you think do not belong together in the sequence?
-
-
[FOCUS: DETERMINING THE BEST LABELS FOR THE RESPONSE OPTIONS]
‘Let’s talk about the labels corresponding with the numbers. At the moment we have 1 = couldn’t be worse, 2 = displeased, 3 = mostly dissatisfied, 4 = mixed, 5 = mostly satisfied, 6 = pleased and 7 = couldn’t be better. What do you think of these labels? We’ll talk about them generally, and then go through them one-by-one.’
-
Could the labels be improved or should they stay the same?
-
Do you feel that there is a suitable difference or distance between each label and the next?
-
What should the labels be at the extreme ends of the scale (i.e. 1 and 7)?
-
What should the labels be at 2 and 6?
-
What should the labels be at 3 and 5?
-
What should the label be for the midpoint, 4?
-
Close: summary by co-facilitator, debrief, compensation.
Appendix 3 Case vignettes for DIALOG+ training
Scenario 1
Mary is 47 years old and has been in the service for 5 years. She is on antipsychotic medication and has not experienced any psychotic symptoms for 6 months now. She lives with her partner of 12 years. Although she has been unemployed for 10 years, this does not bother her much. However, she continuously reports her physical health as a cause for concern (which she rates as 2). Mary is worried that she has put on a lot of weight over the last 5 years and is very eager to lose it. She has failed to stick to any diet and exercise plans made in previous meetings.
-
Eliciting:
-
Does not like being overweight and sometimes gets very down because of it.
-
Tries hard to diet but has not lost any weight.
What works:
-
There are days when she eats more healthily; this tends to be when she is busy, as she does not find herself snacking on biscuits or drinking fizzy drinks.
-
Regarding exercise, she walks to the shops or to the bus stop. These are the only occasions she can think of, and this is just because she has to.
-
-
Best-case scenario:
-
Would no longer eat biscuits or drink fizzy drinks.
-
Would be back at 9 stone like she was when she was younger.
Small changes:
-
If she lost just 1 pound.
-
She could resist biscuits and soft drinks in her free time.
-
-
What the patient can do:
-
Write down a plan for diet and exercise with specific times.
-
Go for at least a 30-minute walk every day.
What the clinician can do:
-
Very eager to try any suggestions offered by the clinician (for example, clinician may suggest attending a weight loss group).
What other people can do:
-
If her family were to cut out biscuits and fizzy drinks too, there would be no more in the house to tempt her.
-
Scenario 2
Jonathan is 33 years old. He has a degree in computer science and previously had a job inputting data. However, he lost his job 3 years ago as he was continuously failing to meet targets. He currently feels that he could not cope with the demands of a full-time job as he lacks the energy needed. He rates his satisfaction with medication as 2, commenting that it is the main issue for him. Currently, he is taking two different antipsychotics. Moreover, he drinks alcohol almost daily. He finds he is sleeping too much and always feels very tired throughout the day. This is affecting the rest of his life.
-
Eliciting:
-
Sleeps for too long and ends up getting nothing done.
-
This has also affected the rest of his life.
-
Always tired and does not have the energy to work or even socialise.
-
Believes this is due to his medication.
What works:
-
He recently feels he is doing better with his mental health.
-
-
Best-case scenario:
-
Being able to sleep regularly for 8 hours every night.
-
Would be full of energy throughout the day so that he can maintain a full-time job and his normal social activities.
Small changes:
-
Would no longer fall asleep after lunch.
-
-
What the patient can do:
-
Really needs to give himself 8 hours of sleep every night and get into a regular sleep routine.
-
He should reduce drinking, especially during weekdays.
What the clinician can do:
-
Would like a medication review as soon as possible.
What other people can do:
-
His sister could help him stay awake during the day by encouraging him to follow a routine, so that he can then sleep for 8 hours at night.
-
Scenario 3
Only if more time is available
Rodd is 21 years old. He feels his family is generally very supportive, although his mother can be a little overprotective and she does not encourage him to be more independent (satisfaction with family is rated as 4). He reports not being able to find a job as his main source of dissatisfaction (job situation rated as 1). Rodd has previously worked in retail and is looking for another job in this area. However, he complains of lacking confidence and not having the organisational skills needed to find a job.
-
Eliciting:
-
Wants to find a job but is getting nowhere.
-
Lacks confidence and the organisational skills needed to find one.
What works:
-
Worked in retail 5 years ago and that gives him some skills.
-
This experience helps him be more confident in job searching.
-
This thought helps him to cope sometimes.
-
-
Best-case scenario:
-
Having a secure job, maybe in retail, with a good enough wage to live on.
Small changes:
-
Would manage to send off another job application.
-
Learning what he needs to know to face a job interview.
-
-
What the patient can do:
-
Go to the job centre.
-
In the meantime, get some voluntary work to get some more experience.
What the clinician can do:
-
Having someone check his CV and applications may help.
-
Provide a list of voluntary organisations.
What other people can do:
-
Perhaps his mother could give him a bit more encouragement with the job applications. This may help with his confidence.
-
Scenarios for the DIALOG+ refresher session: instructions for trainers
During the refresher training session the trainer will listen to an audio-taped session where the clinician used the DIALOG+ procedure for the first time with a patient. Feedback will be provided by the trainer, and time will be allowed for the clinician to ask questions.
In addition, clinicians should complete a role play of at least one clinical scenario during the refresher session. Note that clinicians will have the option either to role-play specific issues they may have faced in their first experiences of using DIALOG+, or to role-play more complex scenarios to strengthen their skills.
Please remember to encourage the clinician in the use of additional probing, including:
-
‘what else’ questions to elicit more detailed answers
-
‘how will you know’ questions to elicit answers expressed in behavioural terms
-
‘what would happen/would you do instead’ questions to elicit an answer that describes a desired change instead of the absence of an undesired problem.
Scenario 1
Eve is 27 years old. Her satisfaction with her mental health is low (score of 2). She is often distressed by very loud voices that can be very offensive to her. She is taking her medication regularly and has noticed an improvement in the frequency of these voices. However, sometimes the voices come back and this is significantly limiting her life, as she becomes distracted at work and in social situations. She feels she would like more help if possible, but she doubts there is something that could be done for her. She says she has no control over her voices and believes she cannot help it.
When using the four-step approach with Eve, you may find that she sometimes struggles to see any possible options for herself, which can make agreeing on any plan of action harder. Also, her answers can sometimes be vague or negative (expressing what she does not want instead of what she wants).
-
Eliciting:
-
Often hears loud voices throughout the day.
-
They can be offensive to her and always make her very upset.
-
These voices are really affecting her life.
-
She becomes distracted at work and in social situations to the point where she often cannot concentrate on anything else.
What works:
-
Drinking alcohol reduces the frequency of the voices and helps her to cope better with them.
-
The medication may have helped a little.
-
However, nothing can stop them completely.
-
-
Best-case scenario:
-
(first response) Very vague, would no longer hear the voices.
-
(further prompting) Would be able to not react in such an aversive way when hearing voices, so as to enjoy the key aspects of her life without distraction, especially her social life and leisure activities.
Small changes:
-
Learning a few coping strategies so that she does not need to avoid situations for fear of losing control.
-
-
What the patient can do:
-
(first response) There is nothing that can be done.
-
(further prompting) Just has to learn to cope with the voices. She should try coping strategies that her psychological therapist suggested to her, such as: writing the content of the voices down, checking with other people if they hear them as well.
What the clinician can do:
-
Not sure how helpful they would be.
-
Willing to try some suggestions.
-
Discuss a voice-hearing group with the clinician and find out the details.
What other people can do:
-
Family may be supportive and help her in coping if informed about her experiences.
-
Scenario 2
Tarik is 60 years old. He rates most of the domains positively, including mental health. He is currently living alone following the death of his partner 3 months ago. He has not experienced any psychotic symptoms for 12 months. However, living alone makes him feel quite lonely and depressed at times. Tarik has one daughter who visits occasionally and has a very busy life. He also has two grandchildren. He complains he cannot get to his daughter’s house by public transportation so he rarely sees them.
When trying to use the four-step approach with Tarik, you may find that he does not always find it easy to answer your questions. In fact, he believes that there is little he or anyone else can do to change his situation. He says he is very low in mood because of the loss he recently experienced and that this cannot be changed. Sometimes there are ‘do not know’ answers, and Tarik may struggle to see a future.
-
Eliciting:
-
Partner passed away 3 months ago.
-
Has been feeling sad and lonely since then.
-
Never gets to see his daughter and granddaughters because they live far away and they are too busy to come and see him.
-
Cannot go to them because he cannot drive and it is difficult by public transport. He is, therefore, feeling very lonely all the time.
What works:
-
(first response) Reading distracts him.
-
(further prompting) Visits from his family help him cope. But this rarely happens.
-
-
Best-case scenario:
-
(first response) His partner would still be alive.
-
(further prompting) He would like to find the motivation to carry on.
Small changes:
-
(first response) Does not know/struggles to answer.
-
(further prompting) Going out for a walk or similar things once a day.
-
(further prompting) An extra visit from his family each month could help a little.
-
-
What the patient can do:
-
Go out at least once a day. Maybe set a time of day to do it.
-
Spend some time talking over the telephone with his family.
What the clinician can do:
-
(first response) Nothing can bring back his partner.
-
(further prompting) He wonders if the service can offer alternative solutions to public transportation for him to visit his family.
What other people can do:
-
(first response) Nobody can change his situation.
-
(further prompting) His family could spend more time seeing him. But he thinks that they are very busy and that this may be unlikely to happen. However, he thinks he should talk to them about it.
-
Scenario 3
Francesca is 54 years old. She is single and unemployed. She lives in supported accommodation but has started to feel threatened by another person living there. Francesca sees this person observing her and talking about her with others in a very derogatory way. In the last couple of weeks she has noticed a negative change in the attitude of other people living in her accommodation, and she thinks they are plotting against her (although she is not sure what). She has started to become fearful for her own safety and has rarely left her room during the last week. However, she also reported similar experiences at the last place she lived, 3 months ago, when a new person moved in.
Francesca is certain that these people will try to harm her in some way, and she is not willing to accept alternative interpretations of what is going on.
-
Eliciting:
-
There is a person where she lives that she feels threatened by.
-
Sees this person watching her and talking about her with others in a nasty way.
-
Other people living there have started to avoid her, whereas before they got on very well.
-
Convinced they are all plotting against her in some way, but not sure what.
-
She has become very worried for her safety and in the last week has only left her room for food or when she needs to go somewhere.
What works:
-
In general, she likes the place where she lives.
-
There are a couple of people that are still nice to her.
-
-
Best-case scenario:
-
This person would leave her place, and she would get on with everyone that she lives with.
Small changes:
-
The other people stop avoiding her and talk to her more.
-
-
What the patient can do:
-
There is little she can do.
-
She should consider changing flat.
-
Maybe she should talk to the two people that are still nice to her to find out more.
What the clinician can do:
-
Explore the possibility of alternative accommodation.
-
Talk to this person.
What other people can do:
-
Maybe the housing staff could do more to help her. But she does not know what.
-
Appendix 4 Instructions for administering the control condition
Please read the following information carefully, as it is important that all clinicians follow the same standard procedure.
The service user should complete the DIALOG scale once a month, after a routine consultation with you, for a period of 6 months. You should treat the service user as usual – the only difference is that the service user should be given the opportunity to complete the DIALOG scale at the end of the meeting.
It is essential that the service user always completes the DIALOG scale:
-
on their own
-
at the end of the consultation (never before).
Please note: the ratings should never be discussed.
The DIALOG scale
The DIALOG scale is an assessment of the service user’s satisfaction with 11 domains relating to life and treatment. On initiating the software, one is presented with the first of these domains, mental health. The remaining 10 domains are visible underneath, in truncated form.
The service user is invited to rate his/her satisfaction with mental health on a scale of 1 (totally dissatisfied) to 7 (totally satisfied). The procedure is the same for the remaining 10 domains: physical health, job situation, accommodation, leisure activities, relationship with partner/family, friendships, personal safety, medication, practical help and meetings with clinicians.
In order to proceed to a different domain, the service user should press on the required domain from the list on the left. The selected domain will become active, with all other domains truncated. Responses to all previously completed domains are still visible and gradually build an overview of the assessment.
The service user can choose not to answer a particular domain if he/she wishes. In order to undo a rating, one can press down on the slider until the value disappears.
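For readers interested in how the scale described above could be represented in software, the sketch below is a minimal, illustrative data model consistent with that description (11 domains, optional satisfaction ratings from 1 to 7, and the ability to undo a rating). It is not the app’s actual implementation; the type and member names (e.g. Domain, DialogAssessment) are hypothetical.

```swift
import Foundation

/// The 11 DIALOG domains, in the order described above.
enum Domain: String, CaseIterable {
    case mentalHealth = "Mental health"
    case physicalHealth = "Physical health"
    case jobSituation = "Job situation"
    case accommodation = "Accommodation"
    case leisureActivities = "Leisure activities"
    case partnerFamily = "Relationship with partner/family"
    case friendships = "Friendships"
    case personalSafety = "Personal safety"
    case medication = "Medication"
    case practicalHelp = "Practical help"
    case meetingsWithClinicians = "Meetings with clinicians"
}

/// One completed (or partially completed) DIALOG assessment.
struct DialogAssessment {
    // A missing entry means the service user chose not to answer that domain.
    private(set) var ratings: [Domain: Int] = [:]

    /// Record a satisfaction rating from 1 (totally dissatisfied) to 7 (totally satisfied).
    mutating func rate(_ domain: Domain, as value: Int) {
        guard (1...7).contains(value) else { return }
        ratings[domain] = value
    }

    /// Undo a rating, mirroring the press-and-hold gesture on the slider.
    mutating func clearRating(for domain: Domain) {
        ratings[domain] = nil
    }
}
```

For example, `assessment.rate(.mentalHealth, as: 5)` would record a rating of 5 and `assessment.clearRating(for: .mentalHealth)` would remove it again, leaving that domain unanswered.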
What to do when using the iPad and DIALOG scale for the first time
The very first time the service user uses the iPad to complete the DIALOG scale, please take some time to help him/her become familiar with the software: show them how the software works (e.g. choosing a response, moving on to the next question) and explain the questions (what is meant by satisfaction), the 1 to 7 scale and its response options, and the domains themselves (e.g. job situation and practical help are not necessarily self-explanatory). Give the service user the opportunity to ask questions.
Note: The touch screen responds best to light touches from the skin and may not respond to heavy presses or long nails.
When showing how the software works, please do not use the service user’s ratings to complete the scale; instead, run a trial session for demonstration purposes. Please remember to cancel the session at the end of the demonstration (i.e. do not save) to avoid having to manually remove the demonstration ratings.
Please make it clear that the DIALOG scale will be used as a questionnaire at the end of a meeting and that after the completion of the scale there will not be a discussion about the ratings.
Administration procedure
Note: When meeting with a service user with the intention of asking them to complete the DIALOG scale, please make sure you allow an appropriate amount of time (e.g. 10 minutes) at the end of the meeting for this (obviously the time needed may vary between service users and as they become more familiar with completion of the scale).
-
Open the DIALOG scale on the software.
-
Pass the iPad to the service user and ask them to complete the scale. Remind them that they are to complete the scale independently and their ratings will not be discussed with you.
-
Leave the room (if not possible please do not observe the service user completing the scale).
-
Once the client has finished completing the scale, select ‘finish’.
-
Lock the appointment.
-
Synchronise the appointment.
Please refer to the technical manual for navigating the software.
What should I do if the service user asks for help when completing the scale?
The service user needs to complete the scale alone, without your help. If the service user is struggling to operate the iPad and complete the scale, or asks for help, please tell them to try it for a few minutes, do the best they can and not to worry if they cannot complete it. You can mention that these are the research team’s strict instructions and that you are obliged to follow them.
Although it may be tempting/easier to help the service user, please resist. The only thing that is important here is that the patient receives an iPad and gets the opportunity to complete a rating scale for a few minutes. Do not worry if the patient returns it without completing anything. Just make sure that the session is saved, if the patient has not done this already.
Why is it so important for the service user to complete the scale alone?
As you know, you are using the ‘DIALOG scale’ with the service users on your caseload who are involved in this research project. Other clinicians in the study are using ‘DIALOG+’. It is important that the two types of treatment are clearly distinct from each other, so that the researchers can make comparisons when the study is over. ‘DIALOG+’ involves clinicians and service users using the iPad together, so the ‘DIALOG scale’ should be completely different, with the service user using the iPad alone.
Appendix 5 DIALOG+ adherence scale
Appendix 6 DIALOG+ rating sheet
Appendix 7 Control group rating sheet
Appendix 8 DIALOG+ focus group schedule: patient experiences
Introduction: purpose of focus group, rules of group, no right or wrong answers, disclosure of personal information, confidentiality, withdrawal, audio-recording.
-
[WARM-UP] ‘We’ll start off by asking everyone to say their name and to say how long you’ve been working with your care co-ordinator.’
-
[FOCUS: WHAT MEETINGS WERE LIKE BEFORE DIALOG+]
‘As part of this research, you discussed a set of topics with your care co-ordinator that were presented on an iPad, every month or so during your meetings. But before we get into that, can you tell me what your meetings were like before you started using the iPad?’
-
Can you talk me through how the meeting would usually pan out?
-
How did you and your care co-ordinator decide on what to discuss?
-
What would you usually talk about?
-
Did you get a chance to talk about the topics that were most important to you?
-
-
[FOCUS: WHAT MEETINGS WERE LIKE USING DIALOG+ – GENERAL]
‘Okay. And more recently, you started using the iPad to discuss a set of topics during your meeting, in an approach called DIALOG+. Can you tell me what that was like?’
-
Can you talk me through how you and your care co-ordinator would use this?
-
In what way was this different from how you did things before?
-
How did you feel about doing things this way?
-
Did you get a chance to talk about the topics that were most important to you?
-
Was this approach helpful or not helpful?
-
Which approach did you prefer?
-
-
[FOCUS: RATING ONESELF ON A SCALE]
‘What was it like to rate yourself on a scale of 1 to 7 for things like mental health, physical health, job situation etc.?’
-
Was this easy or difficult?
-
What were the advantages/disadvantages of doing this?
-
Did you get used to rating these topics on a monthly basis?
-
Did you find it repetitive, or did you find it useful to follow a structure?
-
For each item, you were asked if you wanted more help in that area. Was this helpful to you in any way?
-
-
[FOCUS: DECIDING ON TOPICS FOR FURTHER DISCUSSION]
‘After rating the 11 topics . . . Then what happened?’
-
Can you talk me through how it would usually go?
-
Did you decide to discuss three or four of the topics in more detail?
-
How did you choose which ones to talk about for the rest of the meeting?
-
Did you ever compare ratings from that day’s session with ratings from a previous session? (If yes, was this helpful or not helpful?)
-
-
[FOCUS: FOUR-STEP APPROACH TO DISCUSSING PROBLEMS – GENERAL]
‘After you had chosen your top three or four topics . . . Then what happened?’
-
Can you talk me through how it would usually go?
-
Can you think of one of the topics you chose and describe your conversation with your care co-ordinator to me?
-
Can you tell me about the types of questions your care co-ordinator asked you?
-
Did you follow any particular steps or structure together?
-
Did you notice that this was different in any way to how you’d discuss things with your care co-ordinator previously?
-
Did you find these discussions helpful?
-
-
[FOCUS: FOUR-STEP APPROACH TO DISCUSSING PROBLEMS – SPECIFICS]
-
Step one: Can you tell me in a bit more detail about how you discussed problems?
-
Step two: Can you tell me about how you talked about improving the situation?
-
Step three: How did you discuss what you could do next to help the situation, and who would help?
-
Step four: How did you finish up talking about the topic?
-
Did you agree on a decision or some kind of ‘action plan’?
-
How did you come to this agreement together?
-
Did you review this plan at the beginning of the next session?
-
-
[FOCUS: DIALOG SOFTWARE]
‘Did you share the iPad and look at it together, or was it mostly your care co-ordinator using it?’
-
What did you notice about the software itself (i.e. the design)?
-
What did you think about the colours used in the app?
-
How was the font (writing)?
-
How was the size of the font?
-
Is there anything missing from the app that could improve it or makes things clearer, or anything you would change?
-
-
[FOCUS: OVERALL THOUGHTS ON EXPERIENCE OF USING DIALOG+]
‘If you had to sum up your experience of using DIALOG+ in your meetings, what would you say?’
-
Have you noticed anything different in your life since using this new approach?
-
Has using this approach changed your relationship with your care co-ordinator in any way?
-
Would you like to continue using this approach on a monthly basis, or would you prefer not?
-
Close: summary by co-facilitator, debrief, compensation.
Appendix 9 DIALOG+ focus group schedule: clinician experiences
Introduction: purpose of focus group, rules of group, no right or wrong answers, disclosure of personal information, confidentiality, withdrawal, audio-recording.
Key questions
DIALOG+ intervention (therapeutic element)
Subtopics | Ideas for questions | Probes |
---|---|---|
General overview | What was your experience of using DIALOG+? What is your understanding of the four-step approach? What is your experience of using the four-step approach? | See subtopics. What is the purpose of the four-step approach? Advantages/disadvantages of the approach |
Change in method/style | How did you find incorporating this method into your usual style? How did you introduce the new structure to your clients? How do you think your clients responded to this new style? What was the impact of structuring the communication between you and your clients in this way? What was the impact of introducing an iPad into your routine meetings? How did the iPad and DIALOG+ impact on your therapeutic relationship? Were there any patients that DIALOG did not work well with? Was DIALOG+ compatible with your routine meetings/way of working? | Were there any difficulties? Was it a significant shift from your usual style? Did you feel more confident the more you used DIALOG+? Do you think your clients became more proactive in discussions? Were DIALOG+ sessions more client-led? How is it effective? If not, how could it be more compatible? |
Assessment of satisfaction | Was it useful for you to see how your clients rate their satisfaction? | How was it useful? Any other uses apart from DIALOG+? |
Initiating a review | How did you review action items from a previous session? Was the ‘comparison of ratings’ feature useful? | Did you use the ‘comparison of ratings’ feature regularly? Did you discuss changes in ratings with your clients? |
Four-step approach | How did you find the process of the four steps? How did it feel to have a discussion and only record brief action items? Do you feel action items were mutually agreed? Were the example questions for each step useful? Do you feel the process varied significantly between individual clients? | Were there any difficulties with particular steps? Did you make notes on paper? Did you ask the questions in the style of the examples or just use them for reference? Did certain clients have difficulties with particular steps compared with others? Likes/dislikes |
Comparison to before | What effect did using DIALOG+ during meetings with your clients have compared with meetings before DIALOG+? Have you noticed any changes in your clients’ behaviour and communication in your meetings since using DIALOG+? Did this method change how information was elicited from your clients? | Is there a reason you think that is? What kind of changes? What has caused these changes? Can you give an example? Did you explore issues more than you usually would? Did you ask questions differently to how you usually do? Did your clients open up more compared with before DIALOG+? |
Frequency of use | What do you think of the frequency you were asked to use this with your clients? Would you continue using the DIALOG+ procedure with your clients? | Do you think a period of 6 months was enough? If no/mixed – under what circumstances would you continue to use it? You were encouraged to continue using DIALOG+ with your clients after the 6-month period; what factors influenced your decision not to? |
Design of DIALOG app/software (technological element)
Subtopics | Ideas for questions | Probes |
---|---|---|
General overview | What are the advantages and disadvantages of the current design? (Visual elements, wording, layout) | What would you change about the current design? |
Ease of use | How did you find navigating the software? How did your clients respond to the software? | Did you have to refer to the technical manual? Did you find it intuitive to use? How was the iPad for shared viewing? |
11 items | Did you feel the 11 items encompassed all areas of an individual’s life? | Would you change any of the 11 items? |
Rating scale | Did your clients understand the rating scale? | Would you change the wording? |
Four-step approach | Was the ‘i’ button useful for the four-step approach? | Was it useful to have this on the app? Did you use the ‘i’ tab? Did you consult the summary pages of the manual? |
Visual elements | Do you have any comments regarding the visual layout of the software? | For example, colours, font, wording |
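
The questions in this table refer to the same rating structure covered in the patient schedule: 11 life-domain items, each rated on a scale of 1 to 7 with an optional request for more help, and a ‘comparison of ratings’ view against the previous session. As a purely illustrative aid, the sketch below shows one way such session data might be represented and compared. It is not taken from the DIALOG+ software itself (the abbreviations list suggests an HTML5/JSON implementation); all names and types here are assumptions.

```typescript
// Illustrative sketch only: a hypothetical data model for one DIALOG+ session.
// None of these names are taken from the DIALOG+ source code.

interface DialogItem {
  domain: string;                      // e.g. "mental health", "physical health", "job situation"
  rating: 1 | 2 | 3 | 4 | 5 | 6 | 7;   // patient's satisfaction rating on the 7-point scale
  wantsMoreHelp: boolean;              // whether the patient asked for more help in this area
}

interface DialogSession {
  date: string;                        // date of the monthly meeting
  items: DialogItem[];                 // the 11 life-domain ratings
}

// Hypothetical helper mirroring the 'comparison of ratings' feature:
// returns the change in rating for each domain since the previous session.
function compareSessions(previous: DialogSession, current: DialogSession): Record<string, number> {
  const changes: Record<string, number> = {};
  for (const item of current.items) {
    const before = previous.items.find(p => p.domain === item.domain);
    if (before) {
      changes[item.domain] = item.rating - before.rating;
    }
  }
  return changes;
}
```

In this sketch, compareSessions simply returns the change in each domain’s rating since the previous meeting, which is the kind of information the ‘comparison of ratings’ feature displays to patient and clinician.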
Training and support
Subtopics | Ideas for questions | Probes |
---|---|---|
General overview | Do you have any feedback about the training session? | |
Training session | How well did you understand the intervention at the end of the session? Did you feel supported throughout the intervention period? Was it useful to be provided with feedback after the audio-recording? | Were all your questions answered satisfactorily? What improvements would you have made to the training? Was one session enough or would you have preferred more? Did you feel that you could arrange follow-up training whenever you required it? |
Manual | How did you find the DIALOG+ manual as an accompaniment to the software? | Comprehensive? Did you refer to the manual during the 6-month trial period? Would you make any changes to the manual? |
Close: summary by co-facilitator, debrief, compensation.
List of abbreviations
- app: application
- CANSAS: Camberwell Assessment of Need Short Appraisal Schedule
- CBT: cognitive–behavioural therapy
- CHOICE: Choice of Outcome in CBT for Psychosis
- CI: confidence interval
- CMHT: community mental health team
- CONSORT: Consolidated Standards of Reporting Trials
- CSQ-8: Client Satisfaction Questionnaire
- CSRI: Client Service Receipt Inventory
- ELFT: East London NHS Foundation Trust
- EQ-5D: EuroQoL-5 Dimensions
- FG: focus group
- FOCUS: Function and Overall Cognition in Ultra-high-risk States
- GP: general practitioner
- GSS: General Self-efficacy Scale
- HTML5: hypertext markup language 5
- ICD-10: International Statistical Classification of Diseases and Related Health Problems, Tenth Revision
- JSON: JavaScript Object Notation
- MANSA: Manchester Short Assessment of Quality of Life
- MVC: model–view–controller
- NRES: National Research Ethics Service
- P: participant
- PANSS: Positive and Negative Syndrome Scale
- PCTU: Pragmatic Clinical Trials Unit
- QMUL: Queen Mary University of London
- SD: standard deviation
- SFT: solution-focused therapy
- SIX: Objective Social Outcomes Index
- STAR-C: Scale for Assessing Therapeutic Relationships in Community Mental Health Care, clinician version
- STAR-P: Scale for Assessing Therapeutic Relationships in Community Mental Health Care, patient version
- WEMWBS: Warwick–Edinburgh Mental Well-Being Scale