Notes
Article history
The research reported in this issue of the journal was funded by the HTA programme as project number 11/129/195. The contractual start date was in March 2013. The draft report began editorial review in February 2015 and was accepted for publication in April 2015. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
Barnaby C Reeves reports receiving grants from the National Institute for Health Research Health Technology Assessment programme during the conduct of the study; the National Institute for Health Research grants (paying for his time through his academic employer) for various ophthalmological studies, including ones investigating wet age-related macular degeneration; personal fees from Janssen-Cilag outside the submitted work; and membership of the Health Technology Assessment Commissioning Board and Systematic Reviews Programme Advisory Group. In particular, he is a coinvestigator on the National Institute for Health Research-funded IVAN trial (a randomised controlled trial to assess the effectiveness and cost-effectiveness of alternative treatments to Inhibit VEGF in Age-related choroidal Neovascularisation; ISRCTN92166560) and is continuing follow-up of the IVAN trial cohort. Ruth Hogg reports she received grants and personal fees from Novartis Pharmaceuticals UK, outside the submitted work. Chris A Rogers reports she received a fee from Novartis Pharmaceuticals UK for a lecture unrelated to this work. Simon P Harding reports grants from the National Institute for Health Research during the conduct of the study. Usha Chakravarthy reports membership of the Health Technology Assessment Interventional Procedures Panel.
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2016. This work was produced by Reeves et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Introduction
Background
Wet, or neovascular, age-related macular degeneration (nAMD) is a common condition that can cause severe sight loss and blindness. It occurs as a result of a pathological process in which new blood vessels arising from the choroid breach the normal tissue barriers and come to lie within the subpigment epithelial and subretinal spaces. These new blood vessels leak fluid and, because they are fragile, can bleed easily. The collection of fluid or blood between the tissue layers and within the neural retina is incompatible with normal eyesight. Other variants, which are usually managed as nAMD, include (1) an abnormal vascular complex arising de novo from the retinal circulation, known as retinal angiomatous proliferation, and (2) intrachoroidal aneurysmal dilatation(s) of the vasculature, known as polypoidal choroidopathy.
Currently, patients with nAMD (or nAMD variants) are treated with intravitreal injections of anti-vascular endothelial growth factor (VEGF) drugs1 (drugs that inhibit VEGF). The most commonly used drugs are ranibizumab (Lucentis®, Novartis Pharmaceuticals UK), bevacizumab (Avastin®, Roche Products Limited) and aflibercept (Eylea®, Bayer). Ranibizumab prevents sight loss in over 90% of eyes with nAMD when given as monthly intravitreal injections for up to 2 years.2,3 Bevacizumab (unlicensed for nAMD) and aflibercept are non-inferior to ranibizumab in maintaining visual acuity after 1 year of treatment,4–6 and bevacizumab is also non-inferior to ranibizumab after 2 years of treatment.7–9
Anti-VEGF drugs render the nAMD lesion quiescent by making the leaky vessels competent. However, adequate concentrations of the drug need to be present in order to maintain the neovascular complexes in a quiescent non-leaky state and to ameliorate the exudative manifestations. Once the macula has been rendered fluid free, cessation of treatment is the norm and patients are monitored for relapse at regular clinic visits, which are usually monthly. Monitoring involves visual acuity checks, clinical examination and optical coherence tomography (OCT), with treatment being restarted if required. There is now evidence that intensive regular monthly review to detect recurrence, restarting treatment when necessary, can result in functional outcomes similar to those observed in industry-sponsored trials of ranibizumab in which patients received monthly treatment over 2 years.1,3,8 However, regular monthly review in the Hospital Eye Services (HESs), even without treatment, blocks clinic space, uses valuable resources, is expensive and is burdensome to patients and their carers. Ophthalmologists have also investigated giving ‘prophylactic’ treatment to quiescent eyes, extending the interval between clinic visits provided the disease remains quiescent, in order to lessen the burden of regular visits to patients and to the NHS. A disadvantage of this approach is that it can lead to unnecessary overtreatment.
Existing evidence
There is currently no evidence about the effectiveness of community follow-up by optometrists for nAMD. There is, however, evidence about the effectiveness of optometrists providing ‘shared care’ with the HESs for glaucoma and diabetic eye disease, and about the training programmes used to support these schemes. Evaluations comparing management by optometrists and ophthalmologists have shown acceptable levels of agreement between the decisions made in the context of glaucoma and accident and emergency services.10,11 Thus, there are existing models of shared care management that are well established through formal evaluation. A recent review has outlined different approaches used to increase the capacity of nAMD services across the UK.12 The case studies in this review show a variety of scenarios, many involving extended roles for optometrists and nurse practitioners, but these occur within the HES. Some studies have also evaluated the potential of remote care, but these approaches involve assessment, by an ophthalmologist specialising in medical retina working in the HES, of OCTs captured by outreach services.13,14
Taking and interpreting retinal images are skills that can be easily taught (the former is usually carried out by technicians in the HES) and, therefore, the final evaluation in a telemedicine scenario need not always involve an ophthalmologist. Many of the hospital-based scenarios involve specialist optometrists and nurse practitioners making clinical decisions, although the effectiveness of these management pathways has not yet been formally evaluated. Transferring the care of patients who are not receiving active treatment for nAMD requires the ability to interpret signs in the fundus of the eye (through either clinical examination or fundus photography) combined with an examination of OCT images of the macula. It also requires the facility for patients to be returned seamlessly and expeditiously to the HES when disease reactivates. Optometrists represent a highly skilled and motivated workforce in the UK and the vast majority of optometric practitioners are based in the community. A number of UK community optometric practices have already invested in the technology for performing digital fundus photography and OCT, and use these technologies to make decisions about diagnosis and the need to refer a patient to the HES. However, the skill and ability of optometrists to differentiate quiescent nAMD from active nAMD have not been evaluated. In addition, to the best of our knowledge, no shared care management scheme for nAMD has been formally evaluated.
Relevance to the NHS/health policy
Even when nAMD has been successfully controlled by treatment with an anti-VEGF drug, clinicians continue to review patients regularly because there is a very high risk of relapse, evidenced by the proportion of patients who remain in follow-up for many years after initiation of therapy.15 One of two strategies is typically used: (1) monthly review until active disease recurs, termed the ‘pro re nata regimen’, or (2) the ‘treat and extend regimen’. The latter method requires that treatment is administered even if there is no fluid at the macula, but the subsequent review interval is extended by approximately 2 weeks. The pro re nata regimen is very burdensome for patients and for the NHS, and the treat and extend regimen leads to overtreatment, with its attendant risks and additional expense.
If monitoring of the need for retreatment by community optometrists could be shown to have similar accuracy to the monitoring of the need for retreatment by ophthalmologists in the HES, there would be a strong impetus to devolve monitoring of patients whose disease is quiescent to the community setting. Community optometrists have the necessary training to recognise nAMD (they are responsible for the majority of referrals to the HES) but would need to be trained to acquire OCT images and to interpret them in order to assess the need for retreatment. If optometrists can be trained to perform these tasks and make the correct clinical decision, they could manage patients with quiescent disease effectively in the community until reactivation occurs, at which point rapid referral to the HES could be initiated.
The advantages of devolving monitoring to community optometrists include freeing clinic capacity in the overstretched NHS and reducing travel time for patients.
Aims and objectives
The aim of the ECHoES (Effectiveness, cost-effectiveness and acceptability of Community versus Hospital Eye Service for follow-up) trial was to test the hypothesis that, compared with conventional HES follow-up, community follow-up by optometrists (after appropriate training) is not inferior for patients with nAMD with stable vision.
This hypothesis was tested by comparing decisions about the need for retreatment made by samples of ophthalmologists working in the HES and optometrists working in the community, using clinical vignettes and images generated in the IVAN (a randomised controlled trial to assess the effectiveness and cost-effectiveness of alternative treatments to Inhibit VEGF in Age-related choroidal Neovascularisation) clinical trial [Health Technology Assessment (HTA) programme reference: 07/36/01; International Standard Randomised Controlled Trial Number (ISRCTN) 92166560]. Retreatment decisions made by participants in both groups were validated against a reference standard (see Chapter 2, Reference standard).
The trial had five specific objectives:
-
to compare the proportion of retreatment decisions classified as ‘correct’ (against the reference standard, ‘active’ vs. ‘suspicious’ or ‘inactive lesion’) made by optometrists and ophthalmologists
-
to estimate the agreement, and nature of disagreements, between retreatment decisions made by optometrists and ophthalmologists
-
to estimate the influence of vignette clinical and demographic information on retreatment decisions
-
to estimate the cost-effectiveness of follow-up in the community by optometrists compared with follow-up by ophthalmologists in the HES
-
to ascertain the views of patient representatives, optometrists, ophthalmologists and clinical commissioners on the proposed shared care model.
Chapter 2 Methods
The results of the cost-effectiveness analysis have been published open access in BMJ Open.16
Study design
The ECHoES study is a non-inferiority trial designed to emulate a parallel-group design (Table 1). However, as all vignettes were reviewed by both optometrists and ophthalmologists in a randomised, balanced, incomplete block design,17,18 the ECHoES trial is more analogous to a crossover trial than to a parallel-group trial. This trial is registered as ISRCTN07479761.
Research question component | ECHoES (crossover) trial | Conventional (parallel-group) trial |
---|---|---|
Population | Vignettes representing the clinical features of patients with quiescent nAMD being monitored for reactivation of disease | Patients with quiescent nAMD being monitored for nAMD reactivation |
Intervention | Assessment of vignettes by a trained optometrist to identify nAMD reactivation | Monthly review by a community optometrist, after training, to detect nAMD reactivation |
Comparator | Assessment of vignettes by a trained ophthalmologist to identify nAMD reactivation | Monthly review by an ophthalmologist in the HES to detect nAMD reactivation |
Outcome | Correct identification of reactivated nAMD (presumed to lead to appropriate treatment to preserve visual acuity) | Visual acuity |
The trial aimed to quantify and compare the diagnostic accuracy of ophthalmologists and optometrists in assessing reactivation of quiescent nAMD lesions, judged against the reference standard (see Reference standard). A balanced design of this kind was possible only for certain combinations of the total number of vignettes, the number of participants and the number of vignettes per participant. For the ECHoES trial, a total of 288 vignettes were created. Forty-eight ophthalmologists and 48 optometrists each assessed a sample of 42 vignettes, and each vignette was assessed by seven ophthalmologists and seven optometrists. Each sample of 42 vignettes was assessed in the same order by one optometrist and one ophthalmologist, both selected randomly from their cohorts.
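As a quick arithmetic check on the incomplete block design described above, the total number of assessments within one professional group can be counted two ways (by participant and by vignette); a minimal Python sketch:

```python
# Consistency check for the balanced incomplete block design:
# assessments counted by participant must equal assessments counted by vignette.
n_vignettes = 288
participants_per_group = 48    # 48 ophthalmologists; likewise 48 optometrists
vignettes_per_participant = 42
assessments_per_vignette = 7   # within each professional group

by_participant = participants_per_group * vignettes_per_participant  # 48 * 42
by_vignette = n_vignettes * assessments_per_vignette                 # 288 * 7
assert by_participant == by_vignette == 2016
```

Both counts give 2016 assessments per group, confirming the design is internally consistent.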
Vignettes
A database of vignettes was created for the ECHoES trial using images collected in the IVAN trial (HTA reference: 07/36/01; ISRCTN 92166560), which included a large repository of fundus images and OCT images from eyes with varying levels of lesion activity. In the IVAN trial, OCT and fundus images were captured from 610 participants every 3 months for up to 2 years, generating a repository of almost 5000 sets of images with associated clinical data. However, only a subset (estimated to be about 25% of all of the available OCT images) was captured using the newer-generation Fourier domain technology (now the clinical standard), which provides optimal images of the posterior ocular findings. The vignettes in the ECHoES trial were populated only with OCT images captured on spectral/Fourier domain systems.
Each vignette consisted of sets of retinal images (colour and OCT) at two time points (baseline and index), with accompanying clinical information (gender, age, smoking status and cardiac history) and best corrected visual acuity (BCVA) measurements obtained at both time points. The ‘baseline’ set comprised images from a study visit at which the nAMD was deemed quiescent (i.e. all macular tissue compartments were fluid free) and the ‘index’ set comprised images from another study visit. Considering both baseline and index images, and taking into account the available clinical and BCVA information, participants reviewed these vignettes and classified the index lesion as reactivated, suspicious or quiescent. Further details are published elsewhere.19 A reference standard lesion classification was assigned to each vignette on the basis of independent assessment by three retinal experts (see Reference standard).
Participants
Recruitment
The ECHoES trial was publicised in optometry journals and forums to attract optometrists, and circulated to ophthalmologists who were members of the UK and Welsh medical retinal groups. Potential participants were directed to the ECHoES trial website, where they could read the information sheet and register their interest in the trial.
Eligibility criteria
Participants had to meet the following inclusion/exclusion criteria.
Ophthalmologists:
-
have 3 years’ post-registration experience in ophthalmology
-
have passed part 1 of the Royal College of Ophthalmologists examination or hold the Diploma in Ophthalmology or equivalent qualification
-
be working in the NHS at the time of participation in the ECHoES trial
-
have experience within an age-related macular degeneration (AMD) service.
Optometrists:
-
be fully qualified and registered with the General Optical Council for at least 3 years
-
be practising within the General Optical Service at the time of participation in the ECHoES trial
-
must not be working within AMD shared care schemes or undertaking OCT interpretation within AMD care pathways.
There were also some practical circumstances in which a potential participant was not accepted to assess the main study vignette set:
-
unable to attend any of the webinar training sessions
-
unable to achieve an adequate standard (75%) with respect to the assessment of lesion activity status (i.e. reactivated, suspicious or quiescent) on the training set of vignettes.
Training participants
Both ophthalmologists and optometrists are qualified to detect retinal pathology, but optometrists (and some ophthalmologists) may not have the skills to assess fundus and OCT images for reactivation of nAMD. Therefore, the training was designed to provide the key information necessary to perform this task successfully, so that all participants had a similar level of background knowledge when starting their main trial vignette assessments. The training included two parts.
Webinar lectures
All participants were required to attend two mandatory webinar lectures. The first webinar covered the objectives of the ECHoES trial, its design, eligibility criteria for participation, outcomes of interest, and the background to detection and management of nAMD. The second webinar covered the detailed clinical features of active and inactive nAMD, the imaging modalities used to determine activity of the lesion and interpretation of the images. Each webinar lecture lasted approximately 1 hour, with an additional 15 minutes for questions.
Test of competence
After confirmation of attendance at the webinars, participants were allocated 24 training vignettes. In order to qualify for the main trial, participants had to assign the ‘correct’ activity status to at least 75% (18 of 24) of their allocated training vignettes, according to expert assessments (see Reference standard). If participants failed to reach this threshold, they were allocated a further 24 vignettes (second training set) to complete. If participants failed to reach the performance threshold for progressing to the main trial on their second set of training vignettes, they were withdrawn from the trial. Participants who successfully passed the training phase (after either one or two attempts) were allocated 42 vignettes for assessment in the main phase of the trial.
Training vignettes were randomly sampled from the same pool of 288 vignettes as those used in the main study. However, the sampling method ensured that participants assessed different vignettes in their main study phase from those assessed during their training phase; the samples for assessment in the main trial were allocated to participant IDs in advance (as part of the trial design) and training sets of vignettes were sampled randomly from the 246 remaining vignettes.
Reference standard
The reference standard was established based on the judgements of three medical retina experts. Using the web-based application (see Implementation/management/data collection), these experts independently assessed the vignette features and made lesion classification decisions for all 288 index images. As the judgements of experts did not always agree, a consensus meeting was held to review the subset of vignettes for which experts’ classifications of lesion status disagreed. The experts reviewed these vignettes together without reference to their previous assessments and reached a consensus agreement. This consensus decision (‘reactivated’, ‘suspicious’ or ‘quiescent’ lesion) for all 288 vignettes made up the reference standard and was used to determine ‘correct’ participant lesion classification decisions.
As described in the protocol,20 the classification of a lesion as reactivated or quiescent depended on the presence or absence of predefined lesion components and whether or not the lesion component had increased from baseline. Two imaging modalities were used: colour fundus (CF) photographs and OCT images. The prespecified lesion components on colour photographs were haemorrhage and exudate; the tomographic components were subretinal fluid (SRF), diffuse retinal thickening (DRT), localised intraretinal cysts (IRCs) and pigment epithelial detachment (PED). Experts indicated whether each lesion component was present or absent. Rules for classifying a lesion as reactivated or quiescent from the assessment of lesion components were prespecified (Table 2). Experts could disagree about the presence or absence of a specific lesion component and whether or not these components had increased from baseline; however, they had to follow the rules when classifying a lesion as reactivated or quiescent (note that the PED lesion component did not inform lesion classification as active or quiescent). Validation prompts reflecting these rules were added to the web application after the training phase to prevent data entry or keystroke errors; the validation rules did not assist participants in making the overall assessment because participants had to enter their assessments of the relevant image features present in each vignette before classifying the lesion status.
Feature | Lesion reactivated | Lesion quiescent |
---|---|---|
SRF on OCT | Yes | No |
Or | And | |
IRC on OCT | Yes, and increased from baseline | No/not increased from baseline |
Or | And | |
DRT on OCT | Yes, and increased from baseline | No/not increased from baseline |
Or | And | |
Blood on CF | Yes, and increased from baseline | No/not increased from baseline |
Or | And | |
Exudates on CF | Yes, and increased from baseline | No/not increased from baseline |
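The prespecified rules in Table 2 amount to a simple Boolean classification: SRF alone, or any other component that is both present and increased from baseline, indicates reactivation. The sketch below illustrates this in Python; the function and argument names are our own, this is not the trial software, and the third (‘suspicious’) category used elsewhere in the trial is not covered by these rules.

```python
def classify_lesion(srf, irc, irc_increased, drt, drt_increased,
                    blood, blood_increased, exudates, exudates_increased):
    """Classify a lesion as 'reactivated' or 'quiescent' per Table 2.

    Reactivated if SRF is present on OCT, or if any of IRC (OCT), DRT (OCT),
    blood (CF) or exudates (CF) is both present and increased from baseline;
    otherwise quiescent. PED did not inform this classification.
    """
    reactivated = (
        srf
        or (irc and irc_increased)
        or (drt and drt_increased)
        or (blood and blood_increased)
        or (exudates and exudates_increased)
    )
    return "reactivated" if reactivated else "quiescent"

# SRF alone is sufficient for reactivation:
classify_lesion(True, *[False] * 8)               # -> "reactivated"
# IRC present but not increased from baseline is not:
classify_lesion(False, True, False, *[False] * 6)  # -> "quiescent"
```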
Owing to the short duration of the trial, participant training and assessments were undertaken concurrently with the independent experts’ assessments of the vignettes. Therefore, training sets of vignettes were scored against experts’ assessments that were complete at the time. Subsequently, checks were instituted to ensure that no participant was excluded from the main trial who might have passed the threshold performance score had the consensus reference standard been available at the time.
Outcomes
Primary outcome
The primary outcome was correct classification of the activity status of the lesion by a participant, based on assessing the index images in a vignette, compared with the reference standard (see Reference standard). Activity status could be classified as ‘reactivated’, ‘suspicious’ or ‘quiescent’. For the primary outcome, a participant’s classification was scored as ‘correct’ if:
-
both the participant and the reference standard lesion classification were reactivated
-
both the participant and the reference standard lesion classification were quiescent
-
both the participant and the reference standard lesion classification were suspicious
-
either the participant or the reference standard lesion classification was suspicious, and the other classification (reference standard or participant) was quiescent.
In effect, for the primary outcome, suspicious and quiescent classifications were grouped, making the primary outcome binary (Table 3).
Participant classification | Reference standard: reactivated | Reference standard: suspicious | Reference standard: quiescent |
---|---|---|---|
Reactivated | ✓ | ✗ | ✗ |
Suspicious | ✗ | ✓ | ✓ |
Quiescent | ✗ | ✓ | ✓ |
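The grouping of suspicious with quiescent can be expressed as a small scoring function; this is an illustrative sketch (the names are our own), not the trial's analysis code:

```python
def correct_classification(participant, reference):
    """Score a participant's decision against the reference standard as in
    Table 3: 'suspicious' and 'quiescent' are grouped, so the primary
    outcome is binary (reactivated vs. not reactivated)."""
    def reactivated(classification):
        return classification == "reactivated"
    return reactivated(participant) == reactivated(reference)

correct_classification("suspicious", "quiescent")    # True (both grouped)
correct_classification("reactivated", "suspicious")  # False
```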
Secondary outcomes
-
The frequency of potentially sight-threatening ‘serious’ errors. An error of this kind was considered to have occurred when a participant’s classification of a vignette was ‘lesion quiescent’ and the reference standard classification was ‘lesion reactivated’; that is, a definitive false-negative classification by the participant. Definitive false positives were not considered sight-threatening, but were tabulated. Misclassifications involving classifications of ‘lesion suspicious’ were also not considered sight-threatening.
-
Judgements about the presence or absence of specific lesion components, for example blood and exudates in the fundus colour images, SRF, IRC, DRT and PED in the OCT images and, if present, whether or not these features had increased since baseline.
-
Participant-rated confidence in their decisions about the primary outcome, on a 5-point scale.
Adverse events
The study did not involve any clinical risk to the participants; therefore, it was not possible for clinical adverse events to be attributed to study-specific procedures.
Implementation/management/data collection
A secure web-based application was developed to allow participants to take part in the trial remotely (see example screenshots in Appendix 2). The website www.echoestrial.org/demo shows how assessors carried out assessments in the trial. Participants registered their interest, entered their details, completed questionnaires (regarding opinions on ECHoES trial training and shared care) and assessed their training and main study vignettes through this application. The webinar material was also available in the web application for participants to consult if they needed to revisit it.
Additionally, the web application had tools accessible only to the trial team to help manage and monitor the conduct and progression of the trial. Further details are published elsewhere.19
Sample size
With respect to the primary outcome, the trial was designed to answer the non-inferiority question ‘Is the performance of optometrists as good as that of ophthalmologists?’. A sample of 288 vignettes was chosen to give at least 90% power to test the hypothesis that the proportion of vignettes for which lesion status was correctly classified by the optometrist group was no more than 10 percentage points lower than that for the ophthalmologist group. We assumed that the proportion of vignettes for which lesion status was correctly classified by the ophthalmologist group was at least 95% and that each vignette would be assessed by only one ophthalmologist and one optometrist. However, as each vignette was assessed seven times by each group, the trial in fact had 90% power to detect non-inferiority for lower proportions of vignettes correctly classified by the ophthalmologist group.
Statistical methods
The analysis population consisted of the 96 participants who completed the assessments of their training and main study samples of vignettes. Continuous variables were summarised by means and standard deviations (SDs), or by medians and interquartile ranges (IQRs) if distributions were skewed. Categorical data were summarised as a number and percentage. Baseline participant characteristics were described and groups formally compared using t-tests, Mann–Whitney U-tests, chi-squared tests or Fisher’s exact tests as appropriate.
Group comparisons
All primary and secondary outcomes were analysed using mixed-effects regression models, adjusting for the order in which the vignettes were viewed as a fixed effect (tertiles: 1–14, 15–28, 29–42) and for participant and vignette as random effects. All outcomes were binary and were therefore analysed using logistic regression, with group estimates presented as odds ratios (ORs) with 95% confidence intervals (CIs).
In addition to the group comparisons, the influence of key vignette features [age, gender, smoking status, cardiac history (including angina, myocardial infarction and/or heart disease), and baseline and index BCVA (modelled as the sum and difference of BCVA at the two time points)] on the number of incorrect vignette classifications was investigated, adjusting for the reference standard classification (reactivated vs. quiescent/suspicious). This additional analysis was carried out using fixed-effects Poisson regression with the number of incorrect classifications as the outcome. The prior hypothesis was that this information would not influence the number of correct (or incorrect) classifications.
The sensitivity and specificity of the primary outcome are also presented. For these performance measures, the sensitivity is the proportion of lesions for which the reference standard is ‘reactivated’ and participants correctly classified the lesion. The specificity is the proportion of lesions for which the reference standard is either ‘suspicious’ or ‘quiescent’ and the participant’s classification is also ‘suspicious’ or ‘quiescent’.
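Under these operational definitions, sensitivity and specificity can be computed directly from (participant, reference) classification pairs; a hypothetical Python sketch (not the trial's SAS/Stata code, and the names are our own):

```python
def sensitivity_specificity(pairs):
    """pairs: iterable of (participant, reference) classifications, each one
    of 'reactivated', 'suspicious' or 'quiescent'.

    Sensitivity: of lesions the reference standard calls 'reactivated', the
    proportion the participant also called 'reactivated'.
    Specificity: of lesions the reference standard calls 'suspicious' or
    'quiescent', the proportion the participant also called 'suspicious'
    or 'quiescent'.
    """
    pairs = list(pairs)
    pos = [p for p, r in pairs if r == "reactivated"]   # reference positive
    neg = [p for p, r in pairs if r != "reactivated"]   # reference negative
    sensitivity = sum(p == "reactivated" for p in pos) / len(pos)
    specificity = sum(p != "reactivated" for p in neg) / len(neg)
    return sensitivity, specificity
```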
Non-inferiority limit
For the sample size calculation, it was agreed that an absolute difference of 10% would be the maximum acceptable difference between the two groups, assuming that ophthalmologists would correctly assess 95% of their vignettes. As the group comparison of the primary outcome was analysed using logistic regression and presented as an OR, this non-inferiority margin was converted to the odds scale. The limit was therefore expressed as an OR of 0.298, that is, the odds of a correct classification at the worst acceptable performance by optometrists (85%) divided by the odds of a correct classification at the assumed performance of ophthalmologists (95%): (0.85/0.15)/(0.95/0.05).
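The conversion of the 10% absolute margin to the odds scale can be verified directly; a short Python check:

```python
# Non-inferiority margin on the odds scale: odds of a correct classification
# at the worst acceptable optometrist performance (85%) divided by the odds
# at the assumed ophthalmologist performance (95%).
def odds(p):
    return p / (1 - p)

margin_or = odds(0.85) / odds(0.95)
print(round(margin_or, 3))  # 0.298
```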
Sensitivity analysis
A sensitivity analysis of the primary outcome, regrouping vignettes graded as suspicious into the ‘lesion reactivated’ group rather than ‘quiescent lesion’ group, was undertaken to assess the sensitivity of the conclusions to the classification of the vignettes graded as suspicious. This analysis was prespecified in the analysis plan but not in the trial protocol.
Post-hoc analysis
The following post-hoc analyses were prespecified in the analysis plan, but not in the trial protocol.
-
Lesion classification decisions were tabulated against referral decisions for both groups.
-
A descriptive analysis of the time taken to complete each vignette and how the duration of this time changed with experience in the trial (learning curve) was performed. The relationship between the duration of this time and participants’ ‘success’ in correctly classifying vignettes was also explored.
-
Cross-tabulations and kappa statistics were used to compare experts’ initial classifications with the final reference standard. Similarly, cross-tabulations and kappa statistics were used to compare lesion component classifications between the three experts.
In addition, a descriptive analysis of the participants’ opinions about the training provided in the ECHoES trial, as well as their perceptions of shared care, was carried out (see ECHoES participants’ perspectives of training and shared care).
Missing data
By design, there were no missing data for the primary and secondary outcomes. However, the time taken to complete each vignette was calculated as the interval between successive vignette saves on the database; it was, therefore, not possible to calculate this time for the first vignette of each session. Additionally, intervals longer than 20 minutes (the database timeout) were assumed to reflect interruptions and were set to missing. As the analysis using these times was descriptive and was not specified in the protocol, a complete case analysis was performed and missing/available data were described.
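The timing rules described above can be sketched as follows; the function name and input format are assumptions for illustration, not the trial's code:

```python
def vignette_durations(save_times, timeout=20):
    """save_times: times (in minutes) at which successive vignettes in one
    session were saved. Returns per-vignette durations: None for the first
    vignette of the session (no start time is recorded), and None for any
    gap exceeding the 20-minute database timeout (treated as an interruption).
    """
    durations = [None]  # first vignette of the session has no start time
    for previous, current in zip(save_times, save_times[1:]):
        gap = current - previous
        durations.append(gap if gap <= timeout else None)
    return durations

vignette_durations([0, 5, 30, 33])  # [None, 5, None, 3]
```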
Statistical significance
For hypothesis tests, two-tailed p-values of < 0.05 were considered statistically significant. Likelihood ratio tests were used in preference to Wald tests for hypothesis testing.
All data management and all analyses carried out by the Clinical Trials and Evaluation Unit, Bristol, were performed in SAS version 9.3 (SAS Institute Inc., Cary, NC, USA) or Stata version 13.1 (StataCorp LP, College Station, TX, USA).
Changes since commencement of study
At the end of their participation in the trial, all participants were asked to complete a questionnaire regarding their opinions about the training provided in the ECHoES trial and their attitudes towards a shared care model. This was made available to all ECHoES trial participants.
In addition, the reference standard was originally planned to include only two categories, namely lesion reactivated or lesion quiescent. However, after a concordance exercise was carried out by the three retinal experts on the ECHoES trial team, a third category of suspicious lesion was added. This change occurred before any experts’ or participants’ lesion classifications were made.
Finally, after much consideration, it was agreed the primary analysis would be carried out using logistic regression rather than Poisson regression in order to fully account for the incomplete block design. Fitting a Poisson model would not have allowed us to include both vignette and participant as random effects in the model.
Health economics
Aims and research questions
The economic evaluation component of the ECHoES trial aimed to estimate the incremental cost and incremental cost-effectiveness of community optometrists compared with hospital-based ophthalmologists performing retreatment assessments for patients with quiescent nAMD. This would enable us to determine which professional group represents the best use of scarce NHS resources in this context. The main outcome measure was the cost per correct retreatment decision, a ‘correct’ decision being one in which the trial participant’s classification of a vignette (lesion reactivated, lesion quiescent or lesion suspicious) agreed with the experts’ judgement.
Analysis perspective
The economic evaluation took the cost perspective of the UK NHS, personal social services and private practice optometrists, and was performed in accordance with established guidelines and recent publications.21–23 Although it is possible that any incorrect retreatment decisions (false positives) could lead to costs being incurred by patients, their families or employers because of time away from usual activities, these wider societal costs were felt to be small compared with the implications for the NHS. Therefore, in line with the IVAN trial,9 we decided not to adopt a societal perspective in the ECHoES trial.
Economic evaluation methods
The methods used in the economic evaluation are summarised in Table 4. Data on resource use and costs were collected using bespoke costing questionnaires for community optometrists. With the help of the ECHoES trial team (clinicians and optometrists), we first identified the typical components/tasks included in a monitoring review for nAMD and then subcategorised these tasks into resource groups such as staffing, equipment and building space. As some optometrist practices are likely to incur set-up costs associated with assessing the need for retreatment, we compiled a list of the items of equipment necessary for performing each task within a monitoring review. The resource-use and cost questionnaire asked each participant which items of equipment from this list their practice currently owned and how much they had paid for them. The questionnaire was piloted with a small number of optometrists and ophthalmologists, who advised on whether or not it was straightforward to comprehend and complete. For any items that would be necessary for the review but that the practice did not already own, we inferred that these items would have to be purchased ‘ex novo’. Once the total costs of equipment had been identified, along with their predicted life spans as suggested by study clinicians, we calculated an equivalent annual cost for all of the equipment using a 3.5% discount rate,24 and divided this cost by the number of potential patients who could undergo monitoring reviews in the community practices. See Appendix 3 for a copy of the bespoke costing questionnaire for the optometrists in the ECHoES trial.
Aspect of method | Strategy used in base-case analysis |
---|---|
Form of economic evaluation | Cost-effectiveness analysis for comparison between optometrists and ophthalmologists |
Perspective | NHS, Personal Social Services and private practice optometrists’ costs |
Data set | Optometrist and ophthalmologists enrolled in the ECHoES trial |
Missing data | Imputation |
Costs included in analysis | Equipment, staff, building, training for webinars |
Effectiveness measurement | Correct retreatment decisions |
Sensitivity analysis | Three ranibizumab (Lucentis) injections and consultations instead of one; aflibercept (Eylea) instead of ranibizumab; bevacizumab (Avastin) instead of ranibizumab; cost of monitoring review only |
Time horizon | A within-trial analysis, taking an average of 2 weeks (maximum of 4 weeks) |
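The equivalent annual cost calculation described above can be sketched as follows. The scanner price, life span and patient throughput are invented for illustration; the 3.5% discount rate is the one cited in the text.

```python
def equivalent_annual_cost(purchase_cost, lifespan_years, discount_rate=0.035):
    """Spread a capital cost evenly over its life span, discounting at 3.5%.

    Divides the purchase cost by the annuity factor (1 - (1+r)^-n) / r."""
    annuity_factor = (1 - (1 + discount_rate) ** -lifespan_years) / discount_rate
    return purchase_cost / annuity_factor

# Invented example: an OCT scanner costing £25,000 with an 8-year life span,
# used for 400 monitoring reviews per year.
eac = equivalent_annual_cost(25_000, 8)
print(round(eac, 2), round(eac / 400, 2))  # annual cost, then cost per review
```

Dividing the equivalent annual cost by annual patient throughput gives the per-review equipment contribution, as described in the text.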
We estimated the costs associated with training optometrists to perform the assessments. These comprised the 2-hour webinar, plus time spent revisiting the webinar and checking other resources. Finally, the amount of time the ECHoES study clinicians spent preparing and delivering the webinars was also estimated.
For information on the costs of ophthalmologists performing the monitoring assessments, we used cost data from the IVAN trial, in which we had undertaken a very detailed micro-costing study.25
In economic evaluations, it is advised that value-added tax (VAT) be excluded from the analysis.21 Where possible, we used costs without VAT for our base-case analysis.
Missing data
Only 40 of the 61 optometrists who were initially invited to complete the health economics questionnaire (those who completed webinar training) actually submitted the questionnaire; 34 of the 40 completed their main assessments for the trial. Fifty-five of the 61 optometrists who were invited to complete the health economics questionnaire replied to the feedback questionnaire, which also contained information on the time each optometrist spent on training. This training time was costed and contributes to the calculation of the cost per monitoring review for optometrists. Therefore, in order to maximise the use of questionnaire replies, the cost per monitoring review was estimated using all available information as reported by all optometrists, rather than focusing only on the 48 who subsequently completed the main assessments.
For consistency with the procedure adopted in the IVAN trial costing, mean values of the relevant variables were imputed to the 21 optometrists who did not submit the cost questionnaire and to the six who did not complete the feedback questionnaire. The estimated costs from the 61 practices were randomly assigned (see Cost model: random allocation) to each of the 2016 vignettes assessed by the 48 optometrists who completed the main assignment.
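Mean imputation of this kind is straightforward; a minimal sketch, with invented costs:

```python
def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

# Invented per-practice review costs (£); None = questionnaire not returned.
costs = [42.0, 55.0, None, 48.0, None]
imputed = impute_mean(costs)
print(imputed)  # missing entries replaced by the mean of 42, 55 and 48
```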
Cost model: random allocation
From the IVAN trial, we had estimates of the cost of clinician-led monitoring reviews from 28 eye hospitals. Using a random allocation procedure similar to the one used in the IVAN trial,9 these estimates were allocated to each of the 2016 vignette assessments carried out by the 48 ophthalmologists in the ECHoES trial. From the data collected for the ECHoES trial, we had estimates of the cost of the monitoring review from 61 (after imputation) optometric practices, which varied across practices. Although it would have been possible to link each vignette to the cost derived for the optometry practice in which the optometrist assessing the vignette worked, this would have ignored the heterogeneity between practices and the uncertainty around the estimates of the mean cost of the monitoring review. Therefore, we adopted the same approach as used for the IVAN trial, which was to randomly allocate costs. We randomly sampled from the distribution of costs from different optometrist practices using the procedure described below. Each of the 61 practices (for which the cost of a monitoring review had been estimated) reported the monthly number of nAMD patients that it could accommodate after all required changes in the practice (purchase of necessary equipment, changes to the structure/size of the practice, new staff, etc.) had been implemented. These numbers were used to calculate weights; that is, each weight was the number of nAMD patients that the practice could accommodate per month divided by the total number of patients with quiescent nAMD who could theoretically be accommodated by all of the practices per month after implementing changes. Cumulative weights were then calculated and assigned to each optometric practice. For each vignette (potential patient), a random number was generated; this number determined which of the monitoring review costs the vignette was assigned.
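The cumulative-weight allocation procedure can be sketched as follows. The costs, capacities and seed are invented; the trial's actual allocation was implemented in Stata.

```python
import random
from bisect import bisect_left
from itertools import accumulate

def assign_costs(practice_costs, monthly_capacities, n_vignettes, seed=1):
    """Assign a practice's monitoring-review cost to each vignette at random,
    weighting practices by the number of nAMD patients they could see per month."""
    total = sum(monthly_capacities)
    cum_weights = list(accumulate(w / total for w in monthly_capacities))
    cum_weights[-1] = 1.0  # guard against floating-point drift in the last weight
    rng = random.Random(seed)
    assigned = []
    for _ in range(n_vignettes):
        u = rng.random()                   # uniform draw in [0, 1)
        idx = bisect_left(cum_weights, u)  # first practice whose cumulative weight >= u
        assigned.append(practice_costs[idx])
    return assigned

# Hypothetical costs (£) and monthly capacities for three practices;
# the third practice can see the most patients, so it is sampled most often.
costs = [38.0, 51.0, 44.0]
capacities = [10, 30, 60]
print(assign_costs(costs, capacities, 5))
```

Sampling costs in proportion to capacity, rather than linking each vignette to a single practice, reflects both the heterogeneity between practices and the uncertainty in the mean cost.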
For each vignette assessed by a community optometrist, we randomly drew a value for the cost of the community optometry review from the distribution of the ECHoES trial monitoring review costs; for each vignette assessed by an ophthalmologist, we randomly drew a value for the cost of a hospital review from the cost distribution used in IVAN. For those vignettes where an incorrect treatment decision resulted in an additional hospital monitoring consultation or an unnecessary injection, we drew additional random numbers to sample those consultation costs from the distribution of costs as reported for the IVAN trial.
Care cost pathway decision tree
In order to generate an estimate of the cost per correct retreatment decision, we first mapped the various care pathways that could follow from the retreatment assessments in the study, by comparing the reference lesion classification for each vignette with the actual classification made by the study participants (48 ophthalmologists and 48 optometrists). This gave rise to two versions of a simple decision tree: one for ophthalmologists (Figure 1) and one for optometrists (Figure 2). The starting point for the decision trees was the ‘actual’ truth; the three main branches of the trees were true active, true quiescent and true suspicious. Participants’ main trial assessments provided information about the number of participants who made correct and incorrect treatment decisions compared with the reference standard decisions.
The decisions that both groups made about the vignettes were then placed into the decision trees and the associated costs for the different pathways were calculated. This process generated an average cost for each of the alternative care pathways. Any ‘incorrect’ decision implied that a patient would have had unnecessary repeat monitoring visits at the optometrist practice or at the hospital, and possibly unnecessary anti-VEGF injections if the participant misclassified the patient as requiring treatment. All costs are reported in 2013/14 prices unless specified otherwise.
Our baseline analysis calculated the average cost and outcome on a per patient basis. From these estimates, the incremental cost-effectiveness ratios (ICERs) for the different assessment options were derived, producing an incremental cost per accurate retreatment decision. Sensitivity analyses were carried out to demonstrate the impact of variation around the key parameters in the analysis on the baseline cost-effectiveness results. The four sensitivity analyses which we conducted were:
-
Sensitivity analysis 1: all patients initiating treatment were assumed to have a course of three ranibizumab injections given at three successive injection consultations, with no additional monitoring reviews. This matched the way in which discontinuous treatment was administered in the IVAN trial.
-
Sensitivity analysis 2: treatment was assumed to be one aflibercept injection given during an injection consultation.
-
Sensitivity analysis 3: treatment consisted of one bevacizumab injection given during an injection consultation.
-
Sensitivity analysis 4: only considered the cost of a monitoring review rather than considering the cost of the whole pathway.
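The incremental cost-effectiveness ratio underlying the baseline analysis reduces to a ratio of mean differences. A minimal sketch, using the trial's observed proportions correct (optometrists 84.4%, ophthalmologists 85.4%) but invented mean per-patient costs:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra correct decision."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Proportions correct are the trial's observed values; the costs are invented.
optometrist_cost, optometrist_effect = 45.0, 0.844
ophthalmologist_cost, ophthalmologist_effect = 55.0, 0.854
# Both increments are negative here: optometrists are cheaper but slightly less
# accurate, so the ratio reads as the saving per correct decision forgone.
print(round(icer(optometrist_cost, optometrist_effect,
                 ophthalmologist_cost, ophthalmologist_effect), 2))  # → 1000.0
```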
The economic evaluation results were expressed in terms of a cost-effectiveness acceptability curve (CEAC). This indicates the likelihood that the results fall below a given cost-effectiveness ceiling, and could help decision-makers to assess whether optometrists are likely to represent value for money for the NHS, compared with ophthalmologists, in making decisions about retreatment in patients with nAMD, and also relative to completely disparate health-care interventions. All health economics analyses were conducted in Stata version 12.1 (StataCorp LP, College Station, TX, USA), except the budget impact analysis.
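Each point on a CEAC is the proportion of simulated incremental (cost, effect) pairs with positive incremental net benefit at a given willingness-to-pay ceiling. A sketch with simulated replicates (all numbers invented; the trial's analysis was done in Stata):

```python
import random

def ceac_point(inc_costs, inc_effects, ceiling):
    """Fraction of replicates with positive incremental net benefit,
    ceiling * dE - dC > 0, at the given willingness-to-pay ceiling."""
    pairs = list(zip(inc_costs, inc_effects))
    return sum(ceiling * de - dc > 0 for dc, de in pairs) / len(pairs)

# Invented bootstrap replicates: optometrists cheaper on average (negative
# incremental cost) but very slightly less accurate (negative incremental effect).
rng = random.Random(7)
dcs = [rng.gauss(-10, 5) for _ in range(1000)]
des = [rng.gauss(-0.01, 0.02) for _ in range(1000)]
curve = {ceiling: ceac_point(dcs, des, ceiling) for ceiling in (0, 500, 1000, 2000)}
print(curve)
```

Plotting these probabilities against the ceiling gives the CEAC reported in the results.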
Budget impact
Freeing up HES clinic time could lead to an increase in the overall capacity of the NHS both to manage the nAMD population more effectively and to manage non-nAMD eye patients (more time for ophthalmologists to spend with non-nAMD patients if they are seeing fewer nAMD patients for monitoring). Therefore, we attempted to estimate the potential time and costs that could be saved by HES clinics if some of the management of nAMD patients could be undertaken in the community. This was done by using the resource-use information collected during the IVAN trial as a basis on which to consider what proportion of an ophthalmologist’s time is spent on retreatment decisions for this group of patients relative to other aspects of their care. We brought together data from the IVAN trial (average number of patients attending a clinic) and information from the literature, and used expert opinion from the ECHoES study ophthalmologists and optometrists to try to estimate the total number of patients with quiescent or no lesion in both eyes who would be eligible for monitoring by a community optometrist in a given month and the total number of monitoring visits that could be transferred from a hospital to the community per year. We then replaced the costs of the ophthalmologist’s time with the cost of the optometrist’s time and examined the difference. The Microsoft Excel® spreadsheet (Microsoft Corporation, Redmond, WA, USA) calculations for the budget impact are reported in Appendix 3.
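At its core, the budget impact calculation multiplies the number of transferable monitoring visits by the unit-cost difference between settings; the actual calculations were done in the Excel spreadsheet reported in Appendix 3. A minimal sketch with invented figures:

```python
def annual_saving(transferable_visits_per_year, hospital_review_cost,
                  community_review_cost):
    """Hospital resources released by moving monitoring reviews to the community."""
    return transferable_visits_per_year * (hospital_review_cost - community_review_cost)

# Invented figures: 2000 transferable visits/year, £80 per hospital review,
# £45 per community optometrist review.
print(annual_saving(2000, 80.0, 45.0))  # → 70000.0
```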
Qualitative research
Focus groups were conducted with ophthalmologists, optometrists and eye service users separately, and one-to-one interviews were conducted with other health professionals involved in the care and services of those with eye conditions.
Recruitment
Focus groups with optometrists and ophthalmologists
Two focus groups were conducted with ophthalmologists and optometrists separately, held at specialist conferences (the National Optical Conference for Optometrists in Birmingham, November 2013, and the Elizabeth Thomas Seminar for Ophthalmologists in Nottingham, December 2013). For the optometrist focus groups, information about the study was placed in specialist press and the ECHoES study website. Focus group participants were also recruited by the snowball technique, with health-care professionals informing other potentially interested colleagues. For the ophthalmologist focus groups, conference organisers emailed information about the study to delegates. Interested participants were asked to contact the qualitative researcher for more information.
Interviews with commissioners, clinical advisors to clinical commissioning groups and public health representatives
Clinical commissioning groups (CCGs) in England were contacted by an e-mail containing information about the study, with a request for it to be forwarded to the general practitioners (GPs)/commissioners in each CCG responsible for commissioning ophthalmology care. Those who were interested were asked to contact the qualitative researcher for more information. Subsequent selection of clinical advisors and public health representatives was guided by the snowball technique. Interviews were mostly conducted in person (n = 6) although, when this was not practicable, a telephone interview was conducted (n = 4). Interviews were conducted between March and June 2014.
Focus groups with service users
Participants with nAMD were recruited from local support groups organised by the Macular Society, formerly known as the Macular Disease Society, a UK-based charity for anyone affected by central vision loss with over 15,000 members and 270 local support groups around the UK (www.macularsociety.org). Three support groups in the south-west of England were initially selected: one based in a major city, one in a large town and one in a rural village. Service users with any history of nAMD were invited to join the study (regardless of whether they had nAMD in one eye or both, had dry AMD in their other eye, or were currently receiving or had previously received treatment). The researcher attended the local support meetings to explain the research and provide attendees with a participant information leaflet to take home. Contact details of those potentially interested at this stage were obtained. Those who expressed an interest were telephoned by the researcher a week later to discuss the study further and to answer any questions, to help them decide whether or not to take part.
Sampling
Basic demographic, health and professional information was collected from those who agreed to be contacted for sampling purposes. A purposeful sampling strategy was used to ensure that the feasibility and acceptability of the proposed shared model of care for nAMD was captured from a range of perspectives. Within this sampling approach, maximum variation was sought in relation to profession, age, gender and geographic location (for health professionals) and gender, age, type of AMD and time since diagnosis (for service users). Participant characteristics were assessed as the study progressed, and individuals or groups that were under-represented were targeted (e.g. commissioners or clinical advisors). Where it was felt that variation had been achieved, potential participants were thanked for their interest and informed that sufficient numbers had been recruited (e.g. optometrists).
Data collection
A favourable ethical opinion for this study was granted by a UK NHS Research Ethics Committee. Written consent was obtained from participants at the start of each focus group or interview. Separate topic guides were developed for service users, optometrists/ophthalmologists (with additional questions for each professional group) and all other health professionals, to ensure that discussions within each group covered the same basic issues but with sufficient flexibility to allow new issues of importance to the informants to emerge. These were based on the study aims, relevant literature and feedback from eye-health professionals in the ECHoES study team, and consisted of open-ended questions about the current model of care for nAMD and perspectives of stable patients being monitored in the community by optometrists. They were adapted as analysis progressed to enable exploration of emerging themes.
All of the focus groups and interviews were conducted by DT; these were predominantly led by the participants themselves, with DT flexibly guiding the discussion by occasionally probing for more information, clarifying any ambiguous statements, encouraging the discussion to stay on track, and, in the focus groups, providing an opportunity for all participants to contribute to the discussion. All participants were offered a £20 gift voucher to thank them for their time (two commissioners declined the voucher).
Data analysis
Focus groups and interviews were transcribed verbatim and checked against the audio-recordings for accuracy. Transcripts were imported into NVivo version 10 (QSR International, Warrington, UK), where data were systematically assigned codes and analysed thematically using constant comparison methods derived from grounded theory methodology.26 Transcripts were repeatedly read, and potential ideas and coding schemes were noted at every stage of analysis. To ensure that each transcript was given equal attention in the coding process, data were analysed sentence by sentence and interesting features were coded. Clusters of related codes were then organised into potential themes, and emerging themes and codes within transcripts and across the data set were compared to explore shared or disparate views among participants. The transcripts were reread to ensure that the proposed themes adequately covered the coded extracts, and the themes were refined accordingly. Emerging themes were discussed with a second experienced social scientist (NM) with reference to the raw data. Data collection and analysis proceeded in parallel, with emerging findings informing further sampling and data collection. Data collection and analysis continued until the point of data saturation, that is, the point at which no new themes emerged.
The ECHoES trial participants’ perspectives of training and shared care
In addition to the information collected during focus groups and interviews, the opinions of all optometrists and ophthalmologists who had taken part in the ECHoES trial were sought in a short online questionnaire. The survey related to participants’ experiences of the training and their attitudes towards shared care for nAMD. Questions required a binary, a Likert scale or a free-text response.
Quantitative data were analysed in Microsoft Excel and presented as proportions. Free-text responses were imported into NVivo and coded into categories, which were derived from the main survey questions, relating to feedback on the training programme (experiences of using the web application, ease of training and additional resources used) and attitudes towards shared care (general perspectives of the proposed model, and perceived facilitators and barriers to implementation). Data were analysed thematically using the constant comparative techniques of grounded theory, whereby codes within and across the data set were compared to look for shared or disparate views among optometrists and ophthalmologists.
Chapter 3 Results: classification of lesion and lesion components (objectives 1 to 3)
Registered participants
A total of 155 health-care professionals (72 ophthalmologists and 83 optometrists) registered their interest in the ECHoES trial. Of these, 62 ophthalmologists and 67 optometrists consented to take part. Everyone who registered an interest was eligible for the trial; those who did not consent either did not return their consent forms or were no longer required for the trial. See Figure 3 for details.
Recruitment
Participants were initially recruited between 1 June and 9 October 2013. However, as participants progressed through the training stages, it became apparent that the withdrawal rate of optometrists was higher than expected and that more would be required if the planned sample size were to be met. Recruitment was therefore reopened between 13 February and 6 March 2014, when a further seven optometrists consented to take part. A number of participants withdrew or were withdrawn by co-ordinating staff throughout the trial, mostly because they missed webinars or failed their assessment of training vignettes (see Figure 3 and Withdrawals for details). The final participant completed the main study vignettes on 21 April 2014. As planned, 48 ophthalmologists and 48 optometrists completed the full trial.
At the start of the trial we were unsure how many participants we would need to recruit in order to meet our target of 48 participants in each group. Therefore, we over-recruited at the consent stage and asked a number of participants to complete the webinar and then ‘wait and see’ whether or not we needed them to participate in the main trial. We also slightly over-recruited at each stage of the trial to account for dropouts. This resulted in a small number of participants being withdrawn at various stages of the trial because they were no longer required.
Withdrawals
During the ECHoES trial, withdrawals could occur for a number of reasons. First, participants could withdraw or be withdrawn between consenting and receiving their training vignettes if the mandatory webinar training was not completed, they no longer wanted to take part or they were no longer needed for the trial. This occurred for six ophthalmologists and six optometrists (see Figure 3 for details). Second, participants could be withdrawn if the threshold performance score for the assessments of their training vignettes was not attained. Of the 54 ophthalmologists who completed their vignette training, 48 (88.9%) passed first time, two (3.7%) passed on their second attempt and four (7.4%) failed their second attempt so were withdrawn. Of the 57 optometrists who completed their vignette training, 38 (66.7%) passed first time, 11 (19.3%) passed on their second attempt and eight (14.0%) failed their second attempt and were withdrawn from the trial. Two ophthalmologists and one optometrist who passed their training vignettes were withdrawn from the trial as the target sample size (48 in each group completing main assessments) had already been reached.
Numbers analysed
Ninety-six participants, 48 ophthalmologists and 48 optometrists, were included in the analysis population. For the primary and secondary outcomes, all participants were included as by design there were no missing data.
Reference standard classifications
The reference standard classified 142 (49.3%) of the 288 vignettes as reactivated, 5 (1.7%) as suspicious and 141 (49.0%) as quiescent.
Participant characteristics
The characteristics of participants collected for the trial included only age, gender and date of qualification for the participant’s profession. As participants were not randomised to the two comparison groups, baseline characteristics were presented descriptively and formally compared. Table 5 shows that the gender balance and average ages were similar among optometrists and ophthalmologists [mean 43.1 years (SD 10.1 years) and 42.2 years (SD 8.0 years), respectively; 50.0% vs. 43.8% women]. Optometrists had on average significantly more years of qualified experience than ophthalmologists [median 17.4 years (IQR 10.1–28.4 years) and 11.4 years (IQR 4.8–16.9 years), respectively].
Characteristic | Ophthalmologists (n = 48) | Optometrists (n = 48) | p-value |
---|---|---|---|
Age (years), mean (SD) | 42.2 (8.0) | 43.1 (10.1) | 0.616 |
Gender (female), n/N (%) | 21/48 (43.8) | 24/48 (50.0) | 0.539 |
Years since qualification, median (IQR) | 11.4 (4.8–16.9) | 17.4 (10.1–28.4) | < 0.001 |
Primary outcome
The primary outcome (correct lesion classification by a participant compared with the reference standard) was achieved by the ophthalmologists for 1722 out of 2016 (85.4%) vignettes and by the optometrists for 1702 out of 2016 (84.4%) vignettes (Table 6). The odds of an optometrist being correct were not statistically different from the odds of an ophthalmologist being correct (OR 0.91, 95% CI 0.66 to 1.25; p = 0.543). The ability of optometrists to assess vignettes was non-inferior (both clinically and statistically) to the ability of ophthalmologists, according to the prespecified limit of a 10% absolute difference (0.298 on the odds scale; illustrated by the dashed black line in Figure 4). In this primary outcome model, the variance attributed to the participant random effect was far smaller than that of the vignette random effect (0.360 and 2.062, respectively).
Primary outcome | Ophthalmologists (n = 48) | Optometrists (n = 48) | OR (95% CI) | p-value |
---|---|---|---|---|
Lesions correctly classified (overall), n/N (%) | 1722/2016 (85.4) | 1702/2016 (84.4) | 0.91 (0.66 to 1.25) | 0.543 |
Sensitivity (overall), n/N (%) | 736/994 (74.0) | 795/994 (80.0) | 1.52 (1.08 to 2.15) | 0.018 |
Specificity (overall), n/N (%) | 986/1022 (96.5) | 907/1022 (88.7) | 0.27 (0.17 to 0.44) | < 0.001 |
Median correct participant score (IQR) | 37.0 (35.0–38.5) | 36.0 (33.0–38.0) | – | – |
Median sensitivity, participant level (IQR) | 76.5 (64.9–87.1) | 83.7 (71.6–94.1) | – | – |
Median specificity, participant level (IQR) | 100.0 (94.6–100.0) | 94.7 (79.2–100.0) | – | – |
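The mapping from the 10% absolute non-inferiority margin to the 0.298 odds-ratio limit can be reproduced if the anticipated proportion of correct classifications is taken to be 95% (our assumption; this excerpt quotes only the resulting limit):

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

# Assumed anticipated proportion correct (not stated in this excerpt); a 10%
# absolute drop from it reproduces the quoted odds-ratio limit.
p0 = 0.95
or_limit = odds(p0 - 0.10) / odds(p0)
print(round(or_limit, 3))  # → 0.298
```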
The median number of correct lesion classifications (compared with the reference standard) by individual participants was 37 (IQR 35.0–38.5) in the ophthalmologist group compared with 36 (IQR 33.0–38.0) in the optometrist group. The lowest number of correct lesion classifications was 26 out of 42 in the ophthalmologist group and 24 out of 42 in the optometrist group; the highest number of correct lesion reactivation decisions was 41 out of 42, achieved by one ophthalmologist and three optometrists. A reviewer recommended that the relationship between ophthalmologists’ years of experience and number of correct responses be explored; Appendix 4, Figure 27 suggests no clear relationship. Appendix 4, Figure 28 shows a plot of individual optometrist participant scores against their paired ophthalmologist counterparts for the 48 different sets of vignettes. The sensitivity and specificity (see Table 6; described in Chapter 2, Group comparisons) and the detailed breakdown of the participants’ classifications (Table 7 and Figure 5) show the agreement between participants and the reference standard in more detail. Figure 5 shows that optometrists were more likely than ophthalmologists to correctly classify a vignette as reactivated (80.0% vs. 74.0%), but were less likely to correctly classify a vignette as quiescent (65.7% vs. 81.7%).
Participants’ lesion classifications by reference standard classification | Ophthalmologists | Optometrists | Overall | |||
---|---|---|---|---|---|---|
n/N (observations) | % | n/N (observations) | % | n/N (observations) | % | |
Reactivated (n = 142) | ||||||
Reactivated | 736/994 | 74.0 | 795/994 | 80.0 | 1531/1988 | 77.0 |
Suspicious | 196/994 | 19.7 | 142/994 | 14.3 | 338/1988 | 17.0 |
Quiescent | 62/994 | 6.2 | 57/994 | 5.7 | 119/1988 | 6.0 |
Suspicious (n = 5) | ||||||
Reactivated | 1/35 | 2.9 | 10/35 | 28.6 | 11/70 | 15.7 |
Suspicious | 17/35 | 48.6 | 11/35 | 31.4 | 28/70 | 40.0 |
Quiescent | 17/35 | 48.6 | 14/35 | 40.0 | 31/70 | 44.3 |
Quiescent (n = 141) | ||||||
Reactivated | 35/987 | 3.5 | 105/987 | 10.6 | 140/1974 | 7.1 |
Suspicious | 146/987 | 14.8 | 234/987 | 23.7 | 380/1974 | 19.3 |
Quiescent | 806/987 | 81.7 | 648/987 | 65.7 | 1454/1974 | 73.7 |
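The sensitivity and specificity figures in Table 6 can be reproduced from the Table 7 counts, with ‘suspicious’ and ‘quiescent’ grouped together as ‘no treatment’ when calculating specificity:

```python
# Ophthalmologists' classifications from Table 7, by reference standard category.
ophthalmologist = {
    "reactivated": {"reactivated": 736, "suspicious": 196, "quiescent": 62},
    "suspicious":  {"reactivated": 1,   "suspicious": 17,  "quiescent": 17},
    "quiescent":   {"reactivated": 35,  "suspicious": 146, "quiescent": 806},
}

def sensitivity(table):
    """Proportion of reference-reactivated vignettes classified as reactivated."""
    row = table["reactivated"]
    return row["reactivated"] / sum(row.values())

def specificity(table):
    """Proportion of reference-suspicious/quiescent vignettes not classified
    as reactivated (suspicious and quiescent grouped as 'no treatment')."""
    rows = [table["suspicious"], table["quiescent"]]
    not_reactivated = sum(r["suspicious"] + r["quiescent"] for r in rows)
    return not_reactivated / sum(sum(r.values()) for r in rows)

print(round(100 * sensitivity(ophthalmologist), 1))  # → 74.0
print(round(100 * specificity(ophthalmologist), 1))  # → 96.5
```

These match the overall sensitivity (736/994) and specificity (986/1022) reported for ophthalmologists in Table 6.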
A post-hoc analysis to look at this subgroup effect was undertaken. The interaction between participant group and reference standard vignette classification (reactivated vs. quiescent/suspicious) was statistically significant (interaction p < 0.001); the odds of an optometrist being correct were about 50% higher than those of an ophthalmologist if the reference standard classification was reactivated (OR 1.52, 95% CI 1.08 to 2.15; p = 0.018) but about 70% lower if the reference standard classification was quiescent/suspicious (OR 0.27, 95% CI 0.17 to 0.44; p < 0.001).
Secondary outcomes
Serious sight-threatening errors
Serious sight-threatening errors could occur only for the vignettes which were classified as ‘reactivated’ by the reference standard. These errors occurred in 62 out of 994 (6.2%) of ophthalmologists’ classifications and 57 out of 994 (5.7%) of optometrists’ classifications; this difference was not statistically significant (OR 0.93, 95% CI 0.55 to 1.57; p = 0.789).
Each participant viewed between 15 and 27 ‘reactivated’ vignettes; the largest number of sight-threatening errors made by a single participant was eight (out of 25) in the ophthalmologist group and five (out of 19) in the optometrist group (Table 8). Table 8 also shows the number of non-sight-threatening errors, that is, participants classifying vignettes as reactivated when the reference standard was quiescent. This type of error occurred more frequently in the optometrist group (105 out of 987; 10.6%) than in the ophthalmologist group (35 out of 987; 3.5%); this difference was not formally compared.
Type of serious error | Ophthalmologists (n = 48) | Optometrists (n = 48) | OR (95% CI) | p-value | ||
---|---|---|---|---|---|---|
n/N | % | n/N | % | |||
Sight-threatening | 62/994 | 6.2 | 57/994 | 5.7 | 0.93 (0.55 to 1.57) | 0.789 |
Non-sight-threatening | 35/987 | 3.5 | 105/987 | 10.6 | – | – |
Number of participants making different numbers of sight-threatening serious errorsa | ||||||
0 errors | 17/48 | 35.4 | 19/48 | 39.6 | – | – |
1 error | 13/48 | 27.1 | 13/48 | 27.1 | – | – |
2 errors | 11/48 | 22.9 | 8/48 | 16.7 | – | – |
3 errors | 5/48 | 10.4 | 5/48 | 10.4 | – | – |
4 errors | 1/48 | 2.1 | 2/48 | 4.2 | – | – |
5 errors | 0/48 | 0.0 | 1/48 | 2.1 | – | – |
8 errors | 1/48 | 2.1 | 0/48 | 0.0 | – | – |
Number of participants making different numbers of non-sight-threatening serious errorsa | ||||||
0 errors | 27/48 | 56.3 | 19/48 | 39.6 | – | – |
1 error | 11/48 | 22.9 | 6/48 | 12.5 | – | – |
2 errors | 7/48 | 14.6 | 7/48 | 14.6 | – | – |
3 errors | 2/48 | 4.2 | 4/48 | 8.3 | – | – |
4 errors | 1/48 | 2.1 | 3/48 | 6.3 | – | – |
5 errors | 0/48 | 0.0 | 5/48 | 10.4 | – | – |
7 errors | 0/48 | 0.0 | 3/48 | 6.3 | – | – |
15 errors | 0/48 | 0.0 | 1/48 | 2.1 | – | – |
Lesion components
We did not attempt to achieve a consensus among the three experts for the individual features of the vignettes. Therefore, responses given by the professional groups for these individual features were formally compared with each other rather than with any reference standard. For every lesion component except PED, optometrists judged the component to be present more often than ophthalmologists did (Table 9 and Figure 6). This difference was particularly evident for DRT and exudates; the odds of identifying these components as present were more than three times higher in the optometrist group than in the ophthalmologist group [OR 3.46, 95% CI 2.09 to 5.71 (p < 0.001), and OR 3.10, 95% CI 1.58 to 6.08 (p < 0.001), respectively]. SRF was also reported significantly more often by optometrists than by ophthalmologists (OR 1.73, 95% CI 1.21 to 2.48; p = 0.002). The difference between the groups was of borderline statistical significance for blood (OR 1.56, 95% CI 1.00 to 2.44; p = 0.048). The differences between the groups for IRC and PED were not statistically significant [OR 1.00, 95% CI 0.61 to 1.65 (p = 0.985), and OR 0.91, 95% CI 0.47 to 1.79 (p = 0.786), respectively].
Secondary outcomes | Ophthalmologists (n = 48) | Optometrists (n = 48) | OR (95% CI) | p-value | ||
---|---|---|---|---|---|---|
n/N | % | n/N | % | |||
Is there SRF? | 515/2016 | 25.5 | 627/2016 | 31.1 | 1.73 (1.21 to 2.48) | 0.002 |
Has it increased since baseline? | 498/515 | 96.7 | 541/627 | 86.3 | – | – |
Are there IRC? | 799/2016 | 39.6 | 808/2016 | 40.1 | 1.00 (0.61 to 1.65) | 0.985 |
Has it increased since baseline? | 667/799 | 83.5 | 683/808 | 84.5 | – | – |
Is there DRT? | 482/2016 | 23.9 | 826/2016 | 41.0 | 3.46 (2.09 to 5.71) | < 0.001 |
Has it increased since baseline? | 381/482 | 79.0 | 597/826 | 72.3 | – | – |
Is there any PED? | 845/2016 | 41.9 | 842/2016 | 41.8 | 0.91 (0.47 to 1.79) | 0.786 |
Has it increased since baseline? | 311/845 | 36.8 | 392/842 | 46.6 | – | – |
Is there blood? | 150/2016 | 7.4 | 194/2016 | 9.6 | 1.56 (1.00 to 2.44) | 0.048 |
New or increased since baseline? | 126/150 | 84.0 | 146/194 | 75.3 | – | – |
Are there exudates? | 152/2016 | 7.5 | 380/2016 | 18.8 | 3.10 (1.58 to 6.08) | < 0.001 |
New or increased since baseline? | 38/152 | 25.0 | 87/380 | 22.9 | – | – |
Confidence ratings
The confidence ratings displayed in Table 10 show that ophthalmologists were clearly more confident in their decisions than optometrists. Ophthalmologists stated that they were very confident (5 on the rating scale) in their judgements for 1175 out of 2016 (58.3%) vignettes, whereas optometrists reported the same level of confidence in their judgements for only 575 out of 2016 (28.5%) vignettes (OR 0.15, 95% CI 0.07 to 0.32; p < 0.001). For both groups, a confidence rating of 5 resulted in a correct answer over 90% of the time, but there did not appear to be any clear relationship between confidence and correctness for lower confidence ratings, especially for optometrists (see Table 10).
Secondary outcomes | Ophthalmologists (n = 48) | Optometrists (n = 48) | OR (95% CI) | p-value | ||
---|---|---|---|---|---|---|
n/N (observations) | % | n/N (observations) | % | |||
Confidence rating | ||||||
1 | 7/2016 | 0.3 | 52/2016 | 2.6 | 0.15 (0.07 to 0.32)a | < 0.001 |
2 | 26/2016 | 1.3 | 140/2016 | 6.9 | ||
3 | 220/2016 | 10.9 | 496/2016 | 24.6 | ||
4 | 588/2016 | 29.2 | 753/2016 | 37.4 | ||
5 | 1175/2016 | 58.3 | 575/2016 | 28.5 | ||
Correct lesion classifications for each confidence ratingb | ||||||
1 | 3/7 | 42.9 | 42/52 | 80.8 | – | – |
2 | 21/26 | 80.8 | 114/140 | 81.4 | – | – |
3 | 147/220 | 66.8 | 362/496 | 73.0 | – | – |
4 | 474/588 | 80.6 | 634/753 | 84.2 | – | – |
5 | 1077/1175 | 91.7 | 550/575 | 95.7 | – | – |
Key vignette information
It was stated in the protocol that the effect of key vignette features on correct reactivation decisions would be assessed. For interpretability we modelled the effect of these features on the number of incorrect classifications.
The influence of vignette features was investigated using Poisson regression, adjusting for the reference standard classification. Interactions between these vignette characteristics and professional group were tested but were not retained, as they were not statistically significant at the 5% level. The outcome of this analysis can be seen in Figure 7: professional group, gender, cardiac history and age did not significantly influence the number of incorrect classifications. In contrast, the reference standard classification, smoking status and BCVA sum and difference were all statistically significant (p < 0.001, p = 0.005, p = 0.001 and p = 0.004, respectively). Vignettes of current smokers were more likely to be incorrectly classified than those of non-smokers, whereas no difference was found between ex-smoker and non-smoker vignettes [incidence rate ratio (IRR) 1.33, 95% CI 1.05 to 1.70, and IRR 0.91, 95% CI 0.76 to 1.10, respectively]. Vignettes with better BCVA (larger average BCVA over the two visits) were less likely to be incorrectly classified (IRR 0.996, 95% CI 0.993 to 0.998), while vignettes with a greater increase in BCVA from baseline to index visit were more likely to be incorrectly classified (IRR 1.017, 95% CI 1.005 to 1.028). Vignettes classified as reactivated by the reference standard were more likely to be incorrectly classified (IRR 3.16, 95% CI 2.62 to 3.81); this finding was in agreement with the raw data displayed in Figure 5 (noting that, in the two right-hand columns, classifications of quiescent and suspicious should be pooled for the total percentage correct).
Sensitivity analysis
For the primary outcome, a vignette classification was defined as ‘correct’ if both the reference standard and the participant classified the vignette as ‘reactivated’ or if both classified a vignette as ‘suspicious’/’quiescent’. A sensitivity analysis of the primary outcome was performed in which suspicious classifications were grouped with reactivated classifications instead of quiescent classifications. In this analysis, ophthalmologists correctly classified 1756 out of 2016 (87.1%) vignettes and optometrists correctly classified 1606 out of 2016 (79.7%). This difference was statistically significant (OR 0.51, 95% CI 0.38 to 0.67; p < 0.001), but the lower limit of the CI remained above the non-inferiority margin (0.298). Therefore, when correct classifications were redefined in this way, optometrists were statistically inferior but still clinically non-inferior to ophthalmologists.
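The OR reported here comes from a fitted model; a crude OR computed directly from the raw counts (ignoring clustering by participant and vignette, so it will not match the adjusted estimate of 0.51 exactly) can still illustrate the non-inferiority logic:

```python
# Raw counts from the sensitivity analysis above.
correct_ophth, total_ophth = 1756, 2016
correct_optom, total_optom = 1606, 2016

# Crude odds ratio (optometrists vs. ophthalmologists); differs from the
# adjusted estimate because it ignores clustering in the data.
odds_ophth = correct_ophth / (total_ophth - correct_ophth)
odds_optom = correct_optom / (total_optom - correct_optom)
crude_or = odds_optom / odds_ophth
print(round(crude_or, 2))  # 0.58

# Non-inferiority is judged against the prespecified margin of 0.298:
# optometrists are non-inferior if the lower 95% CI limit for the OR
# lies above the margin. Using the reported adjusted CI (0.38 to 0.67):
ci_lower = 0.38
margin = 0.298
print(ci_lower > margin)  # True -> clinically non-inferior
```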
Additional (post-hoc) analyses
Vignette classifications compared with referral recommendations
It was of interest to see how lesion classification decisions related to referral decisions. These should be congruent (as referral decisions should be based on classification decisions), although no such rules were imposed in this trial. Table 11 illustrates that, as expected, lesions classified as reactivated were usually paired with a recommendation for rapid referral to hospital (1679/1682, 99.8%) and lesions classified as quiescent were usually paired with a recommendation for review in 4 weeks (1526/1604, 95.1%). There was very little difference between the two professional groups in the extent to which lesion classifications and referral decisions were paired. Lesions classified as suspicious were often paired with a recommendation for review in 2 weeks (605/746, 81.1%), although rapid referral to hospital was not uncommon (131/746, 17.6%). Optometrists paired suspicious classifications with review in 2 weeks more often than ophthalmologists did (87.3% vs. 74.4%), whereas ophthalmologists paired suspicious classifications with rapid referral to hospital more often than optometrists did (24.8% vs. 10.9%).
Referral decisions by lesion classification | Ophthalmologists (n = 48) | Optometrists (n = 48) | Overall (n = 96) | |||
---|---|---|---|---|---|---|
n/N (observations) | % | n/N (observations) | % | n/N (observations) | % | |
Reactivated | ||||||
Refer to hospital | 771/772 | 99.9 | 908/910 | 99.8 | 1679/1682 | 99.8 |
Review in 2 weeks | 0/772 | 0.0 | 2/910 | 0.2 | 2/1682 | 0.1 |
Review in 4 weeks | 1/772 | 0.1 | 0/910 | 0.0 | 1/1682 | 0.1 |
Suspicious | ||||||
Refer to hospital | 89/359 | 24.8 | 42/387 | 10.9 | 131/746 | 17.6 |
Review in 2 weeks | 267/359 | 74.4 | 338/387 | 87.3 | 605/746 | 81.1 |
Review in 4 weeks | 3/359 | 0.8 | 7/387 | 1.8 | 10/746 | 1.3 |
Quiescent | ||||||
Refer to hospital | 1/885 | 0.1 | 1/719 | 0.1 | 2/1604 | 0.1 |
Review in 2 weeks | 44/885 | 5.0 | 32/719 | 4.5 | 76/1604 | 4.7 |
Review in 4 weeks | 840/885 | 94.9 | 686/719 | 95.4 | 1526/1604 | 95.1 |
Duration of vignette assessment
The assessment durations were calculated for each participant as the difference between the time an assessment was saved and the time the previous assessment was saved. Therefore, this information was not available for all 4032 assessments because many participants took breaks between assessments (see Chapter 2, Missing data). The median durations of vignette assessment for all main study assessments were 2 minutes 21 seconds (IQR 1 minute 39 seconds to 3 minutes 27 seconds; n = 1835) for ophthalmologists and 3 minutes 2 seconds (IQR 2 minutes 5 seconds to 4 minutes 57 seconds; n = 1758) for optometrists. On average, assessment time decreased as experience increased, especially in the optometrists group; for ophthalmologists and optometrists respectively, the median assessment durations were 2 minutes 44 seconds (IQR 2 minutes 3 seconds to 4 minutes 40 seconds; n = 45) and 5 minutes 23 seconds (IQR 3 minutes 18 seconds to 10 minutes 3 seconds; n = 41) for the second main study vignette, and 1 minute 55 seconds (IQR 1 minute 19 seconds to 2 minutes 49 seconds; n = 48) and 2 minutes 31 seconds (IQR 1 minute 52 seconds to 4 minutes 26 seconds; n = 48) for the 42nd (final) main study vignette. Figure 8 shows this relationship.
Among vignette assessments for which duration was not missing, there were 1570 out of 1835 (85.6%) correct responses by ophthalmologists and 1492 out of 1758 (84.9%) correct responses by optometrists. Figure 9 illustrates the relationship between assessment duration and the percentage of correct responses; broadly speaking, shorter assessment durations were more likely to result in correct lesion assessments than longer assessment durations. This relationship was similar for both professional groups.
Expert classifications for derivation of the reference standard
Each expert individually assessed the vignettes in order to develop the reference standard (see Chapter 2, Reference standard). The lesion classifications of the three experts were congruent for these assessments for 219 out of 288 (76.0%) vignettes (comprising 103 out of 219 classifications of reactivated and 116 out of 219 classifications of quiescent). The three experts then held a consensus meeting and jointly assessed the vignettes for which there was disagreement about the lesion classification. The presence or absence of lesion components in index images of these vignettes, and their change from baseline, were discussed in detail. Consensus lesion classifications were agreed which, together with the congruent classifications, made up the final reference standard. Experts did not attempt to reach consensus about specific lesion components. A few specific vignettes were reassessed by experts who felt that they had made errors about the lesion components in their original assessments. When carrying out these reassessments, data were collected only for assessments of the lesion components.
Table 12 shows the reference standard classification against each expert’s initial individual classification; 774 out of 864 (89.6%) of the individual classifications remained unchanged, with the rest (10.4%) being amended after group discussions. The responses that did not change are shaded green. The majority of the changed classifications involved an initial classification of suspicious by one expert.
Individual experts | Reference standard | Agreement (%) | ||
---|---|---|---|---|
Reactivated (n = 142) | Suspicious (n = 5) | Quiescent (n = 141) | ||
Expert 1 | ||||
Reactivated | 131 | 1 | 4 | 89.9 |
Suspicious | 6 | 4 | 13 | |
Quiescent | 5 | 0 | 124 | |
Expert 2 | ||||
Reactivated | 110 | 0 | 0 | 86.1 |
Suspicious | 14 | 0 | 3 | |
Quiescent | 18 | 5 | 138 | |
Expert 3 | ||||
Reactivated | 134 | 1 | 5 | 92.7 |
Suspicious | 3 | 3 | 6 | |
Quiescent | 5 | 1 | 130 |
Comparing lesion component classifications across experts
Although a reference standard was established for the overall lesion classification, no such standard was established for the lesion component classifications. Therefore, it was of interest to compare lesion component classifications across experts for the six lesion components. The experts could classify each lesion component as absent, present but not increased since baseline, or present and increased since baseline. The frequencies with which each expert identified different lesion components across all vignettes are shown in Table 13. Agreement between experts with respect to lesion components is shown in Table 14. Table 14 shows that agreement was best for blood and exudates; the three experts agreed about blood classification for 266 out of 288 (92.4%) vignettes and about exudate classification for 260 out of 288 (90.3%) vignettes. Table 13 shows that experts observed no blood or exudates on the colour images of > 90% of the vignettes. Agreement for SRF and IRC was also high, with total agreement for 259 out of 288 (89.9%) and 243 out of 288 (84.4%) of the vignettes, respectively (see Table 14). Interestingly, for both of these components, expert 2 disagreed with the other two experts slightly more often than experts 1 and 3 did with each other; Table 13 shows that expert 2 identified SRF to be present more often than experts 1 and 3, and identified IRC to be present less often than experts 1 and 3. Agreement between experts was lower for DRT and PED than for the other components, suggesting that classification of these components was more difficult or less clear-cut. In addition, Table 14 shows the agreement between the experts about the overall lesion classification, prior to any discussions to agree the reference standard.
Lesion component | Absent, n (%) | Present but not increased, n (%) | Present and increased since baseline, n (%) |
---|---|---|---|
SRF | |||
Expert 1 | 222 (77.1) | 0 (0.0) | 66 (22.9) |
Expert 2 | 197 (68.4) | 3 (1.0) | 88 (30.6) |
Expert 3 | 220 (76.4) | 0 (0.0) | 68 (23.6) |
IRC | |||
Expert 1 | 193 (67.0) | 9 (3.1) | 86 (29.9) |
Expert 2 | 206 (71.5) | 12 (4.2) | 70 (24.3) |
Expert 3 | 190 (66.0) | 6 (2.1) | 92 (31.9) |
DRT | |||
Expert 1 | 189 (65.6) | 6 (2.1) | 93 (32.3) |
Expert 2 | 237 (82.3) | 5 (1.7) | 46 (16.0) |
Expert 3 | 252 (87.5) | 6 (2.1) | 30 (10.4) |
PED | |||
Expert 1 | 129 (44.8) | 95 (33.0) | 64 (22.2) |
Expert 2 | 53 (18.4) | 185 (64.2) | 50 (17.4) |
Expert 3 | 93 (32.3) | 127 (44.1) | 68 (23.6) |
Blood | |||
Expert 1 | 267 (92.7) | 2 (0.7) | 19 (6.6) |
Expert 2 | 266 (92.4) | 2 (0.7) | 20 (6.9) |
Expert 3 | 261 (90.6) | 2 (0.7) | 25 (8.7) |
Exudates | |||
Expert 1 | 266 (92.4) | 13 (4.5) | 9 (3.1) |
Expert 2 | 279 (96.9) | 7 (2.4) | 2 (0.7) |
Expert 3 | 283 (98.3) | 1 (0.3) | 4 (1.4) |
Lesion component | All experts agree, n (%) | Experts 1 and 2 agree (expert 3 differs), n (%) | Experts 1 and 3 agree (expert 2 differs), n (%) | Experts 2 and 3 agree (expert 1 differs), n (%) | None agree, n (%) | Kappa statistic, n |
---|---|---|---|---|---|---|
Overall lesion classification | 219 (76.0) | 16 (5.6) | 31 (10.8) | 15 (5.2) | 7 (2.4) | 0.697 |
SRF | 259 (89.9) | 4 (1.4) | 21 (7.3) | 4 (1.4) | 0 (0.0) | 0.827 |
IRC | 243 (84.4) | 8 (2.8) | 22 (7.6) | 11 (3.8) | 4 (1.4) | 0.759 |
DRT | 189 (65.6) | 36 (12.5) | 17 (5.9) | 43 (14.9) | 3 (1.0) | 0.327 |
PED | 134 (46.5) | 23 (8.0) | 55 (19.1) | 67 (23.3) | 9 (3.1) | 0.420 |
Blood | 266 (92.4) | 7 (2.4) | 6 (2.1) | 9 (3.1) | 0 (0.0) | 0.660 |
Exudates | 260 (90.3) | 4 (1.4) | 4 (1.4) | 19 (6.6) | 1 (0.3) | 0.183 |
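The kappa statistics in Table 14 summarise chance-corrected agreement among the three raters. The report does not state which kappa variant was used; Fleiss’ kappa is the usual choice for three or more raters, and a minimal self-contained sketch (with small hypothetical rating tables, not the trial data) is:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories table of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Overall proportion of assignments falling in each category.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    # Per-subject observed agreement.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_subjects          # mean observed agreement
    p_e = sum(p * p for p in p_j)          # chance-expected agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 4 vignettes, 3 raters, 3 categories
# (reactivated / suspicious / quiescent). Perfect agreement -> kappa = 1.
perfect = [[3, 0, 0], [0, 3, 0], [0, 0, 3], [3, 0, 0]]
print(fleiss_kappa(perfect))  # 1.0

# One split decision lowers kappa below 1.
mixed = [[3, 0, 0], [0, 3, 0], [0, 0, 3], [2, 1, 0]]
print(round(fleiss_kappa(mixed), 3))  # 0.745
```

Note how a kappa value depends on the marginal category frequencies as well as raw agreement, which is why exudates can show 90.3% raw agreement but a kappa of only 0.183: almost all vignettes fall in the ‘absent’ category, so chance-expected agreement is very high.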
Participants’ views on the ECHoES trial training
This section includes information provided by all participants who completed the questionnaire on their opinions of the ECHoES trial training, regardless of whether or not they completed the study and were included in the analysis population. A total of 102 participants completed the questionnaire: 47 ophthalmologists (44 of whom passed training and became main study participants) and 55 optometrists (47 of whom passed training and became main study participants).
Overall, participants gave positive feedback about the training. In particular, both the ophthalmologists and the optometrists commented on the thorough nature of the webinars and how helpful the trial staff were, particularly the trial co-ordinator. Many optometrists were pleased to have taken part, finding the training to be ‘very useful’ and noting that the training had improved their confidence.
Well designed, a lot of thought and hard work to set and to run.
Optom262
Extremely useful and well structured.
Ophthalm131
I learnt a lot and am pleased I took the opportunity to take part.
Optom269
Table 15 shows that optometrists found the web application somewhat harder to use than ophthalmologists did. Of those who provided free-text responses, the majority (both ophthalmologists and optometrists) commented on the content of the training. Specifically, 10 participants felt that the quality of the OCT images was poor. Eight participants (five of whom were optometrists) also commented that it would have been helpful to have a side-by-side comparison of baseline and index images on the same screen, as it was difficult and time-consuming to change back and forth between images. Three participants commented that they used two screens alongside each other to overcome this.
Some of the images were difficult to view as they were of poor quality.
Optom254
I found it time-consuming changing between the baseline and index scans. I speeded up once I solved this by having one set on my iPad® and one set on my monitor.
Optom268
Only issue I had was that we were unable to compare the baseline and index images side by side which made assessing more difficult.
Optom225
ECHoES trial training feedback | Ophthalmologists (n = 47) | Optometrists (n = 55) |
---|---|---|
What did you think of the training? | ||
Excellent, n/N (%) | 11/47 (23.4) | 4/55 (7.3) |
Very good, n/N (%) | 17/47 (36.2) | 17/55 (30.9) |
Good, n/N (%) | 15/47 (31.9) | 18/55 (32.7) |
Fair, n/N (%) | 4/47 (8.5) | 11/55 (20.0) |
Poor, n/N (%) | 0/47 (0.0) | 5/55 (9.1) |
Very poor, n/N (%) | 0/47 (0.0) | 0/55 (0.0) |
How easy did you find the web application to use? | ||
Very easy, n/N (%) | 23/47 (48.9) | 22/55 (40.0) |
Easy, n/N (%) | 20/47 (42.6) | 23/55 (41.8) |
Neutral, n/N (%) | 2/47 (4.3) | 7/55 (12.7) |
Difficult, n/N (%) | 2/47 (4.3) | 2/55 (3.6) |
Very difficult, n/N (%) | 0/47 (0.0) | 1/55 (1.8) |
Was the training sufficient? | ||
Completely sufficient, n/N (%) | 33/47 (70.2) | 6/55 (10.9) |
Additional training required, n/N (%) | 14/47 (29.8) | 43/55 (78.2) |
Completely different training required, n/N (%) | 0/47 (0.0) | 6/55 (10.9) |
Did you revisit webinars? | ||
Did not revisit, n/N (%) | 24/47 (51.1) | 2/55 (3.6) |
Did revisit, n/N (%) | 23/47 (48.9) | 53/55 (96.4) |
If revisited, how long for? (hours), median (IQR) | 1 (1–2) | 3 (2–3)
Did you use other resources? | ||
Did not use other resources, n/N (%) | 42/47 (89.4) | 18/55 (32.7) |
Did use other resources, n/N (%) | 5/47 (10.6) | 37/55 (67.3) |
If other resources used, how long for (hours)?, median (IQR) | 3 (1–3) | 3 (2–4) |
Table 15 shows that 78% of optometrists felt that additional training may be required, compared with only 30% of ophthalmologists. One ophthalmologist and five optometrists elaborated on this, stating that they had found the content of the webinars to be confusing (particularly in terms of the questions about lesion components), and one ophthalmologist and three optometrists said they had found the training challenging.
Uncertainty about the some of the decisions I was making – not good for the nerves!
Optom256
Questions sometimes were confusing.
Ophthalm102
It was really challenging . . . you really had to think.
Ophthalm107
Years of ophthalmic experience did not appear to influence how well the training was received; ophthalmologist participants who felt that additional training was required had a median of 9.6 years of experience (IQR 4.6–15.3 years), whereas those who felt the training was completely sufficient had a median of 11.0 years of experience (IQR 5.6–16.1 years). Almost all optometrists (96%) stated that they had revisited the webinar content, compared with half of the ophthalmologists (49%; see Table 15). On average, the optometrists also spent over 1 hour longer revisiting the material than the ophthalmologists. Only 11% of ophthalmologists said that they had used other resources, compared with 67% of optometrists. Participants who used other resources most commonly consulted websites (25%) or a textbook (20%), had discussions with colleagues (6%) or looked through previous conference notes (3%). With respect to textbooks, the majority had used Clinical Ophthalmology by Kanski and Bowling.27
Chapter 4 Results: health economics (objective 4)
Resource use and unit costs
Tables 16–22 present the main results of the analysis of resource use and costs. In summary, Tables 16–19 present the resource-use results; Table 20 provides most of the unit costs attached to this resource use, and staff unit costs are presented in Table 21. The resource-use and unit cost information are combined in Table 22, which reports the cost of a monitoring review performed by community optometrists.
Resource item | Number of responses | Mean number (SD) |
---|---|---|
Building | ||
Approximate size (floor space, m2) | 27 | 173.10 (266.37) |
Approximate size (number of rooms) | 40 | 3.5 (2.20) |
Equipment for monitoring review | Number of items of equipment | Mean number per practice (SD) |
ETDRS visual acuity charts (e.g. Lighthouse Inc.): 4-m viewing distance required (with/without mirrors) | 40 | 0.575 (0.984) |
Projector that includes ETDRS chart | 39 | 0.46 (0.756) |
Retro-illuminated light box | 39 | 0.49 (1.022) |
Trial frame | 40 | 2.225 (1.230) |
Lens set | 40 | 2 (0.905) |
Light meter to measure luminance (e.g. Sper Scientific) | 40 | 0.1 (0.304) |
Slit lamp | 40 | 1.75 (0.776) |
CF camera | 39 | 0.72 (0.456) |
OCT acquisition system | 37 | 0.22 (0.417) |
OCT acquisition system with fundus photography included as a component | 38 | 0.342 (0.481) |
Computer | 39 | 4.92 (3.382) |
Computer networka | 40 | 0.63 (0.490) |
Printer | 40 | 2.73 (1.66) |
Role in monitoring review | Main member of staff performing task (%) | Other staff performing task (%) | Mean durationa in minutes (SD) | Number of responses |
---|---|---|---|---|
Taking patient history | Optometrist (100) | Pre-registration optometristb | 4 (1.50) | 40 |
Clinical examination: slit lamp biomicroscopy, anterior segment and macula | Optometrist (100) | Pre-registration optometristb | 4 (1.50) | 40 |
Visual acuity assessment | Optometrist (95) | Pre-registration optometrist, optical assistant (5%) | 8 (2.98) | 38 |
Administration of 1% tropicamide drops | Optometrist (92.5) | Pre-registration optometrist, optical assistant (7.5%) | 1 (0.37) | 37 |
CF photography (or equivalent CF image) | Optometrist (67.5) | Pre-registration optometrist, optical assistant, clerical/retailer staff, practice manager other administrative staff (32.5%) | 4 (1.38) | 27 |
Spectral domain OCT | Optometrist (67.5) | Pre-registration optometrist, optical assistant, clerical/retailer staff, practice manager other administrative staff (32.5%) | 4 (1.35) | 27 |
Final assessment | Optometrist (100) | Not applicable | 5 (1.88) | 40 |
Update | Optometrist (77.5) | Pre-registration optometrist, optical assistant, clerical/retailer staff, practice manager other administrative staff (22.5%) | 2 (0.73) | 31 |
Booking appointments | Optometrist (2.5) | Pre-registration optometrist, optical assistant, clerical/retailer staff, practice manager other administrative staff (97.5%) | 1 (not applicable) | 1 |
Time spent | Number of optometrists revisiting webinars (% of observations) | Number of optometrists consulting other resources (% of observations) |
---|---|---|
Up to 30 minutes | 4 (8) | 4 (11) |
30 minutes to 1 hour | 16 (30) | 13 (34) |
1–2 hours | 21 (40) | 10 (26) |
2–4 hours | 9 (17) | 8 (21) |
> 4 hours | 3 (6) | 3 (8) |
Number of observations | 53 | 38 |
Time spent | Number of ophthalmologists revisiting webinars (% of observations) | Number of ophthalmologists consulting other resources (% of observations) |
---|---|---|
Up to 30 minutes | 13 (57) | 3 (38) |
30 minutes to 1 hour | 5 (22) | n/a |
1–2 hours | 4 (17) | 4 (50) |
2–4 hours | n/a | n/a |
> 4 hours | 1 (4) | 1 (12) |
Number of observations | 23 | 8 |
Unit cost (£) | |||
---|---|---|---|
Item | Unit cost (£) | Source | Notes |
ETDRS visual acuity charts – 4-m viewing distance required | 83.00 | Expert’s opinion,a recommended website: http://sussexvision.co.uk/index.php/distance-tests/logmar-charts/4-metre-viewing/logmar-4m-etdrs-chart-r-original.html (accessed 30 September 2014) | Sussex Vision item. The ECHoES trial protocol requires three of them: one for doing the refraction, one for the right eye and one for the left eye |
Projector that includes ETDRS chart | 2850.00 | Personal communication with Topcon (2014)b | Model: 1240263 CC-100XP LED with remote control |
Retro-illuminated light box | 878.00 | Expert’s opinion,a recommended website: http://sussexvision.co.uk/index.php/distance-tests/logmar-test-types/precision-vision-logmar-cabinet.html (accessed 30 September 2014) | Sussex Vision item |
Bulb for retro-illuminated light box | 15.00 | Expert’s opinion,a recommended website: http://sussexvision.co.uk/index.php/distance-tests/logmar-test-types/tube-for-sdt-396-lpv-cabinet-without-diffuser.html (accessed 30 September 2014) | Sussex Vision item. Two bulbs: bulb needs to be replaced every 2 years for each chart |
Trial frame | 375.00 | Expert’s opinion,a recommended website: www.opticalmarketplace.co.uk/new-equipment/optical-equipment/trial-frames/omp1559/oculus-ub4-trial-frame/ (accessed 30 September 2014) | Optical Marketplace item |
Lens set | 475.00 | Expert’s opinion,a recommended website: www.opticalmarketplace.co.uk/new-equipment/optical-equipment/trial-lens-sets/omp1464/quality-trial-lens-set/ (accessed 30 September 2014) | Optical Marketplace item |
Light meter to measure luminance (e.g. Sper Scientific) | 120.00 | Expert’s opinion,a recommended website: www.coleparmer.co.uk/Product/Sper_Scientific_840006_Light_Meter_with_Analog_Output/UY-01588-24?referred_id=3482&gclid=CKX6m8PjiMECFUn3wgodxJYA6Q (accessed 30 September 2014) | Cole Parmer item |
Slit lamp | 4000.00 | Expert’s opiniona | Estimate |
Computer | 549.00 | www.dell.com/uk/business/p/desktops-n-workstations.aspx?c=uk&l=en&s=bsd&∼ck=mn (accessed 1 October 2014) | Dell desktop OptiPlex 9020 |
Computer network | 1000.00 | Expert’s opinionc confirming estimates at www.itdonut.co.uk/it/communications/networking (accessed 29 September 2014) | Guide price to build a simple network of up to 10 computers |
Printer | 155.00 | http://accessories.euro.dell.com/sna/sna.aspx?c=uk&cs=ukdhs1&l=en&s=dhs&∼topic=printer_shopall_lasers (accessed 1 October 2014) | Dell C1660w colour printer |
Eye drops, tropicamide 1% | 0.50 | British National Formulary 28 | Single use – net price 20 × 0.5 ml = £10.00 |
Consultant | 139.00 | Unit Costs of Health and Social Care 2013 29 | PSSRU table 15.5; consultant: medical. Cost including qualifications |
Ranibizumab (dose of 0.5 mg) | 742.00 | British National Formulary 28 | – |
Aflibercept | 816.00 | British National Formulary 28 | – |
Bevacizumab (dose of 1.25 mg) | 49.00 | Dakin et al. (2014)25 | Price typically charged by the not-for-profit NHS provider used in the IVAN trial (£49/prefilled syringe) |
Ratio | |||
Type of ratio | Ratio | Source | Notes |
Ratio ‘salary oncost/salary’ | 0.234 | Unit Costs of Health and Social Care 2013 29 | PSSRU table 9.1; community physiotherapist. Ratio applied to salary as reported by participant to the Health Economics Questionnaire |
Ratio ‘qualification/salary’ | 0.237 | Unit Costs of Health and Social Care 2013 29 | PSSRU table 9.1; community physiotherapist. Ratio applied to salary as reported by participant to the Health Economics Questionnaire |
Ratio ‘overheads/salary’ | 0.756 | Unit Costs of Health and Social Care 2013 29 | PSSRU table 9.1; community physiotherapist. Ratio applied to salary as reported by participant to the Health Economics Questionnaire |
Ratio ‘capital overheads/salary’ | 0.093 | Unit Costs of Health and Social Care 2013 29 | PSSRU table 9.1; community physiotherapist. Ratio applied to salary as reported by participant to the Health Economics Questionnaire |
Salary band | Optometrist | Pre-registration optometrist | Optical assistant | Clerical/retailer staff | Practice manager | Other administrative staff |
---|---|---|---|---|---|---|
Less than £20,000 | 4 | 5 | 24 | 14 | 3 | 10 |
£20,000–29,999 | 2 | n/a | 4 | 2 | 5 | 2 |
£30,000–39,999 | 9 | n/a | n/a | n/a | 6 | n/a |
£40,000–49,999 | 9 | n/a | n/a | n/a | n/a | n/a |
£50,000–59,999 | 10 | n/a | n/a | n/a | n/a | n/a |
£60,000–69,999 | 4 | n/a | n/a | n/a | n/a | n/a |
£70,000–79,999 | 1 | n/a | n/a | n/a | n/a | n/a |
£80,000 per year or more | 1 | n/a | n/a | n/a | n/a | n/a |
Observations | 40 | 5 | 28 | 16 | 14 | 12 |
Cost items | Mean cost, £ (SD) |
---|---|
Equipment | 22.99a (5.552) |
Refurbishment/building/rent | 0.05 (0.147) |
Staff labour | 27.26 (7.317) |
Preparation and delivery of webinar training | 0.13 (0.101) |
Optometrist’s time on training | 0.89 (1.080) |
Eye drops, tropicamide 1% | 0.50 (n/a) |
Total cost per monitoring review | 51.82 (8.153) |
More specifically, Table 16 shows the mean numbers of equipment items which the community optometrists stated that they already had in their practices when they completed the online resource questionnaire. Although most practices had a CF camera, fewer than half had a projector which includes an ETDRS (Early Treatment of Diabetic Retinopathy Study) chart or a retro-illuminated light box.
The respondents reported an average floor space of 173 m2 and an average of 3.5 rooms (SD 2.20 rooms) per practice. Of the 39 community optometrists who replied to the question about the need to make modifications to their premises in order to assess nAMD patients, just over 50% said that they would need to do so (mean 0.54, SD 0.505).
In terms of staffing resources, Table 17 presents a breakdown of activities for the monitoring review, the staff members who would perform the various tasks and the average predicted time for the tasks. The optometrists said that they would always take a patient history, carry out a clinical examination and make the final assessment, and that they would be largely responsible for other activities such as undertaking OCTs; pre-registration optometrists and other support staff could help in some of these other activities. The optometrists stated that they expected to book appointments for patients only rarely, as this activity was mainly done by clerical/administrative staff.
Tables 18 and 19 present information on the number of times that optometrists and ophthalmologists revisited the webinars and consulted other resources. As already described (see Chapter 3, Participants’ views on ECHoES trial training), the ophthalmologists were less likely to revisit the webinars or seek out other sources of information, which might be predicted given that they have more experience than optometrists in caring for nAMD patients.
The medical retina experts spent 30 hours preparing and delivering the webinar training for the ECHoES trial (i.e. 15 hours for preparing the webinars, 5 hours for agreeing on the content of webinars and 10 hours for delivering the webinars to the participants).
Table 20 provides a summary of the unit costs which were attached to the resource-use information above.
In terms of unit cost information on salaries for staff working in community optometrist practice, Table 21 provides a breakdown of the salary bands for different types of staff.
Table 22 shows the combination of resource-use and unit cost information that produces the cost for each of the cost categories. The table also shows the sum of these categories, giving a total average cost per optometrist-led monitoring review of £51.82 (SD £8.153). This compares with an average cost of £75.60 (SD £44.31) for ophthalmologists performing a monitoring review, as costed in the IVAN trial. The cost of purchasing and setting up equipment and facilities accounted for around £2 per patient, with staff labour accounting for most of the remainder of the cost of ophthalmologist-based monitoring reviews.
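The per-review total in Table 22 is simply the sum of the component category means. As a quick arithmetic check (component means are taken from the table; the SDs are not propagated here), this can be sketched as:

```python
# Components of the mean cost per optometrist-led monitoring review (Table 22).
# Values are the reported category means in GBP; this checks that they sum
# to the reported total of £51.82.
components = {
    "equipment": 22.99,
    "refurbishment/building/rent": 0.05,
    "staff labour": 27.26,
    "webinar preparation and delivery": 0.13,
    "optometrist's time on training": 0.89,
    "eye drops (tropicamide 1%)": 0.50,
}

total = round(sum(components.values()), 2)
print(f"Total cost per monitoring review: £{total:.2f}")  # £51.82
```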
Cost-effectiveness of monitoring by optometrists compared with ophthalmologists
The care pathway cost table (see Table 23) shows the cost and effect information combined, highlighting the impact that incorrect assessments could have on costs. The pathway includes the cost of a monitoring consultation itself and also downstream costs (e.g. ranibizumab injections and follow-up visits based on the care cost pathway decision tree).
Table 23 shows that, of the vignettes that experts rated as reactivated, the optometrists made more correct decisions than the ophthalmologists (39.43% compared with 36.51% of all vignettes) and were less likely to misclassify reactivated lesions as suspicious or quiescent. When an optometrist correctly judges a lesion to be reactivated, the patient must be referred for another monitoring review at the hospital eye clinic before actually receiving treatment (an anti-VEGF injection). There were 795 vignettes (39.43%) assessed in this category by the optometrists and only 736 (36.51%) by the ophthalmologists. There are two points to note here. First, if the optometrist's diagnosis is correct, then in the model of care delineated in this study the additional ophthalmologist-led monitoring review represents an unnecessary cost. However, if the optometrist is incorrect, the further ophthalmologist-led monitoring review represents a cost saving, because our model of care assumes that the ophthalmologist detects the mistake and the patient therefore avoids unnecessary treatment. Second, and conversely, when an ophthalmologist judges that a patient requires treatment there is no second monitoring review; the patient simply continues to have their treatment, directly incurring costs. This represents a cost saving relative to the optometrist care model if the ophthalmologist's diagnosis is correct (the cost of a second monitoring review is avoided), but an additional unnecessary cost if the patient undergoes unnecessary treatment. Of the vignettes that experts rated as suspicious, 10 (0.50%) were incorrectly classed as reactivated by the optometrists, compared with only one (0.05%) by the ophthalmologists.
In this case, the second monitoring review in the optometrists' pathway is actually helpful, as it can prevent unnecessary treatment. If we assume that the ophthalmologist would go on to provide treatment when the patient did not need it, this would cost £877, compared with only £118 for an optometrist making an incorrect decision here (the cost of the optometrist review plus the cost of an ophthalmologist review).
Experts’ (true) lesion status | Participants’ decision | Observations, n (% of total) | Pathway cost (£),a mean (SD)
---|---|---|---
Optometrists’ decisions | | |
Reactivated | Reactivated | 795 (39.43) | 935.40 (45.50) |
Reactivated | Suspicious | 142 (7.04) | 103.61 (18.51) |
Reactivated | Quiescent | 57 (2.83) | 51.29 (9.08) |
Suspicious | Reactivated | 10 (0.50) | 118.12 (16.39) |
Suspicious | Suspicious | 11 (0.55) | 57.04 (9.10) |
Suspicious | Quiescent | 14 (0.69) | 52.96 (9.37) |
Quiescent | Reactivated | 105 (5.21) | 117.14 (32.61) |
Quiescent | Suspicious | 234 (11.61) | 78.31 (11.53) |
Quiescent | Quiescent | 648 (32.14) | 51.98 (8.23) |
Total | – | 2016 (100) | – |
Ophthalmologists’ decisions | | |
Reactivated | Reactivated | 736 (36.51) | 882.67 (46.41) |
Reactivated | Suspicious | 196 (9.72) | 153.18 (92.25) |
Reactivated | Quiescent | 62 (3.08) | 77.01 (45.49) |
Suspicious | Reactivated | 1 (0.05) | 877.38 (n/a) |
Suspicious | Suspicious | 17 (0.84) | 68.84 (31.00)a |
Suspicious | Quiescent | 17 (0.84) | 60.57 (17.16)a |
Quiescent | Reactivated | 35 (1.73) | 882.29 (38.00) |
Quiescent | Suspicious | 146 (7.24) | 150.34 (95.19) |
Quiescent | Quiescent | 806 (39.98) | 75.28 (44.72) |
Total | – | 2016 (100) | – |
For the truly quiescent vignettes that the health professional incorrectly assesses as reactivated, the ophthalmologists go ahead with treatment, whereas the optometrists refer the patient to the HES, where the mistake is identified before treatment is given. However, although the cost consequences of incorrectly rating quiescent eyes as reactivated are much smaller for optometrists than for ophthalmologists, optometrists are three times more likely to make this error. For cases where participants correctly rate the vignette as suspicious, only the routine monitoring review is considered here, as both optometrists and ophthalmologists will recommend a subsequent routine check; costs will therefore be lower for optometrists than for ophthalmologists in this case, reflecting the cost differential between the two professionals. A similar pathway is implemented for ‘suspicious’ cases incorrectly rated as ‘quiescent’.
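The branching logic described above can be sketched as a simple function. The unit costs below are illustrative placeholders (the treatment-episode cost in particular is an assumption), not the exact per-vignette means in Table 23, which also reflect downstream follow-up care:

```python
# Illustrative sketch of the care-pathway logic for a single assessment.
# Unit costs are placeholder assumptions for illustration only; the report's
# pathway costs (Table 23) are means over vignettes and include downstream care.
C_OPT = 51.82    # optometrist monitoring review (Table 22)
C_OPH = 75.60    # ophthalmologist monitoring review (IVAN costing)
C_TREAT = 800.0  # assumed cost of a treatment episode (placeholder)

def pathway_cost(clinician: str, decision: str) -> float:
    """Cost consequences of a single retreatment decision.

    An optometrist who judges a lesion 'reactivated' refers the patient for a
    confirmatory ophthalmologist review before any treatment is given; an
    ophthalmologist who judges 'reactivated' proceeds straight to treatment.
    """
    review = C_OPT if clinician == "optometrist" else C_OPH
    if decision == "reactivated":
        if clinician == "optometrist":
            return review + C_OPH  # referral: second review, treatment deferred
        return review + C_TREAT    # ophthalmologist treats directly
    return review                  # 'suspicious'/'quiescent': routine recheck only
```

This makes explicit why a wrong ‘reactivated’ call is far cheaper for an optometrist: the confirmatory hospital review catches the error before treatment is given.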
Table 24 presents the base-case analysis of the cost-effectiveness of optometrists compared with ophthalmologists in performing monitoring reviews. The cost of a monitoring review is based on an average of the patient cost pathways in the previous table (unweighted according to the most likely pathways). The table shows that the mean care pathway cost per assessment is quite similar between groups, at £410.78 for optometrists and £397.33 for ophthalmologists, producing an incremental cost difference of £13.45 (95% CI –£17.96 to £44.85). The higher cost for optometrists may reflect the fact that they were more likely to incorrectly classify vignettes as reactivated, thereby incurring higher unnecessary costs for the health service.
Costs and effects | Optometrists (observations, n = 2016) | Ophthalmologists (observations, n = 2016) |
---|---|---|
Cost of a monitoring review (£), mean pathway cost (SD) | 410.78 (424.92) | 397.33 (387.46) |
Mean proportion of correct assessments (SD) | 0.844 (0.363) | 0.854 (0.353) |
Incremental cost (£) (95% CI) | 13.45 (–17.96 to 44.85) | |
Incremental benefit, proportion of correct assessments (95% CI) | –0.0099 (–0.045 to 0.025) | |
ICER, incremental cost per correct assessmenta | Dominated |
In terms of effectiveness, the proportion of correct treatment decisions is also quite similar, with around 85% of assessments correct in both groups; ophthalmologists made marginally more correct decisions, but this difference was not statistically significant. With a higher mean cost and a lower proportion of correct treatment decisions for optometrists (neither difference being statistically significant), Figure 10 shows that the strategy of optometrists performing monitoring reviews is dominated by ophthalmologist-led reviews, being both more costly and less effective. However, the differences are extremely small: optometrist-led reviews increase the total cost by only £13 per review (3% of the total cost of the care pathway) and result in only one more incorrect decision per 101 monitoring reviews conducted. Furthermore, there is substantial uncertainty around this finding. The CEAC in Figure 11 shows that the probability that it is cost-effective for optometrists to perform monitoring reviews is below 30% regardless of how much we are willing to pay per correct decision. If the NHS is willing to pay less than £600 per correct decision, which is likely to be the case, the probability that it is cost-effective to conduct monitoring by community optometrists is 14% at a willingness to pay of £200 and 8% at a willingness to pay of £600.
Thus, it appears that there is no willingness-to-pay level for which we can be 95% confident that the two ways of performing a monitoring review differ in value; that is, they differ in terms of cost-effectiveness.
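The dominance result can be reproduced from the Table 24 point estimates (means only; the confidence intervals in the table come from resampling the vignette data, which is not reproduced here):

```python
# Point estimates from Table 24 (means only).
cost_opt, cost_oph = 410.78, 397.33  # mean pathway cost per review, GBP
eff_opt, eff_oph = 0.844, 0.854      # proportion of correct assessments

inc_cost = round(cost_opt - cost_oph, 2)  # +13.45: optometrists cost more
inc_eff = round(eff_opt - eff_oph, 3)     # -0.01: optometrists slightly less accurate
# (the report quotes -0.0099, from unrounded proportions)

# 'Dominated': more costly and less effective, so no ICER is quoted.
dominated = inc_cost > 0 and inc_eff < 0
print(inc_cost, inc_eff, dominated)
```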
The CEAC in Figure 11 shows the probability that the optometrists are cost-effective compared with ophthalmologists over a range of willingness-to-pay thresholds. If a decision-maker is willing to pay £200 for an extra correct retreatment decision, the probability that optometrists are cost-effective is around 14%. This probability changes little across a broad range of willingness-to-pay thresholds.
Figure 10 presents the cost-effectiveness plane and graphs the point estimate of the ICER and two confidence ellipses, with the outer ellipse representing the 95% CI and the inner being the 85% confidence ellipse for our base-case analysis. The figure shows that the 95% CI is not definable and there is no willingness-to-pay threshold for which we can be 95% confident that the optometrist-led and the ophthalmologist-led monitoring reviews differ in value from each other. The widest definable Fieller interval is 85% and the green line shows the tangent to the 85% confidence ellipse.
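The construction of a CEAC can be illustrated with a normal-approximation sketch. The standard errors below are back-calculated from the Table 24 confidence intervals, and the correlation between incremental cost and effect is ignored, so this will not reproduce the report's bootstrap CEAC exactly; it only shows the mechanics of plotting P(incremental net monetary benefit > 0) against the willingness-to-pay threshold:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Point estimates and approximate SEs back-calculated from the 95% CIs in
# Table 24 (SE ~ CI width / (2 * 1.96)). Correlation between incremental cost
# and effect is ignored, so this is illustrative only.
d_cost, se_cost = 13.45, (44.85 + 17.96) / (2 * 1.96)
d_eff, se_eff = -0.0099, (0.025 + 0.045) / (2 * 1.96)

def p_cost_effective(wtp: float) -> float:
    """P(incremental net monetary benefit of optometrists > 0) at threshold wtp."""
    inmb = wtp * d_eff - d_cost
    se = math.hypot(wtp * se_eff, se_cost)
    return 1.0 - norm_cdf(-inmb / se)

for wtp in (0, 200, 600):
    print(f"WTP £{wtp}: P(cost-effective) = {p_cost_effective(wtp):.2f}")
```

Because the mean incremental net benefit is negative at every threshold, the curve stays well below 50% for all willingness-to-pay values, as in Figure 11.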
Sensitivity analyses
One-way sensitivity analyses investigated the impact of varying the way of delivering treatment for lesions assessed as reactivated, which is one of the main cost drivers in our analysis. In the base-case analysis it was assumed that treatment for an active lesion consisted of one ranibizumab injection given during an injection consultation. In three of our sensitivity analyses, one ranibizumab injection was replaced with alternative treatments to reflect new emerging practices across eye hospitals (sensitivity analyses 1 and 2) and to make a comparison with a much cheaper drug assessed in the IVAN trial (sensitivity analysis 3):
- Sensitivity analysis 1: all patients in whom treatment for a reactivated lesion was initiated were assumed to receive a course of three injections of ranibizumab at three injection consultations, with no additional monitoring reviews. The ‘mandatory’ three injections, at monthly intervals, matched the discontinuous treatment regimen administered in the IVAN trial (although in the IVAN trial monitoring continued thereafter).
- Sensitivity analysis 2: treatment was assumed to be given in the form of one aflibercept injection during an injection consultation.
- Sensitivity analysis 3: treatment consisted of one bevacizumab injection given during an injection consultation.
- Sensitivity analysis 4: only the cost of a monitoring review was considered, rather than the cost of the whole pathway.
Sensitivity analysis 1 (three ranibizumab injections and consultations) increased the costs of lesion care at the eye hospital because more treatment and consultations were required; once combined with data on the consequences of retreatment decisions in our care pathways model, optometrists remained dominated, although there was still very little difference in costs or effects between optometrists and ophthalmologists. As in our base-case analysis, there was no willingness-to-pay threshold for which we can be 95% confident that the two alternative ways of performing a monitoring review differ in value. A similar result was found for the sensitivity analysis that used aflibercept rather than ranibizumab, which was predictable given that aflibercept is more expensive than ranibizumab. When switching to the much cheaper bevacizumab (in place of ranibizumab), with a reduced cost of treatment, again very little difference in costs and effects was found between the two groups, with optometrist-led reviews again being more costly and less effective than ophthalmologist-led monitoring reviews. However, the fourth sensitivity analysis, which considered only the cost of the monitoring review rather than the total care pathway, found that community optometrist-led care cost £23.70 less per consultation than ophthalmologist-led care (the difference between the crude consultation costs; p < 0.001). As a result, optometrist-led care was less effective but significantly less costly than ophthalmologist-led care. Ophthalmologist-led care cost an additional £2389 per additional correct treatment decision compared with optometrist-led care. Although the maximum the NHS is willing to pay for a correct retreatment decision is unknown, it is unlikely to be this high, since this figure is higher than the cost of simply treating all patients without assessing whether or not the eye is quiescent.
At ceiling ratios of £600 or lower, we can be > 95% confident that optometrists are a cost-effective option compared with ophthalmologists in this fourth sensitivity analysis. This final sensitivity analysis highlights how important it was to have built a simple decision model to explore the consequences that follow the initial monitoring review, rather than using information from the initial review alone. However, it also suggests that the conclusions may be sensitive to the assumptions within the decision tree. The economic evaluation section in Appendix 3 presents the results of the sensitivity analyses with respect to the care pathway costs, the cost-effectiveness analyses and the respective cost-effectiveness planes for the four sensitivity analyses.
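The ICER quoted for sensitivity analysis 4 can be approximately reconstructed from the rounded figures above (the report's £2389 was presumably computed from unrounded data, so the figure here differs slightly):

```python
# Sensitivity analysis 4: consultation costs only.
delta_cost = 75.60 - 51.90  # extra consultation cost of ophthalmologist-led care, GBP
delta_eff = 0.0099          # extra proportion of correct decisions (Table 24)

# ICER of ophthalmologist- vs optometrist-led care.
icer = delta_cost / delta_eff
print(f"ICER: £{icer:,.0f} per additional correct decision")  # ~£2,394
```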
Budget impact
Data on the prevalence and incidence of nAMD (Table 25) were used to calculate that around 219,000 patients currently attend VEGF clinics in England in any given month, of whom 52,000 (19%) have bilateral disease (Table 26). In our budget impact calculations, we assumed that patients would be referred from the HES to community optometrists for monitoring if they did not meet the IVAN retreatment criteria 1 month after finishing a course of anti-VEGF treatment. We assumed that patients with bilateral disease would not be referred from the HES to community optometrists for monitoring if they met retreatment criteria in either eye in the same month; the probability of meeting retreatment criteria was assumed to be independent in the two eyes.
Data | Value | Source |
---|---|---|
Population of adults aged 50 years and over in England | 19,323,400 | Population of England aged 50 years and over – Office for National Statistics30 |
Prevalence of diagnosed nAMD among over-50-year-olds | 1.2% | The estimated prevalence and incidence of late stage age-related macular degeneration in the UK – Owen et al.31 |
Percentage of diagnosed prevalent nAMD cases starting anti-VEGF treatment | 100% | Assumption |
Incidence of diagnosed nAMD among over-50-year-olds | 0.11% | NICE TA294 macular degeneration (wet age-related) – aflibercept: costing template32 |
Percentage of people eligible for treatment | 80% | NICE TA294 macular degeneration (wet age-related) – aflibercept: costing template32 |
Percentage of prevalent cases with bilateral nAMD | 19% | NICE TA294 macular degeneration (wet age-related) – aflibercept: costing template32 |
Percentage of patients seen in clinic who finish a course of treatment in month n | 18% | In the IVAN trial, proportion of months in which patients in the discontinuous arm completed a 3-month cycle |
Of those eyes completing a cycle of treatment at visit n, what percentage remain quiescent at visit n + 1 (the month after the end of treatment) | 62% | In the IVAN trial, proportion of study eyes in the discontinuous group becoming quiescent at visit n + 1 after completing a 3-month cycle of treatment at visit n, by month stratum: weighted average of months 3–24 |
Median duration (in months) of quiescence for those eyes achieving quiescence after the end of treatment, i.e. the number of months for which we expect a quiescent eye to remain quiescent | 2.0 | Based on median time to retreatment in the IVAN trial from each time a patient becomes treatment free. Medians were calculated for each 3-month period and the median of the 3-month periods was used. Converted from days to months assuming each month has 30 days |
Mean cost of monitoring consultation conducted by community optometrist | £51.90 | The ECHoES trial costing analysis (see Table 22) |
Mean cost of monitoring consultation conducted by hospital ophthalmologist | £75.60 | The IVAN trial costing analysis |
Total cost of monitoring consultation and downstream costs (e.g. injections and follow-up visits): community optometrist | £410.78 | The ECHoES trial economic evaluation (see Table 24) |
Total cost of monitoring consultation and downstream costs (e.g. injections and follow-up visits): hospital ophthalmologist | £397.33 | The ECHoES trial economic evaluation (see Table 24) |
Estimated patient numbers in England per year | |
---|---|
Total number of patients attending clinic | |
All patients | 219,514 |
Unilateral nAMD | 167,263 |
Bilateral nAMD | 52,250 |
Total number of patients quiescent in both eyes who become eligible for community optometry in any given month | |
All patients | 21,949 |
Unilateral nAMD | 18,366 |
Bilateral nAMD | 3583 |
Total number of monitoring visits that could be transferred from hospital to community per year | |
All patients | 535,548 |
Unilateral nAMD | 448,128 |
Bilateral nAMD | 87,420 |
Budget impact – initial consultations only | |
Total cost of community optometrist monitoring consultations in England | £27,794,931 |
Total cost of hospital ophthalmologist monitoring consultations in England | £40,487,414 |
Budget impact (i.e. incremental cost of using community optometry rather than hospital monitoring) | –£12,692,483 |
Budget impact – including downstream costs | |
Total cost of community optometrist monitoring consultations and downstream costs (e.g. injections and follow-up visits) in England | £219,992,329 |
Total cost of hospital ophthalmologist monitoring consultations and downstream costs (e.g. injections and follow-up visits) in England | £212,789,211 |
Budget impact (i.e. incremental cost of using community optometry rather than hospital monitoring) | £7,203,118 |
In the IVAN trial, 18% of study eyes in the discontinuous treatment arms completed a 3-month cycle of treatment each month, of which 38% still met retreatment criteria at the next visit. The probability of meeting retreatment criteria and the duration of quiescence were constant over the trial period; therefore, no distinction was made between the first and subsequent years of treatment. Applying these figures to the national patient numbers and allowing for patients with bilateral disease suggests that around 21,949 of the total of 219,514 patients currently attending clinics may be eligible for referral to community optometry review each month (see Table 26), assuming that such referrals would occur in all patients who are quiescent according to the IVAN trial criteria 1 month after their last anti-VEGF injection. In the IVAN trial, quiescence lasted a median of 61 days; allowing for this duration and extrapolating monthly figures to 12 months suggests that 535,548 monitoring reviews could be done by community optometrists in England (see Table 26) each year.
Applying the results of the ECHoES trial costing analysis suggests that the initial monitoring consultation is £23.70 cheaper if performed by community optometrists rather than by hospital ophthalmologist-led teams. Scaling this up across the 535,548 visits that could be transferred to community optometrists suggests that initial savings of £12.7M could be made for the NHS in England. However, this figure takes no account of the increased numbers of second monitoring reviews or additional intravitreal injections that result from optometrist judgements. If we allow for the costs accrued from the entire pathway (see Table 24), optometrist care is £13.45 more costly and therefore referring patients to community optometry review will cost an additional £7.2M across England.
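The headline budget impact figures follow from the rounded inputs in Tables 25 and 26; small discrepancies against the report's totals arise because the report used unrounded unit costs:

```python
# Budget impact arithmetic using the rounded inputs from Tables 25 and 26.
visits = 535_548                     # monitoring visits transferable per year
saving_per_visit = 75.60 - 51.90     # consultation-only cost difference, GBP
extra_per_pathway = 410.78 - 397.33  # whole-pathway cost difference, GBP

initial_saving = visits * saving_per_visit     # ~£12.7M saved (consultations only)
downstream_extra = visits * extra_per_pathway  # ~£7.2M additional (whole pathway)

print(f"Consultation-only saving: £{initial_saving:,.0f}")
print(f"Whole-pathway additional cost: £{downstream_extra:,.0f}")
```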
Chapter 5 Results: views of patients and health professionals about the shared care model (objective 5)
Participants in focus groups and interviews
Health professionals
The focus groups lasted a mean of 110 minutes, ranging from 99 to 121 minutes. The interviews lasted a mean of 48 minutes, ranging from 31 to 61 minutes. In total, 24 health professionals were recruited. This comprised eight optometrists, six ophthalmologists, two public health representatives, six NHS commissioners and two clinical eye care advisors to their local CCGs. Of these, 12 were women and 12 were men. Participants had a mean age of 39 years (range 31–64 years) and stated that they had been in their profession for a mean of 21 years (range 4–40 years). Although none had participated in shared care for nAMD, 11 participants had experience of shared care schemes for other conditions such as diabetic retinopathy screening and ocular hypertension monitoring. Table 27 shows participants’ professional background. Years in profession and location are not presented for the ‘other’ health professionals (i.e. those other than ophthalmologists and optometrists) to protect their anonymity given their unique roles. However, these participants’ locations were distributed evenly throughout England and they had been in their profession for an average of 20 years (range 8–37 years).
Participanta | Role(s) | Years in professionb | Location |
---|---|---|---|
Focus group 1 | |||
Optom1 | Optometrist | 37 | South West England |
Optom2 | Optometrist | 28 | South East England |
Optom3 | Optometrist | 29 | North West England |
Optom4 | Optometrist | 37 | West Midlands |
Optom5 | Optometrist | 15 | South West England |
Optom6 | Optometrist | 20 | South West England |
Optom7 | Optometrist | 36 | South West England |
Optom8 | Optometrist | 40 | West Midlands |
Focus group 2 | |||
Ophthalm1 | Ophthalmologist | 20 | North West England |
Ophthalm2 | Ophthalmologist | 20 | London |
Ophthalm3 | Ophthalmologist | 4 | Dundee |
Ophthalm4 | Ophthalmologist | 7 | East Midlands |
Ophthalm5 | Ophthalmologist | 17 | North West England |
Ophthalm6 | Ophthalmologist | 3 | South West England |
Interviews | |||
CA1 | Optometrist, clinical advisor to local CCG | – | – |
CA2 | Optometrist, clinical advisor to local CCG | – | – |
Comm1 | Commissioner, pharmacist | – | – |
Comm2 | Commissioner, GP | – | – |
Comm3 | Commissioner, GP | – | – |
Comm4 | Commissioner, GP | – | – |
Comm5 | Commissioner, pharmacist | – | – |
Comm6 | Commissioner, optometrist | – | – |
PH1 | Member of eye health local professional network, optometrist | – | – |
PH2 | Optical advisor for eye charity, optometrist | – | – |
Service users
The focus groups lasted a mean of 71 minutes, ranging from 65 to 90 minutes. Three focus groups were conducted with 23 participants in total, with seven or eight participants in each group. The sample consisted of 15 women (65%) and eight men (35%), all of whom described themselves as white British. The sampling strategy intended to recruit individuals from a mix of ethnicities, but all those at the support groups who were willing to be contacted were white British. Participants had a mean age of 82 years (range 72–93 years). All had nAMD, attended the same eye hospital in a major city and had been diagnosed an average of 5.9 years earlier (range 6 months to 20 years). Nine participants had active nAMD in one eye (39%), nine were stable in one eye (39%), four had active disease in both eyes (18%) and one was stable in both eyes (4%). Eight participants had dry AMD in their other eye (35%). The partners of two participants (who were also their carers) joined the focus groups but spoke very little; their comments were not included in the final analysis. Table 28 provides participants' demographic and health-related details.
Participanta | Gender | Age (years) | Location of support group | Information about condition | Time since diagnosis |
---|---|---|---|---|---|
Focus group 3 | |||||
Arthur | Male | 80 | City | Advanced active nAMD in one eye | 20 years |
Edith | Female | 83 | City | Inactive nAMD in one eye | 7 years |
Edward | Male | 78 | City | Active wet in one eye, dry in other | 10 years |
Elizabeth | Female | 81 | City | Inactive wet in one eye, dry in other | 12 years |
Harriett | Female | 86 | City | Inactive nAMD in one eye, dry in other | 2.5 years |
Kath | Female | 77 | City | Inactive nAMD in both eyes | 15 years
Ruth | Female | 87 | City | Active nAMD in one eye | 10 months |
Tom | Male | 79 | City | Inactive wet in one eye, dry in other | 13 years |
Focus group 4 | |||||
Alice | Female | 79 | Large town | Inactive nAMD in one eye | 5 years |
Debbie | Female | 84 | Large town | Advanced active nAMD in one eye, dry in other | 4 years |
Julie | Female | 86 | Large town | Active nAMD in one eye, dry in other | 6 months |
Mandy | Female | 93 | Large town | Active nAMD in one eye | 6 months |
Maria | Female | 93 | Large town | Inactive nAMD in one eye | 20 years |
Pam | Female | 83 | Large town | Active nAMD in one eye, dry in other | 4 years |
Ralph | Male | 79 | Large town | Active nAMD in one eye | 7 years |
Robert | Male | 78 | Large town | Active nAMD in both eyes | 10 years |
Focus group 5 | |||||
George | Male | 85 | Rural village | Active nAMD in both eyes | 6 years |
Harry | Male | 72 | Rural village | Inactive nAMD in one eye | 1 year |
Henry | Male | 82 | Rural village | Active nAMD in one eye, dry in other | 4 years |
Olivia | Female | 84 | Rural village | Active nAMD in both eyes | 8 years |
Pat | Female | 78 | Rural village | Active nAMD in both eyes | 7 years |
Tracey | Female | 77 | Rural village | Inactive nAMD in one eye | 11 years |
Yvonne | Female | 76 | Rural village | Inactive nAMD in one eye | 1 year |
Results of focus groups and interviews
Overall, the majority of participants were extremely enthusiastic about the possibility of a shared care model being implemented for nAMD care. Thematic analysis of the focus group and interview data produced six key themes: ‘Current clinic capacity: Pushed to the limit’, ‘Potential for a more patient-centred model’, ‘Perceptions of optometrists’ competency’, ‘(Lack of) communication between optometrists and ophthalmologists’, ‘The cost of shared care’ and ‘The importance of specialist training’. The interpretation of themes and subthemes is supported by illustrative quotes. The codes used for the quotation sources can be found in Tables 27 and 28.
Current clinic capacity: pushed to the limit
Many health professionals stated that the number of repeat visits for patients was rising exponentially, which was attributed to an increase in the number of patients who were being diagnosed with nAMD and to new government guidelines for treatment.
The numbers going through the system are higher, and they need treatment for longer.
Comm3
Hospital clinics were felt to be ‘pushed to their limit’ [Optom7, Ophthalm3] and the ophthalmologists felt frustrated that their time was mostly spent on stable patients who did not require treatment.
You are taking the potential time from the ones that actually need the care. There is a limit of work that you can do. I mean you can’t go home at 8 every day.
Ophthalm2
Both the health professionals and service users described how patients would often have to wait for long periods of time for their appointment because of how busy the eye hospital clinics were:
Well, quite honestly when you go to the eye hospital, it always seems to be packed out left, right and centre.
Harry
Sometimes we have to wait a long time, but you know it can’t be helped.
Henry
It’s just generally the issue of they have had to wait a long time [. . .] You’ve got to think also there are diabetics among there that have to regulate their meals, and they have their set routine in terms of their meals and health. It’s the same with everybody, but perhaps diabetics most, because it affects them straight away. I’ve known diabetics that have gone into a hypo [hypoglycaemia] because they were waiting around. Because they had a 9 o’clock appointment and it got to 11. It’s just a real shame.
PH2
There was a sense that the current model would inevitably need to adapt to cope with this demand and agreement that optometrists monitoring in the community had the potential to reduce clinical workload.
It will help shift a lot of the workload out of the hospital environment where they are overrun with this and putting it into a more capable environment with local optometrists.
Optom6
[Monitoring in the community] will make less queues at the hospital. At the moment they’re chock-a-block with people.
Ruth
The optometrists, clinical advisors and commissioners also described how a shared care scheme represented a great opportunity to enhance optometrists’ professional roles by developing their skills.
It’s fantastic for the optometry profession, because it must give them much more exciting and interesting careers, and career progression, and variety within their work.
Comm3
Potential for a more patient-centred model
Most health professionals also felt that the current model of care was not appropriate for older patients with limited vision who had to regularly travel to the hospital.
The problem is that all the patients have got to go to the eye hospital all the time, which isn’t very patient-centric . . . It’s not very easy for people to get in once a month – which is obviously what Lucentis is about – for their assessment. Given that they are, almost by definition, elderly and with poor vision, it’s not an ideal centre for it.
Comm2
The service users found it stressful travelling to and from the hospital for care. Many stated that they were unable to drive because of their nAMD, which was described as a ‘massive blow’ to their independence (Pat). These service users reported difficulty seeing the bus numbers, and most needed to get multiple buses each way as there were no direct buses. Others were driven by family or friends, but described parking as ‘awful’ (Arthur).
It’s not just getting to the hospital. It’s all that time afterwards, if you’ve got to get the bus, it’s – in the winter, it’s even worse.
Ralph
One of the things that’s come out here is that everyone is, obviously, getting older. They’re stressed when they have to go out of the town because getting home when you’ve got . . .
Robert
Oh, it’s terrible.
Mandy
So if they have someone in the town who is an optician and deals with us, it’s only a short distance from home.
Robert
Monitoring in the community was described as a ‘wonderful’ idea (Elizabeth), particularly for those who lived further away from the hospital or older participants who had severe vision loss.
For me, living out of town in [small town], to get to an optician on the bus is easy, whereas it’s a day’s expedition to come into [city].
Tracey
Many rarely saw the same consultant or nurse at the hospital, and felt that staff were often impersonal as they were ‘so busy’. This was likened to ‘being on a conveyor belt’ (Pam).
If a doctor said, ‘Well, that’s alright’, that’s it. It’s reassurance. I think they’re trying to speed up time, and I know they’re very busy and they obviously look at the photographs and they can see everything, but for patients’ feel-good interest, I always like a doctor or someone to say, ‘That’s alright. You’re not doing too badly. Well, you’re in your eighties now.’ Just to talk to you properly.
George
I think one of the greatest things wrong in the eye hospital is they . . . they kick you out the door.
Arthur
In addition, most service users felt that they did not receive enough information about the status of their condition.
Just be told what’s happening –
Harriett
The only criticism I would have is to try to find out how you’re doing and whether you’re getting worse or getting better, or stable, because they’re all so busy.
Ruth
These participants were therefore enthusiastic about the potential for continuity of care, which they hoped would enable them to build up a relationship with their optometrist.
I think one of the things, I think you’ll agree, has come to light this morning, is basically that many aspects of the eye hospital, it’s so impersonal. I think that probably a system like you’re suggesting would probably add a personal touch to it and a more one-on-one situation . . . That’s the big thing, seeing the same person. Like I said, the personal touch . . . The relationship would build up.
Edward
Perceptions of optometrists’ competency
The optometrists in the focus groups, who acknowledged that they had a special interest in nAMD, were very positive about the possibility of shared care and felt that their profession was more than capable of monitoring in the community. This was also echoed by the commissioners, clinical advisors and public health representatives.
They’re [optometrists] really incredible, impressive professionals, with just a huge amount of experience at looking at eyes.
Comm3
However, several health professionals (from mixed professions) commented that ophthalmologists would resist shared care as they were not convinced of optometrists’ competence.
I think it’s the misconception that optometrists won’t do as good a job as secondary care. So I think that’ll be the biggest barrier.
Comm4
This was considered to be problematic for shared care as there was uncertainty whether or not ophthalmologists would ever truly relinquish responsibility for patients.
I would not want to close the door on them, ever. I’d still want them to contact me if they noticed any change.
Ophthalm1
Honestly, I think that clinicians aren’t always very good at letting go [. . .] It will be an issue.
Comm3
There was hesitation among ophthalmologists about whether or not optometrists were capable of monitoring nAMD. The ophthalmologists referred to how they frequently received incorrect referrals from optometrists. Furthermore, they highlighted the ways in which nAMD differed from other eye diseases for which shared care schemes existed.
When you work in ophthalmology for quite some time you see just the amount of work that comes to you from inappropriate referrals. Really they’re just doubling work up.
Ophthalm3
This is not like glaucoma where you notice pressure or you don’t feel OK and refer back to the hospital. This is something . . . it’s based on the scan and each patient is different. There’s only a few parameters for glaucoma, whereas here there are . . . it’s complex [. . .] So we can’t expect an optometrist to . . . [laughter].
Ophthalm5
The ophthalmologists felt that the hospital provided an environment where they had access to all previous scans and other colleagues’ expertise, which enabled them to confidently make a clinical judgement. They expressed uncertainty about whether or not optometrists would have these resources. Ultimately, owing to this perceived complexity of assessing the need for retreatment and the support and resources available in the hospital, the ophthalmologists felt that monitoring patients in the community would be a compromise.
So what we are trying to say is hospital care is the best [laughter].
Ophthalm6
Several ophthalmologists felt that patients would prefer to remain being monitored by a consultant at the hospital, whom they would inherently trust. In line with this, service users with active nAMD also tended to be apprehensive about the level of optometrist competence in the community, and commented that lengthy waiting times were secondary to receiving the best care for their condition.
If we put ourselves in their position what would we prefer to have? I’ll prefer to be seen by a doctor in a hospital.
Ophthalm1
So, you’ve got to have confidence in the person that is monitoring you [. . .] I feel that to rely on somebody that has been trained up to identify problems can’t really be as efficient as seeing an actual doctor who specialises in that subject, and because of that, I wouldn’t be happy going to an optician. It might take you longer. We have sat up there for hours, but the end result is well worth it.
Henry
The majority of service users described needing to be able to ‘have faith’ in their optometrist if they were to participate in shared care. Those whose optometrist had diagnosed their nAMD commented that this gave them a sense of confidence in their optometrist’s abilities.
I would trust my optician. He really seems to care. I would trust him. If my wet macular was stable, I’d be very happy to go to my optician because I’ve got confidence in him, because he detected it in the beginning.
Harriett
However, a few service users were apprehensive about shared care and did not wholly trust the idea of monitoring by an optometrist. Those who expressed this sentiment did not have faith in their optometrist because the optometrist had not recognised the condition initially.
Personally I wouldn’t have faith in the optometrist. I would much prefer to stay with the hospital.
Henry
(Lack of) communication between optometrists and ophthalmologists
Overall, the health professionals described the relationship between optometry and ophthalmology as poor.
Collaboration optometry and ophthalmology? No way. It’s absolutely dreadful! [laughter]
Optom8
These participants described how ‘systems-based’ issues hampered communication between the two professions; this view was expressed by all professions except the ophthalmologists. For instance, most of the optometrists described how it could be extremely difficult to relay information to ophthalmologists because of incompatible e-mail systems and variation in technology.
Our problem is we’ve got NHS.net in optometry and because everyone else, they’re all NHS.co.uk. So their end isn’t secure. We can do NHS.net to NHS.net, but we can’t do NHS.net to NHS.co.uk, which is what all hospitals give their consultants. It’s absolutely crazy.
Optom8
They’ve not embraced NHS.net at all.
Optom5
Some optometric practices don’t even have computers. Particularly in the [city] area where many of them are way behind. We actually had to buy them fax machines when this started to make it work. You’d expect most people had that sort of facility, but they didn’t. To make it work, we would do that. So I think standardisation of forms across our units, across the country, and making those forms readily available, and everybody knows that they’ve got to look for the red-topped form in the practice or whatever, could possibly aid this model of shared care.
Optom2
The other big issue is transfer. If you’re actually going to transfer the data, they are massive, massive, massive files. I was talking to an optometrist who has an OCT and sends scans to an ophthalmologist, and he literally has to do it overnight. Just for one patient, it takes so long.
Optom1
All of the optometrists also commented that technology had caused issues with their referrals to ophthalmologists in the past, in that referrals often got lost between the two professions. As a result, the optometrists stated that they would follow up each referral by calling the consultant to ensure that it had been received, or would send referrals via multiple methods to ensure that one reached the consultant.
It does happen that patients will wander back in 2 weeks later and say, ‘Oh, you told me I’d be seen within a couple of weeks. I haven’t heard a thing.’ Then we contact the hospital and we think, ‘OK, what’s happening?’.
Optom2
However, many participants (both health professionals and service users) explained that one of the key concerns of nAMD was the rapid progression of the disease if the condition became active. In particular, several service users described the devastating consequences of their vision deteriorating within days.
Last week I was all-seeing and driving and everything, and on Thursday I thought, ‘There’s something missing on that signpost’, and the glare was terrible. So then I couldn’t read the paper at lunchtime . . . To me, it’s been a disastrous week. I can no longer drive. I can no longer read. This is a week! To me, that’s a disaster. It’s very frightening.
Pat
Participants, particularly the ophthalmologists and service users, therefore expressed concerns about a potential delay between primary and secondary care if retreatment was required.
The problem is that the more steps you have in the system, the longer it takes. We don’t have the time.
Edward
The nature of the disease is such that it needs urgent attention. So it’s better if we see the patient [. . .] I can’t imagine going to the optometrist and him making a decision to refer the patient back, all that delay.
Ophthalm1
The health professionals emphasised that, in order for shared care to be delivered successfully, it would be essential that the two health-care professions were able to work collaboratively so that an efficient pathway could be developed.
I think they would like to make sure that there’s a seamless process between community and hospital, and that nothing drops through the cracks. So I think they would need to make sure that there’s a robust recall service, and that if there is an issue, that there’s a pathway back for the patient into secondary care. So those kinds of things, I think, would need to be ironed out before it goes ahead.
Comm2
The cost of shared care
A theme unique to the health professionals was the financial implications of implementing a shared care model. Both the optometrists and the ophthalmologists believed that financial considerations should be secondary to the care of patients, although the other health professionals described how the harsh reality of health care meant that shared care would not be commissioned unless it ‘got the most out of the NHS pound’ (PH2).
I think you have to possibly take out the finance issues. How you buy something should be secondary to what the patient needs.
Optom4
You’ve got to show the CCGs that you’re saving money over sending them into the hospital. Because otherwise, they’re just not going to commission it.
PH1
The commissioner participants undertook several professional roles and were often also optometrists or GPs. These dual commissioning and clinical perspectives appeared to conflict when weighing patient outcomes against financial efficiency in nAMD care.
Patients would like it [being monitored in the community] because it’s much closer to home, they don’t have to go to hospital. They don’t have to sit and queue and wait in pain, park and all that sort of stuff that patients usually tell you . . . From a commissioner’s perspective. . . in terms of saving and shifting costs across the health system for eye care services, it certainly doesn’t achieve that.
Comm4
The health professionals were divided about whether or not monitoring in the community would represent a more cost-efficient model. For instance, some perceived that NHS costs of managing sight loss could ultimately decrease.
So if you could use the monitoring to stop the wet AMD getting worse, so that it’s kind of preventative, then you would be addressing the public health indicator of dropping those numbers of people registered with AMD sight loss. [. . .] Yes, that would be the biggest advantage that I would see of it.
PH2
Many also considered that there might be an opportunity to save money through a differential fee for optometrists and ophthalmologists. Conversely, the ophthalmologists stated that the money saved from following up patients with nAMD could be put towards improving secondary care resources, rather than being ‘lost’ to the community (Ophthalm3).
It would be a cost-efficient option for the commissioners because we would be paying something like £60 for an optometrist to measure the patient’s visual acuity, rather than £100 and whatever it is for a consultant outpatient episode.
Comm4
If the optometrist where I am gets compensated, that pot of money comes from the resources from the hospital and we’re struggling as it is. We’re diverting resources to somebody else who’s not doing as much work as we are doing in the hospital, just doesn’t make sense.
Ophthalm3
The ophthalmologists stated that they took around 15 minutes to see a patient and determine the need for retreatment, whereas the optometrists’ estimates varied between 20 and 40 minutes. As the requirements for determining reactivation were listed (including a clinical examination, administration of tropicamide drops and use of an OCT), the optometrists concluded that the estimate of up to 40 minutes was more realistic for making a clinical decision and explaining the results to the patient.
It has to be realistic so you can practise and sustain it.
Optom3
I’ll allow 30 and probably spend 40 [. . .] But once you’ve got an OCT sitting there and you start looking at it, and you really want to explain it to the patient and they say, ‘Oh, thank you. No one ever tells me anything like that at the hospital. It makes you feel so good at the end. You’ve actually told the patient what’s really wrong and, ‘Rest assured my dear, it’s not getting any worse, so we don’t need to send you back’. ‘Oh, thank you!’ It probably would take me 40 minutes.
Optom8
Several commissioners and one clinical advisor therefore highlighted a potential conflict between practices’ clinical and commercial interests and stated that optometrists would need to be paid a sufficient amount to ensure that shared care was economically possible.
If the current business model for eye care is that funding comes from selling glasses, then if you fill your clinic appointments up with OCTs which will be a lovely service for patients, but if the only thing that’s happening is that you are breaking even with doing an OCT, your practice isn’t going to survive.
Comm1
There was also agreement that an OCT, although considered essential for monitoring nAMD, is an expensive piece of kit that not all practices can invest in. The clinical advisors and public health representatives felt that CCGs should provide OCTs, although most of the commissioners and the ophthalmologists felt that OCTs should be self-funded, either to demonstrate commitment to monitoring in the community or because practices that had already invested in an OCT would represent a more enthusiastic, clinically orientated team.
Well, it’s on the optometrists, really. Whether they’ve got the kit or whether they will want to invest in one. I think what it will be is that you’ll have a small group of practices within a particular area, who will show keenness. [. . .] They will be the more cutting edge practices.
Comm6
I think CCGs should pay for that [an OCT], the NHS. I don’t think it should be the optometrist.
PH2
A further financial issue was whether or not ophthalmologists would repeat tests, owing to difficulty relinquishing responsibility or not trusting an optometrist’s judgement.
If you’re doing injecting from a message that you’ve got from the community, then actually if something was to go wrong, you can’t just say, ‘I did it because the optician outside told me to do it.’ Do you know what I mean? [. . .] For their peace of mind.
CA1
You’d have to attend another OCT in the hospital, look at the scans and then we have to take everything, yes, you may need another angiogram, just to be certain.
Ophthalm1
Consequently, a main concern of the commissioners was that the CCGs might be charged for repeated testing.
From a commissioner’s point of view, we would want to make sure then we would only be being charged for that element of it, and they wouldn’t go on and repeat all the tests again.
Comm1
The importance of specialist training
The health professionals spent a considerable amount of time discussing how training should be delivered, both in terms of which methods were most effective for learning and in terms of reassuring ophthalmologists that optometrists were being trained to a high standard.
I think the training needs to be very carefully designed.
PH2
The ophthalmologists have to believe in the competence of the optometrist. It’s important that they have belief in the quality of the accreditation.
Optom4
Virtual training was deemed appropriate for providing a ‘foundation’ level of knowledge, although most felt that this alone would not be sufficient to train the optometrists. In particular, the ophthalmologists were unconvinced by the applicability of a virtual trial.
I think it complements training, but I don’t think it should be the sole course of training. I think you still need a bit of hands-on.
Comm1
I’m sure in your studies you will find 100% virtual case studies that you give, that you will find good correlation between what the ophthalmologist would say and what the optician would say in a virtual case. You can’t just say ‘Yes, the optom [optometrist] has 100% exactly the same as the ophthalmologist, which says that now they are just as good’.
Ophthalm3
Clinical experience, whereby optometrists would gain experience of monitoring nAMD patients in a hospital setting, was viewed as an ‘essential’ component of training.
I think they need to come and see the real patient. There’s no point on sitting on the MediSoft or whatever and clicking boxes and thinking they know it all. Real life is not like that. I mean there are the OCT scans like that, sometimes they can be very devious and very challenging and very confusing . . .
Ophthalm2
The majority of participants felt that having ophthalmologists supervising this training would provide assurance of optometrists’ competence and enable greater collaboration between the two professions. However, they acknowledged that this would be time-consuming.
I would suggest that they [optometrists] would spend a certain amount of time in a consultant clinic. Most services will have a specialist clinic for macular degeneration, so they would maybe spend a session a week, or something, in such a clinic for 6 months, just getting familiar with the treatments and the monitoring, etc. Also that will help the consultants gain a bit of confidence in the optom [optometrist] as well. So it’s a working partnership going on there.
Comm2
I think they would be involved if they had an element of control over it. If they’d done the training and if they knew who they were sending the patients out to and they knew what protocols were being followed, the service spec. You don’t get these things to work unless you’ve got clinical buy in. You just don’t. You can set up all you want, but you’ve got to get the clinicians involved.
Optom5
The hospital departments are caving in and creaking under the weight . . . so getting a consultant to be doing local training can be difficult.
CA2
Trial participants’ opinions on shared care
Feedback questionnaires were completed by 47 ophthalmologists and 55 optometrists, most of whom had completed the study and were included in the analysis population (see Chapter 3, Participants’ views on ECHoES trial training). Findings from the ECHoES trial participants’ questionnaire survey in relation to perspectives of shared care mirrored those from the focus groups and interviews described above, and they therefore act as a method of triangulation to increase the plausibility and dependability of the main qualitative research.33 The vast majority commented in the questionnaire that monitoring in the community represented an excellent opportunity to reduce the current clinical workload and make more efficient use of consultants’ time, which would enable them to focus on active patients who required retreatment. In addition, many noted that shared care was a more accessible option for patients and provided a welcome opportunity for the professional development of optometrists.
I think it is an excellent idea.
Ophthalm105
Makes perfect sense.
Optom270
However, a major concern of 18 optometrists was that ophthalmologists would resist the shared care scheme as they were not convinced of optometrists’ competence. In line with this, several ophthalmologists stated that there was interprofessional distrust and acknowledged that a buy-in from their profession would be difficult.
Ophthalmologist fear in delegating care to optometrists if patient care is compromised.
Optom278
Ophthalmology departments do not want to let go of their patients.
Optom202
Similar to the findings from the focus groups and interview data, most respondents emphasised the need for appropriate training which should include supervision by ophthalmologists. However, several commented that training would be a time-consuming process for both disciplines.
Good-quality training with HES involvement from the start.
Ophthalm222
The local ophthalmologists would have to be prepared to give time to hands-on training to those optometrists participating.
Optom207
In addition, both groups felt that these interprofessional barriers could result in poor communication between the primary and secondary care sectors, which could delay retreatment for patients who required it.
Need to create a good system of communication between optometrists and ophthalmologists.
Optom242
Work collaboratively and build partnership keeping patients at the centre.
Ophthalm138
Furthermore, respondents also expressed uncertainty as to how shared care should be funded. Twenty-one optometrists and 11 ophthalmologists highlighted that most practices did not own an OCT and would struggle to afford such an expensive piece of kit. Several optometrists also felt that, if practices were not compensated sufficiently for the enhanced role, the practice’s business would ultimately suffer.
Each practitioner will need an OCT, which the majority of optometry practices do not have at the moment.
Optom201
There needs to be sufficient funding for the outlay on the OCT equipment and the professional time on the high street to make this a viable investment in order to not lose money.
Optom216
Chapter 6 Discussion
Main findings: study conduct
Recruitment
The staged nature of the trial and the risk of withdrawals (most, but not all, participants progressed from training to the main trial; others withdrew for other reasons), combined with the priority of maintaining the balanced incomplete block design, made recruitment challenging, particularly given the short duration of the trial. The balanced incomplete block design meant that each participant had to be assigned a specified set of vignettes for the main trial assessment on completion of webinar training. This constraint prevented an extra participant from being recruited until an existing participant had definitively withdrawn. As participants could not all be recruited at the same time and progressed at different speeds, we did not know until a considerable time had elapsed whether or not additional participants would be needed. In order to expedite completion of the trial, we reopened recruitment at a later stage and deliberately over-recruited participants to the webinar training, some of whom did not progress into the main trial because the target sample size had been reached. Close to the end of the main trial, in order to increase the likelihood of completing the main trial in accordance with our revised schedule, we also assigned two participants to one set of vignettes in order to be more confident that at least one would complete the assessments promptly.
The need to over-recruit also had financial consequences for the trial budget. Participants who chose to withdraw during their participation in the trial were paid only for the time spent on webinar training and not for any training or main vignette assessments that had been carried out up to the time of withdrawal. However, those participants who were withdrawn by the trial team, either because of failure at the training stage or because the trial had reached its target, were reimbursed fully up to the point of withdrawal, as if they had completed the study.
Images used to create vignettes and constraints on viewing images
Despite the overall size of the IVAN trial image repository, the number of suitable baseline and index images available to create vignettes was much smaller than anticipated. This limitation required us to identify a different combination of (1) number of participants, (2) number of vignettes per participant and (3) number of times each vignette was assessed. The final design recruited 96 participants (48 in each professional group), each of whom assessed 42 vignettes, with each vignette being assessed seven times (by each group), requiring a total of 288 vignettes and 4032 assessments. This total was considerably smaller than under our original plan, in which 96 participants would each have assessed 48 vignettes, with each vignette being assessed four times, requiring 1152 vignettes and 4608 assessments.
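The internal consistency of these design figures can be verified by simple counting. The sketch below is ours, not part of the trial documentation; the function names are illustrative only.

```python
# Sketch: consistency checks for the vignette-assessment counts described
# above. Function and variable names are illustrative, not from the protocol.

def total_assessments(participants: int, vignettes_each: int) -> int:
    """Each participant assesses a fixed number of vignettes."""
    return participants * vignettes_each

def vignettes_required(assessments: int, reps_per_group: int, groups: int) -> int:
    """Each vignette is assessed a fixed number of times by each group."""
    return assessments // (reps_per_group * groups)

# Final design: 96 participants (48 per group), 42 vignettes each,
# each vignette assessed 7 times by each of the 2 professional groups.
final_total = total_assessments(96, 42)                   # 4032 assessments
final_vignettes = vignettes_required(final_total, 7, 2)   # 288 vignettes

# Original plan: 96 participants, 48 vignettes each,
# each vignette assessed 4 times in total.
orig_total = total_assessments(96, 48)                    # 4608 assessments
orig_vignettes = orig_total // 4                          # 1152 vignettes

print(final_total, final_vignettes)   # 4032 288
print(orig_total, orig_vignettes)     # 4608 1152
```

The arithmetic confirms that the revised design needed only a quarter of the vignettes (288 vs. 1152) at the cost of each vignette being assessed more often.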
The smaller number of available vignettes also forced us to sample sets of training vignettes from among the 288 vignettes used for the main trial assessments. This constraint meant that extra participants (recruited when recruitment reopened) could not start their training assessments until an earlier participant had withdrawn and freed up a specified set of vignettes for the main trial assessment.
All images used to create vignettes were reviewed by a senior grader from NetwORC UK (AM) and a retina specialist (UC) to identify appropriate pairs of sufficient quality. Almost all of the CF photographs in the IVAN trial repository that were potentially suitable for inclusion were considered to be of sufficient clarity and focus for constructing the vignettes. However, OCT images were subject to several limitations:
-
We made the decision that vignettes could only be constructed with OCT images captured by spectral domain tomographic equipment. Spectral domain OCT systems provide images of greater clarity, resolution and definition and represent systems to which clinicians and optometrists are currently exposed. These spectral domain systems came into widespread use in 2008. When the IVAN study started in 2007/8, the standard for clinical trials was time domain OCT because the data import and export protocols and grading methods had been established and validated only for these acquisition systems. Therefore, the majority of sites in the IVAN trial used time domain systems to capture OCT images. Nonetheless, as the old time domain systems began to fail, most IVAN trial clinical sites replaced these with spectral domain systems. Validation of the spectral domain systems for trials was undertaken and, therefore, spectral domain OCT images were permitted in the IVAN trial protocol. Approximately one-third of all IVAN trial participants had OCT images acquired on one of the three types of commonly used spectral domain systems (Heidelberg Spectralis®; Zeiss; Optovue). All the spectral domain systems yielded high-quality images. However, the lower aspect ratios of the Zeiss and Optovue made it more difficult to interpret images than with the Spectralis. This is because the appearance of the Spectralis images is more similar to that observed on a standard retinal histological preparation. The Spectralis is also in more widespread use in the UK. Both experts and participants commented that OCT images from the Zeiss and Optovue spectral domain systems were unfamiliar and, therefore, that the images were more difficult to interpret and that they were less certain about their assessments.
-
Baseline and index OCT images consisted of six radial line scans which passed through the fovea. Ideally, scan lines at the same clock-face orientations should be captured at every visit and be displayed in the same order so that an assessor can compare the appearance of the retina at specific locations. For some of the scans, the orientations did not match exactly for baseline and index images.
-
The web application presented ‘thumbnail’ images of all six OCT images and an assessor could ‘click’ on an image to enlarge it and to toggle backwards and forwards between the six enlarged images. (The website www.echoestrial.org/demo shows how assessors carried out assessments in the trial.) However, it was not possible to view baseline and index images for corresponding orientations simultaneously because of the limitation of providing the application on a single monitor. Paired viewing of OCT images is a desirable feature and it is usual practice for clinicians to use a paired display which allows the viewer to scroll through the different scan orientations from two visits on a single screen. One participant (Optom268, see Chapter 3, Participants’ views on ECHoES trial training) achieved this by having one set of images on a tablet and one set on a conventional computer display.
Reference standard
The original plan (described in the grant application) was for the reference standard to be derived directly from grading data collected during the IVAN trial. For various reasons, this was not considered acceptable once we started to set up the study: there are occasional errors in the grading data and the list of features/questions drawn up for the web assessment (see Table 2) did not completely match the table in the protocol or the grading data available. There were two key differences, namely (1) the distinction between presence of a feature in the index OCT and whether or not an increase in the feature had occurred from the baseline to the index OCT, and (2) the inclusion of DRT as a feature. Grading data were used to select baseline and index OCT pairs that were likely to be suitable, based on the features described in the table in the protocol, but it was decided that the reference standard should be assigned by expert ophthalmologists (reading centre leads, UC, TP and SPH).
This decision was taken when the first draft of the assessment part of the website was available, in late June 2013. We accepted that training could begin while vignettes were being assessed by experts. However, there was still insufficient time for the three experts to assess each vignette by the time participants’ responses to the training vignettes had to be scored. Instead, we aimed for two experts to assess each vignette (one-third of the total number of vignettes by each pair of expert ophthalmologists), but only had one assessment for each vignette at the time that participants’ responses were scored. These assessments were carried out by two of the three experts.
As participants’ assessments on the training vignettes accrued, it became clear that there were some vignettes that participants were getting consistently wrong. Consequently, the two experts who had provided the expert assessments reviewed about 30 vignettes. Some of the original expert assessments were believed to be mistakes (mainly key-stroke errors made when completing the web data entry) and the overall assessment was changed for 10 vignettes. These new gradings were later used in the derivation of the reference standard.
Main findings: study results
Classification of lesion activation status and lesion components
Optometrists were non-inferior to ophthalmologists with respect to their overall ability to classify lesions correctly: that is, the primary outcome. Neither group attained the level of performance expected at the outset (on which the target sample size was calculated). However, optometrists made different kinds of errors. Compared with ophthalmologists, they were less likely to classify a reactivated lesion as quiescent or suspicious (false-negative misclassifications) and more likely to classify a quiescent or suspicious lesion as reactivated (false-positive misclassifications): better sensitivity but worse specificity. It is probable that this finding arises because optometrists tended to adopt a more cautious decision criterion, which would be consistent with their obligations under the General Optical Service contract (to refer any suspected pathology). The finding may also reflect the fact that optometrists had more difficulty in interpreting the diversity of appearance of quiescent lesions, that is, eyes with an abnormal appearance but not needing treatment, which are normally managed in the HES.
The poorer than expected overall performance may have arisen because the quality of the images was suboptimal. In a real-life scenario, any monitoring review (carried out by either an ophthalmologist or an optometrist) is likely to include examination of the patient, for example slit-lamp biomicroscopy, as well as review of CF and OCT images. Some components, such as blood and exudates, might be identified more reliably from this examination than from images. If this were the case, the combination of the vignette information with direct examination would increase the performance.
No harms could arise in the trial itself because all of the decisions being made were for anonymised vignettes. However, the health economic evaluation carefully considered the costs and consequences of the different types of error predominantly made by each group. In the context of community optometry, it seems desirable that optometrists should use a cautious decision criterion for referral, although this limits the potential benefit of the shared care model in terms of both the impact on workload in the HES (because of false-positive referrals) and its potential cost-effectiveness.
Optometrists were also non-inferior to ophthalmologists with respect to the frequency of false-negative sight-threatening errors (i.e. failure to identify a lesion as reactivated). In fact, optometrists were slightly less likely than ophthalmologists to make such errors because of their tendency to adopt a more cautious decision criterion. Conversely, optometrists made more non-sight-threatening errors (false-positive errors, i.e. failure to identify a lesion as quiescent or suspicious) than ophthalmologists.
Except for PED (which did not inform the classification of lesion activity), lesion components were identified as present more often by optometrists than by ophthalmologists, again consistent with optometrists adopting a more cautious decision criterion. This tendency was particularly evident for DRT and exudates. Ophthalmologists were much more confident in their classifications than optometrists. However, there was no association between confidence and the odds of classifying a lesion correctly. For all grades of confidence in the classification, optometrists had slightly better odds of classifying a lesion correctly. As hypothesised, other items of information included in a vignette did not influence participants’ lesion classifications.
The sensitivity analysis found a statistically significant difference, but not a clinically important one with respect to the prespecified non-inferiority margin, in favour of ophthalmologists. The alternative definition of the primary outcome used in this analysis involved reassigning four cells from Table 3. Participants’ classifications changed from incorrect to correct for two cells (suspicious reference lesions classified as reactivated by participants and reactivated reference lesions classified as suspicious by participants) and from correct to incorrect for two cells (suspicious reference lesions classified as quiescent by participants and quiescent reference lesions classified as suspicious by participants). Of these, the latter were both the most numerous (380 out of 4032 participants’ classifications) and showed the greatest difference in frequency between ophthalmologists and optometrists (88 more such classifications by optometrists considered incorrect in the sensitivity analysis; see Table 7). Reactivated reference lesions classified as suspicious by participants were the second most numerous (338 out of 4032), with the difference in frequency going in the opposite direction (54 more such classifications by ophthalmologists considered correct in the sensitivity analysis). Both of these changes favoured the performance of the ophthalmologists. The large number of quiescent reference lesions classified as suspicious by optometrists almost certainly arises from their tendency to adopt a more cautious decision criterion. It is not obviously desirable to encourage optometrists to shift their criterion to a less cautious position, that is, to ‘trade’ an improvement in specificity for a reduction in sensitivity, given the potential sight-threatening consequences of false-negative errors.
The desirability of shifting the criterion would depend on the likelihoods of (1) reactivation being identified by the optometrist at the next monitoring review (when relevant lesion components are likely to have increased) and (2) irrecoverable sight loss from deterioration of the lesion.
The time taken to assess vignettes decreased as participants worked through their assessments for the main trial, and this finding was particularly marked for the optometrists. Interestingly, the time taken by optometrists more than halved and approached that taken by ophthalmologists by the time they reached their final assessment (2 minutes 31 seconds compared with 1 minute 55 seconds). This information gives some indication of how quickly optometrists can become proficient in assessing CF photographs and OCT images. We also investigated whether or not the time taken was associated with the odds of a correct assessment and found, in both professional groups, that a correct lesion assessment was more likely with a shorter assessment duration. This finding suggests that duration of assessment may have been a proxy for difficult assessments.
Data about the agreement between expert medical retina consultants represent a bonus from the study, as such data have not previously been reported. (These experts are responsible for the three sites that together make up the UK Network of Ophthalmic Reading Centres.) All three experts assigned congruent lesion classifications to over three-quarters of vignettes and disagreed completely (all three experts classified a lesion differently) for only seven vignettes. The level of agreement judged on the basis of kappa values (see Table 14) varied by lesion component; it was excellent for SRF, IRC and blood but poor for exudates and DRT. (Experts agreed about the presence or absence of exudates for > 90% of vignettes but the kappa value was low because exudates were rarely present.) These data emphasise several important points. First, the assessment task was difficult and was sometimes compromised by the images that were available (see Images used to create vignettes and constraints on viewing images). Second, signs of reactivation inevitably spanned a continuum and hence there was a need to assign a reference classification of suspicious for some vignettes. Third, the vignettes created from the IVAN trial repository represented a real-world spectrum of reactivation that was appropriate for the ECHoES trial.
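The combination of high raw agreement and a low kappa value for a rarely present feature is a well-known property of the statistic: when a component is almost always absent, chance agreement is already very high, leaving kappa little room to exceed it. A minimal sketch (using invented counts, not the trial’s grading data) illustrates the effect:

```python
# Hypothetical 2x2 agreement table for a rare feature such as exudates.
# Counts are invented for illustration only.

def cohens_kappa(a, b, c, d):
    """a = both raters 'present', b = rater 1 only, c = rater 2 only,
    d = both raters 'absent'. Returns (observed agreement, kappa)."""
    n = a + b + c + d
    po = (a + d) / n                      # observed raw agreement
    p1 = (a + b) / n                      # rater 1 'present' rate
    p2 = (a + c) / n                      # rater 2 'present' rate
    pe = p1 * p2 + (1 - p1) * (1 - p2)    # agreement expected by chance
    return po, (po - pe) / (1 - pe)

# Feature present in only ~5% of 100 vignettes; raters disagree on 8.
po, kappa = cohens_kappa(a=2, b=3, c=5, d=90)
print(f"raw agreement = {po:.0%}, kappa = {kappa:.2f}")
```

With these counts, raw agreement is 92% yet kappa is below 0.3, mirroring the pattern reported for exudates in Table 14.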
There are no similar studies with which to compare our data and, consequently, no meta-analysis combining our findings with previous findings.
It is difficult to assess the extent to which the performance of participants is representative of the performance of the two professions and, hence, the applicability of the findings. Models of shared care for other eye conditions have been voluntary. Indeed, it is difficult to conceive a model in which optometrists, who are private practitioners, might be required to provide shared care. Our recruitment methods sought volunteer optometrists, who were presumably interested in shared care, albeit in the context of a research project. We cannot be sure that our recruitment methods for optometrists were generally applicable but can imagine that similar methods might be used, for example by clinical commissioners seeking to commission shared care in a particular geographic region. The motivations of the volunteer ophthalmologists were less clear, but a similar interest in the feasibility of shared care for nAMD is a possibility. The key question is whether or not volunteer ophthalmologists might have been less than averagely expert/experienced – or optometrists more than averagely expert/experienced – compared with the kinds of people who might volunteer for shared care. We think that this is unlikely but have no evidence to substantiate it.
For the findings to be applicable, the vignettes used for the study also needed to be representative of the clinical circumstances that community-based optometrists would be likely to encounter. In this respect, we are more confident that the study findings are applicable. The IVAN trial was pragmatic and had broad eligibility criteria. It recruited participants on presentation and followed their progress in the trial. Although their treatment was specified by their experimental allocation, all treatment regimens had similar effects on vision and morphology, which were consistent with clinical experience and other trials of anti-VEGF drugs. Decisions about the need for retreatment had to be made in the IVAN trial just as in usual care. The availability of spectral domain tomographic equipment was limited by hospital and clinic scheduling.
Health economics
The results of the economic evaluation show that, when we take account of downstream costs (e.g. follow-up consultations and injections), the optometrists had slightly higher costs and made slightly fewer correct retreatment decisions than ophthalmologists when performing the virtual monitoring review for the ECHoES trial. However, the differences were very small (an incremental cost of £13 per consultation and one additional incorrect decision per 101 reviews conducted) and were not statistically significant.
Views of patients and health professionals about the shared care model
Overall, the findings show there was consensus that optometrists monitoring quiescent nAMD in the community have the potential to reduce clinical workload and could represent a more patient-centred option. However, a number of potential barriers were identified which could limit the feasibility of a shared care scheme, including ophthalmologists’ perceptions of optometrists’ competence, the need for clinical training, whether or not optometry and ophthalmology could work more collaboratively and whether or not shared care was a financially efficient option for CCGs.
Participants felt that hospital eye clinics were pushed to maximum capacity by the volume of patients who required care for nAMD. Research has found that resources for nAMD secondary care are insufficient to meet the burden of follow-up visits required by patients with inactive nAMD. 12 Therefore, the health professionals agreed that shared care would relieve the ophthalmology workload. Shared care schemes are also attractive to patients who would find it more convenient to be monitored closer to their home. 34,35 In line with previous research, the service users described how they were frustrated at the lack of support and information that they had received about their condition. 36,37 Consequently, they felt that being monitored in the community would enable them to build up a relationship with one optometrist, with whom there would be more of an opportunity to receive better support and information.
While most participants felt that motivated optometrists would be capable of monitoring patients, several ophthalmologists and service users were unsure whether or not the expertise of an optometrist in detecting reactivation of nAMD would be equal to that of an ophthalmologist. Previous research exploring perceptions of shared care for glaucoma has found that specialists are not convinced of optometrists’ expertise, even when they have received additional training. 38 Furthermore, interviews with service users who had declined to take part in a shared care scheme for a range of eye diseases (including nAMD) found that they had done so because of their familiarity with, and the reputation of, the HES. 35
Overall, the health professionals described poor collaboration between community optometrists and ophthalmologists working in the HES. Many participants, particularly the ophthalmologists and service users, voiced concerns about a potential delay between primary and secondary care if retreatment was required. This is a new finding, as previous research exploring patient perceptions of shared care for a range of eye conditions has not found this to be an issue. 34,35,39,40 Long-term research has demonstrated that recurrence of neovascular activity is common,15 and any delay beyond the recommended interval may cause patients to lose vision permanently and unnecessarily. 12 Given that vision can deteriorate in a short time if nAMD is not treated, and the potential impact this could have on patients’ independence and quality of life, it is perhaps not surprising that receiving prompt treatment is an important priority for nAMD care.
The health professionals felt that, although virtual training could provide a foundation level of knowledge, clinical experience under the supervision of an ophthalmologist would be a more effective method of training. Shared care research for a variety of eye conditions has reported that, although web-based training is a convenient and effective option, it is not representative of clinical practice and may not be appropriate for teaching practical clinical skills which need to be developed through attendance at a training course. 35,41 Health professionals in the current study also highlighted that face-to-face training of this kind would provide reassurance to the ophthalmologists that optometrists were being trained to a high standard and further encourage interprofessional collaboration.
The health professionals considered the financial implications of moving to a shared care model, although commissioners appeared to experience conflict between what was best for the patient and what was financially efficient. There was also agreement that optometry practices may struggle to obtain appropriate equipment, and there was uncertainty as to how funding for OCTs would be provided. In line with this research, studies exploring optometrists’ perspectives on extending or enhancing their roles have highlighted a conflict between the retail and clinical sides of the optometric practice. 41,42 Amoaku et al. 12 point out that the technology involved in monitoring nAMD, particularly an OCT, is expensive and optometrists would be unlikely to receive any grants for its purchase. An additional financial consideration was that there could be an opportunity to save money with a reduced fee for optometrist rather than ophthalmologist monitoring, although commissioners expressed concerns about the possibility – and subsequent cost – of repeat testing by ophthalmologists who doubted optometrists’ judgements when a patient was deemed to require retreatment.
In summary, the qualitative research demonstrates enthusiasm for shared care for nAMD. However, ophthalmologists and patients would need reassurance that greater convenience would not compromise the quality of care, in terms of both optometrist competence and the speed of the referral pathways back into secondary care if retreatment was deemed to be necessary. The research highlighted poor communication and trust between ophthalmologists and optometrists, and there was agreement that, if shared care were to be implemented, it would be essential that the two professions work more collaboratively. Training for optometrists under the supervision of ophthalmologists was deemed to be the most effective method of training and could improve the communication and trust between the disciplines.
Patient and public involvement
Patient and public involvement (PPI) impacted on the trial in two main ways. Firstly, review of the grant application highlighted the need for the study to explore the views of patients and health professionals about the shared care model. Secondly, we ensured that a patient perspective was represented on the Trial Steering Committee (TSC).
The first impact of PPI led to inclusion of substantial qualitative research (interviews and focus groups) with patients with nAMD, ophthalmologists, optometrists and clinical commissioners. This research highlighted that implementation of shared care for nAMD is likely to be challenging. The second impact of PPI led to the nomination and appointment of Cathy Yelf as the patient representative on the TSC. Cathy Yelf is the Head of External Relations for the Macular Society.
Owing to the virtual nature of the trial, there was no need for PPI to inform recruitment and aspects of the conduct of the trial.
Strengths and limitations
Classification of lesion activation status and lesion components
The virtual nature of the trial element of the ECHoES study made it feasible to address the research question when a conventional trial may not have been feasible, either because of reluctance by patients or the professions to participate or because of the high cost of a conventional trial. 19 We were able to carry out the trial within a relatively short period of time and at low cost compared with a conventional trial of the research question, and the study design did not require any compromise with respect to the risk of bias. Participants engaged extremely well with the tasks that they were set, evidenced by the trial having complete data for the analysis population and the level of participation in additional tasks not communicated to participants at the outset, for example questionnaires about training and the likely resources needed to provide shared care in community optometric practices.
There are two limitations that may be perceived to be critical, namely the virtual nature of the trial itself and the adequacy of training. A critic might argue that decision-making on the basis of a vignette bears no resemblance to face-to-face decision-making in a HES clinic. However, decision-making on the basis of investigations made previously, in the absence of the patient, reflects quite well how some hospitals are managing their workload by implementing a two-step process. Patients are first recalled for a monitoring appointment to capture BCVA and retinal images (usually staffed by non-medical personnel); the information is then reviewed in an ‘offline’ assessment by an ophthalmologist or other trained member of staff, and the patient is rapidly recalled for treatment if required. In our view, the task of vignette assessment in the ECHoES study closely parallels this offline assessment in the HES.
The importance of training was highlighted by identification of concern about the competence of optometrists as a potential barrier to implementation (see Chapter 5, Perceptions of optometrists’ competency). The quality of the training was perceived to be good, very good or excellent by over 90% of ophthalmologists but by only about 70% of optometrists; 70% of ophthalmologists considered that the training was sufficient, compared with only 20% of optometrists; and almost all optometrists, but only half of the ophthalmologists, revisited the webinars (see Chapter 3, Participants’ views of ECHoES trial training). These simple responses to the questionnaire items eliciting feedback about training were supported by free-text comments. These differences in perception about training are likely to have arisen for various reasons: some ophthalmologists may have considered themselves to be already trained; the training was inevitably conceived by ophthalmologists from a HES perspective; and the relative unfamiliarity of the task for optometrists may have made them less confident about the competence that the training had in fact conferred. Both the feedback questionnaire and the qualitative research suggest that a bigger investment in training would be required if shared care were to be implemented. The nature of further training that would address the need voiced by optometrists is unclear.
The nature of some of the images available to create the vignettes and the method for viewing OCT images were also potential limitations of the study (see Images used to create vignettes and constraints on viewing images). These limitations may explain in part why the performance of ophthalmologists was worse than expected (about 85% correct compared with the expected 95%). Consequently, the performance levels observed in the trial should, perhaps, be considered to be the minimum achievable. It is possible that ophthalmologists (most of whom were already familiar with OCT images) could have been selectively disadvantaged, in that they were used to seeing images captured on spectral domain systems and displayed on very high-resolution monitors. Alternatively, they may have had the advantage of having seen older images in the HES, which optometrists may not have done. Therefore, it is unknown whether or not vignettes created from more recent information may have favoured the performance of one professional group over the other. The important point is that the trial design ensured a ‘level playing field’ in terms of the assessments that both groups, and the experts, carried out.
Finally, the definitive reference standard was not available by the time that participants’ training performance had to be assessed (see Reference standard). This could have been a serious limitation if the information used to assess participants’ performance at the time had misclassified participants with respect to their performance and their ability to progress in the trial. However, as previously stated (see Chapter 2, Reference standard), when all experts had classified all vignettes, we checked the performance of all applicants for their training vignettes using the final reference standard classifications. No applicant who would have progressed to the main trial on the basis of the final reference classifications was refused admission on the basis of the interim expert classifications.
Health economics
The cost-effectiveness results should be considered in the light of some limitations of the study. Because the ECHoES trial was virtual, participants were asked questions about a service that had not yet been implemented, which posed challenges for optometrists in identifying appropriate costing information for their potential provision of nAMD monitoring reviews. The health economics questionnaire was not a compulsory part of the trial and we wanted to avoid overburdening participants with too many questions. Therefore, we used expert opinion to inform the economic evaluation about the types and models of different pieces of equipment, the average volume of patients using some of the equipment items and the expected working life of equipment. Furthermore, the costs used in the economic evaluation rely on the assumptions inherent within the decision trees shown in Figures 1 and 2.
Expert opinion was also used to help to map the pathways for the care cost model with respect to the likely courses of action arising from various decisions made by optometrists and ophthalmologists. In particular, the optometrist pathway assumes that all patients classed as reactivated receive a second monitoring review in hospital (at the more expensive ophthalmologist cost, rather than the cheaper optometrist review cost), which increases costs for correct judgements of lesion reactivation in the optometrist pathway but also provides a second opportunity for any errors to be corrected before costly ranibizumab injections are given. In real-world implementations of shared care, other pathways may be devised, or pathways may improve with time after they have been implemented. For example, the optometrist pathway is likely to include direct clinical examination, as well as review of CF and OCT images, and an integrated shared care pathway could allow the HES to review information obtained by community optometrists following rapid referral (as envisaged by the last item of the economic questionnaire, see Appendix 3, Q8), rather than repeating monitoring tests.
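The pathway logic described above can be expressed as a simple expected-cost calculation. The sketch below is illustrative only: the probabilities and unit costs are invented placeholders, not the figures used in the trial’s decision trees (Figures 1 and 2), and it assumes, as the optometrist pathway does, that every referral is re-reviewed in the HES and that the re-review corrects false-positive referrals before any injection is given.

```python
# Illustrative expected cost per monitoring review under the
# optometrist pathway described in the text. All probabilities
# and unit costs below are hypothetical placeholders.

OPTOM_REVIEW = 45.0    # assumed optometrist review cost (GBP)
OPHTH_REVIEW = 120.0   # assumed hospital (HES) re-review cost (GBP)
INJECTION = 650.0      # assumed cost of a ranibizumab injection (GBP)

def optometrist_pathway_cost(p_reactivated, sensitivity, specificity):
    """Expected cost per review when all lesions classed as
    reactivated by the optometrist are re-reviewed in hospital."""
    # Probability the optometrist refers: true positives plus
    # false positives arising from imperfect specificity.
    p_refer = (p_reactivated * sensitivity
               + (1 - p_reactivated) * (1 - specificity))
    # Only confirmed reactivations are injected, because the HES
    # re-review is assumed to correct false-positive referrals.
    p_injected = p_reactivated * sensitivity
    return OPTOM_REVIEW + p_refer * OPHTH_REVIEW + p_injected * INJECTION

cost = optometrist_pathway_cost(p_reactivated=0.3,
                                sensitivity=0.95,
                                specificity=0.80)
print(f"expected cost per review: £{cost:.2f}")
```

The structure makes the trade-off discussed in the text visible: a more cautious optometrist criterion (higher sensitivity, lower specificity) raises the referral rate and hence the re-review cost, while the hospital re-review step caps the cost of false-positive errors at one extra consultation rather than an unnecessary injection.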
Views of patients and health professionals about the shared care model
A purposeful sampling approach was adopted within and across the research to ensure that the feasibility and acceptability of the proposed shared care model for nAMD was captured from a range of perspectives. However, ophthalmologists and optometrists were recruited from specialist conferences, which may limit the extent to which their perspectives are representative of optometrists and ophthalmologists in the UK. Furthermore, service users were recruited from Macular Society support meetings and such individuals may be more proactive and informed about their condition than non-members. 37 Service users at the support groups who were willing to be contacted were white British, possibly because white people may be more susceptible to developing nAMD43 or because people from ethnic minority groups are less likely to access eye care services. 44,45 It should also be acknowledged that all service users attended the same eye hospital for their nAMD care and some aspects of their care may differ from those at other hospitals throughout the UK, although the themes relating to current experiences of hospital care have also been highlighted by previous research. 34,36,37
Focus groups were used to explore and understand key issues when considering barriers and facilitators to implementation. Separate rather than mixed focus groups were undertaken to capture any potential interprofessional trust issues, which had emerged from previous research. 35 One-to-one interviews were conducted with other health professionals because it was not possible to identify a convenient location, date and time for a third focus group. These interviews provided a rich account of the perceived feasibility and acceptability of a shared care scheme and allowed the findings from the focus groups to be followed up and explored in depth.
None of the participants had experience of shared care for nAMD. Research has questioned whether or not evaluations of hypothetical scenarios accurately relate to judgements in real-life situations. 46,47 However, participants gave negative as well as positive views of the shared care model, which suggests that they carefully considered the practicalities of implementation. In addition, focus groups provided an opportunity to prompt a range of issues about shared care to be discussed which participants may not have otherwise considered individually in one-to-one interviews. 48 Furthermore, the issues identified in this study about a hypothetical shared care scheme mirror many of the findings from studies in which participants gave feedback after direct experience of shared care. 35,41
Lessons for the future
Lessons for a future shared care approach to the management of neovascular age-related macular degeneration
It is clear that, given foreseeable health technologies for treating nAMD (including potential emerging therapies combining anti-VEGF drugs with stereotactic radiotherapy), the need for efficient methods of long-term review will remain an urgent priority. Some form of shared care with community-based optometrists is one approach for achieving this, notwithstanding the findings of the economic evaluation (see Future research). The ECHoES study has demonstrated that community-based optometrists can develop competency in decision-making about lesion reactivation that is equivalent to that of ophthalmologists working in the HES. Although it was not straightforward to conduct the study in a short period of time, we were able to apply the methods we planned at the time of the grant application with few modifications. We would recommend further trials of this nature to address research questions for which appropriate data repositories can be identified.
Lessons for a future economic evaluation alongside a virtual trial
The virtual trial imposed the limitation of asking participating optometrists to identify the resources and likely costs of providing a service that had not yet been implemented. This is not an intrinsic feature of virtual trials but is a likely one, since the attraction of a virtual trial is greatest when there are obstacles to providing a service. The absence of an existing shared care pathway also contributed to uncertainty about the cost-effectiveness estimates. This was highlighted by the economic evaluation sensitivity analyses, notably the analysis which excluded rereview in the HES of a patient rapidly referred by a community-based optometrist.
Lessons for qualitative research alongside a virtual trial
This study highlights the importance of exploring the views of relevant stakeholders about the acceptability of shared care for nAMD and barriers to its implementation alongside a virtual trial. The qualitative research identified key concerns that would need to be addressed in formulating a concrete shared care model. Had the study proceeded as originally proposed by the applicants, the headline non-inferiority findings might have led to unreasonable optimism about the feasibility of shared care for nAMD.
Future research
The ECHoES trial platform remains in place and could be used either for further research or future training. It proved to be robust for carrying out the trial and the medical retina experts see no reason to alter its main features, including the rules for classifying lesion activity based on assessing lesion components. It would be interesting to assess the performance of other professional groups using the ECHoES trial training and assessment protocols, for example hospital optometrists and ophthalmologists in training who did not meet the qualifications/experience criteria specified for this study.
Further research is required to investigate whether lesion components can be defined more precisely, especially those for which there was less than good agreement between experts. DRT is a good example: as this component, like SRF, may be a key sign of reactivation, a clearer definition may be important.
Improvements in technology, and the expertise of OCT technicians in capturing OCT images, may make the vignettes based on the IVAN trial image repository increasingly irrelevant. Replacing these images with images for patients currently being managed in the HES would be easy to do, in principle, but would require investment (primarily, the time of experts to assess the new images to provide the reference classifications and the necessary approvals to use patients’ data).
A further, desirable information technology feature to add to the web application would be an interface with modern CF and OCT equipment, allowing images to be imported automatically for assessment, for example within a local network operating in an optometry practice (potentially across multiple sites). Wider network integration with the HES could allow for a telemedicine-style shared care, with optometrists having responsibility for capturing the images required and having discussions with HES ophthalmologists in scheduled ‘virtual’ clinics to review them. Such an arrangement could allow trust between professions to evolve over time, with subsequent handover of decision-making to specified optometrists, without the need for formal face-to-face training in hospitals.
A key attraction of a shared care model for nAMD is the ability to relieve the HES of some of its workload. However, this benefit is not formally included in a conventional health economic evaluation (such as the one reported here), which provides a direct comparison of two or more interventions in terms of their cost-effectiveness. It would be useful for follow-up work to explore the process of freeing up HES resources through implementation of shared care for nAMD. One potential approach would be to use the simple framework of programme budgeting and marginal analysis, which can explicitly explore the resource implications of moving resources around within a given health service area.49
Although the final results of the research exploring the health professionals’ views on shared care for nAMD were formulated from a combination of focus groups and interviews, the constant comparative method provided an opportunity to highlight similarities and differences between the disciplines. Future research could conduct a mixed focus group to enable the range of health professionals to engage in a discussion to address the differences of opinion that were identified in this study.
Chapter 7 Conclusion
The ECHoES study has demonstrated that community-based optometrists can develop competency in decision-making about lesion reactivation that is equivalent to the competency of ophthalmologists working in the HES. Overall, optometrists were as good as ophthalmologists in classifying the activity status of a lesion but tended to make different types of error. The tendency of optometrists to adopt a more cautious decision criterion than ophthalmologists, making them less likely to misclassify reactivated lesions, may be a desirable attribute.
Our cost-effectiveness analysis showed that there was little difference between optometrists and ophthalmologists in the cost per correct retreatment decision for patients with nAMD in the ECHoES trial. Although optometrists have less training and experience than ophthalmologists, once they become more familiar with undertaking monitoring reviews, a shared care model such as this has the potential to be a cost-effective way of managing patients with nAMD and to free up resources in hospital eye clinics, especially as the review itself cost less in the community. The findings of the economic evaluation should be used carefully to guide further planning of shared care models, as the hypothetical nature of shared service provision in the ECHoES study gave rise to important uncertainties. Indeed, the economic model should be useful for exploring concrete shared care pathways.
Patients and professionals were enthusiastic about the opportunities afforded by a shared care model for nAMD, which has the potential to relieve pressure on hospital clinic capacity and to offer a more patient-centred model of care. However, ophthalmologists and service users need reassurance that the convenience of community monitoring would not compromise the standard of care. Training for optometrists, under the supervision of ophthalmologists, was deemed to be the most effective method to ensure competency; it might also improve communication and trust between the professions.
Acknowledgements
The trial was funded by the UK National Institute for Health Research HTA programme (reference: 11/129/195). The authors would like to thank Michelle McGaughey for her assistance in setting up and running the webinar training sessions and the Clinical Trials and Evaluation Unit database team in Bristol for their assistance in maintaining the web application.
Contributions of authors
Professor Barnaby C Reeves (Professorial Research Fellow, health services research) was involved in writing the application for trial funding, designing and managing the trial and drafting this report.
Ms Lauren J Scott (Medical Statistician, clinical trials) was involved in managing the trial, carrying out the statistical analysis and drafting this report.
Dr Jodi Taylor (Trial Co-ordinator, clinical trials) was involved in managing the trial and drafting this report.
Dr Ruth Hogg (Lecturer In Optometry, optometry) was involved in writing the application for trial funding and designing the trial.
Dr Chris A Rogers (Reader in Medical Statistics, clinical trials) was involved in writing the application for trial funding, designing and managing the trial and supervising the statistical analysis.
Dr Sarah Wordsworth (Senior Researcher, health economics) was involved in writing the application for trial funding, designing the trial, designing the health economic evaluation questionnaire and carrying out the health economic analysis.
Dr Daisy Townsend (Research Associate, qualitative research) was involved in conducting and analysing the findings of the interviews and focus group meetings.
Dr Alyson Muldrew (Research Scientist, ophthalmology) was involved in designing the trial and developing the vignette database.
Dr Tunde Peto (Head of Ophthalmic Image Analysis Centre, medical retina) provided trial input and expert assessment of vignettes.
Dr Mara Violato (Senior Researcher, health economics) was involved in designing the health economic evaluation questionnaire and carrying out the health economics analysis.
Dr Helen Dakin (Senior Researcher, health economics) was involved in carrying out the health economics analysis.
Ms Heike Cappel-Porter (Database Manager, clinical trials) created and maintained the web application.
Dr Nicola Mills (Research Fellow, qualitative research) was involved in designing the trial and supervised the collection and analysis of the focus group and interview data.
Dr Dermot O’Reilly (Clinical Senior Lecturer, public health) conceived the trial and was involved in writing the application for trial funding.
Professor Simon P Harding (Clinical Professor in Ophthalmology, medical retina) was involved in writing the application for trial funding and provided trial input and expert assessment of vignettes.
Professor Usha Chakravarthy (Clinical Professor in Ophthalmology, medical retina) was involved in writing the application for trial funding, designing the trial and developing the vignette database, and providing trial input and expert assessment of vignettes.
Publications
Townsend D, Reeves BC, Taylor J, Chakravarthy U, O’Reilly D, Hogg RE, et al. Health professionals’ and service users’ perspectives of shared care for monitoring wet age-related macular degeneration: a qualitative study alongside the ECHoES trial. BMJ Open 2015;5:e007400.
Taylor J, Scott LJ, Rogers CA, Muldrew A, O’Reilly D, Wordsworth S, et al. The Effectiveness of Community vs. Hospital Eye Service follow-up for patients with neovascular age-related macular degeneration with quiescent disease: design and implementation of the ECHoES study [published online ahead of print 9 October 2015]. Eye (Lond) 2015. http://dx.doi.org/10.1038/eye.2015.170
Violato M, Dakin H, Chakravarthy U, Reeves BC, Peto T, Hogg RE, et al. Cost-effectiveness of community versus hospital eye service follow-up for patients with quiescent treated age-related macular degeneration alongside the ECHoES randomised trial. BMJ Open 2016;6:e011121.
Data sharing statement
A file containing anonymised individual participant data (i.e. data characterising the optometrists and ophthalmologists and their vignette assessments) may be made available for secondary research, conditional on assurance from the secondary researcher that the proposed use of the data is compliant with the Medical Research Council Policy on Data Preservation and Sharing regarding scientific quality, ethical requirements and value for money. A minimum requirement with respect to scientific quality will be a publicly available prespecified protocol describing the purpose, methods and analysis of the secondary research, for example a protocol for a Cochrane systematic review.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HTA programme or the Department of Health. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HTA programme or the Department of Health.
References
- Ranibizumab and Pegaptanib for the Treatment of Age-Related Macular Degeneration. London: NICE; 2008.
- Rosenfeld PJ, Brown DM, Heier JS, Boyer DS, Kaiser PK, Chung CY, et al. Ranibizumab for neovascular age-related macular degeneration. N Engl J Med 2006;355:1419-31. http://dx.doi.org/10.1056/NEJMoa054481.
- Brown DM, Kaiser PK, Michels M, Soubrane G, Heier JS, Kim RY, et al. Ranibizumab versus verteporfin for neovascular age-related macular degeneration. N Engl J Med 2006;355:1432-44. http://dx.doi.org/10.1056/NEJMoa062655.
- Chakravarthy U, Harding SP, Rogers CA, Downes SM, Lotery AJ, et al.; IVAN Study Investigators. Ranibizumab versus bevacizumab to treat neovascular age-related macular degeneration: one-year findings from the IVAN randomized trial. Ophthalmology 2012;119:1399-411. http://dx.doi.org/10.1016/j.ophtha.2012.04.015.
- Heier JS, Brown DM, Chong V, Korobelnik JF, Kaiser PK, Nguyen QD, et al. Intravitreal aflibercept (VEGF trap-eye) in wet age-related macular degeneration. Ophthalmology 2012;119:2537-48. http://dx.doi.org/10.1016/j.ophtha.2012.09.006.
- Martin DF, Maguire MG, Ying GS, Grunwald JE, Fine SL, et al.; CATT Research Group. Ranibizumab and bevacizumab for neovascular age-related macular degeneration. N Engl J Med 2011;364:1897-908. http://dx.doi.org/10.1056/NEJMoa1102673.
- Chakravarthy U, Harding SP, Rogers CA, Downes SM, Lotery AJ, Culliford LA, et al. Alternative treatments to inhibit VEGF in age-related choroidal neovascularisation: 2-year findings of the IVAN randomised controlled trial. Lancet 2013;382:1258-67. http://dx.doi.org/10.1016/S0140-6736(13)61501-9.
- Martin DF, Maguire MG, Fine SL, Ying GS, Jaffe GJ, Grunwald JE, et al. Ranibizumab and bevacizumab for treatment of neovascular age-related macular degeneration: two-year results. Ophthalmology 2012;119:1388-98. http://dx.doi.org/10.1016/j.ophtha.2012.03.053.
- Chakravarthy U, Harding SP, Rogers CA, Downes S, Lotery AJ, Dakin H, et al. A randomised controlled trial to assess the effectiveness and cost-effectiveness of alternative treatments to Inhibit VEGF in Age-related choroidal Neovascularisation (IVAN). Health Technol Assess 2015;19.
- Hau S, Ehrlich D, Binstead K, Verma S. An evaluation of optometrists’ ability to correctly identify and manage patients with ocular disease in the accident and emergency department of an eye hospital. Br J Ophthalmol 2007;91:437-40. http://dx.doi.org/10.1136/bjo.2006.105593.
- Banes MJ, Culham LE, Bunce C, Xing W, Viswanathan A, Garway-Heath D. Agreement between optometrists and ophthalmologists on clinical management decisions for patients with glaucoma. Br J Ophthalmol 2006;90:579-85. http://dx.doi.org/10.1136/bjo.2005.082388.
- Amoaku W, Blakeney S, Freeman M, Gale R, Johnston R, Kelly SP, et al. Action on AMD. Optimising patient management: act now to ensure current and continual delivery of best possible patient care. Eye 2012;26:S2-S21. http://dx.doi.org/10.1038/eye.2011.343.
- Kelly SP, Wallwork I, Haider D, Qureshi K. Teleophthalmology with optical coherence tomography imaging in community optometry. Evaluation of a quality improvement for macular patients. Clin Ophthalmol 2011;5:1673-8. http://dx.doi.org/10.2147/OPTH.S26753.
- Cameron JR, Ahmed S, Curry P, Forrest G, Sanders R. Impact of direct electronic optometric referral with ocular imaging to a hospital eye service. Eye 2009;23:1134-40. http://dx.doi.org/10.1038/eye.2008.196.
- Singer MA, Awh CC, Sadda S, Freeman WR, Antoszyk AN, Wong P, et al. HORIZON: An open-label extension trial of ranibizumab for choroidal neovascularization secondary to age-related macular degeneration. Ophthalmology 2012;119:1175-83. http://dx.doi.org/10.1016/j.ophtha.2011.12.016.
- Violato M, Dakin H, Chakravarthy U, Reeves BC, Peto T, Hogg RE, et al. Cost-effectiveness of community versus hospital eye service follow-up for patients with quiescent treated age-related macular degeneration alongside the ECHoES randomised trial. BMJ Open 2016;6. http://dx.doi.org/10.1136/bmjopen-2016-011121.
- Cochran WG, Cox GM. Experimental Designs. Hoboken, NJ: John Wiley & Sons; 1992.
- Fleiss JL. Balanced incomplete block-designs for interrater reliability studies. Appl Psychol Meas 1981;5:105-12. http://dx.doi.org/10.1177/014662168100500115.
- Taylor J, Scott LJ, Rogers CA, Muldrew A, O’Reilly D, Wordsworth S, et al. The design and implementation of a study to investigate the effectiveness of community vs. hospital eye service follow-up for patients with neovascular age-related macular degeneration with quiescent disease (ECHoES). Eye (Lond) 2015. http://dx.doi.org/10.1038/eye.2015.170.
- ECHoES Trial Protocol. Southampton: National Institute for Health Research; 2013.
- Guide to the Methods of Technology Appraisal 2013. London: NICE; 2013.
- Petrou S, Gray A. Economic evaluation alongside randomised controlled trials: design, conduct, analysis, and reporting. BMJ 2011;342. http://dx.doi.org/10.1136/bmj.d1548.
- Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMC Med 2013;11. http://dx.doi.org/10.1186/1741-7015-11-80.
- Drummond MF, Sculpher MJ, Torrance GW, O’Brien B, Stoddart GL. Methods for the Economic Evaluation of Health Care Programmes. New York, NY: Oxford University Press; 2005.
- Dakin HA, Wordsworth S, Rogers CA, Abangma G, Raftery J, Harding SP, et al. Cost-effectiveness of ranibizumab and bevacizumab for age-related macular degeneration: 2-year findings from the IVAN randomised trial. BMJ Open 2014;4. http://dx.doi.org/10.1136/bmjopen-2014-005094.
- Strauss A, Corbin J. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. London: Sage; 1994.
- Kanski JJ, Bowling B. Clinical Ophthalmology: A Systematic Approach. Philadelphia, PA: Saunders; 2011.
- British National Formulary (online). London: BMJ Group and Pharmaceutical Press; n.d.
- Curtis L. Unit Costs of Health and Social Care 2013. Canterbury: PSSRU, University of Kent; 2013.
- Office for National Statistics. Population of England Aged 50 Years and Over. n.d. www.ons.gov.uk/ons/rel/snpp/sub-national-population-projections/2012-based-projections/rft-population-regions.xls (accessed 27 May 2015).
- Owen CG, Jarrar Z, Wormald R, Cook DG, Fletcher AE, Rudnicka AR. The estimated prevalence and incidence of late stage age related macular degeneration in the UK. Br J Ophthalmol 2012;96:752-6. http://dx.doi.org/10.1136/bjophthalmol-2011-301109.
- National Institute for Health and Care Excellence (NICE). Macular Degeneration (Wet Age-Related) – Aflibercept: Costing Template. n.d. www.nice.org.uk/guidance/ta294/resources (accessed 21 December 2014).
- Patton MQ. Qualitative Research & Evaluation Methods. London: Sage Publications; 2002.
- Bhargava J, Bhan-Bhargava A, Foss A, King A. Views of glaucoma patients on provision of follow-up care; an assessment of patient preferences by conjoint analysis. Br J Ophthalmol 2008;92:1601-5. http://dx.doi.org/10.1136/bjo.2008.140483.
- O’Connor P, Harper C, Brunton C, Clews S, Haymes S, Keeffe J. Shared care for chronic eye diseases: perspectives of ophthalmologists, optometrists and patients. Med J Aust 2012;196:646-50. http://dx.doi.org/10.5694/mja11.10856.
- Burton A, Shaw R, Gibson J. ‘I’d like to know what causes it, you know, anything I’ve done?’ Are we meeting the information and support needs of patients with macular degeneration? A qualitative study. BMJ Open 2013;3. http://dx.doi.org/10.1136/bmjopen-2013-003306.
- Mitchell J, Bradley P, Anderson S, Ffytche T, Bradley C. Perceived quality of health care in macular disease: a survey of members of the Macular Disease Society. Br J Ophthalmol 2002;86:777-81. http://dx.doi.org/10.1136/bjo.86.7.777.
- Holtzer-Goor KM, Plochg T, Lemij HG, van Sprundel E, Koopmanschap MA, Klazinga NS. Why a successful task substitution in glaucoma care could not be transferred from a hospital setting to a primary care setting: a qualitative study. Implement Sci 2013;8. http://dx.doi.org/10.1186/1748-5908-8-14.
- Gray SF, Spry PG, Brookes ST, Peters TJ, Spencer IC, Baker IA, et al. The Bristol shared care glaucoma study: outcome at follow up at 2 years. Br J Ophthalmol 2000;84:456-63. http://dx.doi.org/10.1136/bjo.84.5.456.
- Gray SF, Spencer IC, Spry PG, Brookes ST, Baker IA, Peters TJ, et al. The Bristol shared care glaucoma study-validity of measurement and patient satisfaction. J Public Health Med 1997;19:431-6. http://dx.doi.org/10.1093/oxfordjournals.pubmed.a024673.
- Konstantakopoulou E, Harper R, Edgar D, Lawrenson J. A qualitative study of stakeholder views regarding participation in locally commissioned enhanced optometric services. BMJ Open 2014;4. http://dx.doi.org/10.1136/bmjopen-2013-004781.
- Myint J, Edgar D, Kotecha A, Murdoch I, Lawrenson J. Barriers perceived by UK-based community optometrists to the detection of primary open angle glaucoma. Ophthalmic Physiol Opt 2010;30:847-53. http://dx.doi.org/10.1111/j.1475-1313.2010.00792.x.
- Klein R, Klein BE, Knudtson MD, Wong TY, Cotch MF, Liu K, et al. Prevalence of age-related macular degeneration in 4 racial/ethnic groups in the multi-ethnic study of atherosclerosis. Ophthalmology 2006;113:373-80. http://dx.doi.org/10.1016/j.ophtha.2005.12.013.
- Rahi J, Peckham CS, Cumberland PM. Visual impairment due to undiagnosed refractive error in working age adults in Britain. Br J Ophthalmol 2008;92:1190-4. http://dx.doi.org/10.1136/bjo.2007.133454.
- Gulliford M, Dodhia H, Chamley M, McCormick K, Mohamed M, Naithani S, et al. Socio-economic and ethnic inequalities in diabetes retinal screening. Diabet Med 2010;27:282-8. http://dx.doi.org/10.1111/j.1464-5491.2010.02946.x.
- Wilson J, While A. Methodological issues surrounding the use of vignettes in qualitative research. J Interprof Care 1998;12:79-87. http://dx.doi.org/10.3109/13561829809014090.
- Collett JL, Childs E. Minding the gap: meaning, affect, and the potential shortcomings of vignettes. Soc Sci Res 2011;40:513-22. http://dx.doi.org/10.1016/j.ssresearch.2010.08.008.
- Willig C. Introducing Qualitative Research in Psychology. Maidenhead: McGraw-Hill; 2008.
- Donaldson C, Bate A, Mitton C, Dionne F, Ruta D. Rational disinvestment. Q J Med 2010;103:801-7. http://dx.doi.org/10.1093/qjmed/hcq086.
Appendix 1 ECHoES Trial Steering Committee
Independent members
- Chairperson: Professor Jon Gibson, Professor of Ophthalmology at Aston University, Birmingham.
- Patient representative/service user: Cathy Yelf, Macular Society, Andover.
- Hospital optometrist: Dr Robert Harper, Manchester Royal Eye Hospital, Manchester.
- Community optometrist: Mr Timothy Young, McDowell Opticians, Belfast.
- Ophthalmologist: Professor Yit Yang, New Cross Hospital, Wolverhampton.
- Statistician/researcher: Professor Nicholas Freemantle, University College London Medical School, London.
Non-independent members
- Trial representatives: Professor Usha Chakravarthy, Professor Barney Reeves and Dr Jodi Taylor.
- A representative of the sponsor institution.
- A representative of the funding body.
Given the virtual nature of the trial, which did not impact on the care of any actual patients, we did not convene a Data Monitoring and Safety Committee.
Appendix 2 Screenshots from the web application
Appendix 3 Additional health economic evaluation information
Copy of ECHoES Resource use and cost questionnaire for optometrists
Components of a typical monitoring review
Component | Description | Skills required |
---|---|---|
History | Discussion of patient-reported vision status in each eye and comparison with status at previous visit | Communication skills |
Clinical examination: slit lamp biomicroscopy; anterior segment and macula | Clinical examination to ensure absence of VEGF-related adverse events and/or incidental other disease | Slit lamp and ophthalmoscopy skills |
Visual acuity assessment | Visual acuity recorded as letters read on an ETDRS chart at 4 m (with/without mirrors) using previously recorded refraction. The results will then be recorded in the patient medical record | Test and interpret visual acuity |
Administration of 1% tropicamide drops | Pupil dilatation. Drops will need to be administered 20 minutes before CF photography and spectral domain coherence tomography | Instillation of eye drops |
CF photography (or equivalent CF image) | One good-quality photograph centred on the centre of the macula of each eye | Taking and interpreting retinal images |
Spectral domain OCT | Cube scan of the posterior pole for each eye. Images will be acquired using a standardised protocol, which is pre-set on the OCT machine | Taking and interpreting OCT images |
Final assessment | A retreatment decision will be made on the basis of the visual acuity data and interpretation of images obtained. The decision and rationale will need to be entered in the patient record | Ability to assess the need for retreatment and arrange necessary follow-up |
Resource use and costs associated with training
The cost of each of the three training activities was calculated by multiplying the time spent on each activity by the unit cost of optometrist time. The average cost per hour of optometrist time was £62.13 (SD £34.62), calculated from participants’ reports of salary and hours worked in the health economics questionnaire. Given that our objective was to estimate the cost of optometrist training per monitoring review, each of the three cost components in the table below was divided by the annual number of patients (after the changes in practice would take place, as reported by each optometrist in reply to question Q7 of the health economics questionnaire), to obtain the cost of optometrist training per monitoring review, that is, £0.89 (SD £1.08).
Training type | Optometrist’s time (minutes), mean (SD) | Optometrist’s cost (£), mean (SD) |
---|---|---|
Attending webinars | 120 (0) | 124 (69.26) |
Revisiting webinars | 90 (68.74) | 96 (111.58) |
Consulting other resources | 64 (76.21) | 66 (112.35) |
Observations | 48 | 48 |
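The per-review training cost arithmetic described above can be sketched in a few lines. This is an illustrative reconstruction only: the `annual_patients` value below is hypothetical, since the trial divided by each optometrist’s own questionnaire answer rather than a single published figure.

```python
# Illustrative reconstruction of the training-cost-per-review calculation.
HOURLY_COST = 62.13  # mean cost per hour of optometrist time (GBP)

# Mean minutes spent on each training activity (from the table above).
training_minutes = {
    "attending webinars": 120,
    "revisiting webinars": 90,
    "consulting other resources": 64,
}

# Convert each activity's time to hours and cost it at the hourly rate.
total_training_cost = sum(
    minutes / 60 * HOURLY_COST for minutes in training_minutes.values()
)

annual_patients = 300  # hypothetical annual monitoring caseload per optometrist
training_cost_per_review = total_training_cost / annual_patients
```

With these figures the total training cost per optometrist is roughly £284, and dividing by a plausible caseload gives a per-review figure of the same order as the reported £0.89 mean.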
Sensitivity analysis 1: three ranibizumab injections and consultations instead of one
Lesion status assessment | Observationsa (%) | Pathways cost (£),b mean (SD) | |
---|---|---|---|
Experts | Optometrists | ||
Reactivated | Reactivated | 795 (39.43) | 2548.83 (67.90) |
Reactivated | Suspicious | 142 (7.04) | 103.61 (18.51) |
Reactivated | Quiescent | 57 (2.83) | 51.29 (9.08) |
Suspicious | Reactivated | 10 (0.50) | 118.12 (16.39) |
Suspicious | Suspicious | 11 (0.55) | 57.04 (9.10) |
Suspicious | Quiescent | 14 (0.69) | 52.96 (9.37) |
Quiescent | Reactivated | 105 (5.21) | 117.14 (32.61) |
Quiescent | Suspicious | 234 (11.61) | 78.31 (11.53) |
Quiescent | Quiescent | 648 (32.14) | 51.98 (8.23) |
Experts | Ophthalmologists | ||
Reactivated | Reactivated | 736 (36.51) | 2495.81 (70.01) |
Reactivated | Suspicious | 196 (9.72) | 153.18 (92.25) |
Reactivated | Quiescent | 62 (3.08) | 77.01 (45.49) |
Suspicious | Reactivated | 1 (0.05) | 2452.74 (n/a) |
Suspicious | Suspicious | 17 (0.84) | 68.84 (31.004) |
Suspicious | Quiescent | 17 (0.84) | 60.57 (17.16) |
Quiescent | Reactivated | 35 (1.73) | 2493.45 (65.87) |
Quiescent | Suspicious | 146 (7.24) | 150.34 (95.19) |
Quiescent | Quiescent | 806 (39.98) | 75.28 (44.72) |
Costs and effects | Optometrists (observations, n = 2016) | Ophthalmologists (observations, n = 2016) |
---|---|---|
Cost of a monitoring review (£), mean (SD) | 1047.03 (1213.05) | 1015.01 (1168.80) |
Proportion of correct assessments (SD) | 0.844 (0.363) | 0.854 (0.353) |
Incremental cost (£) (95% CI) | 32.02 (–60.87 to 124.90) | |
Incremental benefit, proportion of correct assessments (95% CI) | –0.0099 (–0.045 to 0.025) | |
ICER, incremental cost per correct assessmenta | Dominated |
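The ‘dominated’ verdicts in these tables follow directly from the signs of the incremental cost and incremental benefit. A minimal sketch of the decision rule (this is our illustration, not code from the study):

```python
def icer_verdict(delta_cost, delta_effect):
    """Classify an intervention against its comparator.

    Returns a dominance verdict when one option is better on both
    dimensions, otherwise the conventional ICER (incremental cost per
    unit of incremental effect).
    """
    if delta_cost >= 0 and delta_effect <= 0:
        return "dominated"   # costs more, delivers no extra benefit
    if delta_cost <= 0 and delta_effect >= 0:
        return "dominant"    # costs less, at least as effective
    return delta_cost / delta_effect

# Sensitivity analysis 1: optometrists cost £32.02 more per review and
# make 0.0099 fewer correct assessments, hence 'dominated'.
verdict = icer_verdict(32.02, -0.0099)
```

In sensitivity analysis 4, by contrast, both increments are negative (optometrists cost less but are slightly less effective), so the function falls through to the ratio, giving a figure close to the reported ICER of £2389.07 once rounding of the published inputs is allowed for.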
Sensitivity analysis 2: one aflibercept injection instead of one ranibizumab
Lesion status assessment | Observationsa (%) | Pathways cost (£),b mean (SD) | |
---|---|---|---|
Experts | Optometrists | ||
Reactivated | Reactivated | 795 (39.43) | 1009.24 (45.50) |
Reactivated | Suspicious | 142 (7.04) | 103.61 (18.51) |
Reactivated | Quiescent | 57 (2.83) | 51.29 (9.08) |
Suspicious | Reactivated | 10 (0.50) | 118.12 (16.39) |
Suspicious | Suspicious | 11 (0.55) | 57.04 (9.10) |
Suspicious | Quiescent | 14 (0.69) | 52.96 (9.37) |
Quiescent | Reactivated | 105 (5.21) | 117.14 (32.61) |
Quiescent | Suspicious | 234 (11.61) | 78.31 (11.53) |
Quiescent | Quiescent | 648 (32.14) | 51.98 (8.23) |
Experts | Ophthalmologists | ||
Reactivated | Reactivated | 736 (36.51) | 956.50 (46.41) |
Reactivated | Suspicious | 196 (9.72) | 153.18 (92.25) |
Reactivated | Quiescent | 62 (3.08) | 77.01 (45.49) |
Suspicious | Reactivated | 1 (0.05) | 2452.74 (n/a) |
Suspicious | Suspicious | 17 (0.84) | 68.84 (31.004) |
Suspicious | Quiescent | 17 (0.84) | 60.57 (17.16) |
Quiescent | Reactivated | 35 (1.73) | 2493.45 (65.87) |
Quiescent | Suspicious | 146 (7.24) | 150.34 (95.19) |
Quiescent | Quiescent | 806 (39.98) | 75.28 (44.72) |
Costs and effects | Optometrists (n = 2016) | Ophthalmologists (n = 2016) |
---|---|---|
Cost of a monitoring review (£), mean SD | 439.90 (460.90) | 425.61 (422.93) |
Proportion of correct assessments (SD) | 0.844 (0.363) | 0.854 (0.353) |
Incremental cost (£) (95% CI) | 14.29 (–19.91 to 48.49) | |
Incremental benefit, proportion of correct assessments (95% CI) | –0.0099 (–0.045 to 0.025) | |
ICER, incremental cost per correct assessmenta | Dominated |
Sensitivity analysis 3: one bevacizumab injection instead of one ranibizumab
Lesion status assessment | Observationsa (%) | Pathways cost (£),b mean (SD) | |
---|---|---|---|
Experts | Optometrists | ||
Reactivated | Reactivated | 795 (39.43) | 242.23 (45.50) |
Reactivated | Suspicious | 142 (7.04) | 103.61 (18.51) |
Reactivated | Quiescent | 57 (2.83) | 51.29 (9.08) |
Suspicious | Reactivated | 10 (0.50) | 118.12 (16.39) |
Suspicious | Suspicious | 11 (0.55) | 57.04 (9.10) |
Suspicious | Quiescent | 14 (0.69) | 52.96 (9.37) |
Quiescent | Reactivated | 105 (5.21) | 117.14 (32.61) |
Quiescent | Suspicious | 234 (11.61) | 78.31 (11.53) |
Quiescent | Quiescent | 648 (32.14) | 51.98 (8.23) |
Experts | Ophthalmologists | ||
Reactivated | Reactivated | 736 (36.51) | 189.50 (46.41) |
Reactivated | Suspicious | 196 (9.72) | 153.18 (92.25) |
Reactivated | Quiescent | 62 (3.08) | 77.01 (45.49) |
Suspicious | Reactivated | 1 (0.05) | 184.21 (n/a) |
Suspicious | Suspicious | 17 (0.84) | 68.84 (31.004) |
Suspicious | Quiescent | 17 (0.84) | 60.57 (17.16) |
Quiescent | Reactivated | 35 (1.73) | 189.12 (38.002) |
Quiescent | Suspicious | 146 (7.24) | 150.34 (95.19) |
Quiescent | Quiescent | 806 (39.98) | 75.28 (44.72) |
Costs and effects | Optometrists | Ophthalmologists |
---|---|---|
Cost of a monitoring review (£), mean (SD) | 137.43 (91.78) | 131.89 (77.12) |
Proportion of correct assessments (SD) | 0.844 (0.363) | 0.854 (0.353) |
Incremental cost (£) (95% CI) | 5.54 (–0.83 to 11.92) | |
Incremental benefit, proportion of correct assessments (95% CI) | –0.0099 (–0.045 to 0.025) | |
ICER, incremental cost per correct assessmenta | Dominated |
Sensitivity analysis 4: only monitoring review cost, no pathway cost
Lesion status assessment | Observationsa (%) | Pathways cost (£),b mean (SD) | |
---|---|---|---|
Experts | Optometrists | ||
Reactivated | Reactivated | 795 (39.43) | 51.79 (8.49) |
Reactivated | Suspicious | 142 (7.04) | 51.81 (9.26) |
Reactivated | Quiescent | 57 (2.83) | 51.29 (9.08) |
Suspicious | Reactivated | 10 (0.50) | 50.64 (7.12) |
Suspicious | Suspicious | 11 (0.55) | 57.04 (9.10) |
Suspicious | Quiescent | 14 (0.69) | 52.96 (9.37) |
Quiescent | Reactivated | 105 (5.21) | 51.49 (8.01) |
Quiescent | Suspicious | 234 (11.61) | 52.21 (7.69) |
Quiescent | Quiescent | 648 (32.14) | 51.98 (8.23) |
Experts | Ophthalmologists | ||
Reactivated | Reactivated | 736 (36.51) | 76.09 (43.66) |
Reactivated | Suspicious | 196 (9.72) | 76.59 (46.13) |
Reactivated | Quiescent | 62 (3.08) | 77.01 (45.49) |
Suspicious | Reactivated | 1 (0.05) | 89.70 (n/a) |
Suspicious | Suspicious | 17 (0.84) | 68.84 (31.004) |
Suspicious | Quiescent | 17 (0.84) | 60.57 (17.16) |
Quiescent | Reactivated | 35 (1.73) | 76.71 (38.22) |
Quiescent | Suspicious | 146 (7.24) | 75.17 (47.59) |
Quiescent | Quiescent | 806 (39.98) | 75.28 (44.72) |
Costs and effects | Optometrists (observations, n = 2016) | Ophthalmologists (observations, n = 2016) |
---|---|---|
Cost of a monitoring review (£), mean (SD) | 51.90 (8.36) | 75.60 (44.31) |
Proportion of correct assessments (SD) | 0.844 (0.363) | 0.854 (0.353) |
Incremental cost (£) (95% CI) | –23.70 (–26.09 to –21.31) | |
Incremental benefit, proportion of correct assessments (95% CI) | –0.0099 (–0.045 to 0.025) | |
ICER, incremental cost per correct assessment (95% CI) (£) Fieller’s methoda | 2389.07; lower limit: 535, upper limit: –943b |
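The Fieller limits quoted above can look counterintuitive (a ‘lower’ limit above the ‘upper’ one). This happens because, when the incremental effect is not significantly different from zero, the Fieller quadratic has a negative leading coefficient and its two roots bound an excluded region rather than a conventional interval. The following sketch assumes standard errors back-calculated from the confidence intervals in the table and a cost–effect covariance of zero; the published limits used the trial-level covariance, so the roots differ slightly.

```python
import math

def fieller_ci(dc, de, se_c, se_e, cov_ce=0.0, z=1.96):
    """Fieller confidence limits for the ICER dc/de.

    dc, de     : mean incremental cost and incremental effect
    se_c, se_e : their standard errors; cov_ce their covariance.
    Returns the two roots of the Fieller quadratic a*R^2 + b*R + c = 0.
    When de is not significantly different from zero, a < 0 and the
    roots bound an excluded region, so the limits look inverted.
    """
    a = de**2 - (z * se_e) ** 2
    b = -2 * (dc * de - z**2 * cov_ce)
    c = dc**2 - (z * se_c) ** 2
    disc = b**2 - 4 * a * c
    if disc < 0:
        return None  # no real roots: the CI is the whole real line
    r1 = (-b - math.sqrt(disc)) / (2 * a)
    r2 = (-b + math.sqrt(disc)) / (2 * a)
    return min(r1, r2), max(r1, r2)

# Standard errors back-calculated from the 95% CIs in the table above
# (covariance assumed zero for illustration):
se_c = (26.09 - 21.31) / (2 * 1.96)   # from the incremental cost CI
se_e = (0.045 + 0.025) / (2 * 1.96)   # from the incremental benefit CI
lo, hi = fieller_ci(-23.70, -0.0099, se_c, se_e)
```

Under these assumptions the roots fall near the reported limits of 535 and –943, and the point estimate –23.70/–0.0099 is close to the reported ICER of £2389.07 per correct assessment.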
Appendix 4 Additional figures
Appendix 5 Statistical analysis plan
1. Introduction to SAP
1.1 Scope
This statistical analysis plan (SAP) details information regarding the statistical analysis of the completed ECHoES trial and covers analyses of trial data outlined in the study protocol. It does not include the health economic evaluation or analysis of the focus group discussions.
1.2 Editorial changes
Any changes made to this SAP after approval must be clearly justified and documented as an amendment at the end of this document. The SAP should then be re-approved.
1.3 SAP document approval
The trial statistician should authorise this document.
2. Study background and objectives
2.1 Study background
Wet, or neovascular, age-related macular degeneration (nAMD) is a condition which causes severe sight loss in older people. There is currently no evidence about the effectiveness of community follow-up by optometrists for nAMD. However, there is evidence about the acceptability of further training to optometrists, and their effectiveness in providing ‘shared care’ with the hospital eye service (HES), for other eye diseases. Community optometrists already participate successfully in shared care management schemes for patients with glaucoma and diabetic eye disease, and evaluations comparing management decisions by optometrists and ophthalmologists have shown acceptable agreement in the context of glaucoma and accident and emergency services.
The question of interest to the NHS is whether community optometrists can be trained to make decisions about the need for retreatment in patients with nAMD whose disease has been rendered quiescent by treatment with anti-vascular endothelial growth factor (VEGF) drugs with the same accuracy as ophthalmologists working in the HES.
2.2 Study objectives
The aim of the trial is to test the hypothesis that, compared with conventional hospital eye clinic follow-up, community follow-up by optometrists (after appropriate training) is not inferior for patients with nAMD with stable vision. This hypothesis will be tested by comparing decisions made by samples of ophthalmologists working in the HES and optometrists working in the community about the need for retreatment, following a period in which patients have not required treatment. Rather than carrying out a new (prospective) trial, optometrists and ophthalmologists participating in the trial will make decisions about vignettes composed of clinical information and colour fundus (CF) and optical coherence tomography (OCT) images collected in the course of the IVAN trial (HTA ref: 07/36/01). Retreatment decisions made by participants in both groups will be validated against a reference standard based on the opinions of three medical retinal experts (see Appendix 5, section 4.1).
The trial has five specific objectives:
-
To compare the proportion of retreatment decisions (based on lesion classifications, see Appendix 5, section 2.3) classified as ‘correct’ (against the reference standard, ‘reactivated’ vs. ‘suspicious’ or ‘inactive lesion’, see Appendix 5, section 4.1) made by optometrists and ophthalmologists.
-
To estimate the agreement, and nature of disagreements, between retreatment decisions made by optometrists and ophthalmologists.
-
To estimate the influence of vignette clinical and demographic information on retreatment decisions.
-
To estimate the cost-effectiveness of follow-up in the community by optometrists compared to follow-up by ophthalmologists in the HES. This is not covered in this SAP.
-
To ascertain the views of patient representatives, optometrists, ophthalmologists and clinical commissioners on the proposed shared care model. This is not covered in this SAP.
2.3 Terminology
Throughout the protocol and this SAP, the terms lesion classification, retreatment decision and referral decision are used interchangeably. Unless otherwise stated, the analyses will use the lesion classification question, for which participants must classify the vignette as reactivated, suspicious or quiescent. The logic behind this is that identifying the key vignette features and classifying the lesion accordingly is the difficult part; retreatment/referral decisions can then be defined by rules based on these classifications.
2.4 Primary outcome
The primary outcome is a participant’s judgement of lesion classification (“Reactivated lesion” vs. “Inactive lesion” or “Suspicious lesion”) against the reference standard (see section 4.1). The number of ‘correct’ assessments will be compared between the two participant groups (see section 4.2 for details).
2.5 Secondary outcomes
The secondary outcomes are:
-
The frequency of “serious” errors judged likely to be sight-threatening. This classification will be assigned to a participant’s classification of a vignette when the participant’s decision is “lesion quiescent” and the reference standard classification is “lesion reactivation”, i.e. a definitive false negative classification by the participant. Definitive false positive decisions will not be considered serious but will be tabulated separately. Misclassifications involving classifications of “suspicious lesion”, whether assigned as the reference standard or by a participant, will also not be considered serious.
-
Judgements about the presence or absence of lesion components, e.g. blood, exudates and subretinal fluid (SRF) in the colour fundus images; SRF, intraretinal fluid/cysts and pigment epithelial detachment (PED) in the OCT images.
-
Participant-rated confidence in their decisions about the lesion classification, on a 5-point scale.
3. Study population
The study population consists of fully qualified optometrists registered with the General Optical Council (GOC) for at least 3 years, and ophthalmologists with 3 years’ post-registration experience in ophthalmology. The planned number of participants is 96, with a 1:1 ratio of optometrists to ophthalmologists. On the basis that some participants will be unable to attend the webinars, will not complete the training vignettes or will not correctly assess an adequate number of training vignettes, more than 96 participants will be recruited to attend the webinars. The first 48 ophthalmologists and 48 optometrists enrolled in the study will be assigned their training vignettes, and the additional participants will be assigned vignettes on an as-needed basis.
The initial sample size calculation was based on the number of vignettes required to be assessed, and the number of participants was then decided based on this calculation.
3.1 Trial design
Participants are not randomised into the trial because, by training, they are either optometrists or ophthalmologists. However, randomisation, using a randomised block design, was used to assign vignettes to each participant. The ‘main study’ vignettes (i.e. excluding training vignettes) were assigned to participants in a random order such that each vignette is seen seven times by an ophthalmologist and seven times by an optometrist, and each of the 48 participants in each group sees 42 vignettes. There are therefore a total of 288 (48 × 42/7) vignettes included in the trial. The selection of vignettes and the order in which they are viewed are matched for the two groups in a 1:1 ratio, to reduce any bias that might be introduced by different assessors viewing different vignettes in different orders. Each participant is also assigned a random set of 24 training vignettes (and a further 24 training vignettes if they fail their first set) such that none of the vignettes in their training set(s) appears in their main study set. Again, this is matched across groups.
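The design arithmetic above can be checked directly; a minimal sketch (the variable names are illustrative, not taken from the trial software):

```python
# Verify the balanced assignment described above: 48 participants per
# group, 42 main study vignettes each, every vignette seen 7 times
# by each professional group.
participants_per_group = 48
vignettes_per_participant = 42
views_per_vignette_per_group = 7

total_views = participants_per_group * vignettes_per_participant  # 2016 views per group
n_vignettes = total_views // views_per_vignette_per_group
print(n_vignettes)  # 288 distinct main study vignettes
```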
3.2 Flow of participants
Participants who are consented into the study must first complete two webinar training sessions. Once these have been completed, they are allocated a training set of 24 vignettes, of which they must correctly assess at least 18 (75%) to pass their training. If they pass, they will then be allocated their 42 ‘main study’ vignettes. If they fail their first training set, they are allocated a second training set of a further 24 vignettes. If they pass this second training set they will be allocated their 42 main study vignettes; if they fail their second training set they will be withdrawn from the study. Only participants who assess all 42 of their main study vignettes will be included in the analysis population (see Appendix 5, section 3.5). Participants are followed up until they have assessed all of their main study vignettes or until they withdraw or are withdrawn from the study. The planned study time is approximately 6 months from registration until study completion.
The participant flow will be described in a flow chart.
3.3 Protocol deviations
There are no protocol deviations defined for this study.
3.4 Withdrawals
Participants can withdraw from the study at any time after registration. They may also be withdrawn by the study team if they do not correctly assess an adequate number of vignettes in their training sets (see Appendix 5, section 3.1 for details).
3.5 Analysis population
The analysis population will consist of the 48 ophthalmologists and 48 optometrists who complete their 42 main study vignettes. Participants who withdraw or are withdrawn before completion of their main study vignettes will not be included in the analysis population but their progression in the trial will be described in the flowchart.
4. Derivations
4.1 Reference standard
All vignettes will be assessed by three experts; it is expected that for most vignettes the classification decisions of all experts will agree. For those where there is disagreement, the three experts will collectively make a final consensus decision. The agreed assessment decision for each vignette (“reactivated”, “suspicious” or “quiescent” lesion) will be known as the reference standard and will be used to calculate ‘correct’ assessment decisions of all participants.
4.2 Primary outcome
For the primary outcome, a binary ‘active lesion’ variable will be derived for both the reference standard and the participant responses, as follows:
-
If a vignette is classified as “lesion reactivated” then active lesion = Yes
-
If a vignette is classified as “lesion quiescent” or “lesion suspicious” then active lesion = No
-
Else missing (by design there should not be any missing data)
An indicator of whether participants have made the correct classification for each vignette will then be derived as follows:
-
If the reference standard and a participant have both classified a vignette as an active lesion or have both classified a vignette as not an active lesion, then classification status = correct
-
If the reference standard has classified a vignette as an active lesion and a participant has not classified it as an active lesion, or vice versa, then classification status = incorrect
-
Else missing (by design there should not be any missing data)
An overall count of the number of vignettes that each participant correctly classified (out of a possible 42) will also be derived.
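The derivation rules above can be sketched in code; the function and label names here are illustrative assumptions, not taken from the trial software:

```python
# Sketch of the SAP derivation rules: collapse the three-way lesion
# classification to the binary 'active lesion' variable, then compare
# participant and reference standard to derive classification status.
def active_lesion(classification):
    """Return True for 'reactivated', False for 'quiescent'/'suspicious'."""
    if classification == "reactivated":
        return True
    if classification in ("quiescent", "suspicious"):
        return False
    return None  # by design there should be no missing data

def classification_status(reference, participant):
    """'correct' if the two binary 'active lesion' values agree."""
    ref, par = active_lesion(reference), active_lesion(participant)
    if ref is None or par is None:
        return None
    return "correct" if ref == par else "incorrect"

# e.g. a 'suspicious' response against a 'reactivated' reference standard
print(classification_status("reactivated", "suspicious"))  # incorrect
```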
4.3 Other variables
New variable | Rules |
---|---|
Secondary outcome (a): ‘serious’ errors judged to be sight-threatening | If reference standard = lesion reactivated and participant opinion = lesion quiescent, then = Yes. If reference standard = lesion reactivated and participant opinion = lesion suspicious or lesion reactivated, then = No. Else missing |
Years since qualification for optometrists | Years since qualification = date of consent – date of registration with the General Optical Council |
Years since qualification for ophthalmologists | First, a date of qualification was calculated: dateofqual = Royal College date if available; else MRCOphth date if available; else FRCOphth date if available; else diploma date; else other appropriate date. Years of experience = date of consent – dateofqual |
5. Statistical analyses
5.1 Participant characteristics
Participant characteristics will be described by group for participants in the analysis population.
Continuous variables will be summarised using the mean and standard deviation (SD) (or median and interquartile range (IQR) if the distribution is skewed), and categorical data will be summarised as a number and percentage. The summary statistic headings given are those we expect to use based on a priori knowledge of the measurements, but may change.
Characteristics of the participants will be described and compared formally using t-tests, Mann–Whitney tests, chi-squared tests or Fisher’s exact tests, as appropriate.
5.2 Primary and secondary outcome data
5.2.1 Adjustment in models
The intention is to adjust all primary and secondary outcome models for participant and vignette number as random effects, and for the order in which the vignettes were viewed as a fixed effect (modelled as tertiles: 1–14, 15–28 and 29–42).
5.2.2 Analysis models
All outcomes listed in the study protocol will be presented as follows. For all formal group comparisons, the ophthalmologists will be the reference group.
5.2.3 The primary outcome
The number and percentage of correct active lesion decisions will be presented by group, and formally compared using logistic regression. The sensitivity and specificity for each of the professional groups, as well as summaries at the individual participant level, will also be presented; a more detailed descriptive breakdown of vignette classifications will be presented separately.
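As an illustration of the sensitivity and specificity summary against the binary reference standard, a minimal sketch (the function and the example data are invented, not the trial code):

```python
# Sensitivity = proportion of reference-active vignettes classified active;
# specificity = proportion of reference-inactive vignettes classified inactive.
def sens_spec(pairs):
    """pairs: iterable of (participant_says_active, reference_is_active) booleans."""
    tp = sum(p and r for p, r in pairs)
    fn = sum((not p) and r for p, r in pairs)
    tn = sum((not p) and (not r) for p, r in pairs)
    fp = sum(p and (not r) for p, r in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# made-up decisions for five vignettes
pairs = [(True, True), (False, True), (False, False), (True, False), (True, True)]
sens, spec = sens_spec(pairs)
print(sens, spec)  # 2/3 and 1/2
```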
The adjusted odds ratio (OR) and corresponding 95% confidence interval (CI) for the group effect (generated from the logistic regression model) will be presented. Using this OR and 95% CI, we will address the non-inferiority limit stated in the protocol: less than 10% absolute difference between the groups assuming the ophthalmologist group would correctly assess 95% of their vignettes. To test this it is necessary to convert this 10% difference to the odds scale as follows:
Using the values in the table below, the OR of a correct response is calculated as (c/d)/(a/b). The non-inferiority limit on the odds scale is therefore (0.85/0.15)/(0.95/0.05) = 0.298.
Ophthalmologist | Optometrist | Overall | |
---|---|---|---|
Correct | a | c | a + c |
Incorrect | b | d | b + d |
Overall | a + b | c + d | a + b + c + d |
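The conversion of the 10% absolute margin to the odds scale can be verified with a short calculation (a sketch of the arithmetic above):

```python
# Non-inferiority margin on the odds scale: the odds of a correct response
# at 85% (95% minus the 10% absolute margin) divided by the odds at 95%.
p_ophth = 0.95   # assumed proportion correct, ophthalmologists
p_optom = 0.85   # 0.95 minus the 10% absolute margin

def odds(p):
    return p / (1 - p)

or_limit = odds(p_optom) / odds(p_ophth)
print(round(or_limit, 3))  # 0.298
```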
In addition to the analysis described above, the influence of key vignette features on correct lesion decisions will be investigated using Poisson regression and presented as incidence rate ratios (IRRs) with 95% CIs. Key features include baseline visual acuity (VA), index VA, patient age and sex, smoking status and past history of angina or heart disease. All features will be retained in the model regardless of statistical significance. The reference standard and group will also be fitted as fixed effects. Interactions between the vignette features and group will also be tested and, if significant at the 5% level, estimates will be presented separately for the two groups.
Note: The protocol states that the primary outcome would be analysed using Poisson regression. However, after further consideration we are proposing to fit a logistic rather than Poisson model in order to fully account for the incomplete block design in the analysis. Fitting a Poisson model would not allow us to include both participant and vignette in the model.
5.2.4 Secondary outcome (a) – serious errors
The numbers and percentages of serious errors judged to be sight threatening will be presented by group. Additionally, the frequencies of such errors will be presented at participant level. The groups will be compared using logistic regression, applied to the subset of vignettes with a reference standard classification of reactivated (other vignettes by definition cannot result in a serious error).
5.2.5 Secondary outcome (b) – lesion characteristics
Participants’ judgements about the presence (or absence) of each of the lesion components (e.g. SRF, PED and blood) will be summarised by numbers and percentages in each group, and formal comparisons between the groups will be made using logistic regression. There is no agreed reference standard for the lesion components, so participants’ classifications will only be compared with each other, not with a ‘correct’ response.
5.2.6 Secondary outcome (c) – confidence in decision making
Participants’ confidence in each of their lesion classification decisions, on a five-point scale, will be summarised for each group. Logistic regression will be used to formally compare confidence (5 vs. 1 to 4) between the two groups. Additionally, the number of correct reactivation decisions will be tabulated by confidence level for each group.
5.2.7 Statistical significance
For hypothesis tests, two-tailed p-values < 0.05 are considered statistically significant. Likelihood ratio tests will be used in preference to Wald tests for hypothesis testing.
5.2.8 Model assumptions
Underlying assumptions will be checked using standard methods, e.g. graphical plots. If assumptions are not valid, alternative methods of analysis will be sought. If outlying observations are found and the models do not fit the data adequately, such observations will be excluded from the main analyses. Sensitivity analyses may be performed to examine the effect of excluding outlying observations on the study’s conclusions.
5.2.9 Subgroup analyses
There are no subgroup analyses planned for this study.
5.2.10 Sensitivity analyses
A sensitivity analysis of the primary outcome will be undertaken in which vignettes graded as suspicious are reclassified into the ‘active lesion’ group rather than the ‘no active lesion’ group, to assess the sensitivity of the conclusions to the classification of these vignettes.
5.2.11 Additional analysis
Lesion classification decisions will be tabulated against referral decisions for ophthalmologists and optometrists.
A descriptive analysis of the time taken to complete each vignette, and how this time changes with experience in the trial (learning curve), will be performed. The relationship between this time and participants’ success in correctly classifying vignettes will also be explored.
Cross-tabulations and kappa statistics will be used to compare the experts’ initial classifications with the final reference standard. Similarly, cross-tabulations and kappa statistics will be used to compare lesion component classifications between the three experts.
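A minimal sketch of the kappa calculation, computed from first principles (the example ratings are invented; in practice a statistics package would be used):

```python
# Cohen's kappa: chance-corrected agreement between two raters.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # expected agreement under independence of the two raters
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# invented initial expert classifications vs. final reference standard
expert_initial = ["reactivated", "quiescent", "suspicious", "quiescent"]
reference_std  = ["reactivated", "quiescent", "quiescent",  "quiescent"]
print(round(cohen_kappa(expert_initial, reference_std), 3))  # 0.556
```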
5.2.12 Missing data
By design there should be no missing data in this study, apart from the time taken to complete vignettes. The time at which a user saved their assessment decision was recorded for each vignette, so the time between saving one vignette and the next can be calculated. However, the time taken for the first vignette of each session cannot be calculated, as the time at which the session started was not captured. Similarly, if a participant takes a break within a session this cannot be identified (except that the time between saving the decisions on two consecutive vignettes may be excessively long). As the time taken to complete vignettes is not an outcome specified in the protocol, the only vignettes with missing times are the first of each session, and the order of the vignettes was randomised, these times will be assumed to be missing at random and a complete-case analysis will be performed.
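The derivation of completion times from consecutive save timestamps can be sketched as follows (the timestamps are invented; the first vignette of a session has no derivable time, as noted above):

```python
# Per-vignette completion time = difference between consecutive 'save'
# timestamps within one session; the session's first vignette is missing.
from datetime import datetime

saves = [datetime(2014, 5, 1, 10, 0),   # session start unknown, so the
         datetime(2014, 5, 1, 10, 4),   # first vignette's time is missing
         datetime(2014, 5, 1, 10, 7)]

minutes = [(b - a).total_seconds() / 60 for a, b in zip(saves, saves[1:])]
print(minutes)  # times for the 2nd and 3rd vignettes only
```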
5.2.13 Multiple testing
No formal adjustment will be made for multiple testing.
5.3 Safety data
There are no safety data in this study: there are no risks to participants and it is therefore not possible for clinical adverse events to be attributed to study-specific procedures.
6. AMENDMENTS TO THE SAP
Previous version | Previous date | New version | New date | Brief summary of changes |
---|---|---|---|---|
List of abbreviations
- AMD
- age-related macular degeneration
- BCVA
- best corrected visual acuity
- CCG
- clinical commissioning group
- CEAC
- cost-effectiveness acceptability curve
- CF
- colour fundus
- CI
- confidence interval
- DRT
- diffuse retinal thickening
- ECHoES
- Effectiveness, cost-effectiveness and acceptability of Community versus Hospital Eye Service follow-up for patients with neovascular age-related macular degeneration with quiescent disease study
- ETDRS
- Early Treatment of Diabetic Retinopathy Study
- GP
- general practitioner
- HES
- Hospital Eye Service
- HTA
- Health Technology Assessment
- ICER
- incremental cost-effectiveness ratio
- IQR
- interquartile range
- IRC
- intraretinal cyst
- IRR
- incidence rate ratio
- ISRCTN
- International Standard Randomised Controlled Trial Number
- IVAN
- randomised controlled trial to assess the effectiveness and cost-effectiveness of alternative treatments to Inhibit VEGF in Age-related choroidal Neovascularisation
- nAMD
- neovascular age-related macular degeneration
- OCT
- optical coherence tomography
- OR
- odds ratio
- PED
- pigment epithelial detachment
- PPI
- patient and public involvement
- SD
- standard deviation
- SRF
- subretinal fluid
- TSC
- Trial Steering Committee
- VAT
- value-added tax
- VEGF
- vascular endothelial growth factor