Notes
Article history
The research reported in this issue of the journal was funded by the HTA programme as project number NIHR127666. The contractual start date was in November 2018. The draft report began editorial review in June 2019 and was accepted for publication in November 2019. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
Sian Taylor-Phillips, Chris Stinton and Hannah Fraser are partly supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care West Midlands at University Hospitals Birmingham NHS Foundation Trust. Sian Taylor-Phillips is funded by a NIHR Career Development fellowship (reference CDF-2016-09-018).
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2020. This work was produced by Fraser et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Introduction
Description of the health problem
Sore throat is a common condition;1,2 clinical descriptions of acute sore throat include acute pharyngitis and tonsillitis, which are both infections of the upper respiratory airway affecting the mucosa. 3,4 In a Scottish survey, 31% of respondents reported having experienced a severe sore throat in the previous 12 months. 1 Symptoms of sore throat include pain in the throat and may also include fever or headache; however, not all patients will require or seek medical advice and/or treatment for these symptoms. An analysis of UK primary care use data identified a reduction in diagnosed episodes of sore throat in the UK between 1993 and 2001. 2 This finding may reflect changes in patient behaviour regarding self-care, changes in general practitioner (GP) diagnosis and recording of sore throat or an actual change in the prevalence of sore throat, although there is no evidence to support these theories. Despite this reduction, sore throat and other respiratory tract infections (RTIs) remain a common reason for primary care use; one-quarter of the population will visit their GP because of an RTI each year. 5
In the UK, diagnosis of sore throat is currently based mainly on clinical assessment, and the National Institute for Health and Care Excellence (NICE) recommends that the FeverPAIN6 or Centor7 criteria are also used. The Centor tool was designed to predict group A Streptococcus (strep A), and the FeverPAIN tool to predict group A, group C (strep C) and group G (strep G) streptococci;6,7 both have been proposed as methods by which clinicians can identify which patients are most likely to benefit from antibiotic use for sore throat. 8 This is because sore throat is often a self-limiting illness; most cases have a viral aetiology and, therefore, antibiotics would not be an effective treatment in these instances. In addition, as antibiotics reduce the duration of symptoms by only a very short period, this benefit must be traded off against their side effects. Around 5–17% of sore throats are due to a bacterial infection, typically group A beta-haemolytic Streptococcus, also known as ‘Streptococcus pyogenes’, ‘group A Streptococcus’, ‘GAS’ or ‘strep A’. 5,8 Expert advice suggests that bacterial sore throat can also be caused by group C and group G streptococci; however, strep A is thought to account for around 80% of bacterial throat infections, and groups C and G streptococci for around 20%. Most cases of strep A infection resolve without complications and, in fact, many people carry the bacterium without experiencing illness. Despite these factors, most patients presenting with sore throat in the UK will be given antibiotics in primary care. 9,10 Although rates of antibiotic prescribing for sore throat declined between 1993 and 2001, more recent prescribing data, for 2011, remain close to the 2001 figure, with the median practice prescribing antibiotics in 60% of sore throat consultations. 8 RTIs, which include sore throats, account for a large proportion of antibiotic use in general practice in the UK (approximately 60%). 8
There are clinical and epidemiological reasons why clinicians may prescribe antibiotics for sore throat in the absence of microbiological confirmation. The first is practical: the current reference standard, culture of the bacteria grown from a throat swab, takes > 18 hours for a result to be obtained. 11 Where clinicians suspect strep A infection based on clinical judgement and use of the FeverPAIN or Centor criteria, there is an opportunity to reduce the risk or harm caused by complications such as tonsillitis, pharyngitis, scarlet fever, impetigo, erysipelas (an infection in the upper layer of the skin), glomerulonephritis, rheumatic fever, cellulitis and pneumonia. Some vulnerable patient groups, such as those who are immunocompromised, are at a higher risk of developing invasive strep A infection. To prevent onward transmission, current Public Health England (PHE) guidance on invasive strep A infection management12 indicates the use of antibiotics for close contacts of people with invasive strep A infection if the contacts themselves have symptoms of strep A infection, such as sore throat, or are in a particular risk group or setting. 11 Although these factors must be considered in understanding the reasons for use of antibiotics to treat sore throat in the absence of more accurate diagnosis, another factor that has an impact on use is patient demand. Although patient attendances for minor ailments at GP surgeries have fallen, when patients do visit their GP there is an expectation of intervention, and this is increasingly the case. 11 Furthermore, RTIs account for a high proportion of working days lost in the UK – in 2016, almost one-quarter (24.8%, 34 million days) – so ensuring that patients receive appropriate and timely treatment also has an economic impact on patients and the wider UK economy. 13 However, this rationale and patient demand need to be balanced against the low prevalence of bacterial infection in sore throat, noted above, and the risk of antimicrobial resistance (AMR).
Overuse or inappropriate use of antibiotics can lead to bacteria developing resistance, leading to the emergence of multidrug-resistant pathogens, which are increasingly difficult to treat. AMR could contribute to an estimated 10 million deaths every year globally by 2050 and a global productivity cost of £66 trillion. 11 In response to this threat, ‘antimicrobial stewardship’ has been a central strategy adopted by the Chief Medical Officer and NICE. 11,14 Point-of-care testing in primary care has been recognised as an emerging technology for aiding targeted antibiotic prescribing in cases of sore throat, by supporting clinicians with diagnosis and communicating appropriate use of antibiotics to patients. 15 Several technologies have been developed for point-of-care testing in primary care to target antibiotics to those who would benefit and to prevent treatment delay and associated complications.
The NICE Diagnostics Advisory Committee is tasked with providing guidance to the NHS about the use of point-of-care tests for the detection of strep A in sore throat infections. To inform the Diagnostics Advisory Committee, the External Assessment Group (EAG) has provided this assessment of the clinical accuracy and cost-effectiveness of point-of-care tests for the detection of strep A as a replacement or adjunct for standard assessment procedures. The potential value of the point-of-care tests is in rapidly determining the presence and nature of a bacterial infection.
Aetiology and pathology
Most sore throats are caused by an infection, mainly viral, and so are typically spread from person to person via respiratory droplets; non-infectious causes are uncommon. 3 In the case of infectious causes, viruses, bacteria or fungi invade the upper respiratory mucosa, causing a local inflammatory response. 4 Complications associated with sore throat caused by infection are rare; however, strep A infection has a small risk of the following complications:3
- otitis media
- acute sinusitis
- peritonsillar abscess
- rheumatic fever and post-streptococcal glomerulonephritis are also complications associated with strep A throat infection; however, these are extremely rare in developed countries
- invasive strep A, if the bacteria move from the throat into a sterile body site (which can lead to severe infections, sepsis and streptococcal toxic shock syndrome).
Children are most likely to carry or be infected by strep A; however, people aged > 65 years or those whose immune system is compromised [e.g. people living with human immunodeficiency virus (HIV) infection, diabetes mellitus, heart disease or cancer, and people using high-dose steroids or intravenous drugs] are at higher risk of developing an invasive strep A infection. 16
A Fusobacterium necrophorum infection affecting the pharynx or tonsils can (very rarely) lead to Lemierre’s syndrome (sepsis and jugular vein thrombosis). 3
Diagnosis and care pathway
Figure 1 depicts the care pathway for assessing and treating a sore throat as outlined in the NICE antimicrobial prescribing guidance on sore throat [NICE Guidance (NG) number 84 (NG84)]. 8 Most uncomplicated sore throats are managed without seeking medical advice and tend to resolve within 1 week. 10 Suggested conservative measures include simple analgesia, maintaining hydration, salt gargling and throat lozenges. In selected cases in which a GP, a pharmacist or a health-care practitioner in a secondary care setting (such as accident and emergency) feels that the patient may benefit from antibiotics, the prescriber should apply either the FeverPAIN or the Centor score to guide their decision-making. The NICE antimicrobial prescribing guideline on acute sore throat does not make any recommendations about using point-of-care tests or throat cultures to confirm strep A infection. 8
Significance to the NHS and current service cost
The significance of sore throat and inappropriate use of antibiotics to the NHS broadly falls into two categories: the first is associated with health-care use directly owing to sore throat and the second is the impact of inappropriate use of antibiotics contributing to AMR.
Respiratory tract infections, including sore throat, account for a large proportion of primary care use and antibiotic prescribing. 10 However, there is already evidence that the majority of patients prefer to self-medicate minor ailments, such as sore throat, where they feel able to do so. 1,2,14 For example, a visit to a general practice for the diagnosis and treatment of a sore throat incurs the cost of the consultation and of any treatment prescribed. In addition, in the current system, in which GPs can use the FeverPAIN or Centor criteria to inform antibiotic prescribing, there is the potential cost of additional health-care use for patients whose condition does not improve or who develop complications owing to ineffective or no treatment being prescribed. The risk of complications, however, is low, and current prescribing activity suggests overuse rather than underuse of antibiotics for sore throat. Another cost associated with the current system is the laboratory cost incurred where the reference standard for diagnosis, namely throat swabs sent for culture, is used.
Although these costs and the impact of minor ailment consultations on the NHS are key considerations, the primary aim of the intervention being considered is to reduce inappropriate antibiotic prescribing. Doing so could support a reduction in the drivers of AMR. The main antibiotic prescribed by general practice is penicillin, and this is the first-line treatment currently recommended by NICE for suspected strep A throat infection. 8,15 Across Europe, an estimated 25,000 people die each year as a result of hospital infections caused by the five most common resistant bacteria, and a parliamentary report estimated the cost to the NHS to be £180M per year. 17 Although it is often possible at present to use alternative treatments for resistant infections, the costs of treatment and the risk of mortality for a resistant infection are likely to be approximately double those for a non-resistant infection. 17 One study investigating the cost of a 10-month outbreak of a type of antibiotic-resistant bacteria (carbapenemase-producing enterobacteria) found the total cost to be close to £1M. The main cost was missed revenue from the cancellation of planned surgical procedures owing to ward closures and lack of bed space. Other costs were associated with additional staff time, increased length of patient stay in hospital, screening, bed and ward closure, contact precautions, anti-infective costs, hydrogen peroxide vapour decontamination and ward-based monitors. 17 In addition to health-care costs and the risk of litigation associated with AMR-related harm, there is a wider societal cost of lost productivity and reduced quality of life for patients suffering the effects of AMR infections.
Clear definition of interventions
There are rapid tests for the strep A bacterium, which are intended to be used in addition to clinical scoring systems, such as FeverPAIN and Centor. The purpose is to increase diagnostic confidence in a suspected strep A infection, to guide antimicrobial prescribing decisions in people presenting with an acute sore throat and to contribute to improving antimicrobial stewardship. The tests may be suitable for use in all settings where patients may present with an acute sore throat; these include both primary and secondary care, and community pharmacies. 11
Twenty-one rapid tests for strep A detection are available. The tests use either immunoassay detection methods [rapid antigen detection tests (RADTs)] or molecular methods [polymerase chain reaction (PCR) or isothermal nucleic acid amplification]. The tests listed in the following section were identified from the NICE scope on point-of-care testing in primary care for strep A infection in sore throat.
Comparative technical overview of the point-of-care tests for group A Streptococcus
Seventeen RADTs were identified, and their product properties are summarised in Table 1. The type of information provided by each of the manufacturers is summarised in Appendix 14. For each test, the limit of detection has been defined as the lowest concentration of strep A in a sample that can be distinguished from negative samples. Of these, 16 tests use lateral flow techniques (also known as immunochromatographic or immunofluorescent assays) and one test is a turbidimetric immunoassay.
Product | Test format and supply | Method | Limit of detection | Description of results | Time to result (minutes)a |
---|---|---|---|---|---|
Clearview® Exact Strep A cassetteb (Abbott Laboratories, Lake Bluff, IL, USA) | 25 individually pouched test cassettes | Lateral flow (immunochromatography) | 5 × 10⁴ organisms/test | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
Clearview Exact Strep A dipstick – test stripb (Abbott Laboratories, Lake Bluff, IL, USA) | 25 test kits; dipstick | Lateral flow (immunochromatography) | 5 × 10⁴ organisms/test | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
BD Veritor Plus system group A strep assay – cassette (Becton Dickinson and Company, Sparks, MD, USA) | 30 test kits; test cassette | Lateral flow (immunochromatography) | Strain 12384: 1 × 10⁵ CFU/ml; strain 19615: 5 × 10⁴ CFU/ml; strain 25663: 2 × 10⁵ CFU/ml | Analysed by a BD Veritor system analyser module. Results are displayed visually | 5 |
Strep A Rapid Test – cassette (Biopanda Reagents, Belfast, UK) | 20 test cassettes | Lateral flow (immunochromatography) | 1 × 10⁵ organisms/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
Strep A Rapid Test – test strip (Biopanda Reagents, Belfast, UK) | No information provided | Lateral flow (immunochromatography) | 1 × 10⁵ organisms/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
NADAL® Strep A – test strip (nal von minden GmbH, Regensburg, Germany) | 40 test strips including controls, 50 test strips (tube) including controls, as well as positive and negative control vials | Lateral flow (immunochromatography) | 1.5 × 10⁵ organisms/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
NADAL Strep A – cassette (nal von minden GmbH, Regensburg, Germany) | 20 test cassettes including controls as well as positive and negative control vials | Lateral flow (immunochromatography) | 1.5 × 10⁵ organisms/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
NADAL Strep A plus – cassette (nal von minden GmbH, Regensburg, Germany) | 20-pack cassettes including controls and five-pack cassettes including controls | Lateral flow (immunochromatography) | 1.5 × 10⁵ organisms/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
NADAL Strep A plus – test strip (nal von minden GmbH, Regensburg, Germany) | 40 test strips | Lateral flow (immunochromatography) | 1.5 × 10⁵ organisms/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
NADAL Strep A scan test – cassette (nal von minden GmbH, Regensburg, Germany) | 20-pack cassettes including controls | Lateral flow (immunochromatography) | 1.5 × 10⁵ organisms/swab | Extracted solution is placed into the test cassette, with the Colibri placed on top. Analysed using a Colibri reader and Colibri USB and software (nal von minden GmbH, Regensburg, Germany) | 5 |
OSOM Strep A test – test strip (Sekisui Diagnostics, Burlington, MA, USA) | 50-test pack | Lateral flow (immunochromatography) | Not known | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
QuikRead Go Strep A test kit (Orion Diagnostica, Espoo, Finland) | 50 tests including controls | Turbidimetric immunoassay | 7 × 10⁴ CFU/swab | Analysed using the QuikRead Go instrument | < 7a |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories, Lake Bluff, IL, USA) | 20 or 40 tests | Lateral flow (immunochromatography) | Not known | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
bioNexia® Strep A plus – cassette (bioMérieux, Marcy-l’Étoile, France) | 25 test cassettes | Lateral flow (immunochromatography) | 1 × 10⁴ organisms/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
bioNexia Strep A dipstick – test strip (bioMérieux, Marcy-l’Étoile, France) | 25 test strips | Lateral flow (immunochromatography) | Not known | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
Biosynex Strep A – cassette (Biosynex, Illkirch-Graffenstaden, France) | Not reported | Lateral flow (immunochromatography) | 1 × 10⁵ bacteria/swab | Positive results are indicated by two lines: one in the control region and the other in the test region. Read by visual inspection | 5 |
Sofia Strep A FIA (Quidel, San Diego, CA, USA) | 25 cassettes, including positive and negative control vials | Lateral flow (immunofluorescence) | Strain Bruno [CIP 104226]: 1.86 × 10⁴ CFU/test; strain CDC-SS-1402: 9.24 × 10³ CFU/test; strain CDC-SS-1460: 2.34 × 10⁴ CFU/test | Analysed using the Sofia analyser, which interprets the immunofluorescent signal using on-board method-specific algorithms. Results are displayed on screen as positive, negative or invalid | 5–6 |
Product | Test supply and format | Method | Analyser | Limit of detection | Description of results | Time to result (minutes)a |
---|---|---|---|---|---|---|
Alere i Strep A (Abbott Laboratories, Lake Bluff, IL, USA) | 24 test kits | Isothermal nucleic acid amplification | Alere i instrument | Strain: | Alere i instrument heats, mixes and detects, then presents results automatically on the digital display | < 8 |
Alere i Strep A 2 (ID NOW™ Strep A 2)b (Abbott Laboratories, Lake Bluff, IL, USA) | Information not available | Isothermal nucleic acid amplification | Alere i instrument | Not provided by manufacturer | Alere i instrument heats, mixes and detects, then presents results automatically on the digital display | < 6 |
cobas® Strep A Assay (Roche Diagnostics, Basel, Switzerland) | Strep A assay box of 20 | PCR | cobas Liat® analyser (Roche Diagnostics, Basel, Switzerland) | Strain: | Results displayed digitally | < 15 |
Xpert® Xpress Strep A (Cepheid, Sunnyvale, CA, USA) | Each kit contains sufficient reagents to process 10 specimens or quality control samples | PCR | GeneXpert® system (Cepheid, Sunnyvale, CA, USA) | Strain: | Results displayed digitally | ≥ 18 |
Four molecular tests were identified which use nucleic acid amplification techniques, either PCR or isothermal nucleic acid amplification, to amplify and detect a specific fragment of the GAS genome (Table 2). In each test, any strep A deoxyribonucleic acid present in the sample is labelled during the reaction, producing fluorescent light, which is monitored by a reader. If fluorescence reaches a specific threshold, the test is considered positive. If the threshold is not reached during the set time (usually up to 15 minutes), the test is negative.
The lateral flow (immunochromatographic and immunofluorescence) tests require a throat swab, which is typically placed into a specimen extraction tube and mixed with reagents to extract the sample from the swab. The swab is discarded and then either a test strip is immersed in the extracted solution or drops of the extracted solution are added to the sample well of a test cassette. The sample then migrates along the test strip or cassette, with any strep A antigens present in the sample binding to immobilised strep A antibodies in the test strip or cassette. When strep A is present at levels above the detection limit of the test, a line appears in the test line region of the strip or cassette. A control line shows technical success of the test. Results should be discarded when the control line indicates that the test has failed (i.e. no line appears in the control line region). Depending on the technology, the results are read either by visual inspection or by using an automated test reader device.
The turbidimetric immunoassay has similar sample collection and extraction steps to the lateral flow tests, but the extracted solution is placed into a cuvette that is prefilled with reagents. This contains rabbit anti-strep A antibodies, which bind to strep A antigens present in the sample. The QuikRead Go (Orion Diagnostica, Espoo, Finland) instrument measures the absorbance of each cuvette and converts the absorbance value into a positive or negative result.
Several of the companies recommend that negative RADT results are confirmed by microbiological culture of a throat swab.
Target population
The population of interest is people aged ≥ 5 years presenting to health-care providers in a primary care (GP surgeries and walk-in centres), secondary care (urgent care/walk-in centres and emergency departments) or community pharmacy setting with symptoms of an acute sore throat. These patients are identified by a clinical scoring tool as being more likely (FeverPAIN score of 2 or 3 points) or most likely (FeverPAIN score of 4 or 5 points, or a Centor score of 3 or 4 points) to benefit from an antibiotic. Relevant subgroups to be evaluated may include children (aged 5–14 years), adults (aged 15–75 years) and the elderly (adults aged > 75 years). In elderly patients, the infection is more likely to be invasive and have a higher associated mortality rate.
Comparator
The comparator is antibiotic prescribing based on clinical judgement and clinical scoring tools alone for strep A. However, the literature search for the comparator arm may also identify evidence referring to clinical scoring for group C and group G streptococci. The clinical scoring tools that may be used in NHS practice are FeverPAIN and Centor/modified Centor (McIsaac). These criteria are based on research evidence assessing which individual sore throat symptoms, and which combinations of symptoms, are most likely to be present in patients with clinically confirmed streptococcal infection (whether strep A or non-strep A).
FeverPAIN
The FeverPAIN clinical scoring tool includes the following variables:
- clinical history
  - sore throat (none, mild, moderate or severe)
  - cough or cold symptoms (none, mild, moderate or severe)
  - muscle aches (none, mild, moderate or severe)
  - fever in last 24 hours (yes or no)
  - onset of illness (0–3, 4–7 or > 7 days)
- clinical examination
  - cervical glands (none, 1–2 or > 2 cm)
  - inflamed tonsils (none, mild, moderate or severe)
  - pus on tonsils (yes or no).
The result of FeverPAIN is presented as a score ranging from 0 to 5 points, with 1 point assigned for each symptom present.
Centor
The Centor clinical scoring tool includes the following variables:
- cough (yes or no)
- exudate or swelling on tonsils (yes or no)
- tender/swollen anterior cervical lymph nodes (yes or no)
- temperature > 38 °C (yes or no).
Expert advice suggests that the McIsaac (modified Centor) clinical scoring tool may also be used. The McIsaac score adjusts the Centor score to account for the higher incidence of strep A in children and the reduced incidence of strep A in older adults. It adds age criteria (3–14 years, 15–44 years and ≥ 45 years), adding 1 point for those aged < 15 years and subtracting 1 point for those aged ≥ 45 years. The Centor result is presented as a score ranging from 0 to 4 points (0–5 points for the modified Centor), with 1 point assigned for each symptom present. 18
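A minimal Python sketch of the Centor and McIsaac calculations described above is given below. It is illustrative only (the function and argument names are ours, not taken from any published implementation) and follows the usual Centor convention that the cough criterion is scored when cough is absent.

```python
def centor_score(cough: bool, tonsillar_exudate_or_swelling: bool,
                 tender_anterior_cervical_nodes: bool,
                 temperature_above_38c: bool) -> int:
    """Centor score: 1 point for each criterion present (range 0-4).

    Note: by convention, the cough criterion scores a point when cough is ABSENT.
    """
    return (int(not cough)
            + int(tonsillar_exudate_or_swelling)
            + int(tender_anterior_cervical_nodes)
            + int(temperature_above_38c))


def mcisaac_score(centor: int, age_years: int) -> int:
    """McIsaac (modified Centor) score: adjust the Centor score for age."""
    if age_years < 15:
        return centor + 1
    if age_years >= 45:
        return centor - 1
    return centor


# Example: a 10-year-old with tonsillar exudate, tender nodes, fever > 38 degrees C and no cough
c = centor_score(cough=False, tonsillar_exudate_or_swelling=True,
                 tender_anterior_cervical_nodes=True, temperature_above_38c=True)
print(c, mcisaac_score(c, age_years=10))  # prints: 4 5
```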
Reference standard
The reference standard for assessing the test accuracy of point-of-care tests for strep A infections is microbiological culture of throat swabs using standard blood agar or streptococcal selective agar as the culture medium. In the latter, antibiotics can be added to the standard blood agar to suppress the normal pharyngeal microflora, thus improving the yield of the strep A bacteria. However, there is no consensus on the preferred medium. 19
Throat swab culture remains the best reference standard for diagnosing streptococcal pharyngitis. However, several studies have identified discordance between throat swab culture and PCR or other measures. 20–22
In recent studies, PCR techniques were used as arbitrators of discordant results between throat culture and point-of-care tests. 20,23,24 For culture to be positive, a threshold quantity of viable organisms must be present, whereas PCR-based tests are able to detect the genome of organisms irrespective of their viability. However, PCR cannot distinguish between acute strep A pharyngitis and asymptomatic pharyngeal carriage and, therefore, may detect carriage in the absence of a streptococcal infection. For this reason, our reference standard does not include PCR. Furthermore, some of the index tests are PCR based, and so a PCR-based reference standard would be biased in favour of these index tests. Where such arbitration using PCR is reported, we have included these data in this report, but the main analysis uses culture as the reference standard.
Chapter 2 Definition of the decision problem
Decision question
This report undertaken for the NICE Diagnostics Assessment Programme examines the clinical effectiveness and cost-effectiveness of point-of-care tests for diagnosing group A streptococcal infections in people who present with an acute sore throat in primary care, secondary care or community pharmacy settings. The report will help NICE to make recommendations about how well the tests work and whether or not the benefits are worth the cost of the tests, when used in the NHS in England and Wales. The assessment also considers other outcomes, including antibiotic prescription behaviour, clinical improvement in patients’ symptoms and costs associated with treatment, based on evidence identified through systematic literature searches.
The decision question for this project is: what is the clinical effectiveness and cost-effectiveness of rapid antigen detection and molecular tests in patients with high clinical scores (i.e. Centor scores of ≥ 3 points, FeverPAIN scores of ≥ 4 points), compared with the use of clinical scoring tools alone, for increasing the diagnostic confidence of suspected group A streptococcal infection in people who present with an acute sore throat in primary, secondary or pharmacy care?
Overall aim of the assessment
The overall aim of this report was to present evidence on the clinical effectiveness and cost-effectiveness of rapid antigen detection and molecular tests in those with high clinical scores (i.e. Centor scores of ≥ 3 points, FeverPAIN scores of ≥ 4 points), compared with the use of clinical scoring tools alone, for increasing the diagnostic confidence of suspected group A streptococcal infection in people aged ≥ 5 years who present with an acute sore throat in primary, secondary or pharmacy care.
Objectives
- To systematically review the evidence for the clinical effectiveness of selected rapid tests for group A streptococcal infections in people aged ≥ 5 years with a sore throat presenting in a primary, secondary or pharmacy setting.
- To systematically review existing economic evaluations and develop a de novo economic model to assess the cost-effectiveness of rapid tests in conjunction with clinical scoring tools for group A streptococcal infections compared with clinical scoring tools alone.
Chapter 3 Clinical effectiveness review
Methods
Search strategies for clinical effectiveness
The search strategy for the clinical effectiveness review is detailed in Appendix 1. An iterative procedure was used to develop the database search strategies, building on the scoping searches undertaken by NICE for this assessment and the searches underpinning the related MedTech innovation briefing published by NICE in 2018. 16 Database searches were run in November and December 2018 and were updated in March 2019. No date or language limits were applied. Grey literature searches were undertaken in February and March 2019.
Briefly, the search strategy included:
- databases – MEDLINE [via OvidSP (Health First, Rockledge, FL, USA)], MEDLINE In-Process & Other Non-Indexed Citations (via OvidSP), MEDLINE Epub Ahead of Print (via OvidSP), MEDLINE Daily Update (via OvidSP), EMBASE (via OvidSP), Cochrane Database of Systematic Reviews [via Wiley Online Library (John Wiley & Sons, Inc., Hoboken, NJ, USA)], Cochrane Central Register of Controlled Trials (CENTRAL) (via Wiley Online Library), Database of Abstracts of Reviews of Effects (DARE) [via the Centre for Reviews and Dissemination (CRD)], Health Technology Assessment (HTA) database (via CRD), Science Citation Index and Conference Proceedings [via the Web of Science™ (Clarivate Analytics, Philadelphia, PA, USA)] and the PROSPERO International Prospective Register of Systematic Reviews (via CRD)
- trial database – ClinicalTrials.gov
- reference lists of relevant reviews and included studies
- online resources of health services research organisations and regulatory bodies – International Network of Agencies for Health Technology Assessment (INAHTA), the US Food and Drug Administration (FDA) medical devices, FDA Clinical Laboratory Improvement Amendments (CLIA) database and European Commission medical devices
- online resources of selected professional societies and conferences – British Society for Antimicrobial Chemotherapy, British Infection Association, PHE, Royal College of Pathologists, streptococcal biology conference, Lancefield International Symposium on Streptococci and Streptococcal Diseases, Federation of Infection Societies Conference, The European Congress of Clinical Microbiology and Infectious Diseases (ECCMID), Microbiology Society Conference, American Society of Microbiology, and Association of Clinical Biochemistry and Laboratory Medicine
- online resources of manufacturers of the included rapid tests.
Inclusion and exclusion of relevant studies (Boxes 1 and 2)
- People aged ≥ 5 years presenting with symptoms of an acute sore throat. Where possible, relevant subgroups evaluated included children (aged 5–14 years), adults (aged 15–75 years) and the elderly (adults aged > 75 years); however, mixed populations were acceptable. Studies of children aged < 5 years could be included providing ≥ 90% were above this age.
- Point-of-care tests for strep A (including RADTs and molecular tests as described in Tables 1 and 2).
- Clinical scoring tools (such as FeverPAIN, Centor or McIsaac).
- Microbiological culture of throat swabs.
- Outcomes of test performance:
  - test accuracy – sensitivity, specificity, PPV and NPV. Where possible, evaluated by relevant clinical scores (Centor/McIsaac ≥ 3 points and FeverPAIN ≥ 4 points)
  - discordant results with throat culture
  - test failure rates
  - time to antimicrobial prescribing decision
  - changes to antimicrobial prescribing decision
  - number of appointments required per episode
  - number of delayed or immediate antibiotic prescriptions issued.
- Clinical outcomes:
  - morbidity, including post-strep A infection complications, such as rheumatic fever and side effects from antibiotic therapy
  - mortality
  - contribution to antimicrobial stewardship and onward transmission of infection.
- Patient-reported outcomes:
  - health-related quality of life
  - patient satisfaction with test and antimicrobial prescribing decision
  - health-care professional satisfaction with test and antimicrobial prescribing decision.
- Costs.
For test accuracy data:
- clinical test accuracy studies that compare the index tests (point-of-care tests for strep A) with throat swab culture
- studies of head-to-head comparisons of rapid tests were eligible for inclusion if test accuracy statistics were reported for each test.
For data on other clinical outcomes:
- any study design comparing the index tests (point-of-care tests for strep A) and/or clinical scoring tools (Centor, McIsaac or FeverPAIN) with biological culture as a reference standard.
- Primary care (GP clinics and walk-in centres), secondary care (urgent care/walk-in centres and emergency departments) or community pharmacy settings.
NPV, negative predictive value; PPV, positive predictive value.
- Patients without acute sore throat.
- Patients with existing comorbidities.
- Patients with known invasive strep A infection.
- Other point-of-care tests that are not listed in the NICE scope.
- For test accuracy data: no comparison of index test vs. throat culture reported.
- For other outcomes: no comparison of index test vs. throat culture or clinical scoring tools (Centor, McIsaac or FeverPAIN).
- Reviews, biological studies, case reports, editorials and opinions, poster presentations without supporting abstracts, non-English-language reports, meeting abstracts without sufficient information to produce 2 × 2 contingency tables for test performance.
- Studies published before 1998 (keeping in line with the 1998 directive of the European parliament requiring all in vitro diagnostic devices to have a CE marking). 25
- Hospital inpatient.
CE, Conformité Européenne.
Study selection strategy
All publications that were identified in searches from all sources were collated in EndNote (Clarivate Analytics) and deduplicated. Two reviewers independently screened the titles and abstracts of all records identified by the searches (Cohen’s kappa = 0.997) and discrepancies were resolved through discussion. Full copies of all studies deemed potentially relevant were obtained and two reviewers independently assessed these for inclusion; any disagreements were resolved by consensus or discussion with a third reviewer. Records excluded at full-text stage and reasons for exclusion were documented.
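The Cohen’s kappa statistic reported above measures chance-corrected agreement between the two screening reviewers. A minimal Python sketch of the calculation is shown below using hypothetical include/exclude decisions; it is not the software actually used in the review.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making categorical (e.g. include/exclude) decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal proportions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for six records
a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 3))  # 0.571
```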
Data extraction strategy
All data were extracted by one reviewer, using a piloted data extraction form. A second reviewer checked the extracted data on test accuracy [2 × 2 table, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV)], and a third reviewer checked other extracted data. Any disagreements were resolved by consensus. A sample data extraction form used in this review is available in Appendix 2. Test accuracy statistics for rapid/index tests were derived from data extracted into 2 × 2 contingency tables in the format shown in Table 3. As shown, A represents the number of patients positive for strep A by rapid test and throat culture (true positives); B represents the number of patients positive for strep A by rapid test but not throat culture (false positives); C represents the number of patients negative for strep A by rapid test but positive by throat culture (false negatives); and D represents the number of patients negative for strep A by rapid test and throat culture (true negatives). Sensitivity was calculated as A/(A + C), specificity as D/(B + D), PPV as A/(A + B) and NPV as D/(C + D). Similarly, using data extracted in the formats shown in Tables 4 and 5, we calculated accuracy statistics for the current pathway (Centor/McIsaac/FeverPAIN scores) based on NICE thresholds. Where PCR techniques were employed to arbitrate discordant results between microbiological culture and rapid tests, we report the PCR results for the discordant cases. We also extracted test accuracy data for each index test with culture as the reference standard in studies of head-to-head (direct) comparisons of index tests. Data on other outcomes of test performance, morbidity, antibiotic-prescribing behaviour, population characteristics and settings were also extracted using the extraction form.
| Culture + | Culture – | Total |
---|---|---|---|
Index test + | A | B | A + B |
Index test – | C | D | C + D |
Total | A + C | B + D | A + B + C + D |
| Culture + | Culture – | Total |
---|---|---|---|
Centor/McIsaac score of ≥ 3 points | A | B | A + B |
Centor/McIsaac score of < 3 points | C | D | C + D |
Total | A + C | B + D | A + B + C + D |
| Culture + | Culture – | Total |
---|---|---|---|
FeverPAIN score of ≥ 4 points | A | B | A + B |
FeverPAIN score of < 4 points | C | D | C + D |
Total | A + C | B + D | A + B + C + D |
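A minimal Python sketch of the calculations defined above is shown below; the counts are hypothetical and are used only to illustrate the formulas.

```python
def accuracy_from_2x2(a: int, b: int, c: int, d: int) -> dict:
    """Test accuracy statistics from a 2 x 2 table.

    a: index test + / culture + (true positives)
    b: index test + / culture - (false positives)
    c: index test - / culture + (false negatives)
    d: index test - / culture - (true negatives)
    """
    return {
        "sensitivity": a / (a + c),
        "specificity": d / (b + d),
        "ppv": a / (a + b),
        "npv": d / (c + d),
    }

# Hypothetical counts for illustration only
print(accuracy_from_2x2(a=80, b=10, c=20, d=90))
# sensitivity 0.80, specificity 0.90, PPV ~0.889, NPV ~0.818
```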
Quality assessment strategy for test accuracy studies
Quality assessment of eligible test accuracy studies was undertaken with a tailored Quality Assessment of Diagnostic Accuracy Studies – 2 (QUADAS-2) tool. 26 Methodological quality was assessed by a single reviewer and findings were checked by a second reviewer. Disagreements were resolved by consensus or use of a third reviewer.
Quality assessment aimed to assess the risk of bias and applicability concerns of included studies where one (or more) of our 21 scoped tests was the index test(s), and with biological throat culture as the reference standard. Additional tests outside the scope were not quality appraised.
Modifications to tailor the QUADAS-2 form to the research question in terms of the risk-of-bias assessment were as follows (see Appendix 4 for the tailored QUADAS-2 form and guidance notes).
Patient selection domain
Two further signalling questions were added to this domain. The first was ‘were selection criteria clearly described?’. It is important that the correct patient groups were included in the studies. Patients aged < 5 years follow a different NICE clinical pathway27 because they are more likely to present with a sore throat and less likely to be able to articulate their symptoms, and it is less likely that a throat swab can be obtained. Likewise, a clinical score (such as Centor or FeverPAIN) should be reported, with patients included only if they have a score of ≥ 3 points on Centor or ≥ 4 points on FeverPAIN. Those with lower scores may be systematically different and, therefore, test accuracy may also differ, introducing bias. Including patients aged < 5 years and with a low clinical score also raises applicability concerns.
The second signalling question that was added was ‘were patients seen in an ambulatory care setting?’. Patients seen as inpatients may vary in severity and have comorbidities affecting their diagnosis.
Index test domain
Two questions were added within this domain. The first was ‘was a separate swab undertaken for the index test?’. This question was added as manufacturers’ specifications require separate swabs to be taken for index and reference standard tests. Using one swab for multiple purposes may reduce the quantity of the sample for testing and, thus, affect the accuracy of the test. The second question was ‘is the test reading objective?’. Some of the tests require a subjective reading of whether or not a line, indicating a positive result, has appeared. Owing to this, there is always a high risk of bias in any rapid test that requires the result to be determined by a human reader. Tests with automated readings have been shown to improve specificity and reduce operator errors, especially for unclear results. 28
Comparator domain
One additional signalling question was added in this domain: ‘was a separate swab taken for throat culture testing?’. Using one swab for multiple purposes may reduce the quantity of the sample for testing and, thus, affect the accuracy of the test. Under this domain, the directions for taking a throat culture specimen were clarified based on the PHE guidelines on UK Standards for Microbiology Investigations. 29
Flow and timing
Two further signalling questions were added to the flow and timing domain. The first was ‘were both index test(s) and reference standard (and comparator where included) all carried out at the same appointment?’. The swabs for a rapid test and culture should be obtained at the same appointment. The levels of strep A are likely to vary by day, so taking a later sample could introduce systematic bias.
The additional signalling question was ‘were both index test(s) and reference standard (and comparator where included) all carried out prior to commencement of antibiotics?’. Patients should not have been treated with antibiotics prior to testing, as antibiotics are likely to have reduced the amount of strep A present.
Quality appraisal strategy for studies of prescribing behaviour and clinical outcomes
Quality appraisal for studies of prescribing behaviour and clinical outcomes used two different tools: the Cochrane risk-of-bias tool for randomised controlled trials (RCTs)30 and the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for analytical cross-sectional studies. 31 Methodological quality was assessed by a single reviewer and findings were checked by a second reviewer. Disagreements were resolved by consensus or use of a third reviewer.
Assessment of test accuracy
To assess the accuracy of the point-of-care tests, we planned to conduct a series of meta-analyses on the available data. Data from studies that either presented 2 × 2 tables for one of the index tests compared with culture or provided information that allowed calculation of the 2 × 2 table were included in the meta-analyses.
The median age of participants was used to categorise each study into one of the three age groups of interest, with two reviewers discussing when the categorisation was not straightforward. Setting was also considered to inform the age categorisation where necessary (e.g. if the study was conducted within a paediatric department). The setting of each study was treated as a categorical variable, indicating primary care (health-care centre, GP clinic or primary care clinic), secondary care (emergency department, private paediatric clinic, outpatient clinic, urgent care clinic or walk-in centre) and pharmacy setting or mixed.
For the purpose of the meta-analysis, the throat score of the population was dichotomised to 0, if the study population included patients who had scores below the threshold set in the scope, and 1, if the study population matched the scope (Centor/McIsaac score of ≥ 3 points or FeverPAIN score of ≥ 4 points). Alternative throat score classification of study populations was also considered, using the categories of a population matching the scope (see Chapter 1, Target population), a population restricted by throat score but still including patients not in the scope (e.g. Centor score of 2 points) and a population without any restriction by throat score.
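As an illustration only, the dichotomisation of study populations by throat score described above can be written as a small helper function; the function and argument names are ours, chosen for clarity rather than taken from the analysis code.

```python
from typing import Optional

def throat_score_in_scope(min_centor_mcisaac: Optional[int] = None,
                          min_feverpain: Optional[int] = None) -> int:
    """Dichotomise a study population by the lowest clinical score it allowed.

    Returns 1 if the whole population matches the scope
    (Centor/McIsaac >= 3 points or FeverPAIN >= 4 points), otherwise 0.
    """
    if min_centor_mcisaac is not None and min_centor_mcisaac >= 3:
        return 1
    if min_feverpain is not None and min_feverpain >= 4:
        return 1
    return 0

print(throat_score_in_scope(min_centor_mcisaac=3))  # 1: population restricted to Centor >= 3
print(throat_score_in_scope(min_centor_mcisaac=2))  # 0: includes patients below the threshold
print(throat_score_in_scope())                      # 0: no restriction by throat score
```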
Methods of analysis/synthesis
We planned to use bivariate models to conduct each meta-analysis, as they allow simultaneous estimation of both sensitivity and specificity, accounting for correlation between the two measures. Where at least two studies existed for a test, we used a random-effects model to allow for variation in test performance across studies. If bivariate or random-effects models failed to converge or produced results with unexpectedly wide confidence intervals (CIs) around key parameters, then simpler models (e.g. fixed-effect or univariate models) were used instead. 32,33 Where bivariate models were used, a comparison with the equivalent univariate models was made, and any difference noted. It was not anticipated that any meaningful difference between the two model types would be observed, given the small amount of data available.
For index tests that had just one study, a meta-analysis was not conducted. The impacts of age, setting and prevalence on test performance were all assessed through the meta-analysis of relevant subgroups. NICE advised the EAG against meta-analysis across rapid tests from different manufacturers.
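Bivariate models of sensitivity and specificity are normally fitted with specialist statistical routines, so no attempt is made to reproduce the EAG’s analysis here. As an illustration of the simpler univariate random-effects fallback mentioned above, the sketch below pools logit-transformed sensitivities with the DerSimonian–Laird estimator, using hypothetical study counts.

```python
import math

def pool_logit_random_effects(tp, fn):
    """Univariate DerSimonian-Laird random-effects pooling of sensitivities
    on the logit scale (one of the 'simpler models' mentioned above).

    tp, fn: lists of true-positive and false-negative counts, one entry per study.
    Returns the pooled sensitivity and its 95% confidence interval.
    """
    # Logit-transformed sensitivities with a 0.5 continuity correction
    y = [math.log((a + 0.5) / (c + 0.5)) for a, c in zip(tp, fn)]
    v = [1.0 / (a + 0.5) + 1.0 / (c + 0.5) for a, c in zip(tp, fn)]

    # Fixed-effect (inverse-variance) estimate and Cochran's Q
    w = [1.0 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))

    # DerSimonian-Laird between-study variance
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

    # Random-effects estimate, back-transformed to the probability scale
    w_re = [1.0 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    inv_logit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return inv_logit(y_re), (inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se))

# Hypothetical counts from three studies of the same test
print(pool_logit_random_effects(tp=[80, 45, 60], fn=[15, 12, 8]))
```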
Clinical effectiveness results
Search results
Figure 2 is a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram that illustrates the study selection process for the clinical effectiveness review. The search identified 5919 records through database and other searches. Following duplicate removal, we screened 3309 records, of which 3072 were excluded by their titles and abstracts, leaving 237 assessed for their eligibility to be included in the review. A total of 199 studies were subsequently excluded with reasons, leaving 38 studies [26 full texts,6,20,23,24,34–55 three abstracts,56–58 five manufacturers’ studies (submitted directly to NICE in response to a request for information) and four FDA documents59–62]. The most common reason for exclusion at this stage was not reporting any of the rapid tests listed in the scope. The full list of excluded studies with reasons for exclusion can be found in Appendix 3.
Study characteristics
Characteristics of the 38 studies included in the clinical effectiveness review are described in Figures 3 and 4 and Table 6. Of the 29 studies (full texts and abstracts)6,20,23,24,34–58 identified by the search, 26 of the studies reported test accuracy data. 20,23,24,34–52,54,57,58,63 Three of the identified studies6,53,55 reported only other outcomes (such as antibiotic-prescribing rates) and did not report test accuracy. In addition, five studies were sent by manufacturers in response to a request for information by NICE and four FDA documents were retrieved. 59–62
Study (first author and year of publication) | Data source | Setting | n | Age group as reported | Sore throat clinical score criteria | Strep A prevalence (%) | Index test | Comparison with Centor/McIsaac/FeverPAIN scores? | Throat swab culture medium | Outcomes | Test accuracy in high-risk subpopulations with Centor/McIsaac scores of ≥ 3 points or FeverPAIN score of ≥ 4 points |
---|---|---|---|---|---|---|---|---|---|---|---|
Published articles and abstracts | |||||||||||
Anderson 200356 | Abstract | Secondary | 353 | Children (0–14 years) | No criteria reported; used clinical symptoms | 15 | Clearview Strep A | No | NR | Test accuracy | NR |
Azrad 201934 | Published article | Secondary | 100 | NR | No criteria reported; used clinical symptoms | 25 | BD Veritor system; QuikRead Go Strep A test kit (Orion Diagnostica) | No | Streptococcal selective agar | Test accuracy | NR |
Berry 201820 | Published article | Secondary | 215 | Children (age range not reported) | NR | 19.5 | Alere i Strep A test; BD Veritor system | No | Blood agar (Alere i); NR (BD Veritor) | Test accuracy; antibiotic-prescribing behaviour | NR |
Bird 201835 | Published article | Secondary | 395 | Children | McIsaac ≥ 3 points | NR or not calculable | bioNexia Strep A | Yes/Centor | NA | Test accuracy; antibiotic-prescribing behaviour | NR |
Bura 201736 | Published article | Primary | 101 | Adults (18–44 years) | Centor ≥ 2 points | 22.7 | OSOM Strep A test (Sekisui Diagnostics) | Yes/Centor | Blood agar | Test accuracy; antibiotic-prescribing behaviour | No |
Cohen 201537 | Published article | Secondary | 481 | Children (median age 11 years) | McIsaac all scores | 30.3 | Alere i Strep A test | Yes/McIsaac | Blood agar | Test accuracy | No |
Dimatteo 200138 | Published article | Secondary | 383 | Adults (18–86 years) | Centor ≥ 1 point | NR or not calculable | Alere™ TestPack +Plus Strep A (Abbott Laboratories) | Yes/McIsaac | Streptococcal selective agar | Test accuracy | NR |
Humair 200639 | Published article | Primary | 224 | Adults (15–65 years) | Centor = 2 points; Centor > 2 points | 46.9 | Alere TestPack +Plus Strep A (Abbott Laboratories) | Yes/Centor | Blood agar | Test accuracy; antibiotic-prescribing behaviour | Yes |
Johansson 200340 | Published article | Primary | 144 | Mixed (children and adults) | NR | 31.4 | Alere TestPack +Plus Strep A (Abbott Laboratories) | No | NR | Test accuracy; antibiotic-prescribing behaviour | NR |
Johnson 200141 | Published article | Primary | 522 | Adults (median age 26 years) | No criteria reported; used clinical symptoms | NR or not calculable | Alere TestPack +Plus Strep A (Abbott Laboratories) | No | Blood agar | Test accuracy | NR |
Kurtz 200042 | Published article | Secondary | 257 | Children (4–15 years) | No criteria reported; used clinical symptoms | 31.1 | Alere TestPack +Plus Strep A (Abbott Laboratories) | No | Blood agar | Test accuracy | NR |
Lacroix 201823 | Published article | Secondary | 1002 | Children | McIsaac ≥ 2 points | 38 | Sofia Strep A FIA (Quidel); Alere TestPack +Plus Strep A test (Abbott Laboratories) | No | Blood agar | Test accuracy | No |
Lindbæk 200443 | Published article | Primary | 306 | Adults (median age 23.9 years) | NR | 35.9 | Alere TestPack +Plus Strep A (Abbott Laboratories) | No | Streptococcal selective agar | Test accuracy | NR |
Little 20136 | Published article | Primary | 1760 | Mixed (aged ≥ 3 years) | FeverPAIN ≥ 1 point | NR or not calculable | Alere TestPack +Plus Strep A (Abbott Laboratories) | Yes | None | Antibiotic-prescribing behaviour | No |
Llor 200944 | Published article | Primary | 222 | Adults (median age 30.6 years) | Centor ≥ 2 points | 21.2 | OSOM Strep A | Yes/Centor | Blood agar | Test accuracy | No |
Llor 201145 | Published article | Primary | 276 | Adults (median age 31.7 years) | Centor ≥ 1 point | 16.7 | OSOM Strep A test | Yes/Centor | Blood agar | Test accuracy; antibiotic-prescribing behaviour | Yes |
McIsaac 200446 | Published article | Primary | 787 | Children (3–17 years) and adults (≥ 18 years); results reported separately by group | McIsaac all scores | 29 | TestPack Plus Strep A test (Abbott Laboratories) | Yes/McIsaac | Blood agar | Test accuracy; antibiotic-prescribing behaviour | No |
Nerbrand 200247 | Published article | Primary | 615 | Mixed (children and adults) | No criteria reported; used clinical symptoms | 21.1 | TestPack Plus Strep A test (Abbott Laboratories) | No | Blood agar | Test accuracy | NR |
Pauchard 201357 | Abstract | Secondary | 193 | Children (3–18 years) | McIsaac > 2 points | 37 | Strep A Rapid Test (Biopanda Reagents) | Yes/McIsaac | NR | Test accuracy | NR |
Penney 201648 | Published article | Secondary | 147 | Children (mean age 8.8 years) | No criteria reported; used clinical symptoms | 40.1 | Alere TestPack +Plus Strep A test (Abbott Laboratories) | No | Streptococcal selective agar | Test accuracy | NR |
Rogo 201149 | Published article | Secondary | 228 | Children | No criteria reported; used clinical symptoms | 28.9 | OSOM Strep A test | No | Blood agar | Test accuracy | NR |
Rosenberg 200250 | Published article | Secondary | 126 | Mixed (children and adults) | Centor all scores | 25.4 | TestPack Plus Strep A test (Abbott Laboratories) | Yes/Centor | Blood agar | Test accuracy; antibiotic-prescribing behaviour | NR |
Santos 200351 | Published article | Secondary | 49 | Children (1–12 years) | No criteria reported; used clinical symptoms | 30 | Alere TestPack +Plus Strep A (Abbott Laboratories) | No | Blood agar | Test accuracy | NR |
Stefaniuk 201752 | Published article | Primary | 44 | Children | McIsaac/Centor all scores | 26.3 | QuikRead Go Strep A test kit (Orion Diagnostica) | Yes/Centor | Blood agar | Test accuracy; antibiotic-prescribing behaviour | No |
 | | | 96 | Adults and children | McIsaac/Centor all scores | 22.4 | | | | | |
Thornley 201653 | Published article | Pharmacy | 149 | NR | Centor > 2 points | 24.2 | OSOM Strep A test (Sekisui Diagnostics) | Yes/Centor | None | Antibiotic-prescribing behaviour | NA |
Valverde 201858 | Abstract | Secondary | 580 | Mixed (aged ≥ 0 years) | NR | NR or not calculable | TestPack Plus Strep A test | No | Blood agar | Test accuracy | NR |
Wang 201724 | Published article | Primary | 427 | Children | Centor ≥ 1 point | 30.2 | cobas Liat Strep A Assay (Roche Diagnostics) | No | NR | Test accuracy | No |
Weinzierl 201854 | Published article | Secondary | 160 | Children (median age 6.5 years) | NR | 38 | OSOM Strep A test; Alere i Strep A test | No | Blood agar | Test accuracy | NR |
Worrall 200755 | Published article | Primary | 533 | NR | Centor all scores | NR or not calculable | Clearview Exact Strep A (Abbott Laboratories) | Yes/Centor | NA | Antibiotic-prescribing behaviour | NA |
Manufacturer’s studies provided in responses to request by NICE | |||||||||||
Biopanda Reagents | Manufacturer’s information | Secondary | 160 | Median age 6.5 years | NA | 23.2 | Alere i Strep A test | No | Blood agar | Test accuracy | NR |
Cepheid | Manufacturer’s information | Primary | 577 | NR | NA | 25.6 | Xpert Xpress | Yes/Centor | NA | Test accuracy | NR |
nal von minden GmbH | Manufacturer’s information | Unknown | 244 | Mixed (adults and children) | NA | 34.4 | NADAL Strep A test | No | Blood agar | Test accuracy | NR |
Orion Diagnostica | Manufacturer’s information | Primary | 271 | NR | NA | 32.8 | QuikRead Go Strep A test kit (Orion Diagnostica) | No | Streptococcal selective agar | Test accuracy | NR |
Roche Diagnostics | Manufacturer’s information | Mixed | 570 | Mixed (aged ≥ 3 years) | NA | 30.4 | cobas Liat Strep A Assay (Roche Diagnostics) | No | Blood agar | Test accuracy | NR |
FDA documents | |||||||||||
Abbott Laboratories61 | FDA document | Mixed | 981 | NR | NA | 20.2 | Alere i Strep A 2 test | No | Blood agar | Test accuracy | NR |
Becton Dickinson59 | FDA document | Mixed | 796 | Mixed (aged ≥ 0 years) | NA | 18.7 | BD Veritor system | No | Blood agar | Test accuracy | NR |
Cepheid62 | FDA document | Mixed | 618 | NR | NA | 25.6 | Xpert Xpress Strep A (Cepheid) | No | NR | Test accuracy | NR |
Quidel60 | FDA document | Mixed | 736 | NR | NA | 17.4 | Sofia Strep A FIA (Quidel) | No | Blood agar | Test accuracy | NR |
The tests, their settings, the populations they cover and the head-to-head studies are illustrated in Figures 3 and 4.
Population
The 38 included studies comprised ≈ 14,000 symptomatic participants. Prevalence of strep A ranged from 15% to 49%, with no clear demographic or clinical patterns accounting for this variation. 39,56 Similarly, prevalence estimates of strep A did not differ systematically between primary and secondary care settings. The study populations comprised adults and children; however, the exact proportions are unknown because they were not reported in about half of the included studies. In most of the included studies, participants aged < 18 years were identified as children. Only two studies met the age criterion for children (5–14 years) as defined in the protocol and scope. 50,51 Hence, studies that included children aged < 5 years as well as ≥ 5 years were included in the present review. Similarly, only two studies met the age criterion for adults (age ≥ 15 years) as defined in the protocol and scope and, therefore, the findings of the review may be applicable only to a mixed population. 39,52
All 38 studies included patients with a sore throat; however, other clinical characteristics were insufficiently reported across most of the included studies. For instance, sore throat clinical scores (e.g. Centor/McIsaac/FeverPAIN scores) were reported in 16 studies,6,23,35–39,44–46,50,52,53,55,57 of which two exclusively included patients with high clinical scores (i.e. Centor ≥ 3 points, FeverPAIN ≥ 4 points). 6,53 Both of these studies were on prescribing behaviours. However, there were two test accuracy studies that included patients with lower clinical scores (i.e. Centor scores of < 3 points) but reported test accuracy results separately by Centor score. 39,45
Recent antibiotic use prior to enrolment was considered in eight included studies, and patients without any recent use of antibiotics prior to recruitment were eligible for inclusion in these studies. 24,35,36,42,45,47,48,50
Index tests
More studies evaluated RADTs (76%, 29/38) than evaluated molecular tests (18%, 7/38) or compared both rapid and molecular tests (5%, 2/38). The Alere™ TestPack +Plus Strep A (Abbott Laboratories) was the most commonly evaluated antigen detection test, appearing in 13 studies (excluding unpublished studies conducted by the manufacturers). 6,23,38–43,46–48,51,58 Conversely, the only molecular test evaluated in a peer-reviewed journal article was the PCR-based cobas Liat Strep A Assay (Roche Diagnostics). 24
As shown in Table 6, there were four studies providing head-to-head comparisons of index tests: BD Veritor System (Becton Dickinson) and QuikRead Go (Orion Diagnostica);34 Alere i Strep A (Abbott Laboratories) and BD Veritor System (Becton Dickinson);20 Alere i Strep A (Abbott Laboratories) and Sofia Strep A fluorescent immunoassay (FIA) (Quidel);23 and Alere i Strep A (Abbott Laboratories) and OSOM Strep A. 54 In each case, the index tests were compared with throat culture as the reference standard to estimate test accuracy.
The search strategy also identified test accuracy studies of OSOM Ultra Strep A (Sanofi Genzyme and Sekisui Diagnostics). 64,65 These studies were subsequently excluded because the EAG could not confirm whether this is the same test as the OSOM Strep A test (Genzyme and Sekisui Diagnostics) listed among the scoped rapid tests. Similarly, it was unclear whether Sofia Strep A+ Plus FIA (Quidel)66 and OSOM Strep A (Sekisui Diagnostics)67 were identical to Sofia Strep A FIA (Quidel) and OSOM Strep A (Sekisui Diagnostics), respectively; hence, studies of the former were excluded.
Comparator and reference standard
Index tests were compared with Centor, McIsaac or FeverPAIN scoring tools in 12 studies. 6,35,36,38,45,46,50,52,53,55,57,68 However, only six of these studies directly compared test accuracy between clinical scoring tools and point-of-care tests. 39,44–46,52 Only two reported test accuracy in patients with high clinical scores (Centor ≥ 3 points, FeverPAIN ≥ 4 points). 39,45
The culture medium used for the reference standard (blood agar or streptococcal selective agar) was reported in all but five studies. 24,40,56,57 Neither the manufacturers’ submissions (submitted directly to NICE in response to a request for information) nor the FDA studies provided information on index tests compared with clinical scoring tools.
Outcomes
Thirty-eight studies were included across all outcomes. Twenty-six published articles (full texts and abstracts) reported test accuracy data (of which seven also reported on antibiotic-prescribing rates and five reported on test failure rate); there were an additional five submissions from manufacturers and four FDA documents.
Five studies had insufficient data to construct 2 × 2 contingency tables to ascertain the accuracy of index tests with microbiological throat culture as the reference standard. 6,35,47,53,55 These studies were therefore excluded from the assessment of test accuracy (see Point-of-care/index tests). An attempt to verify at least some of the discrepant results between rapid tests and microbiological culture was undertaken in only five studies. 20,23,24,37,43 Antibiotic-prescribing behaviour was reported in 12 studies. 6,20,35,36,39,40,45,46,50,52,53,55 None of the other outcomes in the scope or protocol was reported in any of the included studies.
Setting
Participants in the included studies were recruited from GP/primary care clinics/family practices,6,40,41,43–47 community pharmacies,53 paediatric clinics,42,49,54,56 paediatric emergency departments,23,48,57 hospital outpatient departments20,51 and emergency departments. 35,50 There were two multicentre studies with mixed populations from primary and secondary care settings: Cohen et al. 37 sampled patients from the emergency department (secondary care) and urgent care clinics (primary care); and Wang et al. 24 sampled patients from paediatric clinics (secondary care) and family practices (primary care).
Only one unpublished study supplied by the manufacturers confirmed the study setting (Orion Diagnostica, primary care). The remaining unpublished studies conducted by manufacturers may have included mixed populations from primary and secondary care settings; however, this is speculative, as study settings were not reported. There was no indication in these studies that any inpatients were recruited.
Study design
The 26 published studies on test accuracy comprised one RCT45 and 25 cohort studies. 20,23,24,34–44,46,48–52,55–58,69 It was unclear what study design had been undertaken in any of the unpublished studies provided by the manufacturers or the FDA.
The 12 studies that provided data on antibiotic-prescribing rates comprised three RCTs,6,45,55 one before-and-after cohort study20 and eight one-armed cohort studies. 35,36,39,40,46,50,52,53
Quality considerations of included studies
The assessments of risk of bias and applicability for the 26 included test accuracy studies20,23,24,34–46,48–52,55–58,69 using the QUADAS-2 tool are summarised in Table 7 and Figure 5. Four of the included studies compared two index tests that are relevant to this review, so there are 30 quality assessment ratings for the index test domains. Likewise, one study included two different culture media as its reference standard, so there are 27 quality assessment ratings across the reference standard domains.
Study (first author and year of publication) | Risk of bias | Applicability concerns | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Patient selection | Index test | Additional index test | Reference standard | Additional reference standard | Flow and timing | Patient selection | Index test | Additional index test | Reference standard | Additional reference standard | |
Andersen 200356 | Unclear | High | NA | Unclear | NA | Unclear | High | Unclear | NA | Unclear | NA |
Azrad 201934 | High | Low | Low | Unclear | NA | High | High | Low | Low | Unclear | NA |
Berry 201820 | Unclear | High | Low | High | NA | Unclear | High | Low | Low | Unclear | NA |
Bird 201835 | Unclear | High | NA | Unclear | NA | High | High | Unclear | NA | Unclear | NA |
Bura 201736 | High | High | NA | Low | NA | High | High | Low | NA | Low | NA |
Cohen 201537 | Unclear | Low | NA | Low | NA | Unclear | High | Low | NA | Low | NA |
Dimatteo 200138 | High | High | NA | High | NA | High | Low | Unclear | NA | High | NA |
Humair 200639 | Low | High | NA | Unclear | NA | Low | Low | Low | NA | Low | NA |
Johansson 200340 | Unclear | High | NA | Unclear | NA | High | High | Unclear | NA | Unclear | NA |
Johnson 200141 | High | High | NA | High | NA | High | High | Low | NA | High | NA |
Kurtz 200042 | Unclear | High | NA | High | Low | Unclear | High | Low | NA | High | Low |
Lacroix 201823 | Unclear | Low | High | Low | NA | Low | High | Low | Low | Low | NA |
Lindbæk 200443 | Unclear | High | NA | High | NA | Low | High | Low | NA | High | NA |
Llor 200944 | Low | High | NA | Unclear | NA | Low | High | Low | NA | Unclear | NA |
Llor 201145 | Low | High | NA | Unclear | NA | Unclear | Low | Low | NA | Low | NA |
McIsaac 200446 | Unclear | High | NA | Low | NA | High | Unclear | Unclear | NA | Low | NA |
Nerbrand 200247 | Unclear | High | NA | Unclear | NA | Low | High | Low | NA | Low | NA |
Pauchard 201357 | Unclear | High | NA | Unclear | NA | Unclear | High | Unclear | NA | Unclear | NA |
Penney 201648 | Low | High | NA | Unclear | NA | Low | High | Low | NA | Unclear | NA |
Rogo 201149 | Unclear | High | NA | High | NA | Unclear | High | Low | NA | Unclear | NA |
Rosenberg 200250 | High | High | NA | Low | NA | Low | High | Low | NA | Low | NA |
Santos 200351 | Unclear | High | NA | Unclear | NA | High | High | Low | NA | Low | NA |
Stefaniuk 201752 | Unclear | Low | NA | Unclear | NA | Unclear | High | Low | NA | Low | NA |
Valverde 201858 | Unclear | High | NA | Low | NA | Unclear | High | Unclear | NA | Low | NA |
Wang 201724 | Unclear | Low | NA | Unclear | NA | Low | High | Low | NA | Unclear | NA |
Weinzierl 201854 | Unclear | Low | High | Low | NA | Unclear | High | Unclear | Unclear | Low | NA |
Risk of bias for test accuracy studies
In general, the methodological and reporting quality of the included studies was poor, with risk of bias considered to be high in two or more domains for 13 studies (50%). 20,34–36,38,40–43,46,49–51 No study was considered to be at a low risk of bias in all four domains.
In 65.4% of studies (17/26),20,23,24,35,37,40,42,43,46,47,49,51,52,54,56–58 it was not clear whether patients were consecutively included or a convenience sample had been chosen, and only 15.4% (4/26 studies)39,44,45,48 were rated as having a low risk of bias in the patient selection domain (domain 1: patient selection). The selection process in the remaining 19.2% of studies (5/26)34,36,38,41,50 was rated as being at a high risk of bias, with studies clearly reporting convenience samples, having case–control designs or having made inappropriate exclusions from the eligible screening population.
The key risks of bias concerned how the index test was undertaken (22/30 index test domains rated as being at high risk, 73.3%; 22/26 studies). 20,23,35,36,38–46,48–51,55–58,69 Although all of the included studies were of predeveloped tests with in-built thresholds, in many cases use of the index test required a subjective reading by a clinician (domain 2: index tests). There were further concerns that studies often used the same swab intended for the index test to first streak the agar for biological culture, rather than taking an additional swab sample. Using one swab for multiple purposes may reduce the amount of sample available and lead to underestimation of the accuracy of the test.
Unclear or incomplete reporting was common in the reference standard domain (domain 3: reference standard). In all studies, time taken to process the biological culture exceeded that of the rapid test, with biological cultures generally reported 48 hours following sample collection. However, many studies did not state that laboratory staff were blinded to the results of the index test or reference standard (domain 3: reference standard, 13/27 studies, 48.1%). 24,34,35,39,40,44,45,47,48,51,52,56,57 There was a high risk of bias in 22.2% of the studies (6/27)20,38,41–43,49 because the methods of biological culture testing did not match current UK guidelines. 29
The flow of patients through the studies was rated as being at a high risk of bias in 31% of studies (8/26, domain 4: flow and timing). 34–36,38,40,41,46,51 The majority of these (6/8 studies, 75%)34–36,41,46,51 had incomplete testing and made exclusions from the analysis. In the other two studies, only some patients received the reference standard (partial verification bias): in one study,38 only patients with negative rapid test results received the reference standard; in the other,40 only those with positive rapid test results were given the reference standard. The use of antibiotics was a further concern, with one study directly reporting 61 patients taking antibiotics at the time of testing,34 and 90% (9/10) of unclear ratings were linked to prior/current antibiotic use not being reported. 20,37,42,45,49,52,54,56–58
Applicability of study findings for test accuracy studies
The applicability of study findings was assessed with regard to three domains: patient selection, index test (rapid or molecular test) and reference standard (biological culture). There were significant concerns regarding the applicability of the studies to UK practice for patient selection in 22 of the 26 studies (85%, domain 1: patient selection). 20,23,24,34–37,40–44,48–52,55–58,69 In the UK, the test would be given only following an assessment using a clinical scoring tool, such as Centor or FeverPAIN, and only to people with a Centor score of ≥ 3 points or a FeverPAIN score of ≥ 4 points. In all 22 studies, either a clinical scoring tool was not used or, if used, patients were included with scores lower than UK cut-off points and test accuracy data were not reported separately by score. In addition, 17 of the 22 studies (77%)20,23,24,35,37,40,42,48–52,55–58,69 included children aged < 5 years. Children aged < 5 years follow a different clinical pathway owing to differences in the presentation of symptoms and difficulties around communication and sample collection. 27 Concerns regarding the applicability of the index test were rated as being low for the majority of the studies (21/30 domains, 70%, 18 studies),20,23,24,34,36,37,39,41–45,48–52,69 with studies reporting that the tests were carried out in accordance with the manufacturer's guidelines. The eight remaining studies35,38,40,46,54,56–58 were rated as unclear because this was not specified (domain 2: index test). Only four studies (4/27, 14.8%)38,40,42,43 were rated as having high concern for applicability with respect to the reference standard, owing to deviations from UK guidelines on appropriate culture methods with respect to agar type, incubation period or atmosphere (domain 3: reference standard).
Assessment of studies of prescribing behaviour and clinical outcomes
There were 12 studies that reported on antibiotic-prescribing behaviour. 6,20,35,36,39,40,45,46,50,52,53,55 Of these, three were RCTs6,45,55 and were quality appraised using the Cochrane risk-of-bias tool for RCTs. 30 Six studies (including one before-and-after study) were single-arm cohorts and were appraised using the JBI critical appraisal checklist31 for analytical cross-sectional studies. 20,35,36,40,50,52 The remaining three studies were one-armed cohort studies that used predetermined guidelines to hypothetically estimate prescribing behaviour and therefore offer no information on actual prescribing decisions by clinicians. 39,46,53 These studies were not quality appraised and are briefly summarised later in the results (see Antibiotic-prescribing behaviours: other study designs).
Randomised controlled trials
Risk of bias of the included trials is shown in Figure 6 and Table 8. The domains regarding blinding were removed, as we were interested in test–treat trials measuring prescribing decisions with and without rapid tests. Therefore, clinicians could not be blinded to test results, and we considered blinding to which exact test was used to be unnecessary in this context. In general, the methodological quality of the RCTs was rated as being fair, with all studies having at least one domain rated as unclear. There was unclear risk of bias in four domains across the three studies (random sequence generation, allocation concealment, incomplete outcome data and selective outcome reporting). This was owing to insufficient information presented on which to make an assessment. The remaining applicable domains were judged to be at a low risk of bias.
Cohort studies
Risk of bias in the included cohort studies is shown in Figure 7 and Table 9. No study was rated as having high methodological quality across all areas. Methodological quality was low regarding the criteria for inclusion in 83% of studies (five out of six) and regarding details of the study subjects in 33% of studies (two out of six). 20,35,40,50,52 These studies reported the details of the patients, but provided no information on those who were making the prescribing decisions. The outcome of interest in these studies was prescribing behaviour. The measurement of prescribing behaviour considered to be valid and reliable was recording in medical records; only 33% of the studies clearly reported this. 20,36 A confounder in the studies was current antibiotic use; 33% (two out of six) of studies did not clearly specify current or recent antibiotic use as an exclusion criterion.
Study (first author and year of publication) | Were the criteria for inclusion in the sample clearly defined? | Were the study subjects and the setting described in detail? | Was the exposure measured in a valid and reliable way? | Were objective, standard criteria used for measurement of the condition? | Were confounding factors identified? | Were strategies to deal with confounding factors stated? | Were the outcomes measured in a valid and reliable way? | Was statistical analysis appropriate? |
---|---|---|---|---|---|---|---|---|
Berry 201820 | No | Yes | Yes | No | Unclear | Unclear | Yes | Yes |
Bird 201835 | No | Yes | Yes | Yes | Yes | Yes | No | Yes |
Bura 201736 | Unclear | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Johansson 200340 | No | No | Yes | Unclear | Yes | Yes | No | Yes |
Rosenberg 200250 | No | Yes | Yes | Yes | Yes | Yes | Unclear | Yes |
Stefaniuk 201752 | No | No | Yes | Yes | Unclear | Unclear | Unclear | Yes |
Current pathway (clinical scoring tools only)
Accuracy of clinical scoring tools with culture as the reference standard
Accuracy statistics for Centor39,44,45,52 and McIsaac scores46,52,57 with microbiological culture as the reference standard are presented in Table 10. The results show wide variation in the test accuracy of sore throat clinical scoring tools: specificity point estimates ranged from 0.172 to 0.648 and sensitivity point estimates from 0.735 to 0.972. This suggests that these tools may be better at identifying people who do have Streptococcus than at identifying people who do not.
Study (first author and year of publication) | Strep A prevalence (%) | Setting | Clinical score | Test accuracy statistics | ||||
---|---|---|---|---|---|---|---|---|
Culture + | Culture – | Total | Sensitivity (95% CI) | Specificity (95% CI) | ||||
Humair 200639 | 46.9 | Primary care/GP clinic | Centor score of ≥ 3 points | 105 | 119 | 224 | 0.750 (0.678 to 0.822) | 0.487 (0.423 to 0.551) |
Centor score of < 3 points | 35 | 113 | 148 | |||||
Total | 140 | 232 | 372 | |||||
Llor 200944 | 21.2 | Primary care/GP clinic | Centor score of ≥ 3 points | 47 | 104 | 151 | 0.855 (0.761 to 0.948) | 0.377 (0.304 to 0.451) |
Centor score of < 3 points | 8 | 63 | 71 | |||||
Total | 55 | 167 | 222 | |||||
Llor 201145 | 16.7 | Primary care/GP clinic | Centor score of ≥ 3 points | 36 | 80 | 116 | 0.735 (0.587 to 0.846) | 0.648 (0.581 to 0.709) |
Centor score of < 3 points | 13 | 147 | 160 | |||||
Total | 49 | 227 | 276 | |||||
McIsaac 200446 | 29 | Primary care/GP clinic | McIsaac score of ≥ 3 points | 193 | 375 | 568 | 0.846 (0.800 to 0.893) | 0.329 (0.290 to 0.368) |
McIsaac score of < 3 points | 35 | 184 | 219 | |||||
Total | 228 | 559 | 787 | |||||
Pauchard 201357 | 37 | Hospital | McIsaac score of ≥ 3 points | 69 | 101 | 170 | 0.972 (0.893 to 0.995) | 0.172 (0.112 to 0.253) |
McIsaac score of ≤ 2 points | 2 | 21 | 23 | |||||
Total | 71 | 122 | 193 | |||||
Stefaniuk 201752 | 22.4 | Primary care/GP clinic | Centor/McIsaac score of ≥ 3 points | 37 | 39 | 76 | 0.861 (0.714 to 0.942) | 0.250 (0.145 to 0.392) |
Centor/McIsaac score of ≤ 2 points | 6 | 13 | 19 | |||||
Total | 43 | 52 | 95 |
Rosenberg et al. 50 and Johansson et al. 40 also reported accuracy statistics for sore throat symptoms with culture as the reference standard. However, these studies provided insufficient data to construct 2 × 2 contingency tables using the recommended clinical scoring threshold (see Data extraction strategy). The use of different clinical scoring tools, age selection criteria, clinical score inclusion criteria and settings across the seven contributing studies precluded any pooling.
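For orientation, the minimal sketch below shows how the sensitivity and specificity figures in Table 10 can be recalculated from their 2 × 2 counts. The interval method used by the contributing studies is not stated in the report; a simple normal-approximation (Wald) interval is assumed here, which reproduces the Humair et al. 39 values to within rounding, although other studies may have used exact intervals.

```python
# Minimal sketch (assumption: a normal-approximation/Wald interval, as the report
# does not state which CI method the contributing studies used).
# Recalculates the Humair et al. row of Table 10 from its 2 x 2 counts.
from math import sqrt

def proportion_with_wald_ci(k: int, n: int, z: float = 1.96):
    """Return the point estimate and Wald 95% CI for a binomial proportion k/n."""
    p = k / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Humair 2006, Centor score of >= 3 points vs. throat culture (Table 10):
# culture+ with score >= 3 (true positives), culture+ with score < 3 (false negatives),
# culture- with score >= 3 (false positives), culture- with score < 3 (true negatives)
tp, fn, fp, tn = 105, 35, 119, 113

sens = proportion_with_wald_ci(tp, tp + fn)  # 0.750 (0.678 to 0.822)
spec = proportion_with_wald_ci(tn, tn + fp)  # 0.487 (0.423 to 0.551)
print(f"Sensitivity {sens[0]:.3f} ({sens[1]:.3f} to {sens[2]:.3f})")
print(f"Specificity {spec[0]:.3f} ({spec[1]:.3f} to {spec[2]:.3f})")
```

The same calculation applies, with the index test result in place of the clinical score, to the 2 × 2 counts reported in Tables 11–13.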
Accuracy of clinical scoring tools split by age group
Two46,52 of the six studies included a mixed population of adults and children. In the study by McIsaac et al. ,46 a threshold of > 2 points (Modified Centor/McIsaac score) produced a sensitivity estimate of 0.884 (95% CI 0.820 to 0.928) and a specificity estimate of 0.234 (95% CI 0.188 to 0.287) in children aged 3–17 years, and a sensitivity estimate of 0.767 (95% CI 0.651 to 0.858) and a specificity estimate of 0.439 (95% CI 0.378 to 0.501) in adults aged ≥ 18 years.
In the study by Stefaniuk et al.,52 a threshold of > 2 points (Modified Centor/McIsaac score) produced a sensitivity estimate of 1.00 (95% CI 0.80 to 1.00) and a specificity estimate of 0.083 (95% CI 0.015 to 0.285) in children aged 1–14 years, and a sensitivity estimate of 0.739 (95% CI 0.513 to 0.889) and a specificity estimate of 0.414 (95% CI 0.241 to 0.609) in participants aged ≥ 15 years. As previously discussed (see Population), this overlap across age groups potentially limits subgroup analysis. However, none of the other studies included patients aged < 14 years.
Accuracy of clinical scoring tools split by primary/secondary care setting
Patients were recruited from primary care settings in five39,44–46,52 of the six studies. Details of these studies are outlined in Table 10. In brief, they provided point estimates of sensitivity of 0.74–0.86 and of specificity of 0.25–0.65. The single study from a secondary care setting57 reported a higher point estimate for sensitivity (0.972, 95% CI 0.893 to 0.995) and a lower point estimate for specificity (0.172, 95% CI 0.112 to 0.253) than the other five studies, albeit with CIs that overlapped those of some of the primary care studies. This may reflect the setting or other sources of heterogeneity between studies.
Accuracy of clinical scoring tools using polymerase chain reaction to resolve discordant cases
No analysis of discordant results between sore throat clinical scores and microbiological culture was undertaken in any of the included studies.
Point-of-care/index tests
Accuracy of point-of-care tests with culture as the reference standard
The systematic review identified 35 pieces of literature that provided evidence comparing the performance of 18 of the named index tests with culture: 23 peer-reviewed papers, three abstracts, five manufacturer responses (submitted directly to NICE in response to a request for information) and four FDA reports. Two studies reported inconsistent results that prevented the construction of a reliable 2 × 2 contingency table and were excluded during data extraction. 35,47 A summary of the final 33 pieces of literature can be found in Table 11. The sources provided by the manufacturers were not peer reviewed, nor were the three abstracts. The sources identified from FDA reports received some scrutiny from the FDA. The remaining 21 studies were published in peer-reviewed journals. All sensitivity and specificity estimates are presented alongside their 95% CIs. Meta-analyses were performed where appropriate; a summary can be found in Figure 8.
Study (first author and year of publication) | Care setting | Age group | Clinical tool score restriction | Strep A infections prevalence (%) | Reference type | N | TP (n) | FN (n) | FP (n) | TN (n) | Accuracy data (95% CI) |
---|---|---|---|---|---|---|---|---|---|---|---|
Clearview Exact Strep A Cassette – Abbott Laboratories | |||||||||||
Andersen 2003 (abstract)56 | Secondary | Children | None | 15.0 | NR | 353 | 36 | 17 | 15 | 285 |
Sensitivity 0.68 (0.55 to 0.81) Specificity 0.95 (0.93 to 0.98) PPV 0.71 (0.58 to 0.83) NPV 0.94 (0.92 to 0.97) |
Clearview Exact Strep A Dipstick – Abbott Laboratories | |||||||||||
Andersen 2003 (abstract)56 | Secondary | Children | None | 15.0 | NR | 353 | 36 | 17 | 15 | 285 |
Sensitivity 0.68 (0.55 to 0.81) Specificity 0.95 (0.93 to 0.98) PPV 0.71 (0.58 to 0.83) NPV 0.94 (0.92 to 0.97) |
BD Veritor Plus System – Becton Dickinson | |||||||||||
Azrad 201934 | Secondary | NR | None | 25.0 | Streptococcal selective agar | 100 | 20 | 5 | 16 | 59 |
Sensitivity 0.80 (0.59 to 0.92) Specificity 0.79 (0.67 to 0.87) PPV 0.56 (0.38 to 0.72) NPV 0.92 (0.82 to 0.97) |
aBecton Dickinson (FDA)59 | NR | Children and adults | None | 18.7 | Blood agar | 796 | 144 | 5 | 29 | 618 |
Sensitivity 0.97 (0.92 to 0.99) Specificity 0.96 (0.94 to 0.97) PPV 0.83 (0.77 to 0.88) NPV 0.99 (0.98 to 1.00) |
Berry 201820 | Secondary | Children | None | 19.5 | Blood agar | 215 | 32 | 10 | 11 | 162 |
Sensitivity 0.76 (0.60 to 0.87) Specificity 0.94 (0.89 to 0.97) PPV 0.74 (0.56 to 0.86) NPV 0.94 (0.89 to 0.97) |
Strep A Rapid Test Cassette – Biopanda Reagents | |||||||||||
Biopanda Reagents (MFR)a | Primary | Children and adults | None | 23.2 | Blood agar | 526 | 116 | 6 | 9 | 395 |
Sensitivity 0.95 (0.89 to 0.98) Specificity 0.98 (0.96 to 0.99) PPV 0.93 (0.87 to 0.96) NPV 0.99 (0.97 to 0.99) |
Strep A Rapid Test Strip – Biopanda Reagents | |||||||||||
No data | |||||||||||
NADAL Strep A Strip – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Cassette – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Plus Cassette – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Plus Strip – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Scan – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
OSOM Strep A Strip – Sekisui Diagnostics | |||||||||||
Bura 201736 | Primary | Adults | Centor score of ≥ 2 points | 22.7 | Blood agar | 101 | 22 | 1 | 2 | 76 |
Sensitivity 0.96 (0.76 to 1.00) Specificity 0.97 (0.90 to 1.00) PPV 0.92 (0.72 to 0.99) NPV 0.99 (0.92 to 1.00) |
Llor 200944 | Primary | Adults | Centor score of ≥ 2 points | 24.8 | Blood agar | 222 | 52 | 3 | 14 | 153 |
Sensitivity 0.95 (0.85 to 0.99) Specificity 0.92 (0.86 to 0.95) PPV 0.79 (0.69 to 0.86) NPV 0.98 (0.94 to 0.99) |
Llor 201145 | Primary | Adults | Centor score of ≥ 2 pointsb | 17.8 | Blood agar | 276 | 44 | 5 | 14 | 213 |
Sensitivity 0.90 (0.78 to 0.97) Specificity 0.94 (0.90 to 0.97) PPV 0.76 (0.65 to 0.84) NPV 0.98 (0.95 to 0.99) |
Rogo 201149 | Secondary | Children | None | 28.9 | Blood agar | 228 | 65 | 1 | 1 | 161 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.99 (0.96 to 1.00) PPV 0.98 (0.91 to 1.00) NPV 0.99 (0.96 to 1.00) |
Weinzierl 201854 | Secondary | Children | None | 38.1 | Blood agar | 160 | 54 | 7 | 9 | 90 |
Sensitivity 0.89 (0.77 to 0.95) Specificity 0.91 (0.83 to 0.96) PPV 0.86 (0.74 to 0.93) NPV 0.93 (0.85 to 0.97) |
QuikRead Go Strep A Kit – Orion Diagnostica | |||||||||||
Azrad 201934 | Secondary | NR | None | 25.0 | Streptococcal selective agar | 100 | 20 | 5 | 20 | 55 |
Sensitivity 0.80 (0.59 to 0.92) Specificity 0.73 (0.62 to 0.83) PPV 0.50 (0.34 to 0.66) NPV 0.92 (0.81 to 0.97) |
Orion Diagnostica (MFR)a | Primary | Children and adults | None | 32.8 | Streptococcal selective agar | 271 | 74 | 15 | 5 | 177 |
Sensitivity 0.83 (0.73 to 0.90) Specificity 0.97 (0.93 to 0.99) PPV 0.94 (0.86 to 0.97) NPV 0.92 (0.87 to 0.95) |
Stefaniuk 201752 | Primary | Children and adultsb | None | 45.3 | Blood agar | 95 | 39 | 4 | 8 | 44 |
Sensitivity 0.91 (0.78 to 0.97) Specificity 0.85 (0.72 to 0.93) PPV 0.83 (0.72 to 0.90) NPV 0.92 (0.81 to 0.97) |
Alere TestPack Plus Cassette – Abbott Laboratories | |||||||||||
Dimatteo 200138 | Secondary | Adults | Centor score of ≥ 1 point | NR | Streptococcal selective agar | NR | NR | 22 | NR | 361 | NPV 0.94 (0.91 to 0.96) |
Humair 200639 | Primary | Adults | Centor score of ≥ 2 pointsb | 37.6 | Blood agar | 372 | 128 | 12 | 11 | 221 |
Sensitivity 0.91 (0.86 to 0.95) Specificity 0.95 (0.92 to 0.98) PPV 0.92 (0.87 to 0.95) NPV 0.95 (0.91 to 0.97) |
Johansson 200340 | Primary | Children and adults | None | 31.4 | NR | 144 | 46 | 7 | 4 | 87 |
Sensitivity 0.87 (0.74 to 0.94) Specificity 0.96 (0.89 to 0.99) PPV 0.92 (0.80 to 0.97) NPV 0.93 (0.85 to 0.97) |
Johnson 200141 | Primary | Adults | None | NR | Blood agar | NR | 445 | NR | 77 | NR | PPV 0.85 (0.82 to 0.88) |
Kurtz 200042 | Secondary | Children | None | 31.1 | Blood agar | 257 | 64 | 16 | 13 | 164 |
Sensitivity 0.80 (0.71 to 0.89) Specificity 0.93 (0.89 to 0.97) PPV 0.83 (0.75 to 0.92) NPV 0.91 (0.87 to 0.95) |
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 271 | 87 | 21 | 623 |
Sensitivity 0.76 (0.71 to 0.80) Specificity 0.97 (0.95 to 0.98) PPV 0.93 (0.89 to 0.95) NPV 0.88 (0.85 to 0.90) |
Lindbæk 200443 | Primary | Children and adults | None | 35.9 | Streptococcal selective agar | 306 | 106 | 4 | 27 | 169 |
Sensitivity 0.96 (0.91 to 0.99) Specificity 0.86 (0.80 to 0.91) PPV 0.80 (0.72 to 0.86) NPV 0.98 (0.94 to 0.99) |
McIsaac 200446 | Primary | Children and adultsb | McIsaac score of ≥ 2 points | 29.0 | Blood agar | 787 | 189 | 39 | 5 | 554 |
Sensitivity 0.83 (0.77 to 0.88) Specificity 0.99 (0.98 to 1.00) PPV 0.97 (0.94 to 0.99) NPV 0.93 (0.91 to 0.95) |
Penney 201648 | Secondary | Children | None | 40.1 | Streptococcal selective agar | 147 | 45 | 14 | 0 | 88 |
Sensitivity 0.76 (0.65 to 0.87) Specificity 1.00 (0.95 to 1.00) PPV 1.00 (0.90 to 1.00) NPV 0.86 (0.78 to 0.92) |
Rosenberg 200250 | Secondary | Children and adults | None | 25.4 | Blood agar | 126 | 24 | 8 | 1 | 93 |
Sensitivity 0.75 (0.56 to 0.88) Specificity 0.99 (0.93 to 1.00) PPV 0.96 (0.78 to 1.00) NPV 0.92 (0.85 to 0.96) |
Santos 200351 | Secondary | Children | None | 30.6 | Blood agar | 49 | 11 | 4 | 2 | 32 |
Sensitivity 0.73 (0.45 to 0.91) Specificity 0.94 (0.79 to 0.99) PPV 0.85 (0.54 to 0.97) NPV 0.89 (0.73 to 0.96) |
Valverde 2018 (abstract)58 | Secondary | Children and adults | None | 40.0 | Blood agar | 580 | 181 | 16 | 27 | 356 |
Sensitivity 0.92 (0.87 to 0.95) Specificity 0.93 (0.90 to 0.95) PPV 0.87 (0.82 to 0.91) NPV 0.96 (0.93 to 0.97) |
bioNexia Strep A Plus Cassette – bioMérieux | |||||||||||
No data | |||||||||||
bioNexia Strep A Dipstick – bioMérieux | |||||||||||
Pauchard 2013 (abstract)57 | Secondary | Children | None | 36.8 | NR | 193 | 60 | 11 | 11 | 111 |
Sensitivity 0.85 (0.74 to 0.92) Specificity 0.91 (0.84 to 0.95) PPV 0.85 (0.76 to 0.93) NPV 0.91 (0.86 to 0.96) |
Biosynex Strep A Cassette | |||||||||||
No data | |||||||||||
Sofia Strep A FIA – Quidel | |||||||||||
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 305 | 53 | 31 | 613 |
Sensitivity 0.85 (0.81 to 0.89) Specificity 0.95 (0.93 to 0.97) PPV 0.91 (0.87 to 0.94) NPV 0.92 (0.90 to 0.94) |
aQuidel (FDA)60 | NR | NR | None | 17.4 | Blood agar | 736 | 116 | 12 | 24 | 584 |
Sensitivity 0.91 (0.84 to 0.95) Specificity 0.96 (0.94 to 0.97) PPV 0.83 (0.75 to 0.89) NPV 0.98 (0.96 to 0.99) |
Alere i Strep A – Abbott Laboratories | |||||||||||
Berry 201820 | Secondary | Children | None | 19.5 | Blood agar | 215 | 42 | 0 | 15 | 158 |
Sensitivity 1.00 (0.90 to 1.00) Specificity 0.91 (0.86 to 0.95) PPV 0.74 (0.60 to 0.84) NPV 1.00 (0.97 to 1.00) |
Cohen 201537 | Secondary | Children and adultsb | None | 30.3 | Blood agar | 481 | 141 | 6 | 18 | 316 |
Sensitivity 0.96 (0.91 to 0.98) Specificity 0.95 (0.91 to 0.97) PPV 0.89 (0.82 to 0.93) NPV 0.98 (0.96 to 0.99) |
Weinzierl 201854 | Secondary | Children | None | 38.1 | Blood agar | 160 | 60 | 1 | 0 | 99 |
Sensitivity 0.98 (0.90 to 1.00) Specificity 1.00 (0.95 to 1.00) PPV 1.00 (0.93 to 1.00) NPV 0.99 (0.94 to 1.00) |
Alere i Strep A 2 – Abbott Laboratories | |||||||||||
aAbbott Laboratories (FDA)61 | NR | NR | None | 20.2 | Blood agar | 981 | 195 | 3 | 52 | 731 |
Sensitivity 0.98 (0.95 to 0.99) Specificity 0.93 (0.91 to 0.95) PPV 0.79 (0.73 to 0.84) NPV 1.00 (0.99 to 1.00) |
cobas Liat Strep A Assay – Roche Diagnostics | |||||||||||
Roche Diagnostics (MFR)a | NR | Children and adults | None | 30.4 | Blood agar | 570 | 170 | 3 | 23 | 374 |
Sensitivity 0.98 (0.95 to 1.00) Specificity 0.94 (0.91 to 0.96) PPV 0.88 (0.82 to 0.92) NPV 0.99 (0.97 to 1.00) |
Wang 201724 | Primary | Children | Centor score of ≥ 1 point | 30.2 | NR | 427 | 126 | 3 | 20 | 278 |
Sensitivity 0.98 (0.93 to 0.99) Specificity 0.93 (0.90 to 0.96) PPV 0.86 (0.79 to 0.91) NPV 0.99 (0.97 to 1.00) |
Xpert Xpress Strep A – Cepheid | |||||||||||
Cepheid (MFR)a | NR | NR | None | 23.9 | NR | 577 | 138 | 0 | 26 | 413 |
Sensitivity 1.00 (0.97 to 1.00) Specificity 0.94 (0.91 to 0.96) PPV 0.84 (0.79 to 0.89) NPV 1.00 (0.99 to 1.00) |
aCepheid (FDA)62 | Primary and secondary | Children and adults | None | 25.6 | NR | 618 | 157 | 1 | 27 | 433 |
Sensitivity 0.99 (0.96 to 1.00) Specificity 0.94 (0.91 to 0.96) PPV 0.85 (0.79 to 0.90) NPV 1.00 (0.99 to 1.00) |
Clearview Exact Strep A Cassette and Clearview Exact Strep A Dipstick (Abbott Laboratories)
The only evidence related to the Clearview Exact Strep A Cassette and Dipstick was provided by Andersen et al. ,56 who did not report which version of the test they used. Andersen et al. 56 reported a sensitivity of 0.68 (95% CI 0.55 to 0.81) and a specificity of 0.95 (95% CI 0.93 to 0.98) when examining children presenting in a secondary care setting.
BD Veritor Plus System (Becton Dickinson)
Azrad et al. 34 and Berry et al. 20 both presented results for the BD Veritor Plus System compared with culturing of samples in a secondary care setting. Azrad et al. 34 did not report the age group, and Berry et al. 20 looked at children. The sensitivities of the test were 0.80 (95% CI 0.59 to 0.92) and 0.76 (95% CI 0.60 to 0.87), and the specificities were 0.79 (95% CI 0.67 to 0.87) and 0.94 (95% CI 0.89 to 0.97), for Azrad et al. 34 and Berry et al. ,20 respectively. Becton Dickinson provided data to the FDA that estimated a sensitivity of 0.97 (95% CI 0.92 to 0.99) and a specificity of 0.96 (95% CI 0.94 to 0.97). 59
Univariate models were fitted to the two studies for the BD Veritor Plus System. The models estimated a sensitivity of 0.78 (95% CI 0.67 to 0.87) and a specificity of 0.90 (95% CI 0.86 to 0.93). Heterogeneity of the studies could not be assessed using I2 because only two studies were present.
Strep A Rapid Test Cassette (Biopanda Reagents)
The only evidence related to the Strep A Rapid Test Cassette was provided by Biopanda Reagents in response to a request for information by NICE. Biopanda Reagents reported a sensitivity of 0.95 (95% CI 0.89 to 0.98) and a specificity of 0.98 (95% CI 0.96 to 0.99) in a population of children and adults in a primary care setting.
NADAL Strep A Strip, NADAL Strep A Cassette, NADAL Strep A Plus Cassette, NADAL Strep A Plus Strip and NADAL Strep A Scan (nal von minden GmbH)
The only evidence related to the NADAL Strep A Cassettes, Strips and Scan tests was provided by nal von minden GmbH in response to a request for information by NICE and did not distinguish between any of the NADAL varieties. It reported a sensitivity of 0.98 (95% CI 0.91 to 1.00) and a specificity of 0.98 (95% CI 0.93 to 0.99) from a study undertaken in a secondary care setting including both children and adults.
OSOM Strep A Strip (Sekisui Diagnostics)
Five studies compared the OSOM Strep A Strip with culture. 36,44,45,49,54 Bura et al. ,36 Llor et al. 44 and Llor et al. 45 all examined adult populations presenting at primary care centres and reported sensitivities of 0.96 (95% CI 0.76 to 1.00), 0.95 (95% CI 0.85 to 0.99) and 0.90 (95% CI 0.78 to 0.97), and specificities of 0.97 (95% CI 0.90 to 1.00), 0.92 (95% CI 0.86 to 0.95) and 0.94 (95% CI 0.90 to 0.97), respectively.
Meanwhile Rogo et al. 49 and Weinzierl et al. 54 examined children in secondary care, with respective sensitivities of 0.98 (95% CI 0.91 to 1.00) and 0.89 (95% CI 0.77 to 0.95), and specificities of 0.99 (95% CI 0.96 to 1.00) and 0.91 (95% CI 0.83 to 0.96).
Despite having five sources of data, a bivariate model failed to converge for the OSOM test. However, univariate models did converge. These models estimated a sensitivity of 0.94 (95% CI 0.89 to 0.98) and a specificity of 0.95 (95% CI 0.91 to 0.98). The I2 for the analysis of sensitivity was 40.65%, suggesting some heterogeneity, whereas the I2 for the analysis of specificity was 79.14%, suggesting high heterogeneity.
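The exact univariate model fitted by the EAG is not specified in the report. As an illustrative sketch only, and under the assumption of a DerSimonian-Laird random-effects pooling of logit-transformed proportions, the snippet below shows how a pooled sensitivity and an I² statistic can be computed from the true-positive counts of the five OSOM studies in Table 11; the output of this simplified approach should not be expected to match the EAG's estimates exactly.

```python
# Illustrative sketch only: the report does not specify the exact univariate
# random-effects model, so this uses one common approach (DerSimonian-Laird
# pooling of logit-transformed proportions, with I^2 derived from Cochran's Q).
import math

def pool_logit_proportions(events, totals):
    """Random-effects (DerSimonian-Laird) pool of proportions on the logit scale.

    Returns (pooled proportion, 95% CI lower, 95% CI upper, I^2 as a percentage).
    A 0.5 continuity correction avoids infinite logits when events == totals.
    """
    y, v = [], []
    for k, n in zip(events, totals):
        k_adj, n_adj = k + 0.5, n + 1.0                 # continuity correction
        p = k_adj / n_adj
        y.append(math.log(p / (1.0 - p)))               # logit of the study proportion
        v.append(1.0 / k_adj + 1.0 / (n_adj - k_adj))   # approximate variance of the logit
    w = [1.0 / vi for vi in v]                          # inverse-variance (fixed-effect) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0             # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]                      # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

    def inv_logit(x):
        return 1.0 / (1.0 + math.exp(-x))

    return (inv_logit(y_re), inv_logit(y_re - 1.96 * se_re),
            inv_logit(y_re + 1.96 * se_re), i2)

# Sensitivity example: true positives and culture-positive totals for the five
# OSOM Strep A Strip studies in Table 11 (Bura, Llor 2009, Llor 2011, Rogo, Weinzierl).
true_positives = [22, 52, 44, 65, 54]
culture_positives = [23, 55, 49, 66, 61]
print(pool_logit_proportions(true_positives, culture_positives))
```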
QuikRead Go Strep A Kit (Orion Diagnostica)
Azrad et al. 34 and Stefaniuk et al. 52 both compared the accuracy of the QuikRead Go Strep A Kit with culture, and reported respective sensitivities of 0.80 (95% CI 0.59 to 0.92) and 0.91 (95% CI 0.78 to 0.97), and specificities of 0.73 (95% CI 0.62 to 0.83) and 0.85 (95% CI 0.72 to 0.93). Azrad et al. 34 recruited from a secondary care setting but did not report the ages of the patients, whereas Stefaniuk et al. 52 investigated both child and adult patients in a primary care setting. Orion Diagnostica also provided data from its own study in response to a request for information by NICE, which estimated a sensitivity of 0.83 (95% CI 0.73 to 0.90) and a specificity of 0.97 (95% CI 0.93 to 0.99) for children and adults in primary care.
Univariate models were fitted to the two studies that investigated the QuikRead Go test. The resulting sensitivity was 0.87 (95% CI 0.78 to 0.95) and the specificity was 0.78 (95% CI 0.71 to 0.85). Heterogeneity of the studies could not be assessed using I2 because only two studies were present.
Alere TestPack Plus Cassette (Abbott Laboratories)
There were 12 published studies that compared the accuracy of the Alere TestPack Plus Cassette with culture. Two studies did not report sufficient data to estimate a complete 2 × 2 contingency table. 38,41 Four of the remaining studies were conducted in a primary care setting. One of these was in an adult population: Humair et al. 39 estimated a sensitivity of 0.91 (95% CI 0.86 to 0.95) and a specificity of 0.95 (95% CI 0.92 to 0.98). The other primary care-based studies combined child and adult populations: Lindbæk et al. 43 (sensitivity 0.96, 95% CI 0.91 to 0.99; specificity 0.86, 95% CI 0.80 to 0.91), Johansson et al. 40 (sensitivity 0.87, 95% CI 0.74 to 0.94; specificity 0.96, 95% CI 0.89 to 0.99) and McIsaac et al. 46 (sensitivity 0.83, 95% CI 0.77 to 0.88; specificity 0.99, 95% CI 0.98 to 1.00).
Six other studies were conducted in secondary care settings, three of which assessed children without any restriction from a clinical tool score. 42,48,51 Kurtz et al. ,42 Penney et al. 48 and Santos et al. 51 reported sensitivities of 0.80 (95% CI 0.71 to 0.89), 0.76 (95% CI 0.65 to 0.87) and 0.73 (95% CI 0.45 to 0.91), respectively. Their specificities were 0.93 (95% CI 0.89 to 0.97), 1.00 (95% CI 0.95 to 1.00) and 0.94 (95% CI 0.79 to 0.99), respectively. Lacroix et al. 23 also examined children in secondary care, but restricted the study population to those with a McIsaac score of ≥ 2 points. 23 The sensitivity was 0.76 (95% CI 0.71 to 0.80) and the specificity was 0.97 (95% CI 0.95 to 0.98). Two studies examined both children and adults in secondary care: Rosenberg et al. 50 estimated a sensitivity of 0.75 (95% CI 0.56 to 0.88) and a specificity of 0.99 (95% CI 0.93 to 1.00), and Valverde et al. 58 estimated a sensitivity of 0.92 (95% CI 0.87 to 0.95) and a specificity of 0.93 (95% CI 0.90 to 0.95).
For the Alere TestPack Plus test, a bivariate model was fitted to meta-analyse all studies. The model suggested that the test had a sensitivity of 0.85 (95% CI 0.79 to 0.90) and a specificity of 0.96 (95% CI 0.94 to 0.98). Univariate models were also investigated and were identical to two decimal places. The I2 for the sensitivity and specificity analyses were 82.96% and 76.14%, respectively, suggesting that high heterogeneity is present in both of the meta-analyses.
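The parameterisation of the bivariate model is not given in the report. For readers unfamiliar with this class of model, a standard formulation (assumed here as a sketch, and not necessarily the EAG's exact specification) jointly models the logit-transformed sensitivity and specificity of each study i:

```latex
% Assumed standard bivariate random-effects (Reitsma-type) formulation;
% not necessarily the exact specification fitted by the EAG.
TP_i \sim \operatorname{Binomial}\!\left(n_{1i}, \mathrm{Se}_i\right),
\qquad
TN_i \sim \operatorname{Binomial}\!\left(n_{0i}, \mathrm{Sp}_i\right),

\begin{pmatrix} \operatorname{logit}(\mathrm{Se}_i) \\ \operatorname{logit}(\mathrm{Sp}_i) \end{pmatrix}
\sim
\mathcal{N}\!\left(
  \begin{pmatrix} \mu_{\mathrm{Se}} \\ \mu_{\mathrm{Sp}} \end{pmatrix},
  \begin{pmatrix}
    \sigma_{\mathrm{Se}}^{2} & \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} \\
    \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} & \sigma_{\mathrm{Sp}}^{2}
  \end{pmatrix}
\right),
```

where n_{1i} and n_{0i} are the numbers of culture-positive and culture-negative patients in study i. The summary sensitivity and specificity are obtained by back-transforming μ_Se and μ_Sp, and the correlation ρ captures the expected trade-off between sensitivity and specificity across studies.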
bioNexia Strep A Dipstick (bioMérieux)
Only one abstract presented data for bioMérieux’s bioNexia Strep A Dipstick. Pauchard et al. 57 conducted a study in children in a secondary care setting, and estimated a sensitivity of 0.85 (95% CI 0.74 to 0.92) and a specificity of 0.91 (95% CI 0.84 to 0.95).
Sofia Strep A fluorescent immunoassay (Quidel)
One peer-reviewed study presented data comparing the Sofia Strep A FIA with culture. Lacroix et al. 23 used the test on children in a secondary care setting, and estimated a sensitivity of 0.85 (95% CI 0.81 to 0.89) and a specificity of 0.95 (95% CI 0.93 to 0.97). Quidel also provided data from their own study to the FDA, which estimated a sensitivity of 0.91 (95% CI 0.84 to 0.95) and a specificity of 0.96 (95% CI 0.94 to 0.97). 60
Alere i Strep A (Abbott Laboratories)
Three studies compared the Alere i Strep A test with culture, all in a secondary care setting. Berry et al. 20 and Weinzierl et al. 54 looked only at children, and estimated respective sensitivities of 1.00 (95% CI 0.90 to 1.00) and 0.98 (95% CI 0.90 to 1.00), and specificities of 0.91 (95% CI 0.86 to 0.95) and 1.00 (95% CI 0.95 to 1.00). Cohen et al. 37 examined both children and adults, and produced respective estimates of sensitivity and specificity of 0.96 (95% CI 0.91 to 0.98) and 0.95 (95% CI 0.91 to 0.97).
When meta-analysed using univariate models, the three studies using the Alere i Strep A test yielded a sensitivity of 0.98 (95% CI 0.95 to 1.00) and a specificity of 0.96 (95% CI 0.90 to 1.00). The sensitivity I2 was 20.64% (low) and the specificity I2 was 87.95% (high).
Alere i Strep A 2 (Abbott Laboratories)
Only manufacturer information submitted to the FDA was available for the Alere i Strep A 2 test, which reported a sensitivity of 0.98 (95% CI 0.95 to 0.99) and a specificity of 0.93 (95% CI 0.91 to 0.95), but did not report the age of patients or the care setting. 61
cobas Liat Strep A Assay (Roche Diagnostics)
There were two sources of data comparing the cobas Liat Strep A Assay with culture. Wang et al. 24 carried out the test in children in a primary care setting, and estimated a sensitivity of 0.98 (95% CI 0.93 to 0.99) and a specificity of 0.93 (95% CI 0.90 to 0.96). The manufacturer (Roche Diagnostics) provided the other source in response to a request for information by NICE, which produced estimates of sensitivity and specificity of 0.98 (95% CI 0.95 to 1.00) and 0.94 (95% CI 0.91 to 0.96), respectively. Roche Diagnostics stated that the data it provided overlapped with the Wang et al. 24 study. The data supplied by Roche Diagnostics were identical to the data available from the FDA for this test.
Xpert Xpress Strep A (Cepheid)
Only manufacturer information was available for the Xpert Xpress Strep A test by Cepheid, which was provided in response to a request for information by NICE. The data provided by the manufacturer reported a sensitivity of 1.00 (95% CI 0.97 to 1.00) and a specificity of 0.94 (95% CI 0.91 to 0.96). This differed slightly from the information available from the FDA, which had a sensitivity of 0.99 (95% CI 0.96 to 1.00) and a specificity of 0.94 (95% CI 0.91 to 0.96). 62 Owing to the differences in sample size and the resolution of discordant samples, we have treated these sources as two independent studies, but it is not clear if there is overlap in patients.
Biosynex Strep A Cassette (Biosynex), Strep A Rapid Test Strip (Biopanda Reagents) and bioNexia Strep A Plus Cassette (bioMérieux)
No data were identified for any of the following tests:
- Biosynex Strep A Cassette test (Biosynex)
- bioNexia Strep A Plus Cassette test (bioMérieux)
- Strep A Rapid Test Strip (Biopanda Reagents).
Summary
Figures 9 and 10 present the sensitivity and specificity for all studies that had complete 2 × 2 data. Data were available for only 18 tests, and just seven tests were used in more than one independent study. Ignoring manufacturer and FDA sources of data, this reduces to 10 tests with published data and five tests with more than one independent study.
Note that where studies provided performance data by subgroup, these were incorporated into the relevant analyses when producing estimates to feed into the cost-effectiveness modelling. It is clear that there is a large degree of heterogeneity between the studies, and it is difficult to attribute any observed differences in test performance to the tests themselves. The CIs in Figures 8–10 may differ slightly from those in Table 11, owing to differences in the method of calculation.
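As an illustration of why the interval method matters, the sketch below contrasts a normal-approximation (Wald) interval with an exact (Clopper-Pearson) interval for a sensitivity point estimate of 1.00 (42 true positives, 0 false negatives, as for the Alere i Strep A arm of Berry et al. 20 in Table 11). The two methods give visibly different lower bounds, and neither is claimed to be the method used for any particular table or figure in this report.

```python
# Illustration of how the choice of CI method changes the interval for the same
# 2 x 2 counts. The counts correspond to a sensitivity of 1.00 (42 true positives,
# 0 false negatives); neither method is claimed to be the one used in the report.
from math import sqrt
from scipy.stats import beta

def wald_ci(k: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% CI; degenerates to (1, 1) when k == n."""
    p = k / n
    half_width = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

def clopper_pearson_ci(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) 95% CI based on beta distribution quantiles."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

k, n = 42, 42
print("Wald:           ", wald_ci(k, n))              # (1.000, 1.000)
print("Clopper-Pearson:", clopper_pearson_ci(k, n))   # approximately (0.916, 1.000)
```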
It is apparent that the data sourced from the manufacturer responses (submitted directly to NICE in response to a request for information) and the FDA submissions consistently provided higher estimates of sensitivity and specificity than the peer-reviewed studies. This supports the view that the manufacturer data may be judged as being at high risk of bias, and any cost-effectiveness analyses incorporating them may be unreliable.
Head-to-head (direct) comparison between tests
Initially, we sought to identify whether or not there was evidence to support the hypothesis that the tests might have different levels of accuracy. Owing to the large degree of interstudy variability, the most informative studies were those that conducted multiple tests on the same patient population, of which there were four.
Azrad et al. 34 compared both the BD Veritor System (Becton Dickinson) and the QuikRead Go Strep A Kit (Orion Diagnostica) with culture for 100 patients. The BD Veritor System had a sensitivity of 0.80 (95% CI 0.59 to 0.92) and a specificity of 0.79 (95% CI 0.67 to 0.87). The QuikRead Go test had an identical sensitivity, of 0.80 (95% CI 0.59 to 0.92), and a slightly lower point estimate for specificity, of 0.73, with overlapping CIs (95% CI 0.62 to 0.83).
Berry et al. 20 compared both the BD Veritor System and the Alere i Strep A tests with culture. The tests performed differently, with the BD Veritor System having a sensitivity of 0.76 (95% CI 0.60 to 0.87) and a specificity of 0.94 (95% CI 0.89 to 0.97) and Alere i Strep A having a sensitivity of 1.00 (95% CI 0.90 to 1.00) and a specificity of 0.91 (95% CI 0.86 to 0.95).
Lacroix et al. 23 investigated both the Alere TestPack Plus and the Sofia Strep A FIA tests. Again, the tests performed differently, with the Alere TestPack Plus having lower detection rates, with a sensitivity of 0.76 (95% CI 0.71 to 0.80) and a specificity of 0.97 (95% CI 0.95 to 0.98). Meanwhile, the Sofia Strep A FIA had a sensitivity of 0.85 (95% CI 0.81 to 0.89) and a specificity of 0.95 (95% CI 0.93 to 0.97).
Finally, Weinzierl et al. 54 assessed the Alere i Strep A and the OSOM Strep A Strip tests. The OSOM Strep A Strip had a sensitivity of 0.89 (95% CI 0.77 to 0.95) and a specificity of 0.91 (95% CI 0.83 to 0.96), whereas the Alere i Strep A test had a sensitivity of 0.98 (95% CI 0.90 to 1.00) and a specificity of 1.00 (95% CI 0.95 to 1.00).
Conclusion
There is insufficient evidence to conduct a meaningful comparison of the rapid tests or to establish any reliable hierarchy of test performance. Although some tests may perform similarly, the existing evidence does not allow identification of any clear groups of tests, and it is likely that there is some variation in accuracy of the 21 tests. There is considerable heterogeneity, potentially caused by the differences in study design and population.
Accuracy of point-of-care tests in the population at high risk of group A Streptococcus infection as defined by sore throat clinical scores
The primary population of interest in this review is patients with high clinical scores (Centor score of ≥ 3 points, FeverPAIN score of ≥ 4 points). We report test accuracy data in that population, and for other thresholds of clinical scoring tools, such as Centor or McIsaac, where these were reported by published studies. The majority of studies either did not place, or did not report placing, a restriction based on clinical score on their patient populations.
Eight studies presented results based on some restriction of the Centor or McIsaac score (thresholds of ≥ 1 or ≥ 2 points, with some results reported for subgroups defined by score), which informed test accuracy data for four tests. A summary of the evidence is provided in Table 12.
Study (first author and year of publication) | Care setting | Age group | Clinical tool score restriction | Strep A infections prevalence (%) | Reference type | N | TP (n) | FN (n) | FP (n) | TN (n) | Accuracy data (95% CI) |
---|---|---|---|---|---|---|---|---|---|---|---|
OSOM Strep A Strip – Sekisui Diagnostics | |||||||||||
Bura 201736 | Primary | Adults | Centor score of ≥ 2 points | 22.7 | Blood agar | 101 | 22 | 1 | 2 | 76 |
Sensitivity 0.96 (0.76 to 1.00) Specificity 0.97 (0.90 to 1.00) PPV 0.92 (0.72 to 0.99) NPV 0.99 (0.92 to 1.00) |
Llor 200944 | Primary | Adults | Centor score of ≥ 2 points | 24.8 | Blood agar | 222 | 52 | 3 | 14 | 153 |
Sensitivity 0.95 (0.85 to 0.99) Specificity 0.92 (0.86 to 0.95) PPV 0.79 (0.69 to 0.86) NPV 0.98 (0.94 to 0.99) |
Llor 201145 | Primary | Adults | Centor score of ≥ 1 point | 17.8 | Blood agar | 276 | 44 | 5 | 14 | 213 |
Sensitivity 0.90 (0.78 to 0.97) Specificity 0.94 (0.90 to 0.97) PPV 0.76 (0.65 to 0.84) NPV 0.98 (0.95 to 0.99) |
Llor 201145 | Primary | Adults | Centor score of > 2 points | 31.0 | Blood agar | 116 | 33 | 3 | 3 | 77 |
Sensitivity 0.92 (0.76 to 0.98) Specificity 0.96 (0.89 to 0.99) PPV 0.92 (0.78 to 0.97) NPV 0.96 (0.90 to 0.99) |
Llor 201145 | Primary | Adults | Centor score of = 1 or 2 points | 8.1 | Blood agar | 160 | 11 | 2 | 11 | 136 |
Sensitivity 0.85 (0.55 to 0.98) Specificity 0.93 (0.87 to 0.96) PPV 0.50 (0.35 to 0.65) NPV 0.99 (0.95 to 1.00) |
Alere TestPack Plus Cassette – Abbott Laboratories | |||||||||||
Dimatteo 200138 | Secondary | Adults | Centor score of ≥ 1 point | NR | Streptococcal selective agar | NR | NR | 22 | NR | 361 | NPV 0.94 (0.91 to 0.96)
Humair 200639 | Primary | Adults | Centor score of ≥ 2 points | 37.6 | Blood agar | 372 | 128 | 12 | 11 | 221 |
Sensitivity 0.91 (0.86 to 0.95) Specificity 0.95 (0.92 to 0.98) PPV 0.92 (0.87 to 0.95) NPV 0.95 (0.91 to 0.97) |
Humair 200639 | Primary | Adults | Centor score of = 2 points | 23.6 | Blood agar | 148 | 28 | 7 | 4 | 109 |
Sensitivity 0.80 (0.63 to 0.92) Specificity 0.96 (0.91 to 0.99) PPV 0.88 (0.73 to 0.95) NPV 0.94 (0.89 to 0.97) |
Humair 200639 | Primary | Adults | Centor score of > 2 points | 46.9 | Blood agar | 224 | 100 | 5 | 7 | 112 |
Sensitivity 0.95 (0.89 to 0.98) Specificity 0.94 (0.88 to 0.98) PPV 0.93 (0.87 to 0.97) NPV 0.96 (0.90 to 0.98) |
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 271 | 87 | 21 | 623 |
Sensitivity 0.76 (0.71 to 0.80) Specificity 0.97 (0.95 to 0.98) PPV 0.93 (0.89 to 0.95) NPV 0.88 (0.85 to 0.90) |
McIsaac 200446 | Primary | Children and adultsa | McIsaac score of ≥ 2 points | 29.0 | Blood agar | 787 | 189 | 39 | 5 | 554 |
Sensitivity 0.83 (0.77 to 0.88) Specificity 0.99 (0.98 to 1.00) PPV 0.97 (0.94 to 0.99) NPV 0.93 (0.91 to 0.95) |
Sofia Strep A FIA – Quidel | |||||||||||
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 305 | 53 | 31 | 613 |
Sensitivity 0.85 (0.81 to 0.89) Specificity 0.95 (0.93 to 0.97) PPV 0.91 (0.87 to 0.94) NPV 0.92 (0.90 to 0.94) |
cobas Liat Strep A Assay – Roche Diagnostics | |||||||||||
Wang 201724 | Primary | Children | Centor score of ≥ 1 point | 30.2 | NR | 427 | 126 | 3 | 20 | 278 |
Sensitivity 0.98 (0.93 to 0.99) Specificity 0.93 (0.90 to 0.96) PPV 0.86 (0.79 to 0.91) NPV 0.99 (0.97 to 1.00) |
Only two studies presented data for populations that matched the NICE scope, that is having either a Centor or McIsaac score of ≥ 3 points or a FeverPAIN score of ≥ 4 points. 39,45 We dichotomised the data from these studies into (1) patients meeting the scope based on throat score and (2) patients not meeting the scope.
Humair et al. 39 investigated the Alere TestPack Plus test in adults presenting in a primary care setting, with a Centor score of ≥ 2 points. In the Centor score = 2 points and Centor score of > 2 points subgroups, the sensitivities were 0.80 (95% CI 0.63 to 0.92) and 0.95 (95% CI 0.89 to 0.98), and the specificities were 0.96 (95% CI 0.91 to 0.99) and 0.94 (95% CI 0.88 to 0.98), respectively. The subgroups had 148 and 224 patients, respectively.
Llor et al. 45 investigated adult patients in a primary care setting with a Centor score of ≥ 1 point when assessing the performance of the OSOM Strep A Strip. In the population with a Centor score of 1 or 2 points, consisting of 160 patients, the OSOM Strep A Strip had a sensitivity of 0.85 (95% CI 0.55 to 0.98) and a specificity of 0.93 (95% CI 0.87 to 0.96). In the population with a Centor score of > 2 points, with 116 patients, the test had a sensitivity of 0.92 (95% CI 0.76 to 0.98) and a specificity of 0.96 (95% CI 0.89 to 0.99).
The remaining data for studies that restricted their populations by throat score are presented in the following sections.
OSOM Strep A Strip
Three studies compared the OSOM Strep A Strip in a restricted population. Bura et al. 36 and Llor et al. 44 both focused on patients with a Centor score of ≥ 2 points, and reported sensitivities of 0.96 (95% CI 0.76 to 1.00) and 0.95 (95% CI 0.85 to 0.99), and specificities of 0.97 (95% CI 0.90 to 1.00) and 0.92 (95% CI 0.86 to 0.95), respectively.
Meanwhile Llor et al. 45 considered patients with a Centor score of ≥ 1 point and reported a sensitivity of 0.90 (95% CI 0.78 to 0.97) and a specificity of 0.94 (95% CI 0.90 to 0.97).
Alere TestPack Plus Cassette
Four studies investigating the Alere TestPack Plus restricted their population by throat score. Dimatteo et al. 38 looked only at patients with a Centor score of ≥ 1 point but did not present complete 2 × 2 information and so sensitivity and specificity could not be calculated. Lacroix et al. 23 and McIsaac et al. 46 both examined test performance in patients with McIsaac scores of ≥ 2 points. The former estimated a sensitivity of 0.76 (95% CI 0.71 to 0.80) and a specificity of 0.97 (95% CI 0.95 to 0.98), and the latter estimated a sensitivity of 0.83 (95% CI 0.77 to 0.88) and a specificity of 0.99 (95% CI 0.98 to 1.00).
Humair et al. 39 also considered only patients with a Centor score of ≥ 2 points, and additionally presented results by the score subgroups described in Accuracy of point-of-care tests in the population at high risk of group A Streptococcus infection as defined by sore throat clinical scores. In the full population, a sensitivity of 0.91 (95% CI 0.86 to 0.95) and a specificity of 0.95 (95% CI 0.92 to 0.98) were reported.
Sofia Strep A fluorescent immunoassay
One study compared Sofia Strep A FIA with culture and restricted the population by throat score. Lacroix et al. 23 used Sofia Strep A FIA in patients with a McIsaac score of ≥ 2 points. In this population, the test had a sensitivity of 0.85 (95% CI 0.81 to 0.89) and a specificity of 0.95 (95% CI 0.93 to 0.97).
cobas Liat Strep A Assay
One study compared cobas Liat Strep A Assay with culture in patients restricted by throat score. Wang et al. 24 used the test in patients with a Centor score of ≥ 1 point. In this population, the test had a sensitivity of 0.98 (95% CI 0.93 to 0.99) and a specificity of 0.93 (95% CI 0.90 to 0.96).
Conclusion
The limited evidence suggests that some tests may have a higher sensitivity in patient populations that have a higher score according to a clinical tool, such as Centor.
Accuracy of point-of-care tests split by age group
We sought to identify whether or not there was evidence to support the hypothesis that the tests might have different performance characteristics depending on the age group in which the test is being used. No studies reported results for the age groups detailed in the NICE scope, so we classified them into child populations, adult populations or, where necessary, combined populations of children and adults. No studies presented results specific to a ≥ 60-year-old population, although patients in this category may have been included within an ‘adult’ population. Seven studies concentrated exclusively on adult populations, providing accuracy data for two tests. 36,38,39,41,43–45 Ten studies looked exclusively at children, providing data for nine tests. 20,23,24,42,48,49,51,54,56,57 Three studies considered both adults and children, and presented accuracy data for them separately, allowing a within-trial comparison to be made. 37,46,52 Each of these three studies investigated a different test.
Cohen et al. 37 examined both adults and children when investigating the accuracy of the Alere i Strep A test. In children, the test had a sensitivity of 0.96 (95% CI 0.91 to 0.99) and a specificity of 0.93 (95% CI 0.89 to 0.96). In adults, the sensitivity was 0.95 (95% CI 0.74 to 1.00) and the specificity was 0.97 (95% CI 0.92 to 0.99).
McIsaac et al. 46 examined the Alere TestPack Plus test in child and adult populations, presenting the results by age category. In children, the sensitivity was 0.86 (95% CI 0.79 to 0.91) and the specificity was 0.99 (95% CI 0.97 to 1.00). In adults, the sensitivity was 0.77 (95% CI 0.65 to 0.86) and the specificity was 0.99 (95% CI 0.97 to 1.00).
Stefaniuk et al. 52 used the QuikRead Go Strep A Kit test in both adults and children. In children, a sensitivity of 0.80 (95% CI 0.56 to 0.94) and a specificity of 0.91 (95% CI 0.72 to 0.99) were estimated. In adults, the test sensitivity was 1.00 (95% CI 0.85 to 1.00) and the specificity was 0.79 (95% CI 0.60 to 0.92).
Further age-specific results are presented below and in Table 13.
Study (first author and year of publication) | Care setting | Age group | Clinical tool score restriction | Strep A infections prevalence (%) | Reference type | N | TP (n) | FN (n) | FP (n) | TN (n) | Accuracy data (95% CI) |
---|---|---|---|---|---|---|---|---|---|---|---|
Clearview Exact Strep A Cassette – Abbott Laboratories | |||||||||||
Andersen 200356 | Secondary | Children | None | 15.0 | NR | 353 | 36 | 17 | 15 | 285 |
Sensitivity 0.68 (0.55 to 0.81) Specificity 0.95 (0.93 to 0.98) PPV 0.71 (0.58 to 0.83) NPV 0.94 (0.92 to 0.97) |
Clearview Exact Strep A Dipstick – Abbott Laboratories | |||||||||||
Andersen 200356 | Secondary | Children | None | 15.0 | NR | 353 | 36 | 17 | 15 | 285 |
Sensitivity 0.68 (0.55 to 0.81) Specificity 0.95 (0.93 to 0.98) PPV 0.71 (0.58 to 0.83) NPV 0.94 (0.92 to 0.97) |
BD Veritor Plus System – Becton Dickinson | |||||||||||
Berry 201820 | Secondary | Children | None | 19.5 | Blood agar | 215 | 32 | 10 | 11 | 162 |
Sensitivity 0.76 (0.60 to 0.87) Specificity 0.94 (0.89 to 0.97) PPV 0.74 (0.56 to 0.86) NPV 0.94 (0.89 to 0.97) |
OSOM Strep A Strip – Sekisui Diagnostics | |||||||||||
Bura 201736 | Primary | Adults | Centor score of ≥ 2 points | 22.7 | Blood agar | 101 | 22 | 1 | 2 | 76 |
Sensitivity 0.96 (0.76 to 1.00) Specificity 0.97 (0.90 to 1.00) PPV 0.92 (0.72 to 0.99) NPV 0.99 (0.92 to 1.00) |
Llor 200944 | Primary | Adults | Centor score of ≥ 2 points | 24.8 | Blood agar | 222 | 52 | 3 | 14 | 153 |
Sensitivity 0.95 (0.85 to 0.99) Specificity 0.92 (0.86 to 0.95) PPV 0.79 (0.69 to 0.86) NPV 0.98 (0.94 to 0.99) |
Llor 201145 | Primary | Adults | Centor score of ≥ 2 pointsa | 17.8 | Blood agar | 276 | 44 | 5 | 14 | 213 |
Sensitivity 0.90 (0.78 to 0.97) Specificity 0.94 (0.90 to 0.97) PPV 0.76 (0.65 to 0.84) NPV 0.98 (0.95 to 0.99) |
Rogo 201149 | Secondary | Children | None | 28.9 | Blood agar | 228 | 65 | 1 | 1 | 161 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.99 (0.96 to 1.00) PPV 0.98 (0.91 to 1.00) NPV 0.99 (0.96 to 1.00) |
Weinzierl 201854 | Secondary | Children | None | 38.1 | Blood agar | 160 | 54 | 7 | 9 | 90 |
Sensitivity 0.89 (0.77 to 0.95) Specificity 0.91 (0.83 to 0.96) PPV 0.86 (0.74 to 0.93) NPV 0.93 (0.85 to 0.97) |
QuikRead Go Strep A Kit – Orion Diagnostica | |||||||||||
Stefaniuk 201752 | Primary | Children | None | 46.5 | Blood agar | 43 | 16 | 4 | 2 | 21 |
Sensitivity 0.80 (0.56 to 0.94) Specificity 0.91 (0.72 to 0.99) PPV 0.89 (0.68 to 0.97) NPV 0.84 (0.68 to 0.93) |
Stefaniuk 201752 | Primary | Adults | None | 44.2 | Blood agar | 52 | 23 | 0 | 6 | 23 |
Sensitivity 1.00 (0.85 to 1.00) Specificity 0.79 (0.60 to 0.92) PPV 0.79 (0.65 to 0.89) NPV 1.00 (0.85 to 1.00) |
Alere TestPack Plus Cassette – Abbott Laboratories | |||||||||||
Dimatteo 200138 | Secondary | Adults | Centor score of ≥ 1 point | NR | Streptococcal selective agar | NR | NR | 22 | NR | 361 | NPV 0.94 (0.91 to 0.96) |
Humair 200639 | Primary | Adults | Centor score of ≥ 2 pointsa | 37.6 | Blood agar | 372 | 128 | 12 | 11 | 221 |
Sensitivity 0.91 (0.86 to 0.95) Specificity 0.95 (0.92 to 0.98) PPV 0.92 (0.87 to 0.95) NPV 0.95 (0.91 to 0.97) |
Johnson 200141 | Primary | Adults | None | NR | Blood agar | NR | 445 | NR | 77 | NR | PPV 0.85 (0.82 to 0.88) |
Kurtz 200042 | Secondary | Children | None | 31.1 | Blood agar | 257 | 64 | 16 | 13 | 164 |
Sensitivity 0.80 (0.71 to 0.89) Specificity 0.93 (0.89 to 0.97) PPV 0.83 (0.75 to 0.92) NPV 0.91 (0.87 to 0.95) |
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 271 | 87 | 21 | 623 |
Sensitivity 0.76 (0.71 to 0.80) Specificity 0.97 (0.95 to 0.98) PPV 0.93 (0.89 to 0.95) NPV 0.88 (0.85 to 0.90) |
McIsaac 200446 | Primary | Children | McIsaac score of ≥ 2 points | 34.1 | Blood agar | 454 | 133 | 22 | 3 | 296 |
Sensitivity 0.86 (0.79 to 0.91) Specificity 0.99 (0.97 to 1.00) PPV 0.98 (0.93 to 0.99) NPV 0.93 (0.90 to 0.95) |
McIsaac 200446 | Primary | Adults | McIsaac score of ≥ 2 points | 21.9 | Blood agar | 333 | 56 | 17 | 2 | 258 |
Sensitivity 0.77 (0.65 to 0.86) Specificity 0.99 (0.97 to 1.00) PPV 0.97 (0.88 to 0.99) NPV 0.94 (0.91 to 0.96) |
Penney 201648 | Secondary | Children | None | 40.1 | Streptococcal selective agar | 147 | 45 | 14 | 0 | 88 |
Sensitivity 0.76 (0.65 to 0.87) Specificity 1.00 (0.95 to 1.00) PPV 1.00 (0.90 to 1.00) NPV 0.86 (0.78 to 0.92) |
Santos 200351 | Secondary | Children | None | 30.6 | Blood agar | 49 | 11 | 4 | 2 | 32 |
Sensitivity 0.73 (0.45 to 0.91) Specificity 0.94 (0.79 to 0.99) PPV 0.85 (0.54 to 0.97) NPV 0.89 (0.73 to 0.96) |
bioNexia Strep A Dipstick – bioMérieux | |||||||||||
Pauchard 201357 | Secondary | Children | None | 36.8 | NR | 193 | 60 | 11 | 11 | 111 |
Sensitivity 0.85 (0.74 to 0.92) Specificity 0.91 (0.84 to 0.95) PPV 0.85 (0.76 to 0.93) NPV 0.91 (0.86 to 0.96) |
Sofia Strep A FIA – Quidel | |||||||||||
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 305 | 53 | 31 | 613 |
Sensitivity 0.85 (0.81 to 0.89) Specificity 0.95 (0.93 to 0.97) PPV 0.91 (0.87 to 0.94) NPV 0.92 (0.90 to 0.94) |
Alere i Strep A – Abbott Laboratories | |||||||||||
Berry 201820 | Secondary | Children | None | 19.5 | Blood agar | 215 | 42 | 0 | 15 | 158 |
Sensitivity 1.00 (0.90 to 1.00) Specificity 0.91 (0.86 to 0.95) PPV 0.74 (0.60 to 0.84) NPV 1.00 (0.97 to 1.00) |
Cohen 201537 | Secondary | Children | None | | Blood agar | 355 | 123 | 5 | 15 | 212 |
Sensitivity 0.96 (0.91 to 0.99) Specificity 0.93 (0.89 to 0.96) PPV 0.89 (0.83 to 0.93) NPV 0.98 (0.95 to 0.99) |
Cohen 201537 | Secondary | Adults | None | | Blood agar | 126 | 18 | 1 | 3 | 104 |
Sensitivity 0.95 (0.74 to 1.00) Specificity 0.97 (0.92 to 0.99) PPV 0.86 (0.66 to 0.95) NPV 0.99 (0.94 to 1.00) |
Weinzierl 201854 | Secondary | Children | None | 38.1 | Blood agar | 160 | 60 | 1 | 0 | 99 |
Sensitivity 0.98 (0.90 to 1.00) Specificity 1.00 (0.95 to 1.00) PPV 1.00 (0.93 to 1.00) NPV 0.99 (0.94 to 1.00) |
cobas Liat Strep A Assay – Roche Diagnostics | |||||||||||
Wang 201724 | Primary | Children | Centor score of ≥ 1 point | 30.2 | NR | 427 | 126 | 3 | 20 | 278 |
Sensitivity 0.98 (0.93 to 0.99) Specificity 0.93 (0.90 to 0.96) PPV 0.86 (0.79 to 0.91) NPV 0.99 (0.97 to 1.00) |
Clearview Exact Strep A Cassette and Clearview Exact Strep A Dipstick (Abbott Laboratories)
Only data for a child population were available for the Clearview Exact Strep A Cassette and Dipstick tests, provided by Andersen et al.,56 who did not distinguish between the cassette and dipstick varieties. Andersen et al. 56 reported a sensitivity of 0.68 (95% CI 0.55 to 0.81) and a specificity of 0.95 (95% CI 0.93 to 0.98).
BD Veritor Plus System (Becton Dickinson)
The only age-specific test accuracy data for the BD Veritor Plus System were in children, and published by Berry et al. 20 The sensitivity of the test was 0.76 (95% CI 0.60 to 0.87) and the specificity was 0.94 (95% CI 0.89 to 0.97).
OSOM Strep A Strip (Sekisui Diagnostics)
Three studies presented data for the OSOM Strep A Strip in adult patients. 36,44,45 Bura et al. ,36 Llor et al. 44 and Llor et al. 45 reported respective sensitivities of 0.96 (95% CI 0.76 to 1.00), 0.95 (95% CI 0.85 to 0.99) and 0.90 (95% CI 0.78 to 0.97), and specificities of 0.97 (95% CI 0.90 to 1.00), 0.92 (95% CI 0.86 to 0.95) and 0.94 (95% CI 0.90 to 0.97). Rogo et al. 49 and Weinzierl et al. 54 both studied children only, and estimated sensitivities of 0.98 (95% CI 0.91 to 1.00) and 0.89 (95% CI 0.77 to 0.95), and specificities of 0.99 (95% CI 0.96 to 1.00) and 0.91 (95% CI 0.83 to 0.96), respectively.
QuikRead Go Strep A Kit (Orion Diagnostica)
Stefaniuk et al. 52 examined both adults and children. In children, a sensitivity of 0.80 (95% CI 0.56 to 0.94) and a specificity of 0.91 (95% CI 0.72 to 0.99) were estimated. In adults, the test sensitivity was 1.00 (95% CI 0.85 to 1.00) and the specificity was 0.79 (95% CI 0.60 to 0.92).
Alere TestPack Plus (Abbott Laboratories)
Three studies used the Alere TestPack Plus test in adult populations. 38,39,41 Dimatteo et al. 38 and Johnson et al. 41 did not provide complete results, and sensitivity and specificity could not be calculated. Humair et al. 39 did provide sufficient information and the test’s sensitivity was 0.91 (95% CI 0.86 to 0.95). The specificity was 0.95 (95% CI 0.92 to 0.98).
Four studies used the test in child populations only. 23,42,48,51 The lowest sensitivity was reported by Santos et al. 51 (0.73, 95% CI 0.45 to 0.91) and the highest by Kurtz et al. 42 (0.80, 95% CI 0.71 to 0.89). The specificities ranged from 0.93 (95% CI 0.89 to 0.97), as reported by Kurtz et al.,42 to 1.00 (95% CI 0.95 to 1.00), as reported by Penney et al. 48 McIsaac et al. 46 used the test in both age groups. In children, the sensitivity was 0.86 (95% CI 0.79 to 0.91) and the specificity was 0.99 (95% CI 0.97 to 1.00). In adults, the sensitivity was 0.77 (95% CI 0.65 to 0.86), with a specificity of 0.99 (95% CI 0.97 to 1.00).
bioNexia Strep A Dipstick (bioMérieux)
Only data for children were available for bioMérieux’s bioNexia Strep A Dipstick. Pauchard et al. 57 estimated a sensitivity of 0.85 (95% CI 0.74 to 0.92) and a specificity of 0.91 (95% CI 0.84 to 0.95).
Sofia Strep A fluorescent immunoassay (Quidel)
One study compared Sofia Strep A FIA with culture in children, with no adult data available. Lacroix et al. 23 reported a sensitivity of 0.85 (95% CI 0.81 to 0.89) and a specificity of 0.95 (95% CI 0.93 to 0.97).
Alere i Strep A (Abbott Laboratories)
Two studies presented data for the Alere i Strep A test for child populations. 20,54 Berry et al. 20 and Weinzierl et al. 54 reported respective sensitivities of 1.00 (95% CI 0.90 to 1.00) and 0.98 (95% CI 0.90 to 1.00), and specificities of 0.91 (95% CI 0.86 to 0.95) and 1.00 (95% CI 0.95 to 1.00).
Cohen et al. 37 examined both adults and children, and presented results by age group. In children, the test had a sensitivity of 0.96 (95% CI 0.91 to 0.99) and a specificity of 0.93 (95% CI 0.89 to 0.96). In adults, the sensitivity was 0.95 (95% CI 0.74 to 1.00) and the specificity was 0.97 (95% CI 0.92 to 0.99).
cobas Liat Strep A Assay (Roche Diagnostics)
Only data for a child population were available for the cobas Liat Strep A Assay. Wang et al. 24 reported that the test had a sensitivity of 0.98 (95% CI 0.93 to 0.99) and a specificity of 0.93 (95% CI 0.90 to 0.96).
Meta-analyses were carried out to compare the accuracy estimates of the child and adult populations for both the OSOM Strep A Strip and the Alere TestPack Plus tests, as these were the only tests with sufficient data.
For the TestPack Plus, in children, the sensitivity was estimated as 0.80 (95% CI 0.74 to 0.84) and the specificity as 0.98 (95% CI 0.95 to 0.99). In adults, the sensitivity was estimated as 0.87 (95% CI 0.82 to 0.91) and the specificity as 0.98 (95% CI 0.96 to 0.99).
For OSOM, the dichotomisation of studies into the two age categories was identical to the dichotomisation for primary and secondary care settings. Univariate models fitted to the children/secondary care data estimated a sensitivity of 0.95 (95% CI 0.90 to 0.98) and a specificity of 0.97 (95% CI 0.95 to 0.99). Models fitted to the adult/primary care data estimated a sensitivity of 0.93 (95% CI 0.88 to 0.97) and a specificity of 0.94 (95% CI 0.91 to 0.97).
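The univariate models referred to above are not reproduced in code in this report. Purely as an illustration of the general approach, the sketch below pools logit-transformed sensitivities with a DerSimonian-Laird random-effects estimator, a common choice that is not necessarily the one the authors used, taking the TP/FN counts of the two OSOM child/secondary care studies in Table 13 as example input; the pooled values it prints will therefore not match the figures quoted above exactly.

```python
import math

def pool_sensitivity(studies, z=1.96):
    """Random-effects (DerSimonian-Laird) pooling of logit-transformed sensitivities.

    studies: list of (TP, FN) counts, one pair per study.
    Returns (pooled sensitivity, lower 95% limit, upper 95% limit).
    """
    y, v = [], []
    for tp, fn in studies:
        tp_c, fn_c = tp + 0.5, fn + 0.5          # continuity correction for zero cells
        y.append(math.log(tp_c / fn_c))           # logit(sensitivity)
        v.append(1.0 / tp_c + 1.0 / fn_c)         # approximate variance of the logit
    w = [1.0 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(studies) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_star = [1.0 / (vi + tau2) for vi in v]      # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    inv_logit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return inv_logit(y_re), inv_logit(y_re - z * se), inv_logit(y_re + z * se)

# Illustrative TP/FN counts for the OSOM child/secondary care studies in Table 13
print(pool_sensitivity([(65, 1), (54, 7)]))
```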
Conclusion
It is unclear whether or not test accuracy varies based on the age of the population in which the test is being used. Further evidence is required.
Accuracy of point-of-care tests split by primary/secondary care setting
We sought to identify whether or not there was evidence to support the hypothesis that the tests might have different performance characteristics depending on the setting in which the test is being used. No studies provided a breakdown of results comparing test accuracy between primary and secondary care settings. Fourteen studies considered patients in a secondary care setting, providing data for nine tests. Ten studies looked at patients in primary care settings, covering four tests. A summary of care-setting-related data can be found in Table 14.
Study (first author and year of publication) | Care setting | Age group | Clinical tool score restriction | Strep A infections prevalence (%) | Reference type | N | TP (n) | FN (n) | FP (n) | TN (n) | Accuracy data (95% CI) |
---|---|---|---|---|---|---|---|---|---|---|---|
Clearview Exact Strep A Cassette – Abbott Laboratories | |||||||||||
Andersen 200356 | Secondary | Children | None | 15.0 | NR | 353 | 36 | 17 | 15 | 285 |
Sensitivity 0.68 (0.55 to 0.81) Specificity 0.95 (0.93 to 0.98) PPV 0.71 (0.58 to 0.83) NPV 0.94 (0.92 to 0.97) |
Clearview Exact Strep A Dipstick – Abbott Laboratories | |||||||||||
Andersen 200356 | Secondary | Children | None | 15.0 | NR | 353 | 36 | 17 | 15 | 285 |
Sensitivity 0.68 (0.55 to 0.81) Specificity 0.95 (0.93 to 0.98) PPV 0.71 (0.58 to 0.83) NPV 0.94 (0.92 to 0.97) |
BD Veritor Plus System – Becton Dickinson | |||||||||||
Azrad 201934 | Secondary | NR | None | 25.0 | Streptococcal selective agar | 100 | 20 | 5 | 16 | 59 |
Sensitivity 0.80 (0.59 to 0.92) Specificity 0.79 (0.67 to 0.87) PPV 0.56 (0.38 to 0.72) NPV 0.92 (0.82 to 0.97) |
Berry 201820 | Secondary | Children | None | 19.5 | Blood agar | 215 | 32 | 10 | 11 | 162 |
Sensitivity 0.76 (0.60 to 0.87) Specificity 0.94 (0.89 to 0.97) PPV 0.74 (0.56 to 0.86) NPV 0.94 (0.89 to 0.97) |
NADAL Strep A Strip – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Cassette – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Plus Cassette – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Plus Strip – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
NADAL Strep A Scan – nal von minden GmbH | |||||||||||
nal von minden GmbH (MFR)a | Secondary | Children and adults | None | 34.4 | Blood agar | 244 | 82 | 2 | 4 | 156 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.98 (0.93 to 0.99) PPV 0.95 (0.88 to 0.99) NPV 0.99 (0.95 to 1.00) |
OSOM Strep A Strip – Sekisui Diagnostics | |||||||||||
Bura 201736 | Primary | Adults | Centor score of ≥ 2 points | 22.7 | Blood agar | 101 | 22 | 1 | 2 | 76 |
Sensitivity 0.96 (0.76 to 1.00) Specificity 0.97 (0.90 to 1.00) PPV 0.92 (0.72 to 0.99) NPV 0.99 (0.92 to 1.00) |
Llor 200944 | Primary | Adults | Centor score of ≥ 2 points | 24.8 | Blood agar | 222 | 52 | 3 | 14 | 153 |
Sensitivity 0.95 (0.85 to 0.99) Specificity 0.92 (0.86 to 0.95) PPV 0.79 (0.69 to 0.86) NPV 0.98 (0.94 to 0.99) |
Llor 201145 | Primary | Adults | Centor score of ≥ 2 pointsb | 17.8 | Blood agar | 276 | 44 | 5 | 14 | 213 |
Sensitivity 0.90 (0.78 to 0.97) Specificity 0.94 (0.90 to 0.97) PPV 0.76 (0.65 to 0.84) NPV 0.98 (0.95 to 0.99) |
Rogo 201149 | Secondary | Children | None | 28.9 | Blood agar | 228 | 65 | 1 | 1 | 161 |
Sensitivity 0.98 (0.91 to 1.00) Specificity 0.99 (0.96 to 1.00) PPV 0.98 (0.91 to 1.00) NPV 0.99 (0.96 to 1.00) |
Weinzierl 201854 | Secondary | Children | None | 38.1 | Blood agar | 160 | 54 | 7 | 9 | 90 |
Sensitivity 0.89 (0.77 to 0.95) Specificity 0.91 (0.83 to 0.96) PPV 0.86 (0.74 to 0.93) NPV 0.93 (0.85 to 0.97) |
QuikRead Go Strep A Kit – Orion Diagnostica | |||||||||||
Azrad 201934 | Secondary | NR | None | 25.0 | Streptococcal selective agar | 100 | 20 | 5 | 20 | 55 |
Sensitivity 0.80 (0.59 to 0.92) Specificity 0.73 (0.62 to 0.83) PPV 0.50 (0.34 to 0.66) NPV 0.92 (0.81 to 0.97) |
Stefaniuk 201752 | Primary | Children and adultsb | None | 45.3 | Blood agar | 95 | 39 | 4 | 8 | 44 |
Sensitivity 0.91 (0.78 to 0.97) Specificity 0.85 (0.72 to 0.93) PPV 0.83 (0.72 to 0.90) NPV 0.92 (0.81 to 0.97) |
Alere TestPack Plus Cassette – Abbott Laboratories | |||||||||||
Dimatteo 200138 | Secondary | Adults | Centor score of ≥ 1 point | NR | Streptococcal selective agar | NR | NR | 22 | NR | 361 | NPV 0.94 (0.91 to 0.96) |
Humair 200639 | Primary | Adults | Centor score of ≥ 2 pointsb | 37.6 | Blood agar | 372 | 128 | 12 | 11 | 221 |
Sensitivity 0.91 (0.86 to 0.95) Specificity 0.95 (0.92 to 0.98) PPV 0.92 (0.87 to 0.95) NPV 0.95 (0.91 to 0.97) |
Johansson 200340 | Primary | Children and adults | None | 31.4 | NR | 144 | 46 | 7 | 4 | 87 |
Sensitivity 0.87 (0.74 to 0.94) Specificity 0.96 (0.89 to 0.99) PPV 0.92 (0.80 to 0.97) NPV 0.93 (0.85 to 0.97) |
Johnson 200141 | Primary | Adults | None | NR | Blood agar | NR | 445 | NR | 77 | NR | PPV 0.85 (0.82 to 0.88) |
Kurtz 200042 | Secondary | Children | None | 31.1 | Blood agar | 257 | 64 | 16 | 13 | 164 |
Sensitivity 0.80 (0.71 to 0.89) Specificity 0.93 (0.89 to 0.97) PPV 0.83 (0.75 to 0.92) NPV 0.91 (0.87 to 0.95) |
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 271 | 87 | 21 | 623 |
Sensitivity 0.76 (0.71 to 0.80) Specificity 0.97 (0.95 to 0.98) PPV 0.93 (0.89 to 0.95) NPV 0.88 (0.85 to 0.90) |
Lindbæk 200443 | Primary | Children and adults | None | 35.9 | Streptococcal selective agar | 306 | 106 | 4 | 27 | 169 |
Sensitivity 0.96 (0.91 to 0.99) Specificity 0.86 (0.80 to 0.91) PPV 0.80 (0.72 to 0.86) NPV 0.98 (0.94 to 0.99) |
McIsaac 200446 | Primary | Children and adultsb | McIsaac score of ≥ 2 points | 29.0 | Blood agar | 787 | 189 | 39 | 5 | 554 |
Sensitivity 0.83 (0.77 to 0.88) Specificity 0.99 (0.98 to 1.00) PPV 0.97 (0.94 to 0.99) NPV 0.93 (0.91 to 0.95) |
Penney 201648 | Secondary | Children | None | 40.1 | Streptococcal selective agar | 147 | 45 | 14 | 0 | 88 |
Sensitivity 0.76 (0.65 to 0.87) Specificity 1.00 (0.95 to 1.00) PPV 1.00 (0.90 to 1.00) NPV 0.86 (0.78 to 0.92) |
Rosenberg 200250 | Secondary | Children and adults | None | 25.4 | Blood agar | 126 | 24 | 8 | 1 | 93 |
Sensitivity 0.75 (0.56 to 0.88) Specificity 0.99 (0.93 to 1.00) PPV 0.96 (0.78 to 1.00) NPV 0.92 (0.85 to 0.96) |
Santos 200351 | Secondary | Children | None | 30.6 | Blood agar | 49 | 11 | 4 | 2 | 32 |
Sensitivity 0.73 (0.45 to 0.91) Specificity 0.94 (0.79 to 0.99) PPV 0.85 (0.54 to 0.97) NPV 0.89 (0.73 to 0.96) |
Valverde 201858 | Secondary | Children and adults | None | 40.0 | Blood agar | 580 | 181 | 16 | 27 | 356 |
Sensitivity 0.92 (0.87 to 0.95) Specificity 0.93 (0.90 to 0.95) PPV 0.87 (0.82 to 0.91) NPV 0.96 (0.93 to 0.97) |
bioNexia Strep A Dipstick – bioMérieux | |||||||||||
Pauchard 201357 | Secondary | Children | None | 36.8 | NR | 193 | 60 | 11 | 11 | 111 |
Sensitivity 0.85 (0.74 to 0.92) Specificity 0.91 (0.84 to 0.95) PPV 0.85 (0.76 to 0.93) NPV 0.91 (0.86 to 0.96) |
Sofia Strep A FIA – Quidel | |||||||||||
Lacroix 201823 | Secondary | Children | McIsaac score of ≥ 2 points | 35.7 | Blood agar | 1002 | 305 | 53 | 31 | 613 |
Sensitivity 0.85 (0.81 to 0.89) Specificity 0.95 (0.93 to 0.97) PPV 0.91 (0.87 to 0.94) NPV 0.92 (0.90 to 0.94) |
Alere i Strep A – Abbott Laboratories | |||||||||||
Berry 201820 | Secondary | Children | None | 19.5 | Blood agar | 215 | 42 | 0 | 15 | 158 |
Sensitivity 1.00 (0.90 to 1.00) Specificity 0.91 (0.86 to 0.95) PPV 0.74 (0.60 to 0.84) NPV 1.00 (0.97 to 1.00) |
Cohen 201537 | Secondary | Children and adultsb | None | 30.3 | Blood agar | 481 | 141 | 6 | 18 | 316 |
Sensitivity 0.96 (0.91 to 0.98) Specificity 0.95 (0.91 to 0.97) PPV 0.89 (0.82 to 0.93) NPV 0.98 (0.96 to 0.99) |
Weinzierl 201854 | Secondary | Children | None | 38.1 | Blood agar | 160 | 60 | 1 | 0 | 99 |
Sensitivity 0.98 (0.90 to 1.00) Specificity 1.00 (0.95 to 1.00) PPV 1.00 (0.93 to 1.00) NPV 0.99 (0.94 to 1.00) |
cobas Liat Strep A Assay – Roche Diagnostics | |||||||||||
Wang 201724 | Primary | Children | Centor score of ≥ 1 point | 30.2 | NR | 427 | 126 | 3 | 20 | 278 |
Sensitivity 0.98 (0.93 to 0.99) Specificity 0.93 (0.90 to 0.96) PPV 0.86 (0.79 to 0.91) NPV 0.99 (0.97 to 1.00) |
Clearview Exact Strep A Cassette and Clearview Exact Strep A Dipstick (Abbott Laboratories)
Only data in a hospital setting were available for the Clearview Exact Strep A Cassette and Dipstick tests, provided by Andersen et al.,56 who did not distinguish between the cassette and the dipstick varieties. Andersen et al. 56 reported a sensitivity of 0.68 (95% CI 0.55 to 0.81) and a specificity of 0.95 (95% CI 0.93 to 0.98).
BD Veritor Plus System (Becton Dickinson)
Azrad et al. 34 and Berry et al. 20 both presented results for the BD Veritor Plus System in a hospital setting. The sensitivities of the test were 0.80 (95% CI 0.59 to 0.92) and 0.76 (95% CI 0.60 to 0.87), and the specificities were 0.79 (95% CI 0.67 to 0.87) and 0.94 (95% CI 0.89 to 0.97), for Azrad et al. 34 and Berry et al. ,20 respectively.
NADAL Strep A Strip, NADAL Strep A Cassette, NADAL Strep A Plus Cassette, NADAL Strep A Plus Strip and NADAL Strep A Scan (nal von minden GmbH)
Only evidence from a secondary care setting was available for the NADAL tests, and this evidence did not distinguish between the different varieties. The manufacturer reported a sensitivity of 0.98 (95% CI 0.91 to 1.00) and a specificity of 0.98 (95% CI 0.93 to 0.99).
Strep A Rapid Test Cassette (Biopanda Reagents)
The data provided by Biopanda Reagents for the Strep A Rapid Test were reportedly from a primary care setting. The sensitivity was 0.95 (95% CI 0.89 to 0.98) and the specificity was 0.98 (95% CI 0.96 to 0.99).
OSOM Strep A Strip (Sekisui Diagnostics)
Three studies presented data for the OSOM Strep A Strip in a primary care setting. 36,44,45 Bura et al. ,36 Llor et al. 44 and Llor et al. 45 reported respective sensitivities of 0.96 (95% CI 0.76 to 1.00), 0.95 (95% CI 0.85 to 0.99) and 0.90 (95% CI 0.78 to 0.97), and specificities of 0.97 (95% CI 0.90 to 1.00), 0.92 (95% CI 0.86 to 0.95) and 0.94 (95% CI 0.90 to 0.97). Rogo et al. 49 and Weinzierl et al. 54 both used the test in a hospital setting, and estimated sensitivities of 0.98 (95% CI 0.91 to 1.00) and 0.89 (95% CI 0.77 to 0.95), and specificities of 0.99 (95% CI 0.96 to 1.00) and 0.91 (95% CI 0.83 to 0.96), respectively. 49,54
QuikRead Go Strep A Kit (Orion Diagnostica)
Azrad et al. 34 compared the performance of the QuikRead Go Strep A Kit with culture in a hospital setting, and reported a sensitivity of 0.80 (95% CI 0.59 to 0.92) and a specificity of 0.73 (95% CI 0.62 to 0.83). Stefaniuk et al. 52 looked at a primary care setting, and reported a sensitivity of 0.91 (95% CI 0.78 to 0.97) and a specificity of 0.85 (95% CI 0.72 to 0.93). The data provided by Orion Diagnostica were also reported as being from a primary care setting, and estimated a sensitivity of 0.83 (95% CI 0.73 to 0.90) and a specificity of 0.97 (95% CI 0.93 to 0.99).
Alere TestPack Plus Cassette (Abbott Laboratories)
There were seven published studies that compared the performance of the Alere TestPack Plus Cassette with culture in a secondary care setting. One study did not report sufficient data to complete a 2 × 2 table. 38 Rosenberg et al. 50 and Valverde et al. 58 both examined a combination of children and adults, estimating sensitivities of 0.75 (95% CI 0.56 to 0.88) and 0.92 (95% CI 0.87 to 0.95), and specificities of 0.99 (95% CI 0.93 to 1.00) and 0.93 (95% CI 0.90 to 0.95), respectively. The four remaining studies included only children. 23,42,48,51 The sensitivities ranged from 0.73 (95% CI 0.45 to 0.91)51 to 0.80 (95% CI 0.71 to 0.89)42 and the specificities ranged from 0.93 (95% CI 0.89 to 0.97)42 to 1.00 (95% CI 0.95 to 1.00). 48
Five studies reported the accuracy of the Alere TestPack Plus Cassette in a primary care setting. One did not present complete 2 × 2 data. 41 One reported on an adult population: Humair et al. 39 estimated a sensitivity of 0.91 (95% CI 0.86 to 0.95) and a specificity of 0.95 (95% CI 0.92 to 0.98). Lindbæk et al.,43 Johansson et al. 40 and McIsaac et al. 46 combined adults and children, and reported respective sensitivities of 0.96 (95% CI 0.91 to 0.99), 0.87 (95% CI 0.74 to 0.94) and 0.83 (95% CI 0.77 to 0.88), alongside specificities of 0.86 (95% CI 0.80 to 0.91), 0.96 (95% CI 0.89 to 0.99) and 0.99 (95% CI 0.98 to 1.00).
bioNexia Strep A Dipstick (bioMérieux)
Only data from a hospital setting were available for bioMérieux’s bioNexia Strep A Dipstick. Pauchard et al. 57 estimated a sensitivity of 0.85 (95% CI 0.74 to 0.92) and a specificity of 0.91 (95% CI 0.84 to 0.95).
Sofia Strep A fluorescent immunoassay (Quidel)
One study compared Sofia Strep A FIA with culture in a hospital setting, with no GP data available. Lacroix et al. 23 reported a sensitivity of 0.85 (95% CI 0.81 to 0.89) and a specificity of 0.95 (95% CI 0.93 to 0.97).
Alere i Strep A (Abbott Laboratories)
Three studies compared the Alere i Strep A test with culture in a hospital setting. Berry et al. 20 and Weinzierl et al. 54 looked only at children, and estimated respective sensitivities of 1.00 (95% CI 0.90 to 1.00) and 0.98 (95% CI 0.90 to 1.00), and specificities of 0.91 (95% CI 0.86 to 0.95) and 1.00 (95% CI 0.95 to 1.00). Cohen et al. 37 examined a combined population of children and adults, estimating a sensitivity of 0.96 (95% CI 0.91 to 0.98) and a specificity of 0.95 (95% CI 0.91 to 0.97).
cobas Liat Strep A Assay (Roche Diagnostics)
Only data from a GP setting were available for the cobas Liat Strep A Assay. Wang et al. 24 reported that the test had a sensitivity of 0.98 (95% CI 0.93 to 0.99) and a specificity of 0.93 (95% CI 0.90 to 0.96).
Meta-analyses were conducted to indirectly compare the accuracy estimates between primary and secondary care settings for both the OSOM Strep A Strip and the Alere TestPack Plus tests, as these were the only tests with sufficient data.
Fitted to data for the TestPack Plus test, univariate models estimated a sensitivity of 0.90 (95% CI 0.83 to 0.96) and a specificity of 0.95 (95% CI 0.88 to 0.99) in a primary setting, compared with a sensitivity of 0.80 (95% CI 0.71 to 0.88) and a specificity of 0.97 (95% CI 0.94 to 0.99) in a secondary care setting.
The OSOM test also had sufficient studies to conduct univariate meta-analyses. However, the dichotomisation of studies into primary and secondary care settings was identical to the age dichotomisation. Univariate models fitted to the child/secondary care data estimated a sensitivity of 0.95 (95% CI 0.90 to 0.98) and a specificity of 0.97 (95% CI 0.95 to 0.99). Models fitted to the adult/primary care data estimated a sensitivity of 0.93 (95% CI 0.88 to 0.97) and a specificity of 0.94 (95% CI 0.91 to 0.97).
Conclusion
Test performance may vary depending on the care setting in which the test is being used. Further evidence is required.
Estimates of test accuracy for cost-effectiveness modelling
Having established that a number of factors may influence test accuracy, which is consistent with the findings of Leeflang et al.,72 we sought to provide estimates for each test to be used in the cost-effectiveness modelling. Ideally, estimates would have come from a meta-analysis of several studies specific to the scope population, by age group and setting. However, the evidence base was not sufficient to do this. In total, 21 tests × 3 age groups × 3 settings = 189 pairs of sensitivity and specificity estimates were required. However, no data were available specific to the elderly or the pharmacy setting, or for three of the tests, meaning that there were just 18 × 2 × 2 = 72 potential pairs of estimates. Each estimate came from a combination of five studies or fewer. Given the observed variation in test accuracy between studies and the scant evidence base, there is a substantial risk that the final estimates are not representative of the tests’ true accuracy. There is also a risk that a test with a larger evidence base published in peer-reviewed journal articles may be disadvantaged in comparison with a test for which there is only unpublished manufacturer information at high risk of bias.
We prioritised information from published studies [i.e. not data from manufacturer submissions (provided directly to NICE in response to a request for information) or FDA documents] in which data were available for patients restricted by throat score as per the scope. This provided accuracy data for one pair of estimates, and relaxing the age group restriction provided another pair. It was necessary to relax the throat score restriction to obtain further estimates. An additional 13 pairs of estimates were obtained from studies that matched the age and care setting of the target population. One further pair was obtained by applying estimates from a mixed-age population to an adult population. Relaxing the care setting and age restrictions allowed estimation of a further 24 pairs of test accuracy estimates for child and adult populations. Where there were multiple options for relaxing either the age group or the setting restriction, factors such as sample size and number of studies were also considered. Studies in manufacturer responses to NICE and in FDA documents were included only if no other evidence was available for a specific test. Where these data are used, we consider the analysis to be at extremely high risk of bias and we do not consider them sufficient to underpin any clinical decisions. Data from neither of these sources matched a subgroup of interest or were restricted by throat score, but relaxing the age and care setting restrictions provided estimates for a further 32 pairs.
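As a schematic sketch of the prioritisation just described (not code used in the assessment), the snippet below ranks hypothetical evidence records so that published studies are preferred to manufacturer or FDA submissions, score-restricted populations to unrestricted ones, and matching age group and care setting to relaxed ones, with sample size as a final tie-breaker. The record structure and example entries are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    source: str            # e.g. a study label or 'MFR response' (hypothetical field)
    published: bool        # peer-reviewed publication rather than MFR/FDA submission
    score_restricted: bool # population restricted by throat score as per the scope
    age_match: bool        # matches the target age group
    setting_match: bool    # matches the target care setting
    n: int                 # sample size

def preference(record):
    # Tuples compare element by element, so earlier fields dominate later ones
    return (record.published, record.score_restricted,
            record.age_match, record.setting_match, record.n)

def select_best(records):
    """Pick the record closest to the scope population, if any exist."""
    return max(records, key=preference) if records else None

# Hypothetical example: a published but setting-relaxed study beats an MFR submission
records = [
    EvidenceRecord("MFR response", False, False, True, True, 244),
    EvidenceRecord("Published study", True, False, True, False, 222),
]
print(select_best(records).source)  # -> 'Published study'
```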
A summary of the studies providing evidence for each estimate can be found in Table 15.
Test | Primary care: children | Primary care: adults | Secondary care: children | Secondary care: adults |
---|---|---|---|---|
Clearview Exact Strep A cassette | One abstract (Andersen et al.,56 n = 353) – wrong setting, right age, wrong score restriction | One abstract (Andersen et al.,56 n = 353) – wrong setting, wrong age, wrong score restriction | One abstract (Andersen et al.,56 n = 353) – right setting, right age, wrong score restriction | One abstract (Andersen et al.,56 n = 353) – right setting, wrong age, wrong score restriction |
Clearview Exact Strep A dipstick – test strip | One abstract (Andersen et al.,56 n = 353) – wrong setting, right age, wrong score restriction | One abstract (Andersen et al.,56 n = 353) – wrong setting, wrong age, wrong score restriction | One abstract (Andersen et al.,56 n = 353) – right setting, right age, wrong score restriction | One abstract (Andersen et al.,56 n = 353) – right setting, wrong age, wrong score restriction |
BD Veritor Plus system group A Strep Assay – cassette | One study (Berry et al.,20 n = 215) – wrong setting, right age, wrong score restriction | Two studies (Berry et al.,20 n = 215; Azrad et al.,34 n = 100) – wrong setting, wrong age, wrong score restriction | One study (Berry et al.,20 n = 215) – right setting, right age, wrong score restriction | Two studies (Berry et al.,20 n = 215; Azrad et al.,34 n = 100) – right setting, wrong age, wrong score restriction |
Strep A Rapid Test – cassette | One MFR response (Biopanda Reagents, n = 526) – right setting, right age, wrong score restriction | One MFR response (Biopanda Reagents, n = 526) – right setting, wrong age, wrong score restriction | One MFR response (Biopanda Reagents, n = 526) – wrong setting, right age, wrong score restriction | One MFR response (Biopanda Reagents, n = 526) – wrong setting, wrong age, wrong score restriction |
Strep A Rapid Test – test strip | No data | No data | No data | No data |
NADAL Strep A – test strip | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – right setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – right setting, wrong age, wrong score restriction |
NADAL Strep A – cassette | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction |
NADAL Strep A plus – cassette | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction |
NADAL Strep A plus – test strip | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction |
NADAL Strep A scan test – cassette | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction | One MFR response (nal von minden GmbH, n = 244) – wrong setting, wrong age, wrong score restriction |
OSOM Strep A test – test strip | One study (Llor et al.,45 n = 116) – right setting, wrong age, right score restriction | One study (Llor et al.,45 n = 116) – right setting, right age, right score restriction | Two studies (Rogo et al.,49 n = 228; Weinzierl et al.,54 n = 160) – right setting, right age, wrong score restriction | Five studies (Bura et al.,36 n = 101; Llor et al.,44 n = 222; Llor et al.,45 n = 276, Rogo et al.,49 n = 228; Weinzierl et al.,54 n = 160) – wrong setting, wrong age, wrong score restriction |
QuikRead Go Strep A test kit | One study (Stefaniuk et al.,52 n = 43) – right setting, right age, wrong score restriction | One study (Stefaniuk et al.,52 n = 52) – right setting, right age, wrong score restriction | Two studies (Azrad et al.,34 n = 100; Stefaniuk et al.,52 n = 95) – wrong setting, wrong age, wrong score restriction | Two studies (Azrad et al.,34 n = 100; Stefaniuk et al.,52 n = 95) – wrong setting, wrong age, wrong score restriction |
Alere TestPack +Plus Strep A – cassette | One study (McIsaac et al.,46 n = 494) – right setting, right age, wrong score restriction | One study (Humair et al.,39 n = 224) – right setting, right age, right score restriction | Four studies (Kurtz et al.,42 n = 257; Lacroix et al.,23 n = 1002; Penney et al.,48 n = 147; Santos et al.,51 n = 49) – right setting, right age, wrong score restriction | One study and one abstract (Rosenberg et al.,50 n = 126; Valverde et al.,58 n = 580) – right setting, wrong age, wrong score restriction |
bioNexia Strep A plus – cassette | No data | No data | No data | No data |
bioNexia Strep A dipstick – test strip | One abstract (Pauchard et al.,57 n = 193) – wrong setting, right age, wrong score restriction | One abstract (Pauchard et al.,57 n = 193) – wrong setting, wrong age, wrong score restriction | One abstract (Pauchard et al.,57 n = 193) – right setting, right age, wrong score restriction | One abstract (Pauchard et al.,57 n = 193) – wrong setting, wrong age, wrong score restriction |
bioNexia Strep A – cassette | No data | No data | No data | No data |
Sofia Strep A FIA | One study (Lacroix et al.,23 n = 1002) – wrong setting, right age, wrong score restriction | One study (Lacroix et al.,23 n = 1002) – wrong setting, wrong age, wrong score restriction | One study (Lacroix et al.,23 n = 1002) – right setting, right age, wrong score restriction | One study (Lacroix et al.,23 n = 1002) – right setting, wrong age, wrong score restriction |
Alere i Strep A | Three studies (Berry et al.,20 n = 215; Cohen et al.,37 n = 355; Weinzierl et al.,54 n = 160) – wrong setting, right age, wrong score restriction | One study (Cohen et al.,37 n = 126) – wrong setting, right age, wrong score restriction | Three studies (Berry et al.,20 n = 215; Cohen et al.,37 n = 355; Weinzierl et al.,54 n = 160) – right setting, right age, wrong score restriction | One study (Cohen et al.,37 n = 126) – right setting, right age, wrong score restriction |
Alere i Strep A 2 | One FDA study (Alere, n = 981) – wrong setting, wrong age, wrong score restriction | One FDA study (Alere, n = 981) – wrong setting, wrong age, wrong score restriction | One FDA study (Alere, n = 981) – wrong setting, wrong age, wrong score restriction | One FDA study (Alere, n = 981) – wrong setting, wrong age, wrong score restriction |
cobas Liat Strep A Assay | One study (Wang et al.,24 n = 427) – right setting, right age, wrong score restriction | One study (Wang et al.,24 n = 427) – right setting, wrong age, wrong score restriction | One study (Wang et al.,24 n = 427) – wrong setting, right age, wrong score restriction | One study (Wang et al.,24 n = 427) – wrong setting, wrong age, wrong score restriction |
Xpert Xpress Strep A | One FDA report and one MFR response (Cepheid, n = 618 and 577) – wrong setting, wrong age, wrong score restriction | One FDA report and one MFR response (Cepheid, n = 618 and 577) – wrong setting, wrong age, wrong score restriction | One FDA report and one MFR response (Cepheid, n = 618 and 577) – wrong setting, wrong age, wrong score restriction | One FDA report and one MFR response (Cepheid, n = 618 and 577) – wrong setting, wrong age, wrong score restriction |
Accuracy of point-of-care tests using polymerase chain reaction to resolve discordant cases
Discordant results between point-of-care tests and culture were resolved using PCR in four studies. 20,24,37,73 All discrepant results between a point-of-care test and culture (point of care positive, culture negative and vice versa) were analysed in two of these studies. 20,24
In Wang et al.,24 all of the 20 samples that were cobas Liat Strep A positive but culture negative were confirmed as positive by PCR and bidirectional sequencing, and all three samples that were cobas Liat Strep A negative and reference culture positive were also confirmed as positive by PCR and bidirectional sequencing. Wang et al. 24 also examined the discrepancies between the TestPack Plus Strep A test and culture: all discordant results, that is the 20 cases positive by the test and negative by culture and the three cases that were test negative and culture positive, were positive according to PCR.
In evaluating the accuracy of the BD Veritor system, Berry et al. 20 also identified 21 discordant results with throat culture, including 11 positive on the index test but not culture, and 10 positive on culture but not the index test. PCR detected strep A in 6 of the 11 results that were positive by the BD Veritor system but negative by culture, and in all of the 10 samples that were negative by the BD Veritor system but positive by culture. In the same population, Berry et al. 20 found that 14 of the 15 results that were positive for the Alere i Strep A test and negative for culture were positive according to PCR. There were no reported occasions when the Alere i Strep A test gave a negative result when culture gave a positive result.
Similarly, Cohen et al. 37 and Lacroix et al. 23 analysed only some of the discrepancies between a point-of-care test and culture. Cohen et al. 37 identified a total of 24 discordant results between the Alere i Strep A test and culture. There were 18 positive samples on the Alere i Strep A test and not on culture, 13 of which were confirmed as positive by PCR, whereas the other five results were PCR negative. Four of the six cases that were positive on culture but not on Alere i Strep A were confirmed as negative by PCR.
Lacroix et al. 23 found 84 discordant results between Sofia Strep A FIA and culture (31 false positives and 53 false negatives). Eleven of the 31 false-positive samples were missing; hence, PCR assays could not be conducted for these samples. Eleven of those with samples present were confirmed as positive by PCR and nine were negative by PCR. Lacroix et al. 23 also found 21 results that were positive by TestPack Plus Strep A but negative by culture, nine of which were confirmed as positive by PCR. Eight were confirmed as PCR negative, leaving four missing samples, which precluded additional PCR assays. Lacroix et al. 23 did not provide test-specific results for the cases that were negative by rapid test and positive by culture.
Table 16 summarises the key findings from these analyses.
Study (first author and year of publication) | Index test and culture result | PCR + | PCR – | Total |
---|---|---|---|---|
Berry 201820 | Alere i Strep A test +, culture – | 14 | 1 | 15 |
Alere i Strep A test –, culture + | 0 | 0 | 0 | |
Total | 14 | 1 | 15 | |
Berry 201820 | BD Veritor system +, culture – | 6 (Berry et al.20 also report 5) | 5 | 11 |
BD Veritor system –, culture + | 10 | 0 | 10 | |
Total | 16 | 5 | 21 | |
Wang 201724 | cobas Liat Strep A Assay +, culture – | 20 | 0 | 20 |
cobas Liat Strep A Assay –, culture + | 3 | 0 | 3 | |
Total | 23 | 0 | 23 | |
Cohen 201537 | Alere i Strep A +, culture – | 13 | 5 | 18 |
Alere i Strep A –, culture + | 2 | 4 | 6 | |
Total | 15 | 9 | 24 | |
Lacroix 201823 | Sofia Strep A FIA +, culture – | 11 | 9 | 31 (11 missing samples) |
Sofia Strep A FIA –, culture + | NR | NR | 53 | |
Total | NR | NR | 84 | |
Lacroix 201823 | TestPack Plus Strep A +, culture – | 9 | 8 | 21 (4 missing samples) |
TestPack Plus Strep A –, culture + | NR | NR | 87 | |
Total | NR | NR | 108 |
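The report does not re-estimate accuracy with PCR-resolved discordant results, but the arithmetic is straightforward. Purely as an illustration, the sketch below reclassifies the cobas Liat counts from Wang et al. 24 (TP = 126, FN = 3, FP = 20, TN = 278) under the assumption that PCR is treated as the arbiter for discordant samples: the 20 culture-negative but PCR-positive samples become true positives, whereas the three test-negative, PCR-positive samples remain false negatives.

```python
def accuracy(tp, fn, fp, tn):
    """Return (sensitivity, specificity) from 2 x 2 counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Original 2 x 2 versus culture (Wang et al., cobas Liat Strep A Assay)
tp, fn, fp, tn = 126, 3, 20, 278
print("vs culture:     sens %.3f, spec %.3f" % accuracy(tp, fn, fp, tn))

# Hypothetical re-analysis treating PCR as the arbiter for discordant samples:
# all 20 false positives were PCR positive (reclassified as true positives),
# all 3 false negatives were PCR positive (they remain false negatives).
tp_pcr, fn_pcr, fp_pcr, tn_pcr = tp + 20, 3, 0, tn
print("PCR-arbitrated: sens %.3f, spec %.3f" % accuracy(tp_pcr, fn_pcr, fp_pcr, tn_pcr))
```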
Interestingly, Lindbæk et al. 43 used a second culture medium (a liquid medium/broth) to resolve discrepant results between the TestPack Plus Strep A test (Abbott Laboratories) and microbiological culture (streptococcal selective agar). In this study, the second culture medium [colistin and oxolinic acid (COBA) + tryptic soy agar (TSA) sheep + sulfamethoxazole and trimethoprim (SXT) + Lim broth + first culture medium] detected strep A in 17 out of 27 (63%) patients who previously tested positive by the Alere TestPack +Plus Strep A test but negative by the first culture medium (Columbia agar + horse blood + COBA).
Direct comparison of point-of-care test accuracy with clinical scores
Six studies directly compared levels of test accuracy between point-of-care tests and clinical scores. 39,44–46,52,57 The results are summarised in Table 17. Sensitivity point estimates were higher for the clinical scores than for the point-of-care tests in two studies. 46,57 However, point estimates for sensitivity and particularly specificity of the rapid tests (including the TestPack Plus Strep A, OSOM Strep A and QuikRead Go Strep A tests) were generally higher. Sensitivity (0.829 to 0.946) and specificity (0.846 to 0.991) point estimates of the point-of-care tests were consistently high compared with point estimates for the clinical scores (sensitivity 0.735 to 0.972; specificity 0.172 to 0.648).
Study (first author and year of publication) | Clinical score | Clinical score: culture + | Clinical score: culture – | Clinical score: total | Clinical score: sensitivity (95% CI) | Clinical score: specificity (95% CI) | Index test | Index test: culture + | Index test: culture – | Index test: total | Index test: sensitivity (95% CI) | Index test: specificity (95% CI) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Humair 200639 | Centor score of > 2 points | 105 | 119 | 224 | 0.750 (0.678 to 0.822) | 0.487 (0.423 to 0.551) | TestPack Plus Strep A+ | 128 | 11 | 139 | 0.914 (0.852 to 0.953) | 0.953 (0.914 to 0.947) |
Centor score of ≤ 2 points | 35 | 113 | 148 | TestPack Plus Strep A– | 12 | 221 | 233 | |||||
Total | 140 | 232 | 372 | Total | 140 | 232 | 372 | |||||
Llor 200944 | Centor score of > 2 points | 47 | 104 | 151 | 0.855 (0.761 to 0.948) | 0.377 (0.304 to 0.451) | OSOM Strep A+ | 52 | 14 | 66 | 0.946 (0.839 to 0.986) | 0.916 (0.861 to 0.952) |
Centor score of ≤ 2 points | 8 | 63 | 71 | OSOM Strep A– | 3 | 153 | 156 | |||||
Total | 55 | 167 | 222 | Total | 55 | 167 | 222 | |||||
Llor 201145 | Centor score of > 2 points | 36 | 80 | 116 | 0.735 (0.587 to 0.846) | 0.648 (0.581 to 0.709) | OSOM Strep A+ | 44 | 14 | 58 | 0.898 (0.770 to 0.962) | 0.938 (0.897 to 0.933) |
Centor score of ≤ 2 points | 13 | 147 | 160 | OSOM Strep A– | 5 | 213 | 218 | |||||
Total | 49 | 227 | 276 | Total | 49 | 227 | 276 | |||||
McIsaac 200446 | McIsaac score of > 2 points | 193 | 375 | 568 | 0.847 (0.792 to 0.889) | 0.329 (0.291 to 0.370) | TestPack Plus Strep A+ | 189 | 5 | 194 | 0.829 (0.772 to 0.874) | 0.991 (0.978 to 0.997) |
McIsaac score of ≤ 2 points | 35 | 184 | 219 | TestPack Plus Strep A– | 39 | 554 | 593 | |||||
Total | 228 | 559 | 787 | Total | 228 | 559 | 787 | |||||
Pauchard 201357 | McIsaac score of > 2 points | 69 | 101 | 170 | 0.972 (0.893 to 0.995) | 0.172 (0.112 to 0.253) | Strep A Rapid Test + | 60 | 11 | 71 | 0.845 (0.735 to 0.914) | 0.910 (0.841 to 0.952) |
McIsaac score of ≤ 2 points | 2 | 21 | 23 | Strep A rapid Test – | 11 | 111 | 122 | |||||
Total | 71 | 122 | 193 | Total | 71 | 122 | 193 | |||||
Stefaniuk 201752 | Centor score of > 2 points | 37 | 39 | 76 | 0.861 (0.714 to 0.942) | 0.250 (0.145 to 0.392) | QuikRead Go Strep A+ | 39 | 8 | 47 | 0.907 (0.770 to 0.970) | 0.846 (0.719 to 0.931) |
Centor score of ≤ 2 points | 6 | 13 | 19 | QuikRead Go Strep A– | 4 | 44 | 48 | |||||
Total | 43 | 52 | 95 | Total | 43 | 52 | 95 |
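To make the paired comparison in Table 17 concrete, the sketch below recomputes the point estimates for Humair et al. 39 from the counts in the table: the Centor score (at a cut-off point of > 2 points) and the TestPack Plus Strep A test are evaluated against the same culture results in the same 372 patients, which illustrates the specificity gap described above. This is an illustration only, not analysis from the original study.

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity from 2 x 2 counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts from Table 17 (Humair et al., n = 372)
# Centor score > 2 points treated as 'positive'
score_tp, score_fn, score_fp, score_tn = 105, 35, 119, 113
# TestPack Plus Strep A result
test_tp, test_fn, test_fp, test_tn = 128, 12, 11, 221

print("Centor > 2: sens %.3f, spec %.3f" % sens_spec(score_tp, score_fn, score_fp, score_tn))
print("TestPack:   sens %.3f, spec %.3f" % sens_spec(test_tp, test_fn, test_fp, test_tn))
# -> roughly 0.750/0.487 for the score and 0.914/0.953 for the test, as in Table 17
```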
Test failure rate
Five studies reported on test failure rate. 23,37,38,43,54 These five studies covered three different point-of-care tests (Alere i Strep A, TestPack Plus Strep A and Sofia Strep A FIA). For the Alere i test, the failure rate ranged from 0.0%54 to 2.8%. 37 The TestPack Plus Strep A failure rate ranged from 0.3%38 to 1.3%. 43 The Sofia Strep A FIA failure rate was reported as 4.7%. 23 Differences could be a result of contextual factors, such as staff training, rather than issues with the tests themselves.
Proposed pathway (combined strategy of clinical score and point-of-care tests)
Test accuracy of combined clinical score and point-of-care test with culture as reference standard
None of the included studies evaluated the accuracy of a combined strategy of a sore throat clinical score (at the recommended NICE cut-off points of Centor/McIsaac score of ≥ 3 points or FeverPAIN score of ≥ 4 points) with a point-of-care test. This would require the combination of the two methods into a single procedure, in which positive results are produced by individuals with both a high clinical score and a positive point-of-care test, and negative results are given either by patients with a low clinical score or by patients with a high score but a negative point-of-care test. As shown in Table 18, Rosenberg et al. 50 provide the only available evidence that attempts to match the proposed pathway, but not at the recommended Centor cut-off point.
Study (first author and year of publication) | Combined strategy | Culture + | Culture – | Total | Sensitivity (95% CI) | Specificity (95% CI) |
---|---|---|---|---|---|---|
Rosenberg 200250 | Centor score of 2 or 3 points AND TestPack Plus Strep A+ | 12 | 0 | 12 | For patients with a Centor score of 2 or 3 points: 0.80 (95% CI 0.52 to 0.96); overall: 0.88 (95% CI 0.71 to 0.96) | For patients with a Centor score of 2 or 3 points: 1.00 (95% CI 0.92 to 1.00); overall: 0.78 (95% CI 0.68 to 0.86) |
Centor score of 2 or 3 points AND TestPack Plus Strep A– | 3 | 44 | 47 | |||
Centor score of < 2 points AND no rapid test | 1 | 29 | 30 | |||
Centor score of 4 points or 5 points AND no rapid test | 16 | 21 | 37 | |||
Total | 32 | 94 | 126 |
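The layout used by Rosenberg et al. 50 can be read as a three-band rule: a Centor score of < 2 points counts as negative with no rapid test, a score of 2 or 3 points defers to the TestPack Plus result and a score of 4 or 5 points counts as positive with no rapid test. A minimal sketch of that rule, applied to the counts in Table 18, reproduces the ‘overall’ sensitivity and specificity shown in the table; note that this is not the NICE-recommended cut-off point.

```python
def combined_positive(centor: int, radt_positive=None) -> bool:
    """Combined strategy as laid out by Rosenberg et al.: test only scores of 2-3."""
    if centor <= 1:
        return False               # low score: no rapid test, treated as negative
    if centor >= 4:
        return True                # high score: no rapid test, treated as positive
    return bool(radt_positive)     # score 2-3: defer to the rapid test result

# Rows from Table 18: (representative Centor score, RADT result or None, culture+, culture-)
rows = [
    (3, True, 12, 0),    # Centor 2-3 AND TestPack Plus Strep A+
    (3, False, 3, 44),   # Centor 2-3 AND TestPack Plus Strep A-
    (1, None, 1, 29),    # Centor < 2, no rapid test
    (4, None, 16, 21),   # Centor 4-5, no rapid test
]
tp = fn = fp = tn = 0
for centor, radt, culture_pos, culture_neg in rows:
    if combined_positive(centor, radt):
        tp += culture_pos
        fp += culture_neg
    else:
        fn += culture_pos
        tn += culture_neg
print("overall sensitivity %.2f, specificity %.2f" % (tp / (tp + fn), tn / (tn + fp)))
# -> 0.88 and 0.78, matching the 'overall' figures in Table 18
```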
Other outcomes
No information was found on the number of appointments required per episode, morbidity, mortality, onward transmission of infection, health-related quality of life, patient satisfaction with the test or health-care professional satisfaction with the test.
Twelve studies reported on antibiotic-prescribing behaviours. The RCTs and before-and-after studies are described in Appendix 5 (see Tables 37 and 38 and Figure 15). The remaining eight studies, which included single-arm cohorts or hypothetical antibiotic management, are briefly summarised in Antibiotic-prescribing behaviours: other study designs.
Antibiotic-prescribing behaviours: randomised controlled trial evidence
There were three RCTs that reported on antibiotic use. All three trials found higher antibiotic prescription rates or use in control arms with no point-of-care test than in those with a point-of-care test.
In the UK primary care RCT by Little et al.,6 patients (mean ages of 29 and 31 years across arms, no age range provided) were randomly assigned to a delayed antibiotics control arm, a clinical score arm or a RADT arm (IMI TestPack, later known as Alere i Strep A test). In the delayed antibiotics control arm, depending on the severity of their presentation, patients were given antibiotics, given no antibiotics or given a delayed prescription of antibiotics to collect after 3–5 days if symptoms did not improve or worsened. This control group represented current UK practice at the time. In the clinical score arm, patients were assessed using the FeverPAIN clinical scoring tool: patients with scores of 0 or 1 points were not offered antibiotics, patients with scores of 2 or 3 points were offered delayed antibiotics and patients with scores of ≥ 4 points were offered immediate antibiotics. In the RADT arm, all patients were also assessed with the clinical scoring tool: those with scores of 0 or 1 points were not offered antibiotics or a RADT, those with a score of 2 points were offered delayed antibiotics and those with scores of ≥ 3 points were given a RADT. Patients with negative RADT results were not offered antibiotics. There were 207 patients in the delayed prescribing arm, of whom 79% (164/207) received a delayed prescription, 10% (21/207) received no antibiotics and 10% (21/207) received immediate antibiotics. In the clinical score arm, 41% (87/211) received a delayed prescription, 41% (87/211) received no antibiotics and 16% (33/211) received immediate antibiotics. In the RADT arm, there were fewer delayed prescription decisions, with only 23% (48/213) of patients receiving a delayed prescription; 59% (126/213) of patients were offered no antibiotics and 18% (38/213) were given immediate antibiotics. Patients reported antibiotic use of 46% (75/164) in the delayed prescription arm, 37% (60/161) in the clinical score arm and 35% (58/164) in the clinical score plus RADT arm. The denominators for reported antibiotic use were considerably lower than the numbers randomised to each arm, indicating significant loss to follow-up, so these figures should be interpreted with caution. Likewise, symptom severity was worse in the control arm, so effect sizes may be overestimates. As a UK-based trial conducted in a primary health-care setting, it is likely to be generalisable to the UK population.
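The triage rules in the clinical score and RADT arms of Little et al. 6 amount to simple threshold logic, restated below as a sketch for clarity. This is an illustration of the rules as summarised above, not material from the trial, and the action for a positive RADT result (immediate antibiotics) is inferred rather than stated explicitly in the summary.

```python
def clinical_score_arm(feverpain: int) -> str:
    """Clinical score arm: prescribe according to the FeverPAIN score alone."""
    if feverpain <= 1:
        return "no antibiotics"
    if feverpain >= 4:
        return "immediate antibiotics"
    return "delayed antibiotics"      # scores of 2 or 3 points

def radt_arm(feverpain: int, radt_positive=None) -> str:
    """Clinical score plus RADT arm: test only patients scoring >= 3 points."""
    if feverpain <= 1:
        return "no antibiotics"       # no antibiotics and no RADT
    if feverpain == 2:
        return "delayed antibiotics"
    # score >= 3: a RADT is performed; a negative result means no antibiotics
    return "immediate antibiotics" if radt_positive else "no antibiotics"

print(clinical_score_arm(3), "|", radt_arm(3, radt_positive=False))
# -> delayed antibiotics | no antibiotics
```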
The second trial, by Llor et al.,45 included patients aged > 14 years (mean age 31.7 years) visiting primary health-care centres across Spain. This was a cluster RCT with the centre as the unit of randomisation. This form of randomisation can be prone to imbalances in patients’ baseline characteristics; however, the authors reported no significant differences in baseline characteristics (such as gender, mean age and clinical symptoms) between participants in the intervention and control arms. Patients were randomised to either a control arm, in which patients were assessed using only clinical criteria (Centor), or an intervention arm, in which patients were assessed with both a Centor score and a RADT (OSOM Strep A test). In total, 54% (291/543) of patients were prescribed antibiotics. Antibiotics were more likely to be prescribed in the clinical score only arm, with GPs prescribing antibiotics for 64% (168/262) of patients, compared with 44% (123/281) in the RADT arm. There was a correlation between Centor score and antibiotic prescription rates across both groups, with more antibiotics prescribed to those with higher scores [for a score of 4 points, 80% (37/46) received antibiotics in the intervention arm and 96% (43/45) in the control arm, compared with 16% (4/70) in the intervention arm and 33% (20/61) in the control arm]. In the subgroup of interest to the UK population (those with Centor scores of ≥ 3 points), 74% (90/122) were given antibiotics in the intervention arm, compared with 85% (100/119) in the control arm. Antibiotic appropriateness is also discussed in the trial: 98% (59/60) of patients with a positive RADT result were given antibiotics and 31% (69/225) of those with a negative test result received antibiotics. The authors determined that treatment was inappropriate (based on culture results) in 43% of patients (226/526), with 210 unnecessary prescriptions and 16 untreated cases. A total of 153 of these cases occurred in the control arm and 73 in the RADT arm; however, the category of inappropriate decision (overprescribing or underprescribing) is not reported by trial arm.
The third trial was a four-armed cluster randomised trial in Canada by Worrall et al.55 The trial included 40 physicians who were asked to consecutively recruit adult patients (aged ≥ 19 years, no further details reported). There was a control arm using usual clinical practice, an intervention arm using sore throat decision rules (STDR) (modified Centor), an intervention arm using a rapid test (RADT) and an intervention arm using both STDR and RADT. In the STDR group, no antibiotics were recommended for clinical scores of ≤ 1 point, antibiotics were recommended for scores of 3 or 4 points and the prescribing decision for scores of 2 points lay with clinicians. In the combined STDR and RADT group, RADT was used only for patients with scores of 2 points. It is implied, although not explicitly stated, that all those in the RADT arm received a RADT. The authors found that 47% (247/533) of patients received antibiotics. By arm, 58% (82/141) of patients received antibiotics in the usual practice arm, compared with 55% (94/170) with the Centor score alone, 27% (32/120) with rapid antigen testing alone and 38% (39/102) with combined rapid antigen testing and Centor score. As this was a cluster randomised trial and each arm included only 8–10 doctors, differences could result from differences between doctors rather than between strategies. They may also be due to differences in patients across arms; the study reports the characteristics of the physicians only, with no baseline patient data. Finally, the Canadian medical system differs from the UK system, so the results may not be generalisable.
There was no RCT evidence on molecular technologies and antibiotic-prescribing rates.
Antibiotic-prescribing behaviours: before-and-after studies
One study used a before-and-after design to assess antibiotic-prescribing rates. The study by Bird et al.35 analysed children (aged 6 months to 16 years) presenting to a UK paediatric emergency department with a sore throat. The study compared baseline data from October and November 2014 with prescribing rates in the following 2 years (August to November 2015 and September to November 2016), after the implementation of a pathway using both a McIsaac score and a RADT. Baseline data were collected retrospectively from a departmental audit, during which the method of diagnosis is implied to have been clinical examination alone; the aim was to assess the impact of a clinical scoring system and rapid test on prescribing rates. A rapid test could be requested only if the McIsaac score was ≥ 3 points. Following implementation, antibiotic-prescribing rates fell steeply, from 79% (166/210) at baseline to 24% (51/214) in the first year and 28% (51/181) in the second year. However, seasonality may be a confounding factor, with higher prescribing rates over the later autumn months (October and November) than in late summer (August and September). Likewise, there may be some regression to the mean: the high baseline prescribing rate may have prompted the study and may be subject to natural fluctuation.
There were no two-armed cohort studies analysing molecular technologies and antibiotic-prescribing rates.
Antibiotic-prescribing behaviours: other study designs
An additional eight studies reported antibiotic-prescribing behaviours in single-arm cohorts.20,36,39,40,46,50,52,53 No comparative data were possible within these study designs, only hypothetical comparisons, so all of the results in this section should be interpreted with caution and considered less informative than the RCT results. In these cohort studies, all patients received the same intervention; however, the authors also constructed hypothetical management scenarios and compared how these would have affected antibiotic-prescribing rates. Of the eight studies, three provided only hypothetical decision rules, which do not inform real-world behaviour; these three studies39,46,53 have been included and briefly summarised but were not quality appraised. Five studies20,36,40,50,52 reported either what happened in practice or what clinicians reported they would do. These studies suggested that using a rapid test would decrease antibiotic use by between 9% and 74%.
Two of the five single-arm cohorts reported on real-world behaviour. The first study, by Stefaniuk et al. ,52 examined children and adults in a primary care setting in Poland. Forty-six per cent (44/96) of the study group were children aged 3–14 years and 25% (24/96) were adults aged 31–35 years (overall mean age was not provided). Ninety-eight per cent (46/47) of patients with a positive QuikRead Go Strep A test result received antibiotics and 24% (12/49) of patients with both a negative rapid test and a negative culture were treated with antibiotics.
The second study reporting on real-world behaviour, by Berry et al.,20 compared BD Veritor testing with Alere i testing and used a chart review to determine the hypothetical impact of results on antibiotic use. The study took place in paediatric outpatient clinics (mean age not reported) in the USA. Prescribing decisions were made with knowledge of the BD Veritor test results, but not of the Alere i test or culture results. The authors found that 34% (73/215) of patients were prescribed antibiotics; of these, 25 patients were prescribed antibiotics at a clinic visit for whom antibiotics were later deemed to be inappropriate treatment (on the basis of culture results). Of these 25 patients, 20 (80%) were negative on BD Veritor, Alere i and culture, and five were positive on the BD Veritor only. Of the patients who did not receive antibiotics, 13 BD Veritor-negative cases were identified by the authors as potential missed cases on the basis of positive PCR and Alere i results, of whom six received antibiotics within 6 days of the original appointment. These analyses provide descriptive data on prescribing behaviour using the BD Veritor test, but cannot be used to compare the Alere i and BD Veritor for appropriateness of prescribing, as decisions were made using the BD Veritor and not the Alere i. The Alere i was the only molecular technology used in any study, and no prescribing decisions were based on it; hence, there is no evidence on molecular technologies and antibiotic-prescribing rates.
Three of the single-arm cohort studies reported on hypothetical scenarios based around clinicians' decisions.36,40,50 Bura et al.36 examined a cohort of adults (median age 26 years, range 18–44 years) in primary care in Poland with Centor scores of > 2 points (the study used a case–control design for test accuracy outcomes, but a cohort design for prescribing behaviour). All patients and controls were given a rapid test and culture. GPs could then choose whether or not to give antibiotic therapy. It is stated that this choice was not influenced by the research team; however, we cannot be certain of this, as clinicians were aware of the rapid test result. Clinicians were also aware of the Centor score at the time of antibiotic prescribing. The authors found that 58% (59/101) of patients received an antibiotic. All RADT-positive patients received treatment, including two who were culture negative. In addition, 46% (35/77) of test-negative patients received antibiotics. The authors determined that 40% (23/59) of treated cases received an unnecessary antibiotic prescription, with 'unnecessary' defined as being culture negative. The authors also presented hypothetical management scenarios based on different Centor scores: antibiotics would be prescribed to 29% (11/38) of patients with a Centor score of 2 points, 62% (23/37) of patients with a score of 3 points and 96% (25/26) of patients with a score of 4 points. They surmised that 23% (23/101) would have been treated using positive culture results alone and 24% (24/101) would have been treated using a rapid test, meaning that one additional person would have been given antibiotics unnecessarily. However, 54% of those given antibiotics were treated for non-strep A. From the control group, one person would have been treated with antibiotics; additionally, other forms of streptococci were identified in 13 people from this group.
The study by Rosenberg et al.50 was a single-arm prospective observational cohort in which all patients were given a clinical score (Centor), a rapid test and culture. The study included patients older than 3 years [47% (59/126) aged 3–14 years, 50% (63/126) aged 15–44 years and 3% (4/126) aged ≥ 45 years] presenting to an emergency department in Canada. The authors also report physicians' clinical impressions and their hypothetical management under four strategies: clinical score alone, physician examination alone, rapid test alone and rapid test only for clinical scores of > 3 points. They found that physicians prescribed antibiotics to 37% (46/126) of patients after obtaining the results of the rapid test; of these, 18 had negative culture results. The authors estimated that 20% (25/126) of patients would have received antibiotics under the rapid test strategy, compared with 29% (37/126) under the clinical score strategy.
The last study, by Johansson et al.,40 was a prospective observational single-arm cohort in which all patients received both a rapid test and culture, and these results were compared with hypothetical management suggestions made by physicians. It included adult patients (aged 25–44 years, mean age not reported) presenting to primary health-care centres in Sweden. Physicians also clinically assessed patients and gave hypothetical management suggestions based on their level of certainty of strep A (absolutely positive, positive, possibly positive, possibly negative, negative and absolutely negative). Results are not clearly reported; however, 26% (24/94) of patients with a negative rapid test received treatment, although it is unclear how many of these were culture positive.
There were three additional studies39,46,53 that reported on hypothetical prescribing decisions based on assumptions about doctors’ behaviour; however, no real-world decisions were reported and doctors were not asked about behaviour.
Summary of the clinical effectiveness findings and implications for the health economic model
Overall, the findings reveal wide variations in the sensitivity (0.679 to 1.00) and specificity (0.733 to 1.00) estimates of point-of-care tests. These estimates were 0.829 to 0.946 for sensitivity and 0.849 to 0.991 for specificity in high-risk populations, including patients with Centor/McIsaac scores of > 2 points, which represents the population of interest. These estimates do not account for any of the unpublished manufacturer submissions.
Clinical scoring tools (FeverPAIN and Centor) have been proposed as a method by which clinicians can identify which patients are most likely to benefit from antibiotic use for sore throat. 8 These tools were developed to predict strep A (Centor and FeverPAIN), strep C (FeverPAIN) and strep G (FeverPAIN). Most studies making direct comparisons between sore throat clinical scoring tools and point-of-care tests indicated that sensitivity estimates were higher for the point-of-care tests, and that specificity was generally comparable between the two approaches.
A methodological limitation of the clinical scoring tools concerns the varying ways in which they have been implemented across the included studies; for instance, different studies apply different clinical score cut-off points when recruiting patients. None of the studies matched the proposed pathway of care and treatment for patients with acute streptococcal pharyngitis, which would entail evaluating the test accuracy of a combined strategy of sore throat clinical scores at the recommended NICE thresholds (Centor/McIsaac score of ≥ 3 points or FeverPAIN score of ≥ 4 points) and point-of-care tests. This limitation has potentially important economic implications, as attempts to model the proposed pathway are not fully supported by empirical data. In addition, the over-representation of the TestPack Plus Strep A test relative to other point-of-care tests, as well as the overlap of patients across different age groups, potentially raises applicability concerns for the economic model.
Investigation of discordant results between the index tests and the reference standard of culture was available for several studies using PCR or culture. This analysis indicated that using culture as the reference standard may have resulted in underestimating sensitivity (specificity estimates derived using PCR were too variable to draw conclusions about potential overestimation or underestimation by culture). However, PCR can also detect indolent strep A, so the extent of any underestimation is unclear.
Data for test accuracy were sparse for each combination of test, population and setting. There were very few head-to-head (direct) comparison studies between index tests. There was heterogeneity between studies, the cause of which is unclear owing to a lack of direct comparison data of different age groups, settings or tests within the same study.
Test accuracy point estimates in manufacturers’ submissions may be systematically higher than in the peer-reviewed literature, and the study characteristics are often unclear. Therefore, there is a risk of making inappropriate comparisons between tests in the economic model where one test has a range of peer-reviewed publications and another has manufacturer data only.
With the exception of a single study using the Sofia Strep A FIA test (failure rate 4.7%),23 failure rates for point-of-care tests were generally low (0% to 2.8%) and unlikely to hold any major implications for the economic model, especially as the data for this outcome have not been reported in most of the included studies.
No evidence was found on time to antimicrobial prescribing decision, number of appointments required per episode and onward transmission of infection.
The findings suggest that RADTs may help to reduce antibiotic prescription rates in patients who receive these tests compared with patients assessed using only a clinical scoring tool. The three RCTs addressing this question all found reductions in antibiotic prescribing, of up to 30%, following the administration of a RADT. No studies were identified that assessed the use of molecular technologies and antibiotic prescription rates.
Chapter 4 Cost-effectiveness
Systematic review of existing cost-effectiveness studies
Introduction
This chapter explores and reviews all published cost-effectiveness studies, including any existing economic models, of the use of the different rapid antigen detection or molecular tests for detection of strep A (as listed in detail in the final scope and protocol). Studies providing resource use, costs, utilities and probabilities that could inform the economic modelling were also identified.
Methods
Search strategy
A comprehensive search of the literature for published economic evaluations (including any existing models), cost studies and quality-of-life (utility) studies was carried out. The systematic search covered the following electronic databases, searched during January 2019 (on 22, 29 and 30 January 2019), with an updated search of all databases conducted during March 2019 (on 7 and 13 March 2019):
- MEDLINE and Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Daily and Versions (via OvidSP)
- Excerpta Medica database (EMBASE) (via OvidSP)
- NHS Economic Evaluation Database (NHS EED) and HTA database (via CRD)
- Science Citation Index and Conference Proceedings Citation Index – Science (via the Web of Science)
- Cost-Effectiveness Analysis (CEA) registry
- EconPapers [Research Papers in Economics (RePEc)]
- School of Health and Related Research Health Utilities Database (ScHARRHUD).
The search terms included economic and quality-of-life terms combined with either ‘sore throat’ or ‘strep A’. No date limits were applied and databases were searched from inception. The search strategy was developed by an experienced information specialist, based on the clinical effectiveness review and with input from a health economist. Details of the full search strategies are provided in Appendix 6. In addition to these searches, any relevant cost-effectiveness studies identified during the clinical effectiveness review were brought to the attention of the reviewers and assessed for eligibility alongside the results of this review.
Assessment of eligibility
Citations and abstracts from the electronic online databases were exported into a citation software package (EndNote X7) and duplicate records were identified and removed. Two reviewers independently reviewed titles and abstracts to identify potentially relevant papers for inclusion. Any discrepancies were resolved by discussion.
Inclusion criteria
Only studies meeting the following inclusion criteria were included in the review:
- study type – fully published economic evaluations (including any economic models)
- population – people aged ≥ 5 years presenting to health-care providers in a primary care (GP surgeries, community pharmacies and walk-in centres) or secondary care (urgent care/walk-in centres and emergency departments) setting with symptoms of an acute sore throat
- intervention – 17 RADTs or four molecular tests (as described in Chapter 1, Comparative technical overview of the point-of-care tests for group A Streptococcus)
- comparator – antibiotic-prescribing decisions using clinical judgement and a clinical scoring tool such as FeverPAIN or Centor
- outcomes – cost–benefit or cost–consequences or cost-effectiveness or cost–utility studies reporting outcomes as cost–consequence measures or clinical effectiveness measures or utility measures [utility, EuroQol-5 Dimensions (EQ-5D) or Short Form questionnaire-6 Dimensions score or quality-adjusted life-years (QALYs)].
Exclusion criteria
Studies meeting the following exclusion criteria were excluded from the review:
- non-English-language publications
- studies not in humans
- studies not in strep A or sore throat
- studies with the wrong test or no specified test
- studies that were not full economic evaluations (incremental costs and incremental benefits).
Studies that provided useful information for the economic model, such as resource use, costs, utilities and probabilities, were retained but were not included in this review.
Data extraction
Data extraction was carried out by one reviewer using standardised data extraction sheets and was then checked by a second reviewer. Data extracted included the following information:
- study details – study title, author names, source of publication, language and publication type
- baseline characteristics – population (and subgroups), intervention, comparators, outcomes, study design, setting and location and type of economic evaluation
- methods – study perspective, time horizon, discount rate, measurement of effectiveness, measurement and valuation of preference-based outcomes, resource use and costs, currency, price date and conversion, model type, assumptions and analytical methods
- results – study parameters, incremental costs and outcomes and reporting of uncertainty
- discussion – study findings, limitations, generalisability and conclusions
- other – sources of funding, conflicts of interest and any comments.
Data synthesis
Information extracted from the included studies was summarised and tabulated. Findings from individual studies were compared narratively.
Quality assessment
The quality of full economic evaluation studies that were identified was assessed using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist by one reviewer and cross-checked by a second reviewer. The CHEERS checklist comprises six dimensions (title and abstract, introduction, methods, results, discussion and other); under these dimensions, a series of questions check whether or not the criteria have been clearly reported. 74 If the studies included any model-based economic evaluations, they were further critically appraised using the framework on quality assessment for economic modelling developed by Phillips et al. 75 The framework assesses models under the dimensions of structure, data and consistency and whether or not the criteria have been clearly reported.
Results
Search results
The literature search identified 6980 records through electronic database searches and other sources. After removing duplicates, 2756 records were screened for inclusion. One article was found via our clinical effectiveness search. Based on title and abstract sift only, 2737 records were excluded. The remaining 19 records were included for full-text screening. A further 16 articles were excluded at the full-text stage, as these studies did not contain a full economic evaluation or specify the right test (see Appendix 7 for further comments).
The literature search identified three studies that had evidence pertaining to incremental costs and outcomes: Bura et al. ,36 Humair et al. 39 and Little et al. 76 (Figure 11).
The economic information from the first two studies is summarised below, as there was not enough information for a full data extraction (see Appendix 8). These two studies did not explicitly state the study perspective, time horizon, type of economic evaluation, measurement of effectiveness or analytical methods. Bura et al.36 was a prospective case–control study of 101 adults (aged 18–44 years) who attended GP clinics in Poland because of a sore throat lasting no longer than 7 days. Control participants (n = 101) were volunteers from the same area, who were matched to cases according to their age and sex. The study was conducted over 1 year. The OSOM Strep A test (Sekisui Diagnostics) in conjunction with throat culture was compared with the Centor score and throat culture to confirm the presence of strep A. The costs of diagnosing and treating strep A included symptomatic treatment, the test cost (€1.39), a single culture to identify strep A, antibiotic therapy and antimicrobial medications. An economic analysis compared five strategies for treating patients with strep A; the cost per patient with appropriate strep A treatment ranged from €2.89 (treating only RADT-positive cases) to €6.93 (treating only culture-positive strep A cases). The authors concluded that the use of the rapid test significantly increases the number of people with strep A-related pharyngitis who are treated with antibiotics.
Humair et al.39 was a prospective cohort study of 372 adults (aged 15–65 years) who were treated at a GP clinic in Switzerland. The Alere TestPack +Plus Strep A (Abbott Laboratories) was compared with throat culture. A decision tree model was used to compare antibiotic prescription under five strategies. Information used in the decision model included rates of appropriate antibiotic use, overuse in patients without strep A, underuse in patients with strep A, appropriate treatment of patients with strep A and no treatment of patients without strep A. The model did not consider quality of life, complications or adverse drug effects. Costs were in US dollars at 2002 prices and included a 10-day course of penicillin, a test cost of $5.00 and $18.00 for throat culture. The authors found that systematic throat culture had the highest rate of appropriate treatment, whereas empirical treatment in patients with clinical scores of 3 or 4 points resulted in the most antibiotic overuse. The cost per case appropriately treated ranged from $15.30 (systematic rapid test) to $32.40 (systematic throat culture). Sensitivity analyses were conducted to check the robustness of the results. The authors concluded that the rapid test is a valid test for the diagnosis of strep A.
Little et al.76 conducted an economic analysis alongside a RCT in the UK, which included both adults and children with acute sore throat who were seen in primary care clinics (see Appendix 8). Patients were randomised to targeted antibiotic use according to (1) delayed antibiotics (control group), (2) a clinical score using FeverPAIN or (3) a RADT [Alere TestPack +Plus Strep A (Abbott Laboratories)] used according to clinical score. The analysis was from an NHS perspective and the time horizon was short (14 and 28 days); hence, long-term effects were not captured. Health-related quality of life was evaluated using the EQ-5D. QALYs were adjusted for baseline differences and were calculated using mean EQ-5D scores obtained from the 14-day diary records, assuming that health-related quality of life changed linearly over time. The analysis included a cost-effectiveness analysis (cost per change in symptom severity) and a cost–utility analysis (cost per QALY). Cost-effectiveness acceptability curves (CEACs) were generated using bootstrapping with 5000 samples.
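To make the bootstrap approach concrete, the following is a minimal sketch (in Python, using entirely fabricated illustrative data and not the trial authors' code) of how a CEAC can be constructed by resampling patient-level costs and effects and evaluating incremental net monetary benefit at each willingness-to-pay value.

```python
# Minimal sketch (not the trial authors' code) of generating a CEAC by
# non-parametric bootstrapping of patient-level costs and effects.
import numpy as np

rng = np.random.default_rng(0)

def ceac(costs_a, effects_a, costs_b, effects_b, wtp_values, n_boot=5000):
    """Probability that strategy B is cost-effective versus strategy A at each
    willingness-to-pay (WTP) value, based on bootstrapped incremental net
    monetary benefit: NMB = WTP * d_effect - d_cost."""
    costs_a, effects_a = np.asarray(costs_a), np.asarray(effects_a)
    costs_b, effects_b = np.asarray(costs_b), np.asarray(effects_b)
    wtp_values = np.asarray(wtp_values, dtype=float)
    wins = np.zeros(len(wtp_values))
    for _ in range(n_boot):
        ia = rng.integers(0, len(costs_a), len(costs_a))  # resample arm A with replacement
        ib = rng.integers(0, len(costs_b), len(costs_b))  # resample arm B with replacement
        d_cost = costs_b[ib].mean() - costs_a[ia].mean()
        d_effect = effects_b[ib].mean() - effects_a[ia].mean()
        wins += (wtp_values * d_effect - d_cost) > 0
    return wins / n_boot

# Fabricated patient-level data; 'effect' stands for improvement in symptom score.
wtp = np.arange(0, 501, 100)                                # £0-£500 per 1-point change
ca, ea = rng.gamma(2.0, 20.0, 200), rng.normal(2.0, 1.0, 200)   # comparator arm
cb, eb = rng.gamma(2.0, 18.0, 200), rng.normal(2.2, 1.0, 200)   # clinical score arm
print(dict(zip(wtp.tolist(), ceac(ca, ea, cb, eb, wtp).round(2))))
```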
The mean symptom scores were adjusted for baseline differences. In the cost-effectiveness analysis, the clinical score group dominated both the delayed antibiotic group and the RADT group, as it was more clinically effective (lower symptom score) and less costly; however, the point estimates of symptom score and the corresponding 95% CIs for the clinical score and RADT groups were quite close. The CEAC showed that, when the value of a 1-point change in the symptom score was varied between £0 and £500, the clinical score group was the most likely to be cost-effective over the entire range. In the cost–utility analysis, the delayed antibiotic group was dominated by the clinical score group for both time frames. The incremental cost-effectiveness ratio (ICER) for the RADT group compared with the clinical score group was £74,286 for the 14-day time frame and £24,528 for the 28-day time frame.
Quality assessment
The quality of the reporting of the economic analysis of the three studies was assessed using the 25-point CHEERS checklist74 and is provided in Table 19. The Little et al. 76 article was comprehensively reported: 22 of the 25 statements (88.0%) were a ‘yes’, one statement (4.0%) was not completed and two statements (8.0%) did not apply.
Assessment | Bura et al.36 | Humair et al.39 | Little et al.76 |
---|---|---|---|
Title | N | N | Y |
Abstract | Y | Y | Y |
Introduction | |||
Background and objectives | Y | Y | Y |
Methods | |||
Target population and subgroups | Y | Y | Y |
Setting and location | Y | Y | Y |
Study perspective | N | N | Y |
Comparators | Y | Y | Y |
Time horizon | N | N | Y |
Discount rate | NA | N | NA |
Choice of health outcomes | P | P | Y |
Measurement of effectiveness | N | N | Y |
Measurement and valuation of preference-based outcomes | N | N | Y |
Estimating resources and costs | P | P | Y |
Currency, price date and conversion | N | Y | Y |
Choice of model | NA | N | NA |
Assumptions | NA | N | Y |
Analytical methods | P | N | Y |
Results | |||
Study parameters | N | N | Y |
Incremental costs and outcomes | Y | Y | Y |
Characterising uncertainty | N | P | Y |
Discussion | |||
Study findings | Y | Y | Y |
Limitations | Y | Y | Y |
Generalisability | N | N | Y |
Other | |||
Source of funding | Y | N | Y |
Conflicts of interest | Y | N | N |
Summary
The cost-effectiveness search highlighted three studies that used the RADTs as identified in the NICE scope and were classed as economic evaluations. Of these three studies, only one allowed a full data extraction and was classed as a high-quality economic evaluation when checked against the CHEERS reporting tool. In Chapter 5, we build a de novo economic model comparing the different tests identified in the NICE scope for the various settings for patients with strep A.
Cost-effectiveness methods and results
Modelled population
The population of interest is people aged ≥ 5 years presenting to health-care providers in a primary (GP surgeries and walk-in centres), secondary (urgent care/walk-in centres and emergency departments) or pharmacy care setting with symptoms of an acute sore throat identified as most likely to benefit from antibiotic treatment on the basis of clinical scoring algorithm (FeverPAIN score of 4 or 5 points, or a Centor score of 3 or 4 points). Potential subgroups identified in the NICE scope included children (aged 5–14 years), adults (aged 15–75 years) and the elderly (aged > 75 years). However, the analyses have been restricted to adults and children owing to a lack of evidence on test accuracy among the elderly patient population.
Model structure
A decision tree model from the perspective of the UK NHS and Personal Social Services was developed to estimate the costs and QALYs associated with point-of-care testing in conjunction with clinical scoring tools, such as the Centor and FeverPAIN score for strep A, compared with clinical assessment incorporating clinical scoring tools alone (usual care). 77
The model structure, as depicted in Figures 12–14, makes use of a decision tree to model potential care pathways associated with a suspected strep A infection/sore throat presentation under the intervention (point-of-care testing and clinical scoring tools) and usual-care (clinical scoring tools alone) conditions.
Previous economic evaluations of management strategies for streptococcal pharyngitis have estimated that up to 76.5 quality-adjusted life-days could be lost as a result of rare but serious complications of the infection, such as acute rheumatic fever.78–80 For this economic model we have therefore assumed a 1-year time horizon, in which we model only one episode of strep A per patient, and we have assumed that this time horizon is sufficient to capture the impact of rare but serious complications of the infection on economic costs and outcomes. This differs from the 14-day time horizon originally specified in the EAG protocol for this self-limiting illness, in which the majority of cases would be expected to resolve satisfactorily.
The model takes account of the prevalence of disease in the modelled population, the test accuracy of clinical scoring algorithms and point-of-care tests, the proportion of patients treated with immediate and delayed antibiotics who are given a positive or negative clinical score and/or test result (prescribing behaviour of treating clinicians) and the probability of developing important but rare complications of the infection (i.e. suppurative complications, such as peritonsillar abscess and quinsy,81 and non-suppurative complications, such as acute rheumatic fever). 82 Penicillin-induced rash and anaphylactic complications of penicillin are incorporated as adverse effects of treatment. 81,83
The model estimated costs at 2017/18 prices. Economic costs accrued over the modelled time horizon arise from resource use associated with the simulated care pathways. They include the costs of the point-of-care tests (including the additional cost of confirmatory throat culture for a negative test result), GP consultations, antimicrobial therapy and treatment for strep A-related complications and the unwanted effects of penicillin. QALYs are calculated by subtracting the utility decrements associated with strep A infection and related complications from the general UK population utility norm, and weighting the result by the modelled time horizon in years. No discounting was applied to costs and benefits owing to the 1-year time horizon.
The base-case analysis assumes that patients presenting with suspected strep A in the usual-care arm receive immediate or delayed antimicrobial treatment based on clinical assessment and outcome of clinical scoring algorithm indicating possible strep A infection. We assumed a score of ≥ 3 points on the Centor (or FeverPAIN score of ≥ 4 points) as the threshold for commencing immediate antibiotics (or testing for those in the intervention arm), as shown in Figure 1 and in line with recent NICE guidance on antimicrobial prescribing for acute sore throat infections. 8
We explored the impact of alternative thresholds (Centor score of ≥ 2 points and ≥ 1 point) for commencing antibiotic treatment and for testing. These alternative thresholds have different performance (sensitivity and specificity) from the Centor score of ≥ 3 points; hence, they can be considered as assessing an alternative performance of the Centor tool. For the intervention arm, we assumed that patients presenting with suspected strep A will be screened first using a clinical scoring tool for signs and symptoms of the infection. Those screening positive (i.e. Centor score of ≥ 3 points or FeverPAIN score of ≥ 4 points) are offered a point-of-care test, followed by immediate antibiotics if the test indicates strep A infection. Those screening negative according to the clinical scoring algorithm or test are offered a delayed antibiotic prescription with a probability of 0.49 and 0.29 in the usual-care and test arms, respectively, based on the PRImary care Streptococcal Management (PRISM) trial data.6
Over the 1-year time horizon, patients with suspected strep A infection receiving either immediate or delayed antibiotics can make a complete recovery or go on to develop complications requiring a period of hospital stay. The risk of developing serious complications related to strep A is modelled as a function of antimicrobial treatment, so that patients who are correctly diagnosed and appropriately treated have a lower risk of serious strep A complications than those who are incorrectly diagnosed and receive no antimicrobial treatment. Separate models (each with the same underlying structure depicted in Figures 12–14) are specified for adults and children in primary and secondary care settings.
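As an illustration of the decision tree logic described above, the sketch below (not the EAG's actual model code) enumerates the main branches for one arm: clinical score, then point-of-care test in the intervention arm, then immediate or delayed antibiotics, then complication risk. All parameter values are placeholders standing in for the estimates in Tables 20–23, the handling of score-negative patients in the test arm is our assumption, and complication risk is attached only to true strep A infections as a simplification.

```python
# Sketch of the decision tree branch logic (assumed structure, illustrative
# placeholder parameters drawn from Tables 20-23 of this report).
from dataclasses import dataclass

@dataclass
class Params:
    prev: float = 0.226               # strep A prevalence, adults (Table 22)
    sens_score: float = 0.49          # Centor >= 3 points (Table 20)
    spec_score: float = 0.82
    sens_test: float = 0.85           # placeholder point-of-care test accuracy
    spec_test: float = 0.95
    p_delayed_neg_score: float = 0.51 # delayed prescription, negative score
    p_delayed_neg_test: float = 0.267 # delayed prescription, negative test
    p_use_delayed: float = 0.46       # delayed prescription actually used
    p_comp_treated: float = 0.013     # complication risk, treated infection
    p_comp_untreated: float = 0.015   # complication risk, untreated infection

def p_antibiotics(p: Params, has_strep: bool, test_arm: bool) -> float:
    """Probability that a presenting patient ends up taking antibiotics."""
    p_score_pos = p.sens_score if has_strep else 1 - p.spec_score
    if not test_arm:  # usual care: positive score -> immediate antibiotics
        return p_score_pos + (1 - p_score_pos) * p.p_delayed_neg_score * p.p_use_delayed
    # intervention: positive score -> point-of-care test
    p_test_pos = p.sens_test if has_strep else 1 - p.spec_test
    return (p_score_pos * (p_test_pos + (1 - p_test_pos) * p.p_delayed_neg_test * p.p_use_delayed)
            + (1 - p_score_pos) * p.p_delayed_neg_score * p.p_use_delayed)

def expected_complications(p: Params, test_arm: bool) -> float:
    """Expected probability of a strep A complication per presenting patient."""
    p_abx = p_antibiotics(p, True, test_arm)
    return p.prev * (p_abx * p.p_comp_treated + (1 - p_abx) * p.p_comp_untreated)

pars = Params()
for arm, test_arm in [("usual care", False), ("point-of-care test", True)]:
    print(arm, round(expected_complications(pars, test_arm), 5))
```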
Details of the methodology used to derive parameter inputs and the data sources used to inform estimates are discussed in the following sections.
Effectiveness evidence used in the economic model
Accuracy of clinical scoring algorithms (all models)
Accuracy in the usual-care arm was based on estimates of sensitivity and specificity of the Centor score taken from a published meta-analysis of 12 studies by Aalbers et al. 84 Table 20 summarises the reported estimates of sensitivity and specificity of the Centor score at cut-off points of ≥ 1, ≥ 2, ≥ 3 and 4 points for positive strep A infection. The base-case model used the estimates at a cut-off point of ≥ 3 points for a positive result and < 3 points for a negative result. At this threshold, the Centor score has a sensitivity of 0.49 (95% CI 0.38 to 0.60) and a specificity of 0.82 (95% CI 0.72 to 0.88). Alternative thresholds on the Centor score were explored in sensitivity analyses. 84 However, we were unable to evaluate the FeverPAIN clinical score owing to a lack of accuracy estimates in a format suitable for the economic model (i.e. sensitivity and specificity of the FeverPAIN at a cut-off point of ≥ 4 points).
Centor threshold (points) for positive strep A infection | Sensitivity (95% CI) | Specificity (95% CI) | Number of primary studies included in the meta-analysis | Distributional form in model |
---|---|---|---|---|
≥ 1 | 0.95 (0.91 to 0.97) | 0.18 (0.12 to 0.26) | 11 | Normal (logit scale) |
≥ 2 | 0.79 (0.71 to 0.86) | 0.55 (0.45 to 0.65) | 12 | Normal (logit scale) |
≥ 3 | 0.49 (0.38 to 0.60) | 0.82 (0.72 to 0.88) | 11 | Normal (logit scale) |
4 | 0.18 (0.12 to 0.27) | 0.95 (0.92 to 0.97) | 11 | Normal (logit scale) |
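The 'Normal (logit scale)' distributional form reported in the table could be implemented as in the following sketch, under the assumption that the logit of each accuracy estimate is sampled from a normal distribution whose standard error is recovered from the reported 95% CI; this reflects a common convention rather than a statement of the EAG's exact method.

```python
# Sketch (assumed implementation of "Normal (logit scale)"): sample sensitivity
# or specificity on the logit scale and back-transform to the probability scale.
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

def sample_accuracy(point, ci_low, ci_high, n=10000, rng=None):
    """Draw accuracy values via a normal distribution on the logit scale."""
    rng = rng or np.random.default_rng(0)
    mu = logit(point)
    se = (logit(ci_high) - logit(ci_low)) / (2 * 1.96)   # 95% CI width -> SE
    return inv_logit(rng.normal(mu, se, n))

# Example: Centor score >= 3 points, sensitivity 0.49 (95% CI 0.38 to 0.60)
draws = sample_accuracy(0.49, 0.38, 0.60)
print(round(draws.mean(), 3), np.round(np.percentile(draws, [2.5, 97.5]), 3))
```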
Accuracy of point-of-care tests
Estimates of test accuracy for the point-of-care tests obtained from our systematic review with and without meta-analyses are summarised in Table 21 by test, clinical setting (primary care) and patient population (adults and children). When no studies reporting accuracy data were identified in our systematic review, we obtained the estimates from either the manufacturer website or the manufacturer submissions (submitted directly to NICE in response to a request for information). Test accuracy data from published sources identified in the clinical effectiveness review were available for six (28.6%) of the 21 tests; a further four tests (19.0%) had accuracy data from both published sources and manufacturer submissions, six tests (28.6%) had only manufacturer data and two tests (9.5%) had FDA data. Test accuracy data were not available for the three (14.3%) remaining tests (Biopanda Reagents' Strep A Rapid Test strip, bioNexia Strep A cassette and bioNexia Strep A plus cassette). Two of the three tests (bioNexia Strep A cassette and bioNexia Strep A plus cassette) were excluded from the economic modelling of individual tests owing to a lack of test accuracy data; the accuracy of Biopanda Reagents' Strep A Rapid Test strip was assumed to be equal to that of the cassette version of this test, for which accuracy estimates were available. In general, estimates of sensitivity and specificity obtained from published sources tended to be variable and lower than those provided by the manufacturers. For example, sensitivity of point-of-care testing in adults based on published sources ranged from 0.68, for Abbott Laboratories' Clearview Exact Strep A cassette, to 1.00, for the QuikRead Go Strep A test kit, whereas estimates provided in the manufacturer submissions ranged from 0.95, for Biopanda Reagents' Strep A Rapid Test cassette, to 0.98, for nal von minden GmbH's NADAL Strep A test. A similar trend is observed for specificity, with the manufacturers' estimates being generally much higher than estimates based on published data. Thus, the source of test accuracy data is likely to be an important driver of cost-effectiveness. Economic models for tests whose accuracy estimates are based solely on manufacturers' data, with no peer-reviewed published data, are likely to overestimate test accuracy and, therefore, the results of these models cannot be reliably interpreted.
Test ID | Test name | Manufacturer | Sensitivity (95% CI) | Specificity (95% CI) | Distribution | Data source (first author and year of publication) |
---|---|---|---|---|---|---|
Adults | ||||||
1 | Clearview Exact Strep A cassette | Abbott Laboratories | 0.68 (0.54 to 0.8) | 0.95 (0.92 to 0.97) | Normal (logit) | One abstract (Andersen 200356) |
2 | Clearview Exact Strep A dipstick – test strip | Abbott Laboratories | 0.68 (0.54 to 0.8) | 0.95 (0.92 to 0.97) | Normal (logit) | One abstract (Andersen 200356) |
3 | BD Veritor Plus system group A Strep Assay – cassette | Becton Dickinson | 0.78 (0.67 to 0.87) | 0.9 (0.86 to 0.93) | Normal (logit) | Two studies (Berry 201820 and Azrad 201934) |
4 | Strep A Rapid Test – cassette | Biopanda Reagents | 0.95 (0.9 to 0.98) | 0.98 (0.96 to 0.99) | Normal (logit) | One manufacturer response to NICE |
5 | Strep A Rapid Test – test strip | Biopanda Reagents | No data | |||
6 | NADAL Strep A – test strip | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
7 | NADAL Strep A – cassette | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
8 | NADAL Strep A plus – cassette | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
9 | NADAL Strep A plus – test strip | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
10 | NADAL Strep A scan test – cassette | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
11 | OSOM Strep A test – test strip | Sekisui Diagnostics | 0.92 (0.76 to 0.98) | 0.96 (0.89 to 0.99) | Normal (logit) | One study (Llor 201145) |
12 | QuikRead Go Strep A test kit | Orion Diagnostica | 1 (0.85 to 1) | 0.79 (0.6 to 0.92) | Normal (logit) | One study (Stefaniuk 201752) |
13 | Alere TestPack +Plus Strep A – cassette | Abbott Laboratories | 0.95 (0.89 to 0.98) | 0.94 (0.88 to 0.98) | Normal (logit) | One study (Humair 200639) |
14 | bioNexia Strep A plus – cassette | bioMérieux | No data | |||
15 | bioNexia Strep A dipstick – test strip | bioMérieux | 0.85 (0.74 to 0.92) | 0.91 (0.84 to 0.95) | Normal (logit) | One abstract (Pauchard 200357) |
16 | Biosynex Strep A – cassette | Biosynex | No data | |||
17 | Sofia Strep A FIA | Quidel | 0.85 (0.81 to 0.89) | 0.95 (0.93 to 0.97) | Normal (logit) | One study (Lacroix 201823) |
18 | Alere i Strep A | Abbott Laboratories | 0.95 (0.74 to 1) | 0.97 (0.92 to 0.99) | Normal (logit) | One study (Cohen 201537) |
19 | Alere i Strep A 2 | Abbott Laboratories | 0.98 (0.96 to 1) | 0.93 (0.91 to 0.95) | Normal (logit) | One FDA report |
20 | cobas Liat Strep A Assay | Roche Diagnostics | 0.98 (0.93 to 1) | 0.93 (0.9 to 0.96) | Normal (logit) | One study (Wang 201724) |
21 | Xpert Xpress Strep A | Cepheid | 1 (0.99 to 1) | 0.94 (0.92 to 0.96) | Normal (logit) | One manufacturer response to NICE and one FDA report |
Children | ||||||
1 | Clearview Exact Strep A cassette | Abbott Laboratories | 0.68 (0.54 to 0.8) | 0.95 (0.92 to 0.97) | Normal (logit) | One study (Andersen 200356) |
2 | Clearview Exact Strep A dipstick – test strip | Abbott Laboratories | 0.68 (0.54 to 0.8) | 0.95 (0.92 to 0.97) | Normal (logit) | One study (Andersen 200356) |
3 | BD Veritor Plus system group A Strep Assay – cassette | Becton Dickinson | 0.76 (0.61 to 0.88) | 0.94 (0.89 to 0.97) | Normal (logit) | One study (Berry 201820) |
4 | Strep A Rapid Test – cassette | Biopanda Reagents | 0.95 (0.9 to 0.98) | 0.98 (0.96 to 0.99) | Normal (logit) | One manufacturer response to NICE |
5 | Strep A Rapid Test – test strip | Biopanda Reagents | No data | |||
6 | NADAL Strep A – test strip | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
7 | NADAL Strep A – cassette | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
8 | NADAL Strep A plus – cassette | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
9 | NADAL Strep A plus – test strip | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
10 | NADAL Strep A scan test – cassette | nal von minden GmbH | 0.98 (0.92 to 1) | 0.98 (0.94 to 0.99) | Normal (logit) | One manufacturer response to NICE |
11 | OSOM Strep A test – test strip | Sekisui Diagnostics | 0.94 (0.89 to 0.98) | 0.95 (0.91 to 0.98) | Normal (logit) | One study (Llor 201145) |
12 | QuikRead Go Strep A test kit | Orion Diagnostica | 0.80 (0.56 to 0.94) | 0.91 (0.72 to 0.99) | Normal (logit) | One study (Stefaniuk 201752) |
13 | Alere TestPack +Plus Strep A – cassette | Abbott Laboratories | 0.86 (0.79 to 0.91) | 0.99 (0.97 to 1) | Normal (logit) | One study (McIsaac 200446) |
14 | bioNexia Strep A plus – cassette | bioMérieux | No data | |||
15 | bioNexia Strep A dipstick – test strip | bioMérieux | 0.85 (0.74 to 0.92) | 0.91 (0.84 to 0.95) | Normal (logit) | One abstract (Pauchard 201357) |
16 | Biosynex Strep A – cassette | Biosynex | No data | |||
17 | Sofia Strep A FIA | Quidel | 0.85 (0.81 to 0.89) | 0.95 (0.93 to 0.97) | Normal (logit) | One study (Lacroix 201823) |
18 | Alere i Strep A | Abbott Laboratories | 0.98 (0.95 to 1) | 0.96 (0.89 to 1) | Normal (logit) | Three studies (Berry 2018,20 Cohen 201537 and Weinzierl 201854) |
19 | Alere i Strep A 2 | Abbott Laboratories | 0.98 (0.96 to 1) | 0.93 (0.91 to 0.95) | Normal (logit) | One FDA report |
20 | cobas Liat Strep A Assay | Roche Diagnostics | 0.98 (0.93 to 1) | 0.93 (0.9 to 0.96) | Normal (logit) | One study (Wang 201724) |
21 | Xpert Xpress Strep A | Cepheid | 1 (0.99 to 1) | 0.94 (0.92 to 0.96) | Normal (logit) | One manufacturer response to NICE and one FDA report |
Prevalence of group A streptococcal infection in the modelled population
Data on the prevalence of strep A in UK adults were available in only one of the 38 published studies, abstracts and manufacturer reports submitted to NICE that were included in our test accuracy review. The study by Little et al.6 from the review85 (with additional data from the full HTA report76) found a prevalence of pathogenic streptococcal infection of 34% (95% CI 31% to 38%; 204/597) among patients aged ≥ 5 years presenting in UK primary care settings. Of these infections, 136 (66.7%) were strep A, giving a strep A prevalence of 22.7% (136/597). This study did not recruit patients consecutively, meaning that there may be bias in the sample, which could have affected the prevalence estimate. As there were no UK adult studies in secondary care, this estimate was used across both primary and secondary care settings (Table 22).
Patient population and clinical setting | Number of studies (systematic review) | Median prevalence, % (range) (systematic review) | Central estimate used in model | SE | Distribution | Source |
---|---|---|---|---|---|---|
Adults | | | | | | |
Primary and secondary care | One6 | 22.6 | 0.226 | 0.051 | Beta | Systematic review |
Children | | | | | | |
Primary and secondary care | Three3,46,52 | 30.2 (26.3–34.1) | 0.302 | 0.015 | Beta | Systematic review |
There were no clear UK estimates of prevalence in children from the systematic review; a median value of 30.2% was therefore calculated from three non-UK studies of children in primary care.24,46,52
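As a brief illustration of how the central estimates and standard errors in Table 22 can be turned into beta distribution parameters for the probabilistic analysis, the sketch below uses a method-of-moments conversion; this is an assumption about the parameterisation, not a description of the EAG's code.

```python
# Sketch (method-of-moments assumption): convert a proportion's mean and SE
# into alpha/beta parameters of a beta distribution for probabilistic analysis.
def beta_from_mean_se(mean, se):
    var = se ** 2
    common = mean * (1 - mean) / var - 1   # valid only when var < mean * (1 - mean)
    return mean * common, (1 - mean) * common   # (alpha, beta)

# Table 22 inputs: adult prevalence 0.226 (SE 0.051); child prevalence 0.302 (SE 0.015)
for mean, se in [(0.226, 0.051), (0.302, 0.015)]:
    alpha, beta = beta_from_mean_se(mean, se)
    print(round(alpha, 1), round(beta, 1))
```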
Treatment-related probabilities and complication rates
Treatment-related probabilities and complication rates following strep A that were used in the economic model are presented in Table 23. The proportion of patients attending repeat consultations for sore throat infections (used to inform calculation of treatment costs) was obtained from Little et al. 86 In this large cohort study of UK patients presenting in primary care with a sore throat, a total of 889 (14.2%) repeat consultations for new or resolved symptoms were reported among 13,288 adults and adolescents.
Description of parameter | Mean | SEa | Distribution | Source (first author and year of publication) |
---|---|---|---|---|
General practice | ||||
Proportion attending repeat GP consultation following strep A infection | 0.142 | 0.007 | Beta | Little 201386 |
Antibiotic-prescribing probabilities | ||||
Probability of antibiotic use: Centor score of ≥ 3 points or positive test (immediate prescription) | 1 | | | Little 20136 |
Probability of antibiotic use: Centor score of < 3 points (delayed prescription, usual-care arm) | 0.51 | 0.026 | Beta | Little 20136 |
Probability of antibiotic use: negative test (delayed prescription, intervention arm) | 0.267 | 0.014 | Beta | Little 20136 |
Probability of antibiotic use: delayed prescription | 0.46 | 0.023 | Beta | Little 20136 |
Complication rates following strep A infection | ||||
Probability of complications: antibiotics (treated infection) | 0.013 | 0.0005 | Beta | Little 201386 |
Probability of complications: no antibiotics (untreated infection) | 0.015 | 0.0007 | Beta | Little 201386 |
Proportion of complications that are non-suppurative (i.e. rheumatic fever) | 0.0001 | | | Analyst assumption |
Adverse effects of penicillin | ||||
Penicillin-induced rash | 0.02 | | Beta | Neuner 200378 |
Penicillin-induced anaphylaxis/sepsis | 0.0001 | | Beta | Neuner 200378 |
In base-case models, the probability of commencing antibiotic treatment given a positive clinical score (defined as Centor score of ≥ 3 points) in the usual-care arm or a positive clinical score and test result in the intervention arm was set to 1 based on the prescribing behaviour of GPs reported in the PRISM trial. 6 The probability of a delayed prescription given a negative clinical score (defined as Centor score of < 3 points in the base case) was set to 0.51 based on data suggesting that 91 out of 178 patients in the clinical score arm of the PRISM trial with a FeverPAIN score of < 4 points were offered a delayed prescription,6 with the assumption that a Centor score of < 3 points is equivalent to a FeverPAIN score of < 4 points. The probability of a delayed prescription given a negative test was set to 0.273 based on the PRISM data (48/174 patients in the clinical score plus test arm were given a delayed prescription). The probability of antibiotic use among those receiving a delayed prescription was set to 0.46 based on PRISM data showing reported antibiotic use among the 75 out of 164 patients in the control arm who were offered delayed prescription.
Complications for treated (i.e. antibiotics given) and untreated (no antibiotics given) strep A infections were also estimated based on another Little et al.86 study: 78 and 75 complications (quinsy, sinusitis, otitis media and cellulitis) were reported among 5932 treated and 4974 untreated individuals, generating complication rates of 1.3% and 1.5%, respectively. As this study did not report rates for rare but important non-suppurative sequelae of strep A sore throat, such as acute rheumatic fever,82 we assumed that the majority of complications were suppurative in nature, with only a tiny proportion of patients (no more than 0.01%) going on to develop non-suppurative sequelae. The impact of this assumption on the cost-effectiveness estimates was assessed by halving and doubling it in sensitivity analyses. We assumed that 2% of patients who were prescribed antibiotics (100% of those prescribed immediate antibiotics and 46% of those prescribed delayed antibiotics) would go on to develop penicillin-induced rash and 0.1% would develop penicillin-induced anaphylaxis/sepsis, based on estimates reported in a previous economic evaluation of diagnostic and treatment strategies for adults with streptococcal pharyngitis.78 Sensitivity analysis explored the impact of halving and doubling the complications associated with penicillin use on the base-case cost-effectiveness results.
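The sketch below simply re-derives the probabilities quoted in this section from the reported counts, together with the halving/doubling range used in the sensitivity analysis for the assumed non-suppurative proportion; it is a worked check rather than model code.

```python
# Worked check of the probabilities quoted above, reproduced from reported counts.
quantities = {
    "delayed prescription, negative clinical score (91/178)": 91 / 178,
    "antibiotic use among delayed prescriptions (75/164)": 75 / 164,
    "complication rate, treated infection (78/5932)": 78 / 5932,
    "complication rate, untreated infection (75/4974)": 75 / 4974,
}
for label, value in quantities.items():
    print(f"{label}: {value:.3f}")

base_nonsupp = 0.0001   # assumed proportion of complications that are non-suppurative
print([base_nonsupp / 2, base_nonsupp, base_nonsupp * 2])   # sensitivity analysis range
```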
Health utility and estimation of quality-adjusted life-year gains
Table 24 presents estimates of health utilities used to inform the economic model. A mean baseline utility of 0.863, equal to the mean utility norm for the general UK adult population,87 was assumed for the modelled adult population treated in primary and secondary care. For the child population models, we assumed a mean utility of 0.94, equivalent to the mean UK utility norm for the population aged < 25 years,87 the closest age group to children. Utility decrements associated with strep A and related complications, such as the development of peritonsillar abscess, rheumatic fever and anaphylactic complications of penicillin, were obtained from previously published economic evaluations of diagnostic and management strategies for adults with pharyngitis.78,79 The two studies78,79 reported losses of 0.15 and 0.25 quality-adjusted life-days for treated and untreated sore throat infections, respectively; the related complications with the greatest health impact were acute rheumatic fever, penicillin-induced anaphylaxis (sepsis), peritonsillar abscess and penicillin-induced rash, with estimated losses of 76.5, 9, 5 and 0.65 quality-adjusted life-days, respectively. These estimates translate into utility decrements of 0.000411 (0.15/365) and 0.000685 (0.25/365) for treated and untreated strep A infection, respectively, 0.00178 (0.65/365) for penicillin-induced rash, 0.0137 (5/365) for peritonsillar abscess, 0.025 (9/365) for penicillin-induced sepsis and 0.209 (76.5/365) for rheumatic fever. QALYs were calculated at the end of each pathway in the model by subtracting from the baseline utility of 0.86 (or 0.94 for the child model) the utility decrements associated with all outcomes occurring in the modelled pathway (assuming that utility decrements are additive), weighted by the modelled time horizon in years (i.e. 365/365 for the 1-year base-case time horizon). Disutility associated with the unwanted effects of penicillin (rash and anaphylaxis) was added to care pathways associated with treated infection (immediate or delayed antibiotic use), weighted by the respective event probability (0.02 for penicillin-induced rash and 0.0001 for anaphylactic reaction). For example, the total number of QALYs accrued from an uncomplicated strep A infection with complete resolution following immediate antibiotic treatment would be equal to (0.86 – 0.000410959 – 0.00003425 – 0.00000247) × 1 = 0.859552 QALYs over the 1-year time horizon considered in the base-case analysis for adults. Similarly, if this infection had resulted in a subsequent complication (e.g. an abscess), the total QALY estimate would be slightly lower, at (0.86 – 0.000410959 – 0.01369863 – 0.00003425 – 0.00000247) × 1 = 0.84585.
Utility/disutility | Mean | SE | Distribution | Source (first author and year of publication) |
---|---|---|---|---|
Baseline (UK population norm, adults) | 0.863 | 0.044 | Beta | Kind 199887 |
Baseline (UK population norm, children) | 0.94 | 0.048 | Beta | Kind 199887 |
Utility decrement associated with untreated infection | 0.000685 | 0.00005 | Beta | Neuner 200378 |
Utility decrement associated with treated infection | 0.000411 | 0.00003 | Beta | Neuner 200378 |
Utility decrement associated with penicillin-induced rash | 0.0017 | 0.0001 | Beta | Neuner 200378 |
Utility decrement associated with abscess | 0.0137 | 0.0007 | Beta | Neuner 200378 |
Utility decrement associated with penicillin-induced anaphylaxis (sepsis) | 0.025 | 0.0013 | Beta | Neuner 200378 |
Utility decrement associated with rheumatic fever | 0.209 | 0.011 | Beta | Neuner 200378 |
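The worked QALY arithmetic quoted above can be reproduced from the Table 24 decrements and the event probabilities in Table 23, as in the short sketch below; small rounding differences from the figures quoted in the text are expected.

```python
# Sketch reproducing the worked QALY arithmetic (adult model, 1-year horizon),
# using the Table 24 decrements and Table 23 event probabilities.
baseline = 0.86              # adult baseline utility (rounded, as in the text)
d_treated = 0.000411         # treated infection
d_rash = 0.0017              # penicillin-induced rash
d_anaphylaxis = 0.025        # penicillin-induced anaphylaxis (sepsis)
d_abscess = 0.0137           # peritonsillar abscess
p_rash, p_anaphylaxis = 0.02, 0.0001   # event probabilities used for weighting
time_horizon = 1.0           # years (365/365)

uncomplicated = (baseline - d_treated - p_rash * d_rash
                 - p_anaphylaxis * d_anaphylaxis) * time_horizon
with_abscess = uncomplicated - d_abscess * time_horizon
print(round(uncomplicated, 5), round(with_abscess, 5))   # ≈ 0.85955 and ≈ 0.84585
```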
Health and social care costs
Cost of tests
Table 25 presents the unit cost for each point-of-care test and estimates of resource use in terms of the additional GP time required to administer and process the test results. Cost data were available for 14 (66.7%) of the 21 tests considered in the NICE scope. The majority of the costs were provided by the manufacturers (submitted directly to NICE in response to a request for information) and ranged from £0.64 per test for Biopanda Reagents’ Strep A Rapid Test strip to £64.63 (2017/18 prices) for the cobas Liat Strep A Assay supplied by Roche Diagnostics. Unit costs for Abbott Laboratories’ Clearview Exact Strep A tests were obtained from the NHS supply chain catalogue at £1.92 per test for the Clearview Strep A dipstick – test strip and £2.72 for the cassette version. 88 The duration of additional GP time for processing test results was estimated based on information provided in the manufacturer submission and ranged from 5 to 12 minutes. Costs associated with additional GP time for processing test results are included in the base-case analysis. The costs of confirmatory swab culture following a negative test result are calculated as part of the costs associated with modelled pathways in the intervention arm except for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories), Alere i Strep A 2 (Abbott Laboratories), cobas Liat Strep A Assay (Roche Diagnostics) and all five NADAL tests supplied by nal von minden GmbH. Details of costing methods are given in the next section.
Test ID | Test name | Cost (£) | Test process time (minutes) | Source |
---|---|---|---|---|
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 2.72 | 5 | NHS Supply chain catalogue (National Product Code = HHH2552)88 |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 1.92 | 5 | Medisave UK Ltd89 |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | Test cost not available | ||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 0.82 | 5 | Manufacturer’s submissiona |
5 | Strep A Rapid Test – test strip (Biopanda Reagents) | 0.64 | 5 | Manufacturer’s submissiona |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 1.20 | 5 | Manufacturer’s submissiona |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 1.40 | 5 | Manufacturer’s submissiona |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 1.50 | 5 | Manufacturer’s submissiona |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 1.30 | 5 | Manufacturer’s submissiona |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 1.96 | 5 | Manufacturer’s submissiona |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | Test cost not available | ||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 4.34 | 5 | Manufacturer’s submissiona |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 2.70 | 5 | Manufacturer’s submissiona |
14 | bioNexia Strep A plus – cassette (bioMérieux) | Test cost not available | ||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | Test cost not available | ||
16 | Biosynex Strep A – cassette (Biosynex) | Test cost not available | ||
17 | Sofia Strep A FIA (Quidel) | Test cost not available | ||
18 | Alere i Strep A (Abbott Laboratories) | Test cost not available | ||
19 | Alere i Strep A 2 (Abbott Laboratories) | 22.94 | 5 | Test cost not available |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 64.63 | 6 | Manufacturer’s submissiona |
21 | Xpert Xpress Strep A (Cepheid) | 4.25 | 12 | Manufacturer’s submissiona |
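As an illustrative aside (not part of the authors’ costing), the way the base case combines the unit costs and processing times in Table 25 with the cost of GP time (approximately £4.02 per minute, introduced in the treatment costs section below) can be sketched as follows; the per-patient figures exclude the confirmatory culture applied to negative results, which is costed within the modelled pathways.

```python
# Minimal illustrative sketch (not the authors' code): per-patient cost of administering
# a point-of-care test, combining the unit costs and processing times in Table 25 with
# the GP time cost of roughly £4.02 per minute used in the base case.

GP_COST_PER_MINUTE = 4.02  # £ per minute of GP time (see treatment costs below)

# (unit cost in £, additional GP processing time in minutes) for a selection of tests
tests = {
    "Strep A Rapid Test - test strip (Biopanda Reagents)": (0.64, 5),
    "QuikRead Go Strep A test kit (Orion Diagnostica)": (4.34, 5),
    "cobas Liat Strep A Assay (Roche Diagnostics)": (64.63, 6),
    "Xpert Xpress Strep A (Cepheid)": (4.25, 12),
}

for name, (unit_cost, minutes) in tests.items():
    per_patient = unit_cost + minutes * GP_COST_PER_MINUTE
    print(f"{name}: £{per_patient:.2f} per patient tested")
```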
Treatment costs
Unit costs of health-care service use associated with the modelled care pathways are summarised in Table 26. As described in Model structure, the base-case models incorporate three treatment options for patients presenting with suspected strep A infection in primary care and secondary care settings: immediate antibiotics (option 1), delayed antibiotics with reported use (option 2) and delayed antibiotics that have not been used or no antibiotics offered (option 3). All three options account for repeat GP consultations at 14.2%6 over the modelled time horizon, with a typical GP consultation lasting 9.22 minutes at an average cost of £4.02 per minute90 and pain relief (500 mg of paracetamol) costing £0.74 per 32-tablet pack,91 but the options differ in the way that antibiotics are prescribed.
Treatment costs | Mean cost (£) | SE (£) | Distribution | Source |
---|---|---|---|---|
GP consultation (9.22 minutes) | 37.4 | 1.91 | Gamma | PSSRU unit costs 201790 |
Antibiotic (phenoxymethylpenicillin 250 mg, 28-tablet pack) | 0.91 | 0.046 | Gamma | BNF 74 (2017)91 |
Pain relief (paracetamol 500 mg, 32-tablet pack) | 0.74 | 0.037 | Gamma | BNF 74 (2017)91 |
Throat culture (swab) | 8.00 | 0.41 | Gamma | NHS Reference Costs 2017/18 92 |
Penicillin-induced rash [switch to 500 mg of erythromycin (Erythrocin®; ADVANZ Pharma, London, UK)] | 10.00 | 0.51 | Gamma | BNF 74 (2017)91 |
Treatment costs: sepsis | 1744.64 | 89.01 | Gamma | Derived from data reported in Hex et al. (2017)93 |
Treatment modality costs (assumptions) | ||||
Treatment option 1 (usual-care and intervention arms) and option 2 (usual-care arm): assume immediate/delayed antibiotics (£0.91) at initial consultation (£37.43); 14.2% reconsultations during which patients get paracetamol (£5.42) plus weighted treatment costs penicillin side effects (£1.12 per patient) | 44.89 | Derived from other treatment costs | ||
Treatment option 2 (intervention arm): assume that antibiotics (£0.91) given at initial consultation (£37.43); 14.2% reconsultations during which patients get paracetamol (£5.42), weighted treatment costs penicillin side effects (£1.12 per patient) and confirmatory culture (£8) | 52.89 | Derived from other treatment costs | ||
Treatment option 3 (usual-care arm): assume paracetamol (£0.74) at initial consultation (£37.43) and delayed antibiotic use among the 14.2% attending repeat consultation (£5.60) | 43.78 | Derived from other treatment costs | ||
Treatment option 3 (intervention arm): assume paracetamol (£0.74) at initial consultation (£37.43), delayed antibiotic use among the 14.2% attending repeat consultation (£5.60) and confirmatory throat culture (£8) | 51.77 | Derived from other treatment costs | ||
Complication of strep A costs | ||||
Treatment costs: abscess | 1571.28 | 80 | Gamma | NHS Reference Costs 2017/1892 [tonsillectomy, 19 years and over (HRG code CA60A)] |
Treatment costs: acute rheumatic fever | 1772.44 | 90.43 | Gamma | NHS Reference Costs 2017/1892 [other acquired cardiac conditions with a CC score of 6–8 (HRG code EB14C)] |
Under options 1 and 2, patients incur a cost of antibiotics at £0.91 per treatment course (phenoxymethylpenicillin 250 mg, 28-tablet pack)91 and costs associated with managing the adverse effects of penicillin: penicillin-induced rash [assumed to require a further GP contact at additional expense (£4.02 per minute) and a switch to erythromycin 500 mg at £10 per treatment course,91 weighted by 0.02, the probability of a rash] and penicillin-induced anaphylaxis [estimated at £1744 based on data reported in a 2017 cost of sepsis study93 (see Table 26), weighted by 0.0001, the probability of sepsis]. 78 No costs associated with antibiotic use are included under option 3 (delayed antibiotics prescription given but not used); however, we assume that 14.2% of patients attend a repeat consultation86 and will use the delayed antibiotics prescription under this option.
A confirmatory swab culture (£8)92 was added to options 2 and 3 for patients with a negative test result (intervention only) but not to option 1, as patients with a positive test result receive immediate antibiotics. On average, the estimated treatment costs based on these assumptions and a repeat consultation rate of 14.2% were £44.89 (option 1: intervention and usual-care arms; option 2: usual-care arm), £52.89 (option 2: intervention arm including confirmatory culture costs), £43.78 (option 3: usual-care arm) and £51.77 (option 3: intervention arm including confirmatory culture costs) (see Table 26).
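The derivation of these weighted treatment costs can be made explicit with the following minimal sketch (ours, not the authors’ code), which recombines the component figures quoted above; differences of a penny or so against the reported totals reflect rounding of the intermediate values.

```python
# Reproduces the weighted per-patient treatment costs described above (Table 26).
CONSULTATION = 37.43   # initial GP consultation (£)
ANTIBIOTICS  = 0.91    # phenoxymethylpenicillin course (£)
PARACETAMOL  = 0.74    # paracetamol 500 mg, 32-tablet pack (£)
SIDE_EFFECTS = 1.12    # weighted cost of penicillin rash/anaphylaxis per patient (£)
RECONSULT    = 0.142   # proportion attending a repeat GP consultation
CULTURE      = 8.00    # confirmatory throat swab culture (£)

# Options 1 and 2: antibiotics at the initial consultation; re-attenders receive paracetamol
option_1_2 = ANTIBIOTICS + CONSULTATION + RECONSULT * (CONSULTATION + PARACETAMOL) + SIDE_EFFECTS

# Option 3: paracetamol at the initial consultation; re-attenders use the delayed antibiotics
option_3 = PARACETAMOL + CONSULTATION + RECONSULT * (CONSULTATION + ANTIBIOTICS + SIDE_EFFECTS)

print(f"Options 1/2 (usual care): £{option_1_2:.2f}")            # Table 26 reports £44.89
print(f"Option 2 (intervention):  £{option_1_2 + CULTURE:.2f}")  # Table 26 reports £52.89
print(f"Option 3 (usual care):    £{option_3:.2f}")              # Table 26 reports £43.78
print(f"Option 3 (intervention):  £{option_3 + CULTURE:.2f}")    # Table 26 reports £51.77
```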
The cost of sepsis was estimated to be £1744 based on data reported in a study,93 which estimated that 93,973 adults would need treatment for sepsis in UK hospitals, at an annual total cost of £163,949,055 (see Table 26). The cost of treating strep A-related abscess was estimated at £1571 based on the NHS reference cost for a tonsillectomy in adults aged ≥ 19 years with a Healthcare Resource Group code of CA60A. 92 The cost of treating acute rheumatic fever was estimated at £1772.44 based on the NHS reference cost for other acquired cardiac conditions with a CC (complications and comorbidities) score of 6–8 and a Healthcare Resource Group code EB14C. 92
Probabilistic sensitivity analysis
A probabilistic sensitivity analysis (PSA) was conducted to explore the impact of parameter uncertainty on the base-case cost-effectiveness of point-of-care testing for strep A infection. The PSA was implemented via Monte Carlo simulation involving 1000 draws for all model inputs except the acquisition costs of the tests, which were entered as deterministic values. This enabled us to simulate 1000 replicates of the base-case ICER (displayed on cost-effectiveness planes) and to calculate the probability of cost-effectiveness at threshold values ranging from £0 to £100,000 per QALY gained (CEACs). The sensitivity and specificity of the clinical scoring algorithm and of the individual point-of-care tests were assumed to be drawn from separate normal distributions on the logit scale, as the relatively small number of studies reporting test-specific accuracy data precluded joint synthesis of sensitivity and specificity and estimation of the between-study correlation (see Tables 20 and 21). Prevalence, probabilities and utility values (see Tables 22–24) were assumed to be drawn from beta distributions, reflecting the scale of measurement for quantities constrained to lie in the interval 0–1. Costs were assigned gamma distributions to reflect the distribution of health-care costs, which cannot be less than 0 and are typically highly skewed to the right because a small number of individuals incur high costs. The uncertainty surrounding input parameters [standard errors (SEs) and CIs] was generally not available; we therefore assumed that ± 10% of the mean was equivalent to the lower and upper 95% confidence limits and calculated the SEs assuming an approximately normal distribution.
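A minimal sketch of this sampling scheme is given below (ours, not the authors’ code); the sensitivity, prevalence and consultation-cost values are examples only, and the incremental cost and QALY arrays are placeholders standing in for a full run of the decision tree.

```python
# Illustrative sketch of the PSA distributional assumptions described above:
# logit-normal draws for test accuracy, beta for probabilities/utilities, gamma for
# costs, and the +/-10% rule used to derive SEs where none were reported.
import numpy as np

rng = np.random.default_rng(1)
N = 1000  # number of Monte Carlo replicates

def se_from_10pct(mean):
    """Assume mean +/- 10% spans the 95% CI and back out an approximate SE."""
    return 0.1 * mean / 1.96

def draw_logit_normal(mean, se, size):
    """Draw a proportion (e.g. sensitivity) from a normal distribution on the logit scale."""
    logit = np.log(mean / (1 - mean))
    se_logit = se / (mean * (1 - mean))  # delta-method approximation
    return 1 / (1 + np.exp(-rng.normal(logit, se_logit, size)))

def draw_beta(mean, se, size):
    """Beta draws parameterised by mean and SE (method of moments)."""
    nu = mean * (1 - mean) / se**2 - 1
    return rng.beta(mean * nu, (1 - mean) * nu, size)

def draw_gamma(mean, se, size):
    """Gamma draws parameterised by mean and SE."""
    return rng.gamma((mean / se) ** 2, se**2 / mean, size)

# Example inputs (illustrative values, not a specific test from the report)
sensitivity = draw_logit_normal(0.90, se_from_10pct(0.90), N)
prevalence = draw_beta(0.226, se_from_10pct(0.226), N)
consult_cost = draw_gamma(37.4, 1.91, N)

# A full PSA would propagate each set of draws through the decision tree; here we use
# placeholder incremental costs (£) and QALYs per 1000 patients to show the CEAC step.
inc_cost = rng.normal(5000, 500, N)
inc_qaly = rng.normal(0.0035, 0.0005, N)

for threshold in (0, 20_000, 100_000):
    prob = np.mean(threshold * inc_qaly - inc_cost > 0)  # positive net monetary benefit
    print(f"P(cost-effective at £{threshold:,} per QALY) = {prob:.2f}")
```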
Base-case analyses
The main base-case model was based on the adult population in a primary care setting. This model was then adapted for adults in a secondary care setting, for children in a primary care setting and for children in a secondary care setting.
Adult primary care model: base-case analysis results
The base-case cost-effectiveness results for adults treated in primary care are presented in Table 27 for the 14 of the 21 tests for which test accuracy and cost data were available. The incremental QALYs accrued over the 1-year modelled time horizon were small; estimates of simulated costs and QALYs are therefore presented per 1000 individuals to aid clarity in the results tables and text. The mean simulated costs under base-case assumptions were £49,147 per 1000 individuals treated in primary care under usual-care practice, and ranged from £54,394 per 1000 individuals in the test group using the NADAL Strep A – test strip (nal von minden GmbH) to £71,277 per 1000 individuals using the cobas Liat Strep A Assay (Roche Diagnostics). The corresponding estimated mean QALYs were 859.825 per 1000 individuals under usual-care practice, and ranged from 859.821 QALYs per 1000 individuals in the intervention group using Abbott Laboratories’ Clearview Exact Strep A cassette or test strip to 859.829 QALYs per 1000 individuals using Cepheid’s Xpert Xpress Strep A test. In terms of incremental cost-effectiveness, the base-case estimates suggest that usual care was cheaper and generated marginally more QALYs than (and therefore dominated) the cassette and strip versions of Abbott Laboratories’ Clearview Exact Strep A test. ICERs for the remaining 12 tests suggest that testing was more costly and more effective than usual care, with ICERs ranging from £1,353,677 per QALY gained for nal von minden GmbH’s NADAL Strep A test strip to £6,059,081 per QALY gained for Roche Diagnostics’ cobas Liat Strep A Assay compared with usual care.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental cost (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care |
---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 49,147 | 859.82458955 | 0 | 0.0000000 | – | |
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 56,180 | 859.82063008 | 7033 | –0.0039595 | Dominated |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 55,980 | 859.82063008 | 6833 | –0.0039595 | Dominated |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | |||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 55,442 | 859.82769587 | 6295 | 0.0031063 | 2,026,496 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 55,397 | 859.82769587 | 6250 | 0.0031063 | 2,012,006 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 54,394 | 859.82846603 | 5248 | 0.0038765 | 1,353,677 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 54,444 | 859.82846603 | 5298 | 0.0038765 | 1,366,577 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 54,469 | 859.82846603 | 5323 | 0.0038765 | 1,373,029 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 54,419 | 859.82846603 | 5273 | 0.0038765 | 1,360,126 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 54,584 | 859.82846603 | 5438 | 0.0038765 | 1,402,700 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | |||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 56,083 | 859.82810269 | 6936 | 0.0035131 | 1,974,319 |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 54,781 | 859.82751669 | 5634 | 0.0029271 | 1,924,717 |
14 | bioNexia Strep A plus – cassette (bioMérieux) | |||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | |||||
16 | Biosynex Strep A – cassette (Biosynex) | |||||
17 | Sofia Strep A FIA (Quidel) | |||||
18 | Alere i Strep A (Abbott Laboratories) | |||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 59,837 | 859.82824206 | 10,691 | 0.0036525 | 2,926,915 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 71,277 | 859.82824206 | 22,131 | 0.0036525 | 6,059,081 |
21 | Xpert Xpress Strep A (Cepheid) | 63,323 | 859.82854357 | 14,177 | 0.0039540 | 3,585,436 |
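The incremental columns of Table 27 follow directly from the tabulated means; the short sketch below (ours, not the authors’ code) shows the ICER and dominance logic for three of the tests, with small discrepancies against the reported ICERs arising from rounding of the tabulated values.

```python
# Illustrative check of the incremental results in Table 27 (adult primary care base case),
# using the reported mean costs (£) and QALYs per 1000 individuals.

def icer_vs_usual_care(cost, qaly, cost_uc, qaly_uc):
    """Return the ICER versus usual care, or 'Dominated' if more costly and no more effective."""
    d_cost, d_qaly = cost - cost_uc, qaly - qaly_uc
    if d_cost > 0 and d_qaly <= 0:
        return "Dominated"
    return d_cost / d_qaly

usual_care = (49_147, 859.82458955)

tests = {
    "Clearview Exact Strep A cassette (Abbott Laboratories)": (56_180, 859.82063008),
    "NADAL Strep A - test strip (nal von minden GmbH)": (54_394, 859.82846603),
    "cobas Liat Strep A Assay (Roche Diagnostics)": (71_277, 859.82824206),
}

for name, (cost, qaly) in tests.items():
    result = icer_vs_usual_care(cost, qaly, *usual_care)
    if isinstance(result, str):
        print(f"{name}: {result}")
    else:
        print(f"{name}: £{result:,.0f} per QALY gained")
```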
Adult primary care model: probabilistic sensitivity analyses
Table 28 presents probabilistic estimates for adults presenting in primary care. The probabilistic estimates were very similar to the deterministic base-case results, with ICERs indicating that usual care dominated two of the 14 tests considered in the economic modelling (the Clearview Exact Strep A cassette and the Clearview Exact Strep A dipstick – test strip supplied by Abbott Laboratories). Base-case probabilistic ICERs for the remaining 12 tests ranged from £1,495,402 per QALY gained for the NADAL Strep A plus – test strip supplied by nal von minden GmbH to £6,498,666 per QALY gained for the cobas Liat Strep A Assay supplied by Roche Diagnostics. The probability of testing being cost-effective was zero at a cost-effectiveness threshold of £20,000 per QALY gained under the base-case assumptions and model inputs, regardless of the point-of-care test used in comparison with usual care.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental costs (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care | Probability of cost-effectiveness at £20,000 per QALY |
---|---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 49,295 | 861.0209476 | 0 | 0.0000000 | 1 | ||
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 56,387 | 861.0168718 | 7092 | –0.0040758 | Dominated | 0 |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 56,183 | 861.0169652 | 6888 | –0.0039824 | Dominated | 0 |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | ||||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 55,636 | 861.0239908 | 6341 | 0.0030432 | 2,083,738 | 0 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 55,590 | 861.0239997 | 6295 | 0.0030521 | 2,062,510 | 0 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 54,582 | 861.0244537 | 5288 | 0.0035061 | 1,508,134 | 0 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 54,634 | 861.0243797 | 5339 | 0.0034321 | 1,555,613 | 0 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 54,658 | 861.0244709 | 5363 | 0.0035233 | 1,522,258 | 0 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 54,607 | 861.0245002 | 5313 | 0.0035526 | 1,495,402 | 0 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 54,775 | 861.0244341 | 5480 | 0.0034865 | 1,571,686 | 0 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | ||||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 56,273 | 861.0244144 | 6978 | 0.0034668 | 2,012,942 | 0 |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 54,968 | 861.0237881 | 5673 | 0.0028405 | 1,997,326 | 0 |
14 | bioNexia Strep A plus – cassette (bioMérieux) | ||||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | ||||||
16 | Biosynex Strep A – cassette (Biosynex) | ||||||
17 | Sofia Strep A FIA (Quidel) | ||||||
18 | Alere i Strep A (Abbott Laboratories) | ||||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 60,056 | 861.0244146 | 10,761 | 0.0034670 | 3,103,806 | 0 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 71,565 | 861.0243746 | 22,271 | 0.0034270 | 6,498,666 | 0 |
21 | Xpert Xpress Strep A (Cepheid) | 63,581 | 861.0248804 | 14,286 | 0.0039328 | 3,632,549 | 0 |
Adult primary care model: exploratory sensitivity analyses
Exploratory analyses were conducted to test the robustness of the economic base-case estimates for adults presenting in primary care with suspected strep A. The base-case ICERs are highly sensitive to various modelling assumptions and input values. In the sections that follow, sensitivity analysis results are presented for only those tests for which the ICER is sensitive to the alternative modelling assumptions and parameter inputs (as indicated by changes in the direction of incremental costs or incremental QALYs compared with usual care). See Appendix 9 for more detail.
Adult secondary care model: base-case analysis results
The primary care adult model (see Model structure) was adapted to model adult patients presenting with suspected strep A infection in secondary care settings (urgent care/walk-in centres and emergency departments). The modelled pathways remain the same as in the adult primary care model depicted in Figures 12–14. The sensitivity and specificity of the clinical score at the specified Centor score of ≥ 3 points for a positive strep A infection were left unchanged from the adult primary care model (see Table 20), as were the modelled pathway probabilities (see Table 23) and health-state utility values (see Table 24). However, the two models differ in the way that treatment and testing costs are calculated. The secondary care model assumes that patients with suspected strep A infection are presenting for the first time in secondary care and have not received any treatment in primary care. The cost of the initial GP consultation included in the adult primary care model is therefore excluded. However, the model does account for patients attending a GP consultation (and the associated costs) following hospital discharge, at a rate equal to the proportion attending repeat GP consultations in the primary care model (14.2%, based on figures reported in Little et al. 86). In addition, we assume that point-of-care testing within secondary care settings can be carried out within the standard allocated time for most hospital-based appointments, such that no additional time is required for administering and processing test results.
Test accuracy estimates were obtained from our systematic review and remained broadly the same as those used to inform the adult primary care model (see Table 21) except for three tests (OSOM Strep A test strip, QuikRead Go Strep A test kit and the Alere TestPack +Plus Strep A – cassette). Table 29 presents test accuracy estimates used in the adult secondary care model for these three point-of-care tests. Estimates of sensitivity changed from 0.92 in primary care to 0.94 in secondary care for the OSOM Strep A test strip, from 1.00 in primary care to 0.87 in secondary care for the QuikRead Go Strep A test kit and from 0.95 in primary care to 0.90 in secondary care for the Alere TestPack +Plus Strep A – cassette. However, estimates of specificity for the three tests remain broadly unchanged across primary and secondary care settings.
Test name | Sensitivity (95% CI) | Specificity (95% CI) | Assumed distribution | Data source (first author and year of publication) |
---|---|---|---|---|
OSOM Strep A test – test strip (Sekisui Diagnostics) | 0.94 (0.89 to 0.98) | 0.95 (0.91 to 0.98) | Normal (logit) | Five studies (Bura 2017,36 Llor 2009,44 Llor 2011,45 Rogo 201149 and Weinzierl 201854) |
QuikRead Go Strep A test kit (Orion Diagnostica) | 0.87 (0.78 to 0.95) | 0.78 (0.71 to 0.85) | Normal (logit) | Two studies (Azrad 201934 and Stefaniuk 201752) |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 0.90 (0.86 to 0.94) | 0.95 (0.92 to 0.96) | Normal (logit) | One study (Rosenberg 200250) and one abstract (Valverde 201858) |
Table 30 presents the cost-effectiveness results for adults in a secondary care setting. As with the adult primary care model, only the 14 of the 21 tests for which test accuracy and cost data were available have been included in this analysis. The pattern and direction of cost-effectiveness in the adult secondary care model are similar to those observed in the adult primary care model, although the ICERs were generally lower in the secondary care model.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental cost (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care |
---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 49,147 | 859.82458955 | 0 | 0.0000000 | ||
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 51,103 | 859.82063008 | 1957 | –0.0039595 | Dominated |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 50,903 | 859.82063008 | 1757 | –0.0039595 | Dominated |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | |||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 50,365 | 859.82769587 | 1219 | 0.0031063 | 392,342 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 50,320 | 859.82769587 | 1174 | 0.0031063 | 377,852 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 49,318 | 859.82846603 | 171 | 0.0038765 | 44,184 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 49,368 | 859.82846603 | 221 | 0.0038765 | 57,085 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 49,393 | 859.82846603 | 246 | 0.0038765 | 63,537 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 49,343 | 859.82846603 | 196 | 0.0038765 | 50,636 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 49,508 | 859.82846603 | 361 | 0.0038765 | 93,211 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | |||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 51,136 | 859.82474622 | 1990 | 0.0001567 | 12,700,432 |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 49,713 | 859.82627789 | 566 | 0.0016883 | 335,358 |
14 | bioNexia Strep A plus – cassette (bioMérieux) | |||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | |||||
16 | Biosynex Strep A – cassette (Biosynex) | |||||
17 | Sofia Strep A FIA (Quidel) | |||||
18 | Alere i Strep A (Abbott Laboratories) | |||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 54,761 | 859.82824206 | 5614 | 0.0036525 | 1,537,126 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 65,186 | 859.82824206 | 16,039 | 0.0036525 | 4,391,332 |
21 | Xpert Xpress Strep A (Cepheid) | 51,141 | 859.82854357 | 1994 | 0.0039540 | 504,287 |
Two tests (Abbott Laboratories’ Clearview Exact Strep A cassette and Clearview Exact Strep A dipstick – test strip) generated fewer QALYs than usual care at a higher cost and were therefore dominated by usual care. The remaining 12 tests all generated marginally more QALYs than usual care. The ICERs ranged from £44,184 per QALY gained for the NADAL Strep A – test strip (nal von minden GmbH) to £12,700,432 per QALY gained for the QuikRead Go Strep A test kit supplied by Orion Diagnostica.
Adult secondary care model: probabilistic sensitivity analyses
Probabilistic results for the adult secondary care model mirrored those of the adult primary care PSA. Results are shown in Table 31.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental cost (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care | Probability of cost-effectiveness at £20,000 per QALY |
---|---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 49,182 | 860.0288998 | 0 | 0.0000000 | 1 | ||
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 51,128 | 860.0249274 | 1947 | –0.0039724 | Dominated | 0 |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 50,924 | 860.024949 | 1743 | –0.0039508 | Dominated | 0 |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | ||||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 50,416 | 860.0319673 | 1234 | 0.0030675 | 402,358 | 0 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 50,370 | 860.0319912 | 1188 | 0.0030914 | 384,360 | 0 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 49,358 | 860.0323456 | 177 | 0.0034458 | 51,324 | 0 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 49,408 | 860.0324406 | 226 | 0.0035408 | 63,963 | 0 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 49,433 | 860.0324859 | 251 | 0.0035860 | 70,042 | 0 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 49,382 | 860.032473 | 201 | 0.0035731 | 56,186 | 0 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 49,550 | 860.0324781 | 368 | 0.0035783 | 102,876 | 0 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | ||||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 51,187 | 860.0289465 | 2005 | 0.0000467 | 42,951,995 | 0 |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 49,754 | 860.0306084 | 573 | 0.0017086 | 335,098 | 0 |
14 | bioNexia Strep A plus – cassette (bioMérieux) | ||||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | ||||||
16 | Biosynex Strep A – cassette (Biosynex) | ||||||
17 | Sofia Strep A FIA (Quidel) | ||||||
18 | Alere i Strep A (Abbott Laboratories) | ||||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 54,870 | 860.0323487 | 5688 | 0.0034488 | 1,649,300 | 0 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 65,430 | 860.0322608 | 16,248 | 0.0033610 | 4,834,450 | 0 |
21 | Xpert Xpress Strep A (Cepheid) | 51,204 | 860.0328714 | 2022 | 0.0039715 | 509,167 | 0 |
Adult secondary care model: exploratory sensitivity analyses
Exploratory analyses were conducted to test the robustness of the economic base-case estimates for adults presenting in secondary care with suspected strep A infection. The base-case ICERs are highly sensitive to various modelling assumptions and input values. In the sections that follow, sensitivity analysis results are presented for only those tests for which the ICER is sensitive to the alternative modelling assumptions and parameter inputs (as indicated by changes in the direction of incremental costs or incremental QALYs compared with usual care). See Appendix 10 for more detail.
Children’s primary care model: base-case results
The primary care adult model (see Model structure) was adapted to model children presenting with suspected strep A infection in a primary care setting. The modelled pathways remain the same as in the adult primary care model depicted in Figures 12–14. The prevalence of strep A changed from 22.6% in the adult primary care model to 30.2%, the median prevalence in our systematic review of test accuracy studies among children in primary care settings (see Table 22). The sensitivity and specificity of the clinical score at the specified Centor score of ≥ 3 points for a positive strep A infection were left unchanged (see estimates displayed in Table 20), as were the modelled pathway probabilities (see Table 23) and health-state utility values (see Table 24). Test accuracy estimates were obtained from our systematic review and remained broadly the same as those used to inform the adult primary care model (see Table 21) except for five tests (BD Veritor Plus system group A Strep Assay – cassette supplied by Becton Dickinson, OSOM Strep A test – test strip supplied by Sekisui Diagnostics, QuikRead Go Strep A test kit supplied by Orion Diagnostica, and Alere TestPack +Plus Strep A – cassette and Alere i Strep A, both supplied by Abbott Laboratories). See Table 21 for further details.
Treatment costs for peritonsillar abscess and related complications of strep A infection in children were estimated at £1420.50 (tonsillectomy, aged ≤ 18 years, Healthcare Resource Group code CA60B);92 this is slightly lower than the estimate used in the adult primary care model for these complications (£1571.28 for tonsillectomy, aged ≥ 19 years, Healthcare Resource Group code CA60A). 92 Treatment costs for penicillin-induced rash (£10, assuming a switch to another antibiotic such as 500 mg of erythromycin) and penicillin-induced anaphylaxis (£1744.64)93 were left unchanged from the adult models.
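As a hedged illustration (the structure and parameter names are ours, not the authors’), the adaptation from the adult to the children’s primary care model amounts to a small set of parameter overrides on an otherwise unchanged decision tree:

```python
# Illustrative parameter overrides for the children's primary care model, using the
# values described above; all other inputs are carried over from the adult model.

adult_primary_care = {
    "prevalence": 0.226,          # strep A prevalence among adults in primary care
    "baseline_utility": 0.863,    # UK adult utility norm
    "cost_abscess": 1571.28,      # tonsillectomy, aged >= 19 years (HRG CA60A)
    "cost_rash": 10.00,           # switch to erythromycin 500 mg
    "cost_anaphylaxis": 1744.64,  # derived from the sepsis costing study
}

children_primary_care = dict(
    adult_primary_care,
    prevalence=0.302,        # median prevalence among children in primary care
    baseline_utility=0.94,   # higher baseline utility in children
    cost_abscess=1420.50,    # tonsillectomy, aged <= 18 years (HRG CA60B)
)

print(children_primary_care)
```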
Overall, 14 of the 21 tests were included in the child primary care model. Cost-effectiveness estimates for these tests compared with usual care are presented in Table 32. Simulated mean costs and QALYs were multiplied by 1000 to aid clarity in presentation because of the small number of QALYs accrued over a 1-year time horizon. The base-case cost-effectiveness for children presenting in primary care largely mirrored that for the adult population. However, because of the slightly higher prevalence of strep A in children (30.2%) than in adults (22.6%), simulated costs over the 1-year time horizon were generally higher in the children’s model than those in the adult primary care model.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental cost (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care |
---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 50,185 | 939.77019917 | 0 | 0.0000000 | ||
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 57,773 | 939.76305927 | 7588 | –0.0071399 | Dominated |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 57,554 | 939.76305927 | 7369 | –0.0071399 | Dominated |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | |||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 56,899 | 939.77244279 | 6715 | 0.0022436 | 2,992,743 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 56,850 | 939.77244279 | 6665 | 0.0022436 | 2,970,792 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 55,952 | 939.77347194 | 5768 | 0.0032728 | 1,762,306 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 56,007 | 939.77347194 | 5822 | 0.0032728 | 1,779,026 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 56,035 | 939.77347194 | 5850 | 0.0032728 | 1,787,386 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 55,980 | 939.77347194 | 5795 | 0.0032728 | 1,770,666 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 56,160 | 939.77347194 | 5976 | 0.0032728 | 1,825,846 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | |||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 58,012 | 939.76701428 | 7827 | –0.0031849 | Dominated |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 56,389 | 939.76939575 | 6204 | –0.0008034 | Dominated |
14 | bioNexia Strep A plus – cassette (bioMérieux) | |||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | |||||
16 | Biosynex Strep A – cassette (Biosynex) | |||||
17 | Sofia Strep A FIA (Quidel) | |||||
18 | Alere i Strep A (Abbott Laboratories) | |||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 61,907 | 939.77326996 | 11,722 | 0.0030708 | 3,817,336 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 74,425 | 939.77326996 | 24,240 | 0.0030708 | 7,893,857 |
21 | Xpert Xpress Strep A (Cepheid) | 65,521 | 939.77368771 | 15,336 | 0.0034885 | 4,396,205 |
The mean costs simulated under base-case assumptions were £50,185 (£49,147 in the adult primary care model) per 1000 children treated in primary care under usual-care practice, and ranged from £55,952 (£54,394 in the adult primary care model) per 1000 children for the NADAL Strep A – test strip (nal von minden GmbH) to £74,425 (£71,277 in the adult primary care model) per 1000 children for the cobas Liat Strep A Assay supplied by Roche Diagnostics. Simulated QALYs were also higher for children treated in primary care than for adults because of the higher baseline utility in children (0.94), compared with a utility norm of 0.863 for adults in the UK. Simulated mean QALYs per 1000 children treated in primary care were 939.7702 (859.8246 in the adult primary care model) under usual-care practice, and ranged from 939.7631 (859.8206 in the adult primary care model) for Abbott Laboratories’ Clearview Exact Strep A cassette and test strip to 939.7737 (859.8285 in the adult primary care model) for Cepheid’s Xpert Xpress Strep A test.
In terms of incremental cost-effectiveness, the base-case estimates suggest that usual care was cheaper and generated marginally more QALYs than (and therefore dominated) the QuikRead Go Strep A test kit (Orion Diagnostica), the cassette and strip versions of the Clearview Exact Strep A test supplied by Abbott Laboratories and the Alere TestPack +Plus Strep A – cassette, also supplied by Abbott Laboratories. For the remaining 10 tests, testing for children in primary care under base-case assumptions produced ICERs ranging from £1,762,306 per QALY gained for the NADAL Strep A – test strip (nal von minden GmbH) to £7,893,857 per QALY gained for the cobas Liat Strep A Assay (Roche Diagnostics) compared with usual care.
Children’s primary care model: probabilistic sensitivity analyses
Probabilistic results for the children’s primary care model are shown in Table 33 and are in line with the corresponding deterministic results.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental cost (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care | Probability of cost-effectiveness at £20,000 per QALY |
---|---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 50,204 | 940.0932608 | 0 | 0.0000000 | 1 | ||
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 57,887 | 940.0862286 | 7683 | –0.0070321 | Dominated | 0 |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 57,668 | 940.0862027 | 7464 | –0.0070580 | Dominated | 0 |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | ||||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 57,016 | 940.0954301 | 6812 | 0.0021694 | 3,140,063 | 0 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 56,964 | 940.0955022 | 6760 | 0.0022415 | 3,015,747 | 0 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 56,046 | 940.0960689 | 5841 | 0.0028082 | 2,080,115 | 0 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 56,101 | 940.0960859 | 5897 | 0.0028251 | 2,087,246 | 0 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 56,128 | 940.0961173 | 5924 | 0.0028566 | 2,073,823 | 0 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 56,073 | 940.0961534 | 5869 | 0.0028927 | 2,028,782 | 0 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 56,255 | 940.0961639 | 6051 | 0.0029031 | 2,084,258 | 0 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | ||||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 58,149 | 940.0895996 | 7944 | –0.0036612 | Dominated | 0 |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 56,482 | 940.0924978 | 6278 | –0.0007629 | Dominated | 0 |
14 | bioNexia Strep A plus – cassette (bioMérieux) | ||||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | ||||||
16 | Biosynex Strep A – cassette (Biosynex) | ||||||
17 | Sofia Strep A FIA (Quidel) | ||||||
18 | Alere i Strep A (Abbott Laboratories) | ||||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 62,058 | 940.0960474 | 11,854 | 0.0027866 | 4,253,800 | 0 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 74,704 | 940.0960301 | 24,500 | 0.0027693 | 8,846,880 | 0 |
21 | Xpert Xpress Strep A (Cepheid) | 65,741 | 940.096771 | 15,536 | 0.0035102 | 4,426,070 | 0 |
Children’s primary care model: exploratory sensitivity analyses
Exploratory analyses were conducted to test the robustness of economic base-case estimates for children presenting in primary care with suspected strep A infection. The base-case ICERs are highly sensitive to various modelling assumptions and input values. In the sections that follow, sensitivity analysis results are presented for only those tests for which the ICER is sensitive to the alternative modelling assumptions and parameter inputs (as indicated by changes in the direction of incremental costs or incremental QALYs compared with usual care). See Appendix 11 for more detail.
Children in secondary care: base-case analysis results
The models for adults in secondary care (see Adult secondary care model: base-case analysis results) and children in primary care (see Children’s primary care model: base-case results) were adapted to model suspected strep A infection among children in secondary care settings (urgent care/walk-in centres and emergency departments). The modelled pathways remain the same as depicted in Figures 12–14. The prevalence rate was maintained at 30.2%, as in the children’s primary care model. Test accuracy estimates obtained from our systematic review remained broadly the same as those used to inform the primary care models except for six tests [BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson), OSOM Strep A test – test strip (Sekisui Diagnostics), QuikRead Go Strep A test kit (Orion Diagnostica), Alere TestPack +Plus Strep A – cassette (Abbott Laboratories), Alere i Strep A (Abbott Laboratories) and Xpert Xpress Strep A (Cepheid)]. Table 34 presents test accuracy estimates used in the children’s secondary care model for these tests.
Test name | Sensitivity (95% CI) | Specificity (95% CI) | Assumed distribution | Data source (first author and year) |
---|---|---|---|---|
OSOM Strep A test – test strip (Sekisui Diagnostics) | 0.94 (0.89 to 0.98) | 0.97 (0.95 to 0.99) | Normal (logit) | Two studies (Rogo 201149 and Weinzierl 201854) |
QuikRead Go Strep A test kit (Orion Diagnostica) | 0.87 (0.78 to 0.95) | 0.78 (0.71 to 0.85) | Normal (logit) | Two studies (Azrad 201934 and Stefaniuk 201752) |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 0.77 (0.73 to 0.8) | 0.97 (0.93 to 0.99) | Normal (logit) | Four studies (Kurtz 2000,42 Lacroix 2018,23 Penney 201648 and Santos 200351) |
Table 35 presents cost-effectiveness estimates for children treated in secondary care. As with the adult primary care model, only the 14 of the 21 tests for which test accuracy and cost data were available are included in this analysis. The base-case estimates suggest that usual care was cheaper and generated marginally more QALYs than (and therefore dominated) four tests [Clearview Exact Strep A cassette (Abbott Laboratories), Clearview Exact Strep A dipstick – test strip (Abbott Laboratories), QuikRead Go Strep A test kit (Orion Diagnostica) and Alere TestPack +Plus Strep A – cassette (Abbott Laboratories)]. ICERs for the remaining tests suggest that testing was more costly and more effective than usual care, with ICERs ranging from £65,122 per QALY gained for the NADAL Strep A – test strip (nal von minden GmbH) to £5,723,279 per QALY gained for the cobas Liat Strep A Assay (Roche Diagnostics) compared with usual care.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental cost (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care |
---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 50,185 | 939.77019917 | 0 | 0.0000000 | ||
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 52,219 | 939.76305927 | 2034 | –0.0071399 | Dominated |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 52,000 | 939.76305927 | 1815 | –0.0071399 | Dominated |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | |||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 51,345 | 939.77244279 | 1160 | 0.0022436 | 517,066 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 51,296 | 939.77244279 | 1111 | 0.0022436 | 495,115 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 50,398 | 939.77347194 | 213 | 0.0032728 | 65,122 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 50,453 | 939.77347194 | 268 | 0.0032728 | 81,845 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 50,480 | 939.77347194 | 295 | 0.0032728 | 90,205 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 50,425 | 939.77347194 | 240 | 0.0032728 | 73,482 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 50,606 | 939.77347194 | 421 | 0.0032728 | 128,662 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | |||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 52,457 | 939.76701428 | 2273 | –0.0031849 | Dominated |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 50,834 | 939.76939575 | 649 | –0.0008034 | Dominated |
14 | bioNexia Strep A plus – cassette (bioMérieux) | |||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | |||||
16 | Biosynex Strep A – cassette (Biosynex) | |||||
17 | Sofia Strep A FIA (Quidel) | |||||
18 | Alere i Strep A (Abbott Laboratories) | |||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 56,353 | 939.77326996 | 6168 | 0.0030708 | 2,008,522 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 67,760 | 939.77326996 | 17,575 | 0.0030708 | 5,723,279 |
21 | Xpert Xpress Strep A (Cepheid) | 52,190 | 939.77368771 | 2006 | 0.0034885 | 574,900 |
Children’s secondary care model: probabilistic sensitivity analyses
Probabilistic results for the children’s secondary care model mirrored those of the children’s primary care PSA. Results are shown in Table 36.
Test ID | Test name | Mean cost (£) per 1000 individuals | Mean QALYs per 1000 individuals | Incremental cost (£) per 1000 individuals | Incremental QALYs per 1000 individuals | ICER (£) vs. usual care | Probability of cost-effectiveness at £20,000 per QALY |
---|---|---|---|---|---|---|---|
Usual care (clinical scoring based on Centor ≥ 3 points plus clinical assessment) | 50,000 | 940.2967868 | 0 | 0.0000000 | 1 | ||
1 | Clearview Exact Strep A cassette (Abbott Laboratories) | 51,783 | 940.2897117 | 1783 | –0.0070750 | Dominated | 0 |
2 | Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | 51,563 | 940.2897075 | 1563 | –0.0070792 | Dominated | 0 |
3 | BD Veritor Plus system group A Strep Assay – cassette (Becton Dickinson) | ||||||
4 | Strep A Rapid Test – cassette (Biopanda Reagents) | 51,138 | 940.2989313 | 1138 | 0.0021445 | 530,655 | 0 |
5 | Strep A Rapid Test – test strip (Biopanda Reagents)a | 51,088 | 940.2989679 | 1088 | 0.0021812 | 498,896 | 0 |
6 | NADAL Strep A – test strip (nal von minden GmbH) | 50,217 | 940.2996263 | 217 | 0.0028395 | 76,288 | 0 |
7 | NADAL Strep A – cassette (nal von minden GmbH) | 50,272 | 940.299611 | 272 | 0.0028243 | 96,309 | 0 |
8 | NADAL Strep A plus – cassette (nal von minden GmbH) | 50,299 | 940.2996798 | 299 | 0.0028931 | 103,485 | 0 |
9 | NADAL Strep A plus – test strip (nal von minden GmbH) | 50,245 | 940.2995732 | 245 | 0.0027865 | 87,781 | 0 |
10 | NADAL Strep A scan test – cassette (nal von minden GmbH) | 50,427 | 940.2995398 | 427 | 0.0027531 | 154,933 | 0 |
11 | OSOM Strep A test – test strip (Sekisui Diagnostics) | ||||||
12 | QuikRead Go Strep A test kit (Orion Diagnostica) | 52,132 | 940.2930891 | 2132 | –0.0036976 | Dominated | 0 |
13 | Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | 50,652 | 940.2959919 | 652 | –0.0007948 | Dominated | 0 |
14 | bioNexia Strep A plus – cassette (bioMérieux) | ||||||
15 | bioNexia Strep A dipstick – test strip (bioMérieux) | ||||||
16 | Biosynex Strep A – cassette (Biosynex) | ||||||
17 | Sofia Strep A FIA (Quidel) | ||||||
18 | Alere i Strep A (Abbott Laboratories) | ||||||
19 | Alere i Strep A 2 (Abbott Laboratories) | 56,210 | 940.2995831 | 6210 | 0.0027963 | 2,220,667 | 0 |
20 | cobas Liat Strep A Assay (Roche Diagnostics) | 67,693 | 940.2994655 | 17,693 | 0.0026787 | 6,605,137 | 0 |
21 | Xpert Xpress Strep A (Cepheid) | 52,023 | 940.3002769 | 2023 | 0.0034902 | 579,711 | 0 |
Children’s secondary care model: exploratory sensitivity analyses
Exploratory analyses were conducted to test the robustness of economic base-case estimates for children presenting in secondary care with suspected strep A infection. The base-case ICERs are highly sensitive to various modelling assumptions and input values. In the sections that follow, sensitivity analysis results are presented for only those tests for which the ICER is sensitive to the alternative modelling assumptions and parameter inputs (as indicated by changes in the direction of incremental costs or incremental QALYs compared with usual care). See Appendix 12 for more detail.
Additional sensitivity analyses
Appendix 13, Table 64, lists the 39 deterministic sensitivity analyses conducted to explore the impact of alternative modelling assumptions and parameter inputs on the base-case ICERs. The ICERs were robust to the changes implemented in the majority of the analyses, and the base-case cost-effectiveness conclusions remained unchanged. In particular, assuming a shorter 14-day time horizon (sensitivity analysis 3), consistent with the typical duration and resolution of symptoms of strep A sore throat infection, favoured usual care, but the ICERs did not change substantially enough to suggest a different interpretation of the base-case cost-effectiveness. Assuming that the treating primary care health-care professional in both the intervention arm and the usual-care arm is a nurse or a pharmacist rather than a GP (sensitivity analysis 19) favoured testing, but only if the test cannot be carried out within the allocated consultation time; in this instance, the costs associated with the additional clinician time taken to administer and process test results are much lower for a nurse or pharmacist than for a GP. Similarly, excluding the cost of the additional clinician time required to process test results (sensitivity analysis 22) favoured testing only where testing cannot be done within the allocated primary care consultation time.
Summary of economic modelling
We undertook a systematic search for economic evaluation studies of the use of the point-of-care tests listed in the NICE scope for patients with suspected strep A infection. We did not identify any relevant economic models that could be adapted. Hence, a de novo decision tree model was built to compare point-of-care testing in conjunction with clinical scoring tools against clinical scoring tools alone for children and adults presenting with suspected strep A infection in primary and secondary care settings.
The model took account of the presenting prevalence of disease in the modelled population, the accuracy of clinical scoring and testing, the prescribing behaviour of treating clinicians, and complications of the infection and its treatment. In the base-case analysis, costs were calculated from a UK NHS/Personal Social Services perspective over a 1-year time horizon. The health impact of the intervention was expressed in QALYs, captured through the application of disutilities associated with treated and untreated infection and related complications over the modelled time horizon.
The scope of the appraisal had called for 21 tests to be evaluated in comparison with usual-care practice; however, difficulties in obtaining reliable test accuracy and cost data meant that only the 14 tests for which relevant data were available could be included in the final economic modelling. Under the base-case model assumptions for adults presenting with suspected strep A infection in primary care, the ICERs suggest that usual care dominated two tests (Clearview Exact Strep A cassette and Clearview Exact Strep A dipstick – test strip, both supplied by Abbott Laboratories). For the remaining 12 tests, testing was marginally more effective and more costly than usual care, with ICERs ranging from £1,353,677 per QALY gained for the NADAL Strep A test strip (nal von minden GmbH) to £6,059,081 per QALY gained for Roche Diagnostics’ cobas Liat Strep A Assay compared with usual care.
Probabilistic analyses based on 1000 Monte Carlo simulations of the ICER assessed parameter uncertainty and generated probability statements about the cost-effectiveness of point-of-care testing across a range of willingness-to-pay thresholds. Probabilistic ICERs produced results similar to the deterministic base-case ICERs, and suggested that testing was associated with zero probability of cost-effectiveness at willingness-to-pay thresholds of £0 to £100,000 per QALY gained under base-case assumptions. Similar cost-effectiveness results were obtained in the base-case models for adults presenting in secondary care, and in primary and secondary care models for children.
Extensive exploratory deterministic sensitivity analyses of the base-case inputs and assumptions were conducted to understand the key model drivers. The findings suggest that the ICER is highly sensitive to (1) parameter inputs and assumptions that increase the cost of testing (the acquisition cost of the test, additional clinician time for administering and processing test results, and the cost of confirmatory throat culture for those testing negative) and (2) the penalty for antibiotic overprescription/unnecessary antibiotic use (the acquisition cost of antibiotics and the probabilities of penicillin-induced anaphylaxis and rash). Factoring in the costs associated with additional clinician time (at £4 per minute of GP time) for administering tests and the £8 confirmatory throat culture given a negative test both favour usual care in the base case, as these costs can be substantially higher than the actual cost of the test and are applied to the intervention arm only. In contrast, the model predicts lower antibiotic use with testing than with usual care; however, the cost of antibiotic treatment, at £0.91 per course of penicillin (the treatment of choice for strep A infection), is considerably cheaper than the acquisition costs of the majority of the test kits, such that the penalty for supplying antibiotics to those who do not need them is negligible compared with the cost of testing.
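A rough, purely illustrative comparison of these two drivers is sketched below (the proportion of negative results is an assumption for illustration, not a model output); it shows why the per-patient overhead of testing dwarfs the £0.91 saved for each antibiotic course avoided.

```python
# Back-of-the-envelope comparison of the cost drivers discussed above (illustrative only).
GP_COST_PER_MINUTE = 4.0    # approximate £ per minute of GP time
TEST_COST = 1.20            # e.g. NADAL Strep A test strip (£)
TEST_MINUTES = 5            # additional GP time to administer and process the test
CULTURE = 8.00              # confirmatory culture following a negative result (£)
NEGATIVE_RATE = 0.5         # assumed proportion of negative test results (illustration only)
ANTIBIOTIC_COURSE = 0.91    # phenoxymethylpenicillin course (£)

testing_overhead = TEST_COST + TEST_MINUTES * GP_COST_PER_MINUTE + NEGATIVE_RATE * CULTURE
print(f"Extra cost per patient tested: ~£{testing_overhead:.2f}")
print(f"Saving per antibiotic course avoided: £{ANTIBIOTIC_COURSE:.2f}")
```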
The base case incorporates serious adverse effects of penicillin, such as penicillin-induced anaphylaxis, with associated high treatment costs and disutility; however, the modelled rate of 0.01%78 used in the base case means that anaphylaxis is very rare and its impact on the cost-effectiveness of testing is therefore minimal. A sensitivity analysis increasing the rate of anaphylaxis to 0.64%, based on another economic evaluation of strep A pharyngitis,79 favoured testing: the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and the Clearview Exact Strep A cassette (Abbott Laboratories) changed from being dominated by usual care in the base case to £288,702 and £299,305 per QALY gained compared with usual care, respectively.
Cost-effectiveness estimates were also sensitive to the prevalence of strep A infection (higher prevalence favoured usual care and lower prevalence favoured testing), the disutility for untreated infection (lower values favoured testing and doubling the decrement associated with untreated infection favoured usual care) and the disutility for treated strep A infection (doubling this disutility favoured testing).
Points for discussion regarding the economic modelling
A number of limitations apply to the economic model:
-
Although the economic model represented the clinical care pathway in the NHS, practice and management will vary from site to site (within and across primary and secondary care settings).
-
The majority of inputs for test performance came from data on populations that were not directly relevant to the location of the tests in our modelled health-care pathway (e.g. were not specific to patients with Centor scores of ≥ 3 points). Hence, the true performance of the tests could differ from the current models.
-
We could compare point-of-care testing for only 14 of the 21 tests listed in the NICE scope, as we did not have test accuracy and/or cost data for the other seven point-of-care tests.
-
There were not enough test accuracy data to model strep A infection in the pharmacy setting or in the elderly population.
-
Inputs (except for the sensitivity and specificity data from our effectiveness review) were generally available only as point estimates, without the associated measures of uncertainty, such as CIs and SEs, required for probabilistic modelling. We therefore followed the common practice of assuming ± 10% around the central estimate to incorporate uncertainty into our modelling. This approach to probabilistic analysis is itself associated with a degree of uncertainty, as it may underestimate or overestimate the true uncertainty in the evidence.
-
Our protocol had specified a time horizon of 14 days, as the evidence suggests that strep A infection is a self-limiting illness, with the majority of patients making a complete recovery within 2 weeks of the infection. 86 However, we extended the time horizon to 1 year in the base-case model to accommodate the impact of rare complications of strep A, such as acute rheumatic fever, for which we found evidence to suggest that these complications could be associated with as many as 75 quality-adjusted life-days lost. 78–80 This longer time horizon required further assumptions to keep the modelling feasible and supported by appropriate evidence. In particular, we assumed only one episode of strep A infection (the initial index episode) per patient, with no possibility of a recurrent infection within the 1-year time horizon; this is unlikely to be a true reflection of sore throat infections in the community (although these point-of-care tests would not be used in people with recurrent sore throats, which were excluded from the scope of work). Extending the time horizon to 1 year may also not adequately capture all costs and consequences associated with infection. For example, there is evidence suggesting that an increased risk of death from rheumatic heart disease is associated with complications of strep A,82,83 but this cannot be fully incorporated within the 1-year time horizon considered in our base case.
-
We did not explore the impact of a lifetime horizon on the cost-effectiveness of point-of-care testing because (1) strep A infection is a self-limiting illness (see the point above) and (2) the decision tree structure is not well suited to lifetime modelling. The model does, however, account for rare but serious complications of the infection, such as acute rheumatic fever and anaphylactic reactions to penicillin, both of which can have a long-lasting impact. For these, we assume that the 1-year time horizon considered in the base case is sufficient to capture the costs and consequences associated with such complications.
-
Sensitivity analyses assuming a shorter time horizon of 2 weeks (14 days), corresponding to the expected time for symptom resolution, did not alter the conclusions of the base-case cost-effectiveness results.
-
Although the model captures the unwanted effects of antibiotic treatment, such as penicillin-induced rash and anaphylaxis, through incorporating appropriate costs and disutility for these events, resistance to antimicrobial therapy was not explicitly modelled because of evidence suggesting that strep A is highly susceptible to penicillin83,94 (the treatment of choice for strep A infection).
-
The model captures suppurative and non-suppurative complications of strep A infection and the unwanted effects of penicillin use. The probability estimates for suppurative complications were derived by combining data for all such complications (quinsy, sinusitis, otitis media and cellulitis) reported in a large UK cohort study by Little et al. 86 As the Little et al. 86 data include figures for non-suppurative complications of the infection, such as acute rheumatic fever, all such complications were included in the modelling under the assumption that the majority of complications of strep A infection were suppurative, with no more than 0.01% being non-suppurative. Sensitivity analysis suggests that this assumption has minimal impact on the base-case cost-effectiveness results. Other complications of strep A infection were not included in the economic modelling, namely mortality outcomes and, in the children’s models, scarlet fever, because of a lack of data informing probability estimates for these events; hence, costs may be underestimated and outcomes overestimated in the models.
-
Transmission between infected and susceptible individuals is not modelled because of a lack of evidence to inform transmission rates in dynamic disease modelling. There is also evidence to suggest a seasonality effect (e.g. an increased presentation of strep A infection during the winter months and around Easter time), but this was not explicitly modelled. However, we carried out exploratory analyses in which we varied the prevalence of disease, which can be taken as a proxy for a seasonality effect. These exploratory analyses suggest that increasing the prevalence of disease among adults and children in primary care generally favoured usual care, but the ICERs did not change substantially enough to suggest a different conclusion from the base-case cost-effectiveness results. In contrast, lowering the prevalence favoured testing but, again, the ICERs did not change substantially enough to alter the conclusions of the base-case analyses.
-
The modelling may have underestimated costs because, owing to the lack of evidence, we did not take into account the contribution of testing to antimicrobial stewardship.
-
The model has not accounted for certain high-risk populations, such as immunosuppressed patients or pregnant women, as these patients would all be offered antibiotics.
-
We have not taken into account that some of these point-of-care tests may also detect other strains of strep infections, such as strep C and strep G, in addition to strep A.
-
The modelling may have underestimated the costs as we did not take into account the different strains of strep A that may have influenced test performance and disease characteristics, potentially altering the profile of complications.
-
We did not consider the impact that introducing routine point-of-care testing might have on patient presentation with sore throat, which could influence the cost-effectiveness results.
-
We did not place any monetary value on the impact a point-of-care test might have in including the patient in the treatment decision-making process.
-
We have not taken into account any broader societal costs, such as lost productivity or time off work, owing to suspected strep A infection.
-
Finally, modelled changes in costs and QALYs are simulations and have not been observed. Findings should be verified through properly designed and conducted research.
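To illustrate how the exploratory prevalence analysis mentioned in the list above can be run, the sketch below performs a one-way sensitivity analysis over strep A prevalence on a deliberately simplified two-strategy decision tree (point-of-care test-and-treat versus treat-all usual care). It is a minimal sketch only: the tree is reduced to a handful of branches and every parameter value (test cost, antibiotic cost, complication probabilities, disutilities) is an illustrative placeholder rather than an input from the report's model.

```python
# Minimal sketch: one-way sensitivity analysis over strep A prevalence for a
# simplified decision tree. All parameter values are illustrative placeholders,
# not the inputs used in the report's economic model.

def strategy_outcomes(prev, sens, spec, use_test,
                      c_test=8.0, c_abx=1.0, c_comp=150.0,
                      p_comp_untreated=0.02, p_comp_treated=0.01,
                      q_base=0.995, q_loss_comp=0.010, q_loss_abx=0.0005):
    """Return (expected cost, expected QALYs) for one strategy."""
    if use_test:
        p_abx = prev * sens + (1 - prev) * (1 - spec)  # only test positives treated
        p_strep_treated = prev * sens
        cost_test = c_test
    else:  # usual care in this sketch: everyone with a high clinical score is treated
        p_abx, p_strep_treated, cost_test = 1.0, prev, 0.0
    p_comp = (p_strep_treated * p_comp_treated
              + (prev - p_strep_treated) * p_comp_untreated)
    cost = cost_test + p_abx * c_abx + p_comp * c_comp
    qaly = q_base - p_comp * q_loss_comp - p_abx * q_loss_abx
    return cost, qaly

for prev in (0.05, 0.10, 0.20, 0.30, 0.40):
    c_t, q_t = strategy_outcomes(prev, sens=0.90, spec=0.95, use_test=True)
    c_u, q_u = strategy_outcomes(prev, sens=0.90, spec=0.95, use_test=False)
    d_cost, d_qaly = c_t - c_u, q_t - q_u
    icer = d_cost / d_qaly if d_qaly else float("inf")
    print(f"prevalence={prev:.2f}  dCost=£{d_cost:+.2f}  dQALY={d_qaly:+.5f}  "
          f"ICER=£{icer:,.0f}/QALY")
```

With placeholder values of this kind, raising the prevalence increases both the incremental cost of testing and the QALY loss from false-negative results relative to treat-all care, pushing the ICER upwards; lowering the prevalence has the opposite effect. This is the mechanism behind using prevalence as a proxy for seasonality in the exploratory analyses.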
Chapter 5 Discussion
Decision problem and objectives
The overall objective was to undertake a clinical effectiveness and cost-effectiveness analysis of rapid antigen detection and molecular tests in those with high clinical scores, compared with the use of clinical scoring tools alone, for increasing the diagnostic confidence of suspected group A streptococcal infection in people who present with an acute sore throat in primary and secondary care settings. The literature informing clinical effectiveness and cost-effectiveness was systematically reviewed and summarised. A de novo economic model was developed to assess the cost-effectiveness of rapid antigen detection and molecular tests in conjunction with clinical scoring tools compared with clinical scoring tools alone in England and Wales.
Summary of methods and findings
Clinical effectiveness
We searched a number of databases, including MEDLINE, EMBASE, Web of Science and The Cochrane Library. We found 3309 unique records, of which 38 were included [26 full-text articles, three abstracts, five manufacturers' submissions (submitted to NICE in response to a request for information) and four FDA documents]. Twenty-six studies reported test accuracy data. In general, the methodological quality of the included studies was poor; in particular, in 65.4% (17/26) of studies it was unclear whether the sample was consecutive or convenience. Convenience samples may not provide a true representation of the prevalence of strep A. A high risk of bias was judged to arise from the subjective reading of some of the point-of-care tests and from non-adherence to manufacturers' guidance when the same swab was used to streak the microbiological culture and then to conduct the point-of-care test. In addition, microbiological culture is unlikely to be 100% accurate and may vary with different culture media.
Overall, the findings reveal wide variations in the point estimates for the sensitivity (67.9% to 100%) and specificity (73.3% to 100%) of the different point-of-care tests. These estimates were 82.9% to 94.6% for sensitivity and 84.9% to 99.1% for specificity in high-risk populations, including patients with Centor/McIsaac scores of > 2 points, representing the population of interest. These estimates do not account for any of the unpublished manufacturer submissions.
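For clarity, the sensitivity and specificity figures quoted in this section follow the standard definitions taken from the 2 × 2 cross-classification of each point-of-care test against the reference standard (typically throat culture), where TP, FP, FN and TN are the true-positive, false-positive, false-negative and true-negative counts:

```latex
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}
```

Unlike sensitivity and specificity, predictive values depend on the prevalence of strep A in the tested population, which is one reason estimates may not transfer directly between settings and age groups.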
Clinical scoring tools (FeverPAIN and Centor) have been proposed as a method by which clinicians can identify which patients are most likely to benefit from antibiotic use for sore throat.8 These tools were developed to predict strep A (Centor and FeverPAIN), strep C (FeverPAIN) and strep G (FeverPAIN). Direct comparison between sore throat clinical scoring tools and point-of-care tests indicated that specificity estimates were higher for the point-of-care tests and that sensitivity was generally comparable between the two approaches. However, one methodological limitation concerns the varying ways in which clinical scoring tools were implemented across the included studies; for instance, different studies applied different clinical cut-off points when recruiting patients. No studies were identified that matched the proposed pathway of care and treatment for patients with acute streptococcal pharyngitis, which would entail evaluating the test accuracy of a combined strategy of sore throat clinical scores at the recommended NICE thresholds (Centor/McIsaac score of ≥ 3 points or FeverPAIN score of ≥ 4 points) and point-of-care tests. No evidence was identified for the elderly population or for the pharmacy setting. Likewise, test accuracy data were sparse for each combination of test, population and setting, and there were very few head-to-head (direct) comparisons between index tests.
It was not possible to identify which test is the most accurate owing to the lack of evidence. The large degree of heterogeneity among results for studies using the same rapid test suggests that it is unlikely that any single study will accurately capture a test's true performance. The apparent accuracy of a test evaluated in several studies may be penalised relative to a test evaluated in a single study, particularly when that single study was conducted by the manufacturer. Heterogeneity introduced by differing study characteristics, such as care setting, age group, throat score restriction and disease prevalence, further confounded attempts to produce meaningful estimates of test performance. Owing to this heterogeneity, estimates for the sensitivity and specificity of each test were stratified by age group, throat score and care setting, although a lack of evidence meant that generalisations had to be made for the majority of estimates.
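The meta-analyses in this report drew on established methods for diagnostic test accuracy reviews; purely to illustrate how between-study heterogeneity widens the uncertainty around any pooled figure, the sketch below applies a simple DerSimonian–Laird random-effects pooling to logit-transformed sensitivities. This univariate simplification and the study counts in it are invented for illustration and are not the method or the data used in the analysis.

```python
import numpy as np

# Illustrative only: DerSimonian-Laird random-effects pooling of logit-transformed
# sensitivities from four hypothetical studies of the same test. The counts below
# are invented for the example; the report's analyses used hierarchical methods.
tp = np.array([45, 80, 30, 60])  # true positives per study (hypothetical)
fn = np.array([5, 20, 2, 15])    # false negatives per study (hypothetical)

p = (tp + 0.5) / (tp + fn + 1.0)          # continuity-corrected sensitivities
y = np.log(p / (1 - p))                   # logit transform
v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)   # approximate within-study variances

w = 1.0 / v                               # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)        # heterogeneity statistic
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                   # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
expit = lambda x: 1.0 / (1.0 + np.exp(-x))
print(f"pooled sensitivity = {expit(y_re):.3f} "
      f"(95% CI {expit(y_re - 1.96 * se_re):.3f} to {expit(y_re + 1.96 * se_re):.3f}), "
      f"tau^2 = {tau2:.3f}")
```

The larger the estimated between-study variance (tau²), the wider the pooled interval and the less any single study can be taken to represent a test's true performance, which is the point made above.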
There is some RCT evidence to suggest that the use of RADTs may help to reduce antibiotic-prescribing rates, but there was no evidence on the effect of using molecular technologies. If a test were proven to be extremely accurate, it is plausible that clinical staff would trust its results. No evidence was found on time to antimicrobial prescribing decision, number of appointments required per episode or onward transmission of infection.
Cost-effectiveness
The systematic review of cost-effectiveness studies identified three studies that used the RADTs identified in the NICE scope and were classed as economic evaluations. Two of these had notable limitations and could not be fully data extracted. The one study that allowed full data extraction was classed as a high-quality economic evaluation when checked against the CHEERS reporting tool.
Fourteen of the 21 tests listed in the NICE scope had test accuracy and cost data relevant for inclusion in the final economic modelling. In the base-case analysis, which included adult patients seen in primary care with suspected strep A infection, the economic model found point-of-care testing not to be cost-effective compared with usual care for suspected strep A infection. This finding was also seen in the other economic models, which were adapted for the different patient groups and settings (adults seen in hospital, children seen in primary care and children seen in hospital). Important uncertainties in the model include parameter inputs and assumptions that increase (1) the cost of testing (acquisition cost of the test, additional clinician time for administering the test and processing results, and the cost of confirmatory throat culture for those testing negative) and (2) the penalty for antibiotic overprescription/unnecessary antibiotic use (acquisition cost of antibiotics and the probabilities of penicillin-induced anaphylaxis and rash).
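For readers less familiar with the decision rule applied here, cost-effectiveness is judged by comparing the incremental cost-effectiveness ratio (ICER) with the NICE willingness-to-pay threshold of £20,000–£30,000 per QALY gained, or equivalently by the sign of the incremental net monetary benefit. The figures below are placeholder per-patient values used only to show the arithmetic; they are not outputs of the report's models.

```python
# Illustrative ICER / net-monetary-benefit arithmetic with placeholder values
# (not outputs of the report's economic models).
cost_test, qaly_test = 31.80, 0.99300    # hypothetical per-patient values, testing strategy
cost_usual, qaly_usual = 25.10, 0.99280  # hypothetical per-patient values, usual care
threshold = 20_000.0                     # £ per QALY (lower bound of the NICE range)

d_cost = cost_test - cost_usual
d_qaly = qaly_test - qaly_usual
icer = d_cost / d_qaly                   # £ per QALY gained
nmb = threshold * d_qaly - d_cost        # > 0 would favour testing at this threshold
print(f"ICER = £{icer:,.0f} per QALY gained; incremental NMB = £{nmb:.2f}")
```

An ICER above the threshold (equivalently, a negative incremental net monetary benefit) indicates that the additional cost of the testing strategy is not justified by the additional health gained at that threshold.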
Strengths and limitations
We conducted a comprehensive systematic review (literature search, data extraction and analysis), using a rigorous and exhaustive search to locate primary studies. All relevant studies were systematically reviewed and agreement between the two reviewers was very high. We also built a de novo decision tree model to assess the cost-effectiveness of point-of-care testing. The economic model provides a representation of the clinical care pathway in primary and secondary care settings. The decision tree was populated with probabilities and test accuracy values from the clinical evidence review, published studies and clinical expert opinion.
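As a sketch of how a decision tree of this kind is evaluated, each strategy is 'rolled back' by probability-weighting the cost and QALY values attached to its terminal branches and summing them. The branch structure, probabilities, costs and QALY weights below are simplified placeholders chosen only to be internally consistent (prevalence 20%, sensitivity 90%, specificity 95%); they do not reproduce the report's model.

```python
# Simplified roll-back of one decision-tree strategy. Each terminal branch carries
# a path probability, a cost and a QALY weight; the strategy's expected values are
# the probability-weighted sums. All numbers are placeholders, not model inputs.
branches = [
    # (path probability, cost (£), QALYs)
    (0.18, 10.50, 0.9931),  # true positive: strep A present, treated
    (0.02, 60.00, 0.9875),  # false negative: strep A present, untreated, worse expected outcome
    (0.76,  6.00, 0.9940),  # true negative: strep A absent, no antibiotics
    (0.04,  7.00, 0.9920),  # false positive: strep A absent, unnecessary antibiotics
]
assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9  # path probabilities sum to 1

expected_cost = sum(p * c for p, c, _ in branches)
expected_qaly = sum(p * q for p, _, q in branches)
print(f"expected cost = £{expected_cost:.2f}, expected QALYs = {expected_qaly:.4f}")
```

The same roll-back is carried out for each comparator strategy, and the resulting expected costs and QALYs feed the incremental comparisons reported in the cost-effectiveness results.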
No studies of point-of-care test use in a pharmacy setting or in the elderly population were retrieved. In addition, no study matched the proposed pathway of care and treatment for patients with acute streptococcal pharyngitis, which would entail evaluating the test accuracy of a combined strategy of sore throat clinical scores at the recommended NICE thresholds (Centor/McIsaac score of ≥ 3 points or FeverPAIN score of ≥ 4 points) and point-of-care tests in the age groups defined in the scope.
Children aged < 5 years were not explicitly considered in this review. Although they may benefit from a point-of-care test, following advice from health-care professionals, we understood that their diagnostic pathway is likely to differ from older age groups, and they were considered beyond the scope of this review.
For the purpose of this review, we classified GP surgeries, health-care centres, family practices and primary care clinics as primary care. Secondary care comprised emergency departments, private paediatric clinics, outpatient clinics, urgent care clinics and walk-in centres. In practice, other countries may define primary and secondary care differently; for example, paediatric clinics could be part of primary care. However, given that it is unclear if test accuracy differs by setting, we do not know the impact this could have on the cost-effectiveness estimates.
We included only English-language studies and studies directly matching the test name, unless we had confirmation that a test had been taken over by another manufacturer (e.g. in the case of IMI testpack becoming Alere). We did not include studies where it was unclear whether or not later iterations of the test were different. During the EAG write-up of the final report, Abbott Laboratories notified NICE that the Alere i Strep A test was no longer available. The Alere i Strep A 2 has been rebranded as the ID NOW Strep A 2. Our previously excluded studies were rescreened by test name, and none used the ID NOW Strep A 2 test. It is the EAG’s understanding that the information in this report relevant to the Alere i Strep A 2 is transferable to the ID NOW Strep A 2. In addition, the Clearview Exact Strep A tests (both cassette and dipstick) were replaced with new Clearview Exact Strep A 2 editions. The manufacturers supplied NICE with information that there are procedural differences between previous Clearview tests and the new A 2 editions. Therefore, the results for the Clearview Exact Strep A tests in this review may not be generalisable to the current Clearview products on the market. Furthermore, our previously excluded studies were rescreened by test name, and none used the Clearview Exact Strep A 2 editions.
We did not explore the effect of culture medium on test accuracy in the review. One of the included test accuracy studies found that a different culture medium showed strep A positivity in samples that were initially negative.42 This could indicate possible differences in the accuracy of different culture media.
Test accuracy may also vary greatly based on the quality of the swabbing. It is unclear how the level of training of clinical staff involved in these studies compares with routine care, which could limit the generalisability of these results.
The evidence informing the test accuracy estimates was not sufficient to produce reliable or robust estimates that we could be confident actually reflected the tests’ true performance in any particular patient group. The main concern is that the patients’ clinical scores used in the studies of test accuracy did not match the scores of patients anticipated to benefit from using the tests, as modelled in our economic analysis. The health-care setting and age group were also potential variables that may affect test performance. This concern extends to the economic modelling, which used the estimates for each test.
The studies in this review determined antibiotic appropriateness on the basis of strep A positivity on culture. However, culture may detect strep A carriage rather than disease. PCR was a potential alternative reference standard, but it was less widely used and encounters the same issue of detecting carriage.
Although the economic model represented the clinical care pathway in the NHS, practice and management will vary from site to site (within and across both primary care and secondary care settings). There were not enough test accuracy data to model strep A testing in the pharmacy setting or in the elderly population. Furthermore, we could compare only 14 of the 21 point-of-care tests listed in the NICE scope, as we did not have test accuracy and/or cost data for the other seven point-of-care tests. The modelling may have underestimated the costs as we did not take into account different strains of strep A (which may have influenced test performance and altered the profile of complications), the seasonality of strep A infection, resistance to antimicrobial therapy, the onward transmission of infection or the broader societal costs.
Chapter 6 Conclusions
The systematic review and cost-effectiveness model identify uncertainties around the adoption of point-of-care tests in primary and secondary care settings in England and Wales. The available evidence is heterogeneous in the populations studied, design, methods and analysis. Although sensitivity and specificity estimates are promising, we have little information on the best point-of-care test to use. Although there is potential for point-of-care tests to be cost-effective in both primary care and secondary care settings, key parameter inputs and modelling assumptions need to be confirmed and model findings remain uncertain.
Recommendations for future research
Further research is needed to understand the test accuracy of point-of-care tests in the proposed NHS pathway and in comparable settings and patient groups. There was a considerable lack of evidence for the performance of the tests based on real-world use in patients with a Centor score of ≥ 3 points or a FeverPAIN score of ≥ 4 points. Future work that considers head-to-head test accuracy studies or RCTs using multiple point-of-care tests in relevant patient populations and health-care settings considered in the NICE scope would provide relevant comparator information and help to determine the value of point-of-care testing. Results broken down by relevant subgroups, such as age and clinical score threshold, would also be useful. Further research on the establishment of a gold reference standard with which tests are compared would also reduce uncertainty and potential bias in reviews such as this.
Acknowledgements
The authors would like to thank Dr Mitul Patel, Professor Michael Moore, Dr Derren Ready and Mr Mohammed Rafiq for their expert clinical advice on searching the literature. We are also grateful to Dr Mitul Patel for expert clinical advice during the clinical effectiveness reviewing and data extraction stages and to Dr Derren Ready for comments on the draft report. The authors would also like to thank Karoline Munro for helping to co-ordinate the report.
Contributions of authors
Rachel Court (https://orcid.org/0000-0002-4567-2586) (Information Specialist) developed the search strategy and undertook searches.
Hannah Fraser (https://orcid.org/0000-0002-7050-9684) (Research Associate), Sian Taylor-Phillips (https://orcid.org/0000-0002-1841-4346) (Associate Professor), Chidozie Nduka (https://orcid.org/0000-0001-7031-5444) (Senior Research Fellow), Chris Stinton (https://orcid.org/0000-0001-9054-1940) (Senior Research Fellow) and Rebecca Willans (https://orcid.org/0000-0001-8084-6951) (Academic Foundation 2 Doctor) conducted the clinical effectiveness systematic review. This included screening and retrieving papers, assessing against the inclusion criteria, appraising the quality of papers and abstracting data from papers for synthesis.
Daniel Gallacher (https://orcid.org/0000-0003-0506-9384) (Research Fellow) conducted the meta-analyses and contributed to the clinical effectiveness and cost-effectiveness sections.
Felix Achana (https://orcid.org/0000-0002-8727-9125) (Senior Research Fellow) contributed to the cost-effectiveness review and undertook the health economic modelling.
Paramjit Gill (https://orcid.org/0000-0001-8756-6813) (Professor of General Practice) provided clinical guidance and helped to develop the model structures.
Hema Mistry (https://orcid.org/0000-0002-5023-1160) (Associate Professor) provided project management, conducted the cost-effectiveness review and supervised the economic analysis.
All authors were involved in writing draft and final versions of the report.
Data-sharing statement
All data requests should be submitted to the corresponding author for consideration. Please note that exclusive use will be retained until the publication of major outputs.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HTA programme or the Department of Health and Social Care. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HTA programme or the Department of Health and Social Care.
References
- Hannaford PC, Simpson JA, Bisset AF, Davis A, McKerrow W, Mills R. The prevalence of ear, nose and throat problems in the community: results from a national cross-sectional postal survey in Scotland. Fam Pract 2005;22:227-33. https://doi.org/10.1093/fampra/cmi004.
- Smith S, Smith GE, Heatlie H, Bashford JN, Ashcroft DM, Verlander NQ, et al. Reducing variation in antibacterial prescribing rates for ‘cough/cold’ and sore throat between 1993 and 2001: regional analyses using the general practice research database. Public Health 2006;120:752-9. https://doi.org/10.1016/j.puhe.2006.02.007.
- Clinical Knowledge Summaries . Sore Throat – Acute: Background Information 2018. https://cks.nice.org.uk/sore-throat-acute#!background (accessed 5 April 2019).
- Martin E. Concise Medical Dictionary. New York, NY: Oxford University Press; 2015.
- Ashworth M, Charlton J, Ballard K, Latinovic R, Gulliford M. Variations in antibiotic prescribing and consultation rates for acute respiratory infection in UK general practices 1995–2000. Br J Gen Pract 2005;55:603-8.
- Little P, Hobbs FD, Moore M, Mant D, Williamson I, McNulty C, et al. Clinical score and rapid antigen detection test to guide antibiotic use for sore throats: randomised controlled trial of PRISM (primary care streptococcal management). BMJ 2013;347. https://doi.org/10.1136/bmj.f5806.
- Centor RM, Witherspoon JM, Dalton HP, Brody CE, Link K. The diagnosis of strep throat in adults in the emergency room. Med Decis Making 1981;1:239-46. https://doi.org/10.1177/0272989X8100100304.
- National Institute for Health and Care Excellence (NICE) . Sore Throat (Acute): Antimicrobial Prescribing: NICE Guideline 2018. www.nice.org.uk/guidance/ng84 (accessed 2 April 2019).
- Linder JA, Stafford RS. Antibiotic treatment of adults with sore throat by community primary care physicians: a national survey, 1989-1999. JAMA 2001;286:1181-6. https://doi.org/10.1001/jama.286.10.1181.
- Gulliford MC, Dregan A, Moore MV, Ashworth M, Staa TV, McCann G, et al. Continued high rates of antibiotic prescribing to adults with respiratory tract infection: survey of 568 UK general practices. BMJ Open 2014;4. https://doi.org/10.1136/bmjopen-2014-006245.
- National Institute for Health and Care Excellence (NICE) . Rapid Tests for Group A Streptococcal Infections in People With a Sore Throat: Final Scope 2018. www.nice.org.uk/guidance/gid-dg10025/documents/final-scope (accessed 10 April 2019).
- Public Health England (PHE) . Third Report on Seasonal Activity of Group A Streptococcal Infections in 2017/18. 2018. www.gov.uk/government/publications/group-a-streptococcal-infectionsactivity-during-the-2017-to-2018-season (accessed 7 April 2019).
- Petersen I, Johnson AM, Islam A, Duckworth G, Livermore DM, Hayward AC. Protective effect of antibiotics against serious complications of common respiratory tract infections: retrospective cohort study with the UK General Practice Research Database. BMJ 2007;335. https://doi.org/10.1136/bmj.39345.405243.BE.
- Public Health England (PHE) . Invasive Group A Streptococcal Disease: Managing Close Contacts 2008. www.gov.uk/government/publications/invasive-group-a-streptococcal-disease-managing-community-contacts (accessed 5 April 2019).
- Corner M. ONS: Sickness Absence in the UK Labour Market: 2016. Newport: Office for National Statistics; 2017.
- Newcastle and York External Assessment Centre . Point-of-Care Diagnostic Testing in Primary Care for Strep A Infection in Sore Throat 2018. www.nice.org.uk/advice/mib145 (accessed 21 September 2018).
- McCormick A, Fleming D, Charlton J. Morbidity Statistics from General Practice: Fourth National Study 1991–1992. London: HMSO; 1995.
- Fine AM, Nizet V, Mandl KD. Large-scale validation of the Centor and McIsaac scores to predict group A streptococcal pharyngitis. Arch Intern Med 2012;172:847-52. https://doi.org/10.1001/archinternmed.2012.950.
- Bryant AE, Stevens DL, Bennett JE, Dolin R, Blaser MJ. Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. Philadelphia, PA: Elsevier/Saunders; 2015.
- Berry GJ, Miller CR, Prats MM, Marquez C, Oladipo OO, Loeffelholz MJ, et al. Comparison of the Alere i Strep A Test and the BD Veritor System in the detection of group A Streptococcus and the hypothetical impact of results on antibiotic utilization. J Clin Microbiol 2018;56:e01310-17. https://doi.org/10.1128/JCM.01310-17.
- Uhl JR, Adamson SC, Vetter EA, Schleck CD, Harmsen WS, Iverson LK, et al. Comparison of LightCycler PCR, rapid antigen immunoassay, and culture for detection of group A streptococci from throat swabs. J Clin Microbiol 2003;41:242-9. https://doi.org/10.1128/JCM.41.1.242-249.2003.
- Kocoglu E, Karabay O, Yilmaz F, Ekerbicer H. The impact of incubating the throat culture for 72 h on the diagnosis of group A beta-hemolytic streptococci. Auris Nasus Larynx 2006;33:311-13. https://doi.org/10.1016/j.anl.2005.11.011.
- Lacroix L, Cherkaoui A, Schaller D, Manzano S, Galetto-Lacour A, Pfeifer U, et al. Improved diagnostic performance of an immunofluorescence-based rapid antigen detection test for group A streptococci in children with pharyngitis. Pediatr Infect Dis J 2018;37:206-11. https://doi.org/10.1097/INF.0000000000001825.
- Wang F, Tian Y, Chen L, Luo R, Sickler J, Liesenfeld O, et al. Accurate detection of Streptococcus pyogenes at the point of care using the cobas Liat Strep A Nucleic Acid Test. Clin Pediatr 2017;56:1128-34. https://doi.org/10.1177/0009922816684602.
- EUR-Lex . Document 31998L0079 n.d. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A31998L0079 (accessed May 2020).
- Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529-36. https://doi.org/10.7326/0003-4819-155-8-201110180-00009.
- National Institute for Health and Care Excellence (NICE) . Fever in Under 5s: Assessment and Initial Management: Clinical Guideline [CG160] 2013. nice.org.uk/guidance/cg160 (accessed 27 March 2019).
- Roper SM, Edwards R, Mpwo M, Mutandiro C, Devaraj S. Reducing errors in an emergency center setting using an automated fluorescence immunoassay for group A Streptococcus identification. Clin Pediatr 2017;56:675-7. https://doi.org/10.1177/0009922816678184.
- Public Health England (PHE) . UK Standards for Microbiology Investigations: Investigation of Throat Related Specimens 2015. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/423204/B_9i9.pdf (accessed 27 March 2019).
- Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ 2011;343. https://doi.org/10.1136/bmj.d5928.
- Joanna Briggs Institute . The Joanna Briggs Institute Critical Appraisal Tools for Use in JBI Systematic Reviews: Checklist for Analytical Cross Sectional Studies 2017. http://joannabriggs.org/research/critical-appraisal-tools.html (accessed 1 May 2019).
- Macaskill P, Gatsonis C, Deeks JJ, Harbord RM, Takwoingi Y, Deeks JJ, et al. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. Version 1. 0. London: Cochrane Collaboration; 2010.
- Takwoingi Y, Guo B, Riley RD, Deeks JJ. Performance of methods for meta-analysis of diagnostic test accuracy with few studies or sparse data. Stat Methods Med Res 2017;26:1896-911. https://doi.org/10.1177/0962280215592269.
- Azrad M, Danilov E, Goshen S, Nitzan O, Peretz A. Detection of group A Streptococcus in pharyngitis by two rapid tests: comparison of the BD Veritor™ and the QuikRead go Strep A. Eur J Clin Microbiol Infect Dis 2019;38:1179-85. https://doi.org/10.1007/s10096-019-03527-w.
- Bird C, Winzor G, Lemon K, Moffat A, Newton T, Gray J. A pragmatic study to evaluate the use of a rapid diagnostic test to detect group A streptococcal pharyngitis in children with the aim of reducing antibiotic use in a UK emergency department. Pediatr Emerg Care 2018. https://doi.org/10.1097/PEC.0000000000001560 (accessed July 24 2018).
- Bura M, Michalak M, Chojnicki M, Padzik M, Mozer-Lisewska I. Moderate and severe pharyngitis in young adult inhabitants of Poznan, Western Poland. Fam Med Prim Care Rev 2017;19:12-7. https://doi.org/10.5114/fmpcr.2017.65084.
- Cohen DM, Russo ME, Jaggi P, Kline J, Gluckman W, Parekh A. Multicenter clinical evaluation of the novel Alere i Strep A isothermal nucleic acid amplification test. J Clin Microbiol 2015;53:2258-61. https://doi.org/10.1128/JCM.00490-15.
- Dimatteo LA, Lowenstein SR, Brimhall B, Reiquam W, Gonzales R. The relationship between the clinical features of pharyngitis and the sensitivity of a rapid antigen test: evidence of spectrum bias. Ann Emerg Med 2001;38:648-52. https://doi.org/10.1067/mem.2001.119850.
- Humair J-P, Revaz SA, Bovier P, Stalder H. Management of acute pharyngitis in adults. Arch Intern Med 2006;166. https://doi.org/10.1001/archinte.166.6.640.
- Johansson L, Månsson NO. Rapid test, throat culture and clinical assessment in the diagnosis of tonsillitis. Fam Pract 2003;20:108-11. https://doi.org/10.1093/fampra/20.2.108.
- Johnson DR, Kaplan EL. False-positive rapid antigen detection test results: reduced specificity in the absence of group A streptococci in the upper respiratory tract. J Infect Dis 2001;183:1135-7. https://doi.org/10.1086/319286.
- Kurtz B, Kurtz M, Roe M, Todd J. Importance of inoculum size and sampling effect in rapid antigen detection for diagnosis of Streptococcus pyogenes pharyngitis. J Clin Microbiol 2000;38:279-81.
- Lindbæk M, Høiby EA, Lermark G, Steinsholt IM, Hjortdahl P. Which is the best method to trace group A streptococci in sore throat patients: culture or GAS antigen test?. Scand J Prim Health Care 2004;22:233-8. https://doi.org/10.1080/02813430410006675.
- Llor C, Calviño O, Hernández S, Crispi S, Pérez-Bauer M, Fernández Y, et al. Repetition of the rapid antigen test in initially negative supposed streptococcal pharyngitis is not necessary in adults. Int J Clin Pract 2009;63:1340-4. https://doi.org/10.1111/j.1742-1241.2009.02048.x.
- Llor C, Madurell J, Balagué-Corbella M, Gómez M, Cots JM. Impact on antibiotic prescription of rapid antigen detection testing in acute pharyngitis in adults: a randomised clinical trial. Br J Gen Pract 2011;61:e244-51. https://doi.org/10.3399/bjgp11X572436.
- McIsaac WJ, Kellner JD, Aufricht P, Vanjaka A, Low DE. Empirical validation of guidelines for the management of pharyngitis in children and adults. JAMA 2004;291:1587-95. https://doi.org/10.1001/jama.291.13.1587.
- Nerbrand C, Jasir A, Schalén C. Are current rapid detection tests for Group A streptococci sensitive enough? Evaluation of 2 commercial kits. Scand J Infect Dis 2002;34:797-9. https://doi.org/10.1080/0036554021000026953.
- Penney C, Porter R, O’Brien M, Daley P. Operator influence on blinded diagnostic accuracy of point-of-care antigen testing for group A streptococcal pharyngitis. Can J Infect Dis Med Microbiol 2016;2016. https://doi.org/10.1155/2016/1710561.
- Rogo T, Schwartz RH, Ascher DP. Comparison of the Inverness Medical Acceava Strep A test with the Genzyme OSOM and Quidel QuickVue Strep A tests. Clin Pediatr 2011;50:294-6. https://doi.org/10.1177/0009922810385675.
- Rosenberg P, McIsaac W, Macintosh D, Kroll M. Diagnosing streptococcal pharyngitis in the emergency department: is a sore throat score approach better than rapid streptococcal antigen testing?. CJEM 2002;4:178-84. https://doi.org/10.1017/S1481803500006357.
- Santos O, Weckx LL, Pignatari AC, Pignatari SS. Detection of group A beta-hemolytic Streptococcus employing three different detection methods: culture, rapid antigen detecting test, and molecular assay. Braz J Infect Dis 2003;7:297-300. https://doi.org/10.1590/S1413-86702003000500003.
- Stefaniuk E, Bosacka K, Wanke-Rytt M, Hryniewicz W. The use of rapid test QuikRead go® Strep A in bacterial pharyngotonsillitis diagnosing and therapeutic decisions. Eur J Clin Microbiol Infect Dis 2017;36:1733-8. https://doi.org/10.1007/s10096-017-2986-8.
- Thornley T, Marshall G, Howard P, Wilson AP. A feasibility service evaluation of screening and treatment of group A streptococcal pharyngitis in community pharmacies. J Antimicrob Chemother 2016;71:3293-9. https://doi.org/10.1093/jac/dkw264.
- Weinzierl EP, Jerris RC, Gonzalez MD, Piccini JA, Rogers BB. Comparison of Alere i Strep A Rapid Molecular Assay with rapid antigen testing and culture in a pediatric outpatient setting. Am J Clin Pathol 2018. https://doi.org/10.1093/ajcp/aqy038 (accessed June 19 2018).
- Worrall G, Hutchinson J, Sherman G, Griffiths J. Diagnosing streptococcal sore throat in adults: randomized controlled trial of in-office aids. Can Fam Physician 2007;53:666-71.
- Andersen JB, Dahm TL, Nielsen CT, Frimodt-Møller N. Diagnosis of streptococcal tonsillitis in the pediatric department with the help of antigen detection test. Ugeskr Laeg 2003;165:2291-5.
- Pauchard JY, Verga ME, Bersier J, Durusell C, Gehri M, Vaudaux B. Performance of a rapid antigen detection test in group A beta-haemolytic streptococcal pharyngitis in comparison with three clinical decision rule in a tertiary paediatric emergency department. Swiss Med Wkly 2013;197.
- Valverde ED, Colmenarejo C, Illescas S, Gonzalez JC. P0820: Evaluation of a Rapid Streptococcal Group A Antigen Test in Different Age Groups. 2018.
- Food and Drug Administration (FDA) . FDA Decision Summary: Substantial Equivalence Determination for the BD Veritor™ System for Rapid Detection of Group A Streptococcus (Group A Strep). K122718 2013. www.accessdata.fda.gov/cdrh_docs/reviews/K122718.pdf (accessed 6 March 2019).
- Food and Drug Administration (FDA) . FDA Decision Summary: Substantial Equivalence Determination for the Sophia Strep A FIA Assay for Use With the Sophia Analyzer. K123793 2013. www.accessdata.fda.gov/cdrh_docs/reviews/K123793.pdf (accessed 6 March 2019).
- Food and Drug Administration (FDA) . FDA Decision Summary: Substantial Equivalence Determination for the Alere I Strep A 2 Performed on the Alere I Analyzer for the Detection of Streptococcus Pyogenes (Group A Streptococcus). K173653 2018. www.accessdata.fda.gov/cdrh_docs/reviews/K173653.pdf (accessed 6 March 2019).
- Food and Drug Administration . FDA Decision Summary: Substantial Equivalence Determination for Xpert Xpress Strep A Test Performed on the GeneXpert Xpress System. K173398 2018. www.accessdata.fda.gov/cdrh_docs/reviews/K173398.pdf (accessed 6 March 2019).
- Esposito S, Blasi F, Bosis S, Droghetti R, Faelli N, Lastrico A, et al. Aetiology of acute pharyngitis: the role of atypical bacteria. J Med Microbiol 2004;53:645-51. https://doi.org/10.1099/jmm.0.05487-0.
- Gieseker KE, Roe MH, MacKenzie T, Todd JK. Evaluating the American Academy of Pediatrics diagnostic standard for Streptococcus pyogenes pharyngitis: backup culture versus repeat rapid antigen testing. Pediatrics 2003;111:e666-70. https://doi.org/10.1542/peds.111.6.e666.
- Felsenstein S, Faddoul D, Sposto R, Batoon K, Polanco CM, Dien Bard J. Molecular and clinical diagnosis of group A streptococcal pharyngitis in children. J Clin Microbiol 2014;52:3884-9. https://doi.org/10.1128/JCM.01489-14.
- Thamlikitkul V, Rachata T, Popum S, Chinswangwatanakul P, Srisomnuek A, Seenama C, et al. Accuracy and utility of rapid antigen detection tests for group A beta-hemolytic Streptococcus on ambulatory adult patients with sore throat associated with acute respiratory infections at Siriraj hospital. J Med Assoc Thai 2018;101:441-9.
- Ramos JL, Fraile MT, Chanza M, Tormo N, Lurbe A, Gimeno C. Rapid detection of Streptococcus pyogenes in peripheral medical centres. A pilot custody assay. Clin Microbiol Infect 2011;17.
- Hoffmann K, Reichardt B, Zehetmayer S, Maier M. Evaluation of the implementation of a rapid streptococcal antigen test in a routine primary health care setting: from recommendations to practice. Wien Klin Wochenschr 2012;124:633-8. https://doi.org/10.1007/s00508-012-0225-y.
- Lean WL, Arnup S, Danchin M, Steer AC. Rapid diagnostic tests for group A streptococcal pharyngitis: a meta-analysis. Pediatrics 2014;134:771-81. https://doi.org/10.1542/peds.2014-1094.
- Roveta S, Marchese A, Debbia EA. Evaluation of the Uro-Quick, a new rapid automated system, for the detection of well-characterized antibiotic-resistant bacteria. J Chemother 2004;16:107-18. https://doi.org/10.1179/joc.2004.16.2.107.
- Ebrahimi S, Mohabatkar H, Behbahani M. Predicting promiscuous T cell epitopes for designing a vaccine against Streptococcus pyogenes. Appl Biochem Biotechnol 2018;11. https://doi.org/10.1007/s12010-018-2804-5.
- Leeflang MM, Bossuyt PM, Irwig L. Diagnostic test accuracy may vary with prevalence: implications for evidence-based diagnosis. J Clin Epidemiol 2009;62:5-12. https://doi.org/10.1016/j.jclinepi.2008.04.007.
- Gerber MA, Shulman ST. Rapid diagnosis of pharyngitis caused by group A streptococci. Clin Microbiol Rev 2004;17:571-80. https://doi.org/10.1128/CMR.17.3.571-580.2004.
- Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Cost Eff Resour Alloc 2013;11. https://doi.org/10.1186/1478-7547-11-6.
- Philips Z, Ginnelly L, Sculpher M, Claxton K, Golder S, Riemsma R, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess 2004;8. https://doi.org/10.3310/hta8360.
- Little P, Hobbs FD, Moore M, Mant D, Williamson I, McNulty C, et al. PRImary care Streptococcal Management (PRISM) study: in vitro study, diagnostic cohorts and a pragmatic adaptive randomised controlled trial with nested qualitative study and cost-effectiveness study. Health Technol Assess 2014;18. https://doi.org/10.3310/hta18060.
- National Institute for Health and Care Excellence . Guide to the Methods of Technology Appraisal 2013. www.nice.org.uk/process/pmg9 (accessed 10 March 2019).
- Neuner JM, Hamel MB, Phillips RS, Bona K, Aronson MD. Diagnosis and management of adults with pharyngitis. A cost-effectiveness analysis. Ann Intern Med 2003;139:113-22. https://doi.org/10.7326/0003-4819-139-2-200307150-00011.
- Van Howe RS, Kusnier LP. Diagnosis and management of pharyngitis in a pediatric population based on cost-effectiveness and projected health outcomes. Pediatrics 2006;117:609-19. https://doi.org/10.1542/peds.2005-0879.
- Klepser DG, Bisanz SE, Klepser ME. Cost-effectiveness of pharmacist-provided treatment of adult pharyngitis. Am J Manag Care 2012;18:e145-54.
- National Center for Immunization and Respiratory Diseases, Division of Bacterial Diseases . Pharyngitis (Strep Throat) 2018. www.cdc.gov/groupastrep/diseases-hcp/strep-throat.html (accessed 17 April 2019).
- National Institute for Health Research . Point-of-Care Tests for Group A Streptococcus: Horizon Scanning Report 2015. www.community.healthcare.mic.nihr.ac.uk/reports-and-resources/horizon-scanning-reports/point-of-care-tests-for-group-a-streptococcus (accessed 25 April 2019).
- Wessels MR. Streptococcus pyogenes: Basic Biology to Clinical Manifestations. Bethesda, MD: National Center for Biotechnology Information; 2016.
- Aalbers J, O’Brien KK, Chan WS, Falk GA, Teljeur C, Dimitrov BD, et al. Predicting streptococcal pharyngitis in adults in primary care: a systematic review of the diagnostic accuracy of symptoms and signs and validation of the Centor score. BMC Med 2011;9. https://doi.org/10.1186/1741-7015-9-67.
- Ferrieri P, Nelson K, Thonen-Kerr E, Arbefeville S. Prospective evaluation of Xpert Xpress Strep A automated PCR assay vs Solana group A streptococcal NAAT vs conventional throat culture. Am J Pathol 2018;150. https://doi.org/10.1093/ajcp/aqy112.367.
- Little P, Stuart B, Hobbs FD, Butler CC, Hay AD, Campbell J, et al. Predictors of suppurative complications for acute sore throat in primary care: prospective clinical cohort study. BMJ 2013;347. https://doi.org/10.1136/bmj.f6867.
- Kind P, Dolan P, Gudex C, Williams A. Variations in population health status: results from a United Kingdom national questionnaire survey. BMJ 1998;316:736-41. https://doi.org/10.1136/bmj.316.7133.736.
- NHS . NHS Supply Chain Catalogue 2019. https://my.supplychain.nhs.uk/catalogue (accessed 25 April 2019).
- Medisave UK Ltd . Clearview Exact Strep A Dipstick X 25 n.d. www.medisave.co.uk/clearview-exact-strep-a-dipstick-x-25-test-kit-p-7660.html (accessed 17 April 2019).
- Curtis L, Burns A. Unit Costs of Health and Social Care. Canterbury: Personal Social Services Research Unit, University of Kent; 2017.
- Joint Formulary Committee . BNF 74: September 2017 2017.
- NHS Improvement . NHS Reference Costs 2017/18. 2018. https://improvement.nhs.uk/resources/reference-costs/#rc1718 (accessed 17 April 2019).
- Hex N, Retzler J, Bartlett C, Arber M. The Cost of Sepsis Care in the UK: Final Report [YHEC] 2017. http://allcatsrgrey.org.uk/wp/wpfb-file/yhec-sepsis-report-17-02-17-final-pdf/ (accessed 17 April 2019).
- Centers for Disease Control and Prevention . Group A Streptococcal Disease (GAS) Disease – Pharyngitis (Strep Throat) 2018. www.cdc.gov/groupastrep/diseases-hcp/strep-throat.html (accessed 20 May 2019).
- Vachhani R, Patel T, Centor RM, Estrada CA. Sensitivity for diagnosing group A streptococcal pharyngitis from manufacturers is 10% higher than reported in peer-reviewed publications. South Med J 2017;110:59-64. https://doi.org/10.14423/SMJ.0000000000000597.
- Stewart EH, Davis B, Clemans-Taylor BL, Littenberg B, Estrada CA, Centor RM. Rapid antigen group A Streptococcus test to diagnose pharyngitis: a systematic review and meta-analysis. PLOS ONE 2014;9. https://doi.org/10.1371/journal.pone.0111727.
- Parviainen M, Koskela M, Ikäheimo I, Kelo E, Sirola H, . A novel strep A test for a rapid test reader compared with standard culture method and a commercial antigen assay. Eur Infect Dis 2011;5:143-5.
- Ruiz-Aragon J, Rodriguez Lopez R, Molina Linde JM. Evaluation of rapid methods for detecting Streptococcus pyogenes. Systematic review and meta-analysis. An Pediatr 2010;72:391-402. https://doi.org/10.1016/j.anpedi.2009.12.012.
- Cohen JF, Bertille N, Cohen R, Chalumeau M. Rapid antigen detection test for group A Streptococcus in children with pharyngitis. Cochrane Database Syst Rev 2016;7. https://doi.org/10.1002/14651858.CD010502.pub2.
- Mlejnek JR, Almulhem K, Spadafore S. Utility and cost effectiveness of throat culture in the treatment of patients with negative rapid strep screens. Acad Emerg Med 2014;21.
- Pauchard JY, Verga ME, Bersier J, Prod’Hom G, Gehri M, Vaudaux B. Performance of rapid antigen diagnostic test for group A β-haemolytic streptococcal pharyngitis in a tertiary paediatric emergency department. Swiss Med Wkly 2012;142.
- Schwartz RH. Evaluation of rapid streptococcal detection tests. Pediatr Infect Dis J 1997;16:1099-100. https://doi.org/10.1097/00006454-199711000-00028.
- Sedki M, Salama H, Salama E, Abdalla N, Ezz H. Rapid diagnostic test for streptococcal throat infection in Egyptian children. Med J Cairo Univ 2010;78:177-82.
- Banerjee S, Ford C. Rapid Tests for the Diagnosis of Group A Streptococcal Infection: A Review of Diagnostic Test Accuracy, Clinical Utility, Safety, and Cost-Effectiveness. Ottawa, ON: Canadian Agency for Drugs and Technologies in Health CADTH Rapid Response Reports; 2018.
- Kose E, Sirin Kose S, Akca D, Yildiz K, Elmas C, Baris M, et al. The effect of rapid antigen detection test on antibiotic prescription decision of clinicians and reducing antibiotic costs in children with acute pharyngitis. J Trop Pediatr 2016;62:308-15. https://doi.org/10.1093/tropej/fmw014.
- Benjamin JT. The costs of testing for streptococcal pharyngitis in the office laboratory. Arch Pediatr Adolesc Med 2000;154:93-4.
- Tsevat J, Kotagal UR. Management of sore throats in children: a cost-effectiveness analysis. Arch Pediatr Adolesc Med 1999;153:681-8. https://doi.org/10.1001/archpedi.153.7.681.
- Boyler PA, Humair J, Revaz SA, Stalder H. A cost-effectiveness analysis of recommended strategies for acute pharyngitis. J Gen Intern Med 2002;17:135-6.
- Ehrlich JE, Demopoulos BP, Daniel KR, Ricarte MC, Glied S. Cost-effectiveness of treatment options for prevention of rheumatic heart disease from group A streptococcal pharyngitis in a pediatric population. Prev Med 2002;35:250-7. https://doi.org/10.1006/pmed.2002.1062.
- Giraldez-Garcia C, Rubio B, Gallegos-Braun JF, Imaz I, Gonzalez-Enriquez J, Sarria-Santamera A. Diagnosis and management of acute pharyngitis in a paediatric population: a cost-effectiveness analysis. Eur J Pediatr 2011;170:1059-67. https://doi.org/10.1007/s00431-011-1410-0.
- Klepser D, Grismer SE, Klepser ME. Cost-effectiveness of pharmacist provided care for the treatment of adult pharyngitis. J Manag Care Pharm 2011;17.
- Komaroff AL, Pass TM, Pappius EM. A cost-effectiveness analysis of alternate strategies for management of sore throat. Clin Res 1983;31.
- Lathia N, Sullivan K, Tam K, Brna M, MacNeil P, Saltmarche D, et al. Cost-minimization analysis of community pharmacy-based point-of-care testing for strep throat in 5 Canadian provinces. Can Pharm J 2018;151:322-31. https://doi.org/10.1177/1715163518790993.
- Maizia A, Letrilliart L, Colin C. Diagnostic strategies for acute tonsillitis in France: a cost-effectiveness study. Presse Med 2012;41:e195-203. https://doi.org/10.1016/j.lpm.2011.10.021.
- Malecki M, Mazur A, Sobolewski M, Binkowska-Bury M, Marc M, Januszewicz P. Rapid strip tests as a decision-making tool about antibiotic treatment in children – a prospective study. Pediatr Pol 2017;92:149-55. https://doi.org/10.1016/j.pepo.2017.01.006.
- Meier FA, Howland J, Johnson J, Poisson R. Effects of a rapid antigen test for group A streptococcal pharyngitis on physician prescribing and antibiotic costs. Arch Intern Med 1990;150:1696-700. https://doi.org/10.1001/archinte.1990.00040031696018.
- Polisena J, Spry C. Point of Care Testing for Streptococcal Sore Throat: A Review of Diagnostic Accuracy, Cost-Effectiveness, and Guidelines. Ottawa, ON: Canadian Agency for Drugs and Technologies in Health; 2009.
- National Institute for Health and Care Excellence (NICE) . Resource Impact Report: Sepsis: The Recognition, Diagnosis and Early Management (NG51) 2016. www.nice.org.uk/guidance/ng51/resources/resource-impact-report-pdf-2549846269 (accessed 17 April 2019).
- Matthys J, De Meyere M, van Driel ML, De Sutter A. Differences among international pharyngitis guidelines: not just academic. Ann Fam Med 2007;5:436-43. https://doi.org/10.1370/afm.741.
- Gazzano V, Berger A, Benito Y, Freydiere A-M, Tristan A, Boisset S, et al. Reassessment of the role of rapid antigen detection tests in diagnosis of invasive group A streptococcal infections. J Clin Microbiol 2016;54. https://doi.org/10.1128/JCM.02516-15.
- Shallcross LJ, Davies SC. Antibiotic overuse: a key driver of antimicrobial resistance. Br J Gen Pract 2014;64:604-5. https://doi.org/10.3399/bjgp14X682561.
Appendix 1 Record of searches: clinical effectiveness
Bibliographic databases
Summary of bibliographic database searches
Database | Date of search | Number of records (+ number from update search) |
---|---|---|
MEDLINE (via OvidSP) | 26 November 2018 (updated 7 March 2019) | 1646 (+ 33) |
EMBASE (via OvidSP) | 27 November 2018 (updated 12 March 2019) | 2546 (+ 177) |
The Cochrane Library (via Wiley Online Library) | 29 November 2018 (updated 12 March 2019) | 118 (+ 1) |
Science Citation Index and Conference Proceedings Citation Index – Science (via the Web of Science) | 3 December 2018 (updated 12 March 2019) | 1275 (+ 67) |
DARE (via CRD) | 22 January 2019 (updated 12 March 2019) | 30 (+ 0) |
HTA (via CRD) | 22 January 2019 (updated 12 March 2019) | 2 (+ 0) |
MEDLINE (via OvidSP)
Databases: Ovid MEDLINE and Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Daily and Versions.
Date searched: 26 November 2018 (updated on 7 March 2019; see the end of this search record).
Date range searched: 1946 to 21 November 2018.
Original search: 26 November 2018
Search strategy
1. exp Pharyngitis/ (15,049)
2. pharyngit*.ti,ab,kf. (5455)
3. (nasophyryngit* or rhinopharyngit* or epipharyngit*).ti,ab,kf. (177)
4. (tonsillit* or tonsilit*).ti,ab,kf. (5589)
5. ((sore or pain* or ache* or aching or inflam* or infect*) adj3 (pharyn* or throat* or tonsil* or nasopharyn* or rhinopharyn* or epipharyn*)).ti,ab,kf. (9903)
6. or/1-5 (25,137)
7. Streptococcal Infections/di, mi (13,347)
8. Streptococcus pyogenes/im, ip (5444)
9. 7 or 8 (16,609)
10. ((strep or streptococcal or group) adj2 A).ti,ab,kf. (558,959)
11. 9 and 10 (4831)
12. (strep* adj5 (throat* or pharyn* or tonsil*)).ti,ab,kf. (3397)
13. streptoco* A.ti,ab,kf. (475)
14. (group A adj5 streptoco*).ti,ab,kf. (9481)
15. ((streptococcus or strep) adj1 (pyogenes or pyogenic)).ti,ab,kf. (7683)
16. ((streptococcus or strep) adj1 (epidemicus or erysipelatis or erysipelatos or hemolyticus or haemolyticus or scarlatinae or lancefield)).ti,ab,kf. (237)
17. (s pyogenes or pyogenes s or micrococcus scarlatinae).ti,ab,kf. (2485)
18. lancefield group.ti,ab,kf. (475)
19. gabhs.ti,ab,kf. (392)
20. or/11-19 (18,796)
21. Point-of-Care Systems/ (11,122)
22. exp Reagent Kits, Diagnostic/ (19,326)
23. Antigens, Bacterial/an (7619)
24. (point-of-care or poc or poct or pocts).ti,ab,kf. (17,665)
25. ((rapid* or bedside*1 or bed-side*1 or near-patient or nearpatient or extra-laboratory or extralaboratory or office*1) adj6 (test or tests or testing or tested or detect* or diagnos* or screen* or kit or kits or assay* or immunoassay* or determin* or identif* or antigen*1)).ti,ab,kf. (136,637)
26. (radt or radts or rdt or rdts).ti,ab,kf. (1813)
27. (antigen*1 adj6 (test or tests or testing or tested or detect* or diagnos* or screen* or kit or kits or assay* or immunoassay* or determin* or identif*)).ti,ab,kf. (100,724)
28. (clearview exact* or BD veritor* or strep A rapid test* or quikread go* or alere i* or cobas liat* or genexpert* or ((alere* or testpack* or test-pack* or bionexia* or bio-nexia* or biosynex* or veritor* or cobas* or quikread* or quik-read* or NADAL* or OSOM* or sofia* or xpert*) and (strep A or point of care or point-of-care or POC))).ti,ab,kf. (804)
29. ((abbott or beckton dickinson or biopanda or nal von minden or sekisui or orion diagnostica or roche or cepheid or biomerieux or quidel) and (strep A or point of care or POC or rapid test* or rapid antigen or antigen test*)).ti,ab,kf,in. (618)
30. or/21-29 (269,698)
31. (6 or 20) and 30 (1759)
32. exp animals/ not humans/ (4,517,568)
33. 31 not 32 (1646)
34. 31 use medp,prem,mesx (114)
35. 33 or 34 (1646).
Updated search: 7 March 2019
Search strategy
Re-ran above search with the following date limits:
36. limit 35 to ed=20181126-20190307 (17)
37. limit 35 to ep=20181126-20190307 (14)
38. (2018 11* or 2018 12* or 2019*).dt,ez. (243,915)
39. 35 and 38 (13)
40. 36 or 37 or 39 (33)
Total after removing duplicates with previous search: 16.
EMBASE (via OvidSP)
Databases: EMBASE Classic and EMBASE.
Date searched: 27 November 2018 (updated on 12 March 2019; see the end of this search record).
Date range searched: 1947 to 21 November 2018.
Original search: 27 November 2018
Search strategy
1. streptococcal pharyngitis/ or pharyngitis/ or rhinopharyngitis/ or sore throat/ or tonsillitis/ or chronic tonsillitis/ or palatine tonsillitis/ (51,206)
2. pharyngit*.ti,ab,kw. (7851)
3. (nasophyryngit* or rhinopharyngit* or epipharyngit*).ti,ab,kw. (379)
4. (tonsillit* or tonsilit*).ti,ab,kw. (8320)
5. ((sore or pain* or ache* or aching or inflam* or infect*) adj3 (pharyn* or throat* or tonsil* or nasopharyn* or rhinopharyn* or epipharyn*)).ti,ab,kw. (15,900)
6. or/1-5 (59,836)
7. Streptococcus infection/di (3821)
8. Streptococcus pyogenes/ or streptococcus group a/ or group A streptococcal infection/ (23,921)
9. 7 or 8 (26,865)
10. ((strep or streptococcal or group) adj2 A).ti,ab,kw. (792,961)
11. 9 and 10 (9617)
12. (strep* adj5 (throat* or pharyn* or tonsil*)).ti,ab,kw. (4842)
13. streptoco* A.ti,ab,kw. (636)
14. (group A adj5 streptoco*).ti,ab,kw. (12,213)
15. ((streptococcus or strep) adj1 (pyogenes or pyogenic)).ti,ab,kw. (9259)
16. ((streptococcus or strep) adj1 (epidemicus or erysipelatis or erysipelatos or hemolyticus or haemolyticus or scarlatinae or lancefield)).ti,ab,kw. (388)
17. (s pyogenes or pyogenes s or micrococcus scarlatinae).ti,ab,kw. (3223)
18. lancefield group.ti,ab,kw. (567)
19. gabhs.ti,ab,kw. (504)
20. or/11-19 (24,055)
21. point of care system/ or point of care testing/ (11,966)
22. rapid test/ or diagnostic kit/ (8892)
23. antigen detection/ or bacterial antigen/an or Streptococcus antigen/ (24,501)
24. (point-of-care or poc or poct or pocts).ti,ab,kw. (25,553)
25. ((rapid* or bedside*1 or bed-side*1 or near-patient or nearpatient or extra-laboratory or extralaboratory or office*1) adj6 (test or tests or testing or tested or detect* or diagnos* or screen* or kit or kits or assay* or immunoassay* or determin* or identif* or antigen*1)).ti,ab,kw. (177,813)
26. (radt or radts or rdt or rdts).ti,ab,kw. (2974)
27. (antigen*1 adj6 (test or tests or testing or tested or detect* or diagnos* or screen* or kit or kits or assay* or immunoassay* or determin* or identif*)).ti,ab,kw. (130,835)
28. (clearview exact* or BD veritor* or strep A rapid test* or quikread go* or alere i* or cobas liat* or genexpert* or ((alere* or testpack* or test-pack* or bionexia* or bio-nexia* or biosynex* or veritor* or cobas* or quikread* or quik-read* or NADAL* or OSOM* or sofia* or xpert*) and (strep A or point of care or point-of-care or POC))).ti,ab,kw. (1633)
29. ((abbott or beckton dickinson or biopanda or nal von minden or sekisui or orion diagnostica or roche or cepheid or biomerieux or quidel) and (strep A or point of care or POC or rapid test* or rapid antigen or antigen test*)).ti,ab,kw,in. (1404)
30. or/21-29 (345,022)
31. (6 or 20) and 30 (2856)
32. (exp animal/ or nonhuman/) not exp human/ (6,749,742)
33. 31 not 32 (2546)
Updated search: 12 March 2019
Search strategy
Re-ran above search with the following date limits:
34. limit 33 to dd=20181127-20190312 (18)
35. limit 33 to em=201811-201903 (152)
36. 34 or 35 (159)
37. limit 33 to dc=20181127-20190312 (41)
38. 36 or 37 (177)
Total after removing duplicates with other update and previous searches: 25.
The Cochrane Library (including Cochrane Database of Systematic Reviews and Cochrane Central Register of Controlled Trials)
Date searched: 29 November 2018 (updated on 12 March 2019; see the end of this search record).
Original search: 29 November 2018
Search strategy
#1 MeSH descriptor: [Pharyngitis] explode all trees (1138)
#2 pharyngit*:ti,ab,kw (1916)
#3 (nasophyryngit* or rhinopharyngit* or epipharyngit*):ti,ab,kw (2597)
#4 (tonsillit* or tonsilit*):ti,ab,kw (826)
#5 ((sore or pain* or ache* or aching or inflam* or infect*) near/3 (pharyn* or throat* or tonsil* or nasopharyn* or rhinopharyn* or epipharyn*)):ti,ab,kw (3198)
#6 #1 or #2 or #3 or #4 or #5 (7030)
#7 MeSH descriptor: [Streptococcal Infections] explode all trees and with qualifier(s): [diagnosis - DI, microbiology - MI] (306)
#8 MeSH descriptor: [Streptococcus pyogenes] explode all trees and with qualifier(s): [immunology - IM, isolation & purification - IP] (89)
#9 #7 or #8 (351)
#10 ((strep or streptococcal or group) near/2 A):ti,ab,kw (109,570)
#11 #9 and #10 (126)
#12 (strep* near/5 (throat* or pharyn* or tonsil*)):ti,ab,kw (499)
#13 streptoco* next A:ti,ab,kw (26)
#14 (group A near/5 streptoco*):ti,ab,kw (689)
#15 ((streptococcus or strep) near/1 (pyogenes or pyogenic)):ti,ab,kw (423)
#16 ((streptococcus or strep) near/1 (epidemicus or erysipelatis or erysipelatos or hemolyticus or haemolyticus or scarlatinae or lancefield)):ti,ab,kw (1)
#17 (“s pyogenes” or “pyogenes s” or “micrococcus scarlatinae”):ti,ab,kw (60)
#18 “lancefield group”:ti,ab,kw (6)
#19 gabhs:ti,ab,kw (109)
#20 #11 or #12 or #13 or #14 or #15 or #16 or #17 or #18 or #19 (1123)
#21 MeSH descriptor: [Point-of-Care Systems] explode all trees (424)
#22 MeSH descriptor: [Reagent Kits, Diagnostic] explode all trees (267)
#23 MeSH descriptor: [Antigens, Bacterial] explode all trees and with qualifier(s): [analysis - AN] (63)
#24 (point-of-care or poc or poct or pocts):ti,ab,kw (2560)
#25 ((rapid* or bedside*1 or bed-side*1 or near-patient or nearpatient or extra-laboratory or extralaboratory or office*1) near/6 (test or tests or testing or tested or detect* or diagnos* or screen* or kit or kits or assay* or immunoassay* or determin* or identif* or antigen*1)):ti,ab,kw (3506)
#26 (radt or radts or rdt or rdts):ti,ab,kw (302)
#27 (antigen*1 near/6 (test or tests or testing or tested or detect* or diagnos* or screen* or kit or kits or assay* or immunoassay* or determin* or identif*)):ti,ab,kw (0)
#28 (clearview next exact* or BD next veritor* or “strep A rapid” next test* or quikread next go* or alere next i* or cobas next liat* or genexpert* or ((alere* or testpack* or test-pack* or bionexia* or bio-nexia* or biosynex* or veritor* or cobas* or quikread* or quik-read* or NADAL* or OSOM* or sofia* or xpert*) and (“strep A” or “point of care” or point-of-care or POC))):ti,ab,kw (114)
#29 ((abbott or “beckton dickinson” or biopanda or “nal von minden” or sekisui or “orion diagnostica” or roche or cepheid or biomerieux or quidel) and (“strep A” or “point of care” or POC or rapid next test* or rapid next antigen* or antigen next test*)):ti,ab,kw (47)
#30 #21 or #22 or #23 or #24 or #25 or #26 or #27 or #28 or #29 (6235)
#31 (#6 or #20) and #30 (118)
Total: 118.
- Cochrane Database of Systematic Reviews – Reviews: 15.
- Cochrane Database of Systematic Reviews – Protocols: 1.
- CENTRAL: 102.
Updated search: 12 March 2019
Re-ran above search and sorted by date, with the newest first.
New since 29 November 2018:
- Cochrane Database of Systematic Reviews – Reviews: 0
- Cochrane Database of Systematic Reviews – Protocols: 0
- CENTRAL: 1.
Total after removing duplicates with other update and previous searches: 0.
Science Citation Index and Conference Proceedings (via the Web of Science)
Date searched: 3 December 2018 (updated on 12 March 2019; see the end of this search record).
Original search: 3 December 2018
Search strategy
Note: search record reads from bottom to top.
Set | Results | History |
---|---|---|
# 23 | 1275 | (#5 OR #14) AND #22 Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 22 | 265,727 | #15 OR #16 OR #17 OR #18 OR #19 OR #20 OR #21 Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 21 | 487 | TS=((abbott OR “beckton dickinson” OR biopanda OR “nal von minden” OR sekisui OR “orion diagnostica” OR roche OR cepheid OR biomerieux OR quidel) AND (“strep A” OR “point* of care” OR poc OR poct OR pocts OR “rapid test*” OR “rapid antigen” OR “antigen test*”)) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 20 | 849 | TS=(“clearview exact*” OR “BD veritor*” OR “strep A rapid test*” OR “quikread go*” OR “alere i*” OR “cobas liat*” OR genexpert* OR ((alere* OR testpack* OR test-pack* OR bionexia* OR bio-nexia* OR biosynex* OR veritor* OR cobas* OR quikread* OR quik-read* OR NADAL* OR OSOM* OR sofia* OR xpert*) AND (“strep A” OR “point* of care” OR poc OR poct OR pocts))) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 19 | 86,024 | TS=(antigen* NEAR/5 (test OR tests OR testing OR tested OR detect* OR diagnos* OR screen* OR kit OR kits OR assay* OR immunoassay* OR determin* OR identif*)) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 18 | 2261 | TS=(radt OR radts OR rdt OR rdts) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 17 | 165,166 | TS=((rapid* OR bedside* OR bed-side* OR near-patient OR nearpatient OR extra-laboratory OR extralaboratory OR office*) NEAR/5 (test OR tests OR testing OR tested OR detect* OR diagnos* OR screen* OR kit OR kits OR assay* OR immunoassay* OR determin* OR identif* OR antigen*)) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 16 | 22,883 | TS=(“point* of care” OR poc OR poct OR pocts) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 15 | 219 | TS=(diagnostic AND (reagent NEAR/0 (kit* OR strip*))) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 14 | 17,280 | #6 OR #7 OR #8 OR #9 OR #10 OR #11 OR #12 OR #13 Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 13 | 308 | TS=gabhs Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 12 | 444 | TS="lancefield group" Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 11 | 2042 | TS=(“s pyogenes” OR “pyogenes s” OR “micrococcus scarlatinae”) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 10 | 59 | TS=((strep*) NEAR/0 (epidemicus OR erysipelatis OR erysipelatos OR hemolyticus OR haemolyticus OR scarlatinae OR lancefield)) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 9 | 7107 | TS=((strep*) NEAR/0 (pyogenes OR pyogenic)) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 8 | 9638 | TS=(“group A” NEAR/4 strep*) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 7 | 1156 | TS="strep* A" Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 6 | 2875 | TS=(strep* NEAR/4 (throat* OR pharyn* OR tonsil*)) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 5 | 12,426 | #1 OR #2 OR #3 OR #4 Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 4 | 6980 | TS=((sore OR pain* OR ache* OR aching OR inflam* OR infect*) NEAR/2 (pharyn* OR throat* OR tonsil* OR nasopharyn* OR rhinopharyn* OR epipharyn*)) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 3 | 2703 | TS=(tonsillit* OR tonsilit*) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 2 | 96 | TS=(nasophyryngit* OR rhinopharyngit* OR epipharyngit*) Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
# 1 | 4651 | TS=pharyngit* Indexes=SCI-EXPANDED, CPCI-S Timespan=All years |
Updated search: 12 March 2019
Re-ran the above search with the following date limits:
Search strategy
Set | Results | History |
---|---|---|
# 23 | 67 | (#5 OR #14) AND #22 Indexes=SCI-EXPANDED, CPCI-S Timespan=2018-2019 |
Total after removing duplicates with other update and previous searches: 4.
Database of Abstracts of Reviews of Effects (via Centre for Reviews and Dissemination) and Health Technology Assessment database (via Centre for Reviews and Dissemination)
Date searched: 22 January 2019. (Not updated because no new records have been added to DARE since 31 March 2015 or to the HTA database since 31 March 2018. The INAHTA website was checked in March 2019 to see if a new platform for the HTA database was available.)
Original search: 22 January 2019
Search strategy
-
MeSH DESCRIPTOR Pharyngitis EXPLODE ALL TREES IN DARE,NHSEED,HTA (73)
-
(pharyngit*) (85)
-
(nasophyryngit*) OR (rhinopharyngit*) OR (epipharyngit*) (5)
-
(tonsillit* or tonsilit*) (43)
-
(((sore or pain* or ache* or aching or inflam* or infect*) adj3 (pharyn* or throat* or tonsil* or nasopharyn* or rhinopharyn* or epipharyn*))) (91)
-
#1 OR #2 OR #3 OR #4 OR #5 (163)
-
MeSH DESCRIPTOR Streptococcal Infections WITH QUALIFIERS DI, MI IN DARE,NHSEED,HTA (31)
-
MeSH DESCRIPTOR Streptococcus pyogenes WITH QUALIFIERS IM, IP IN DARE,NHSEED,HTA (13)
-
#7 OR #8 (36)
-
(((strep or streptococcal or group) adj2 A)) (2025)
-
#9 AND #10 (17)
-
((strep* adj5 (throat* or pharyn* or tonsil*))) (39)
-
(streptoco* adj1 A) (10)
-
((group A adj5 streptoco*)) (27)
-
(((streptococcus or strep or staphylococcus) adj1 (pyogenes or pyogenic))) (25)
-
(((streptococcus or strep) adj1 (epidemicus or erysipelatis or erysipelatos or hemolyticus or haemolyticus or scarlatinae or lancefield))) (0)
-
((s pyogenes or pyogenes s or micrococcus scarlatinae)) (1)
-
(lancefield group) (0)
-
(gabhs) (8)
-
#12 OR #13 OR #14 OR #15 OR #16 OR #17 OR #18 OR #19 (51)
-
#6 AND #20 (43)
-
(#21) IN DARE (30)
-
(#21) IN HTA (2)
PROSPERO (International Prospective Register of Systematic Reviews)
Date searched: 20 February 2019.
Search strategy
-
#1 MeSH DESCRIPTOR Pharyngitis EXPLODE ALL TREES (29)
-
#2 pharyngit* (48)
-
#3 nasophyryngit* OR rhinopharyngit* OR epipharyngit* (3)
-
#4 tonsillit* OR tonsilit* (35)
-
#5 (sore OR pain* OR ache* OR aching OR inflam* OR infect*) ADJ3 (pharyn* or throat* OR tonsil* OR nasopharyn* OR rhinopharyn* OR epipharyn*) (105)
-
#6 #1 OR #2 OR #3 OR #4 OR #5 (125)
-
#7 strep* ADJ5 (throat* or pharyn* or tonsil*) (8)
-
#8 #6 OR #7 (125)
Status of review: completed or published (17).
Browsed online by information specialist: none relevant.
Trials registers
ClinicalTrials.gov
Date searched: 20 February 2019.
Search strategy
Thirty-three studies were found for:
Active, not recruiting, Completed, Suspended, Terminated, Withdrawn, Unknown status Studies | “strep throat” OR ( strep OR streptococcus OR streptococcal OR “group a” OR gabhs ) AND ( throat OR pharynx OR tonsils ) OR pharyngitis OR rhinopharyngitis OR epipharyngitis OR tonsillitis OR tonsilitis OR “sore throat” | rapid OR antigen OR radt OR radts OR rdt OR rdts OR “point of care” OR poc OR poct OR pocts OR bedside OR bed-side OR near-patient OR nearpatient OR diagnostic OR diagnosis OR test OR tests OR testing OR kit OR kits OR clearview OR veritor OR quikread OR quik-read OR alere OR cobas OR genexpert OR testpack OR test-pack OR bionexia OR bio-nexia OR biosynex OR nadal OR osom OR sofia OR xpert OR abbott OR “beckton dickinson” OR biopanda OR “nal von minden” OR sekisui OR “orion diagnostica” OR roche OR cepheid OR biomerieux OR quidel.
Records were downloaded to Microsoft Excel® (Microsoft Corporation, Redmond, WA, USA) and screened by an information specialist against the inclusion criteria and with reference to the included studies from the database searches. No new studies were identified.
Conferences and professional organisations
Selected with advice from several advisors [Noel McCarthy (University of Warwick, Coventry, UK) and NICE specialist committee members].
Federation of Infection Societies conference
Date searched: 6 March 2019.
2019 (November): abstracts not yet available.
2018: https://fis2018.co.uk/ (browsed abstracts > diagnostics) – 0 relevant.
2017: http://event.federationinfectionsocieties.com/ (browsed abstracts) – 0 relevant.
2016: www.journalofhospitalinfection.com/issue/S0195-6701(16)X0012-6 [searched abstracts (poster and oral presentations and invited speaker abstracts) one term at a time. Terms used: strep or group a or throat or pharyn or tonsil (search looked for these within words as well as whole words)] – 0 relevant.
2015: abstracts appear not to be available online.
2014: searched abstracts one term at a time. Terms used: strep or group a or throat or pharyn or tonsil (search looked for these within words as well as whole words) – 0 relevant.
The European Congress of Clinical Microbiology and Infectious Diseases
Date searched: 5 March 2019.
URL: www.eccmid.org/.
EMBASE indexes this congress only up to the 22nd European Congress of Clinical Microbiology and Infectious Diseases, London, UK, 2012.
Older and more recent years available via the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) eLibrary: www.escmid.org/escmid_publications/escmid_elibrary/.
Searched ESCMID eLibrary on 5 March 2019 for the following terms, with no date limit.
Search strategy
strep – 25 results (three sent to reviewers).
“group a” AND streptococcus, limited to ‘Topics: Diagnostic Bacteriology & General Microbiology’ – 10 (one sent to reviewers).
“group a” AND streptococcal, limited to ‘Topics: Diagnostic Bacteriology & General Microbiology’ – 8 (one sent to reviewers).
American Society for Microbiology
American Society for Microbiology (ASM) Microbe meeting: www.asm.org/ (website restructured; past meeting abstracts unavailable).
British Society for Antimicrobial Chemotherapy
Date searched: 6 March 2019.
URL: www.bsac.org.uk.
British Society for Antimicrobial Chemotherapy spring meeting abstracts 2016–18 and general website searched one term at a time. Terms used: strep or group a or throat or pharyn or tonsil (search looked for these within words as well as whole words) – screened online; none relevant.
British Infection Association
Date searched: 6 March 2019.
URL: www.britishinfection.org/.
None relevant.
Note: British Society for Antimicrobial Chemotherapy spring conference – Thursday 21 and Friday 22 March 2019.
Public Health England Annual Conference and Public Health Research and Science Annual Conference
Date searched: 12 March 2019.
Search strategy
2016–18: https://phe.multilearning.com/phe/#!*menu=6*browseby=3*sortby=2. Searched one term at a time. Terms used: strep or streptococcal or streptococcus or group a or throat or pharyngitis or pharynx or tonsillitis or tonsilitis (search looked for whole words) – screened online; none relevant.
Streptococcal biology conference
Date searched: 12 March 2019.
URL: www.grc.org/streptococcal-biology-conference/2018/.
Search strategy
Searched one term at a time. Terms used: throat or pharyn or tonsil or rapid or point or diagnos (search looked for these within words as well as whole words) – 0 results.
Lancefield International Symposium on Streptococci and Streptococcal Diseases
Date searched: 12 March 2019.
Not indexed in EMBASE. Some abstracts indexed in Web of Science, but only up to 2009.
2017: http://lisssd2017.org/abstracts/ – website unavailable.
Unable to find a list of full abstracts online for the most recent 5 years.
Microbiology Society Conference
Date searched: 12 March 2019.
2019 (abstract book for April 2019 available): https://microbiologysociety.org/event/annual-conference/annual-conference.html.
2018: https://microbiologysociety.org/event/annual-conference/annual-conference-2018.html#tab-2.
2017: https://microbiologysociety.org/events/annual-conference.html?eventYear=2017.
2016: https://microbiologysociety.org/event/annual-conference/annual-conference-2016.html.
2015: https://microbiologysociety.org/event/annual-conference/annual-conference-2015.html.
Searched one term at a time. Terms used: streptococcal or streptococcus or group a or throat or pharyn or tonsil (search looked for these within words as well as whole words) – screened online; none relevant.
Association of Clinical Biochemistry and Laboratory Medicine
Date searched: 12 March 2019.
2018 and 2019 searched.
URL: www.acb.org.uk/whatwedo/events/national_meetings.aspx.
URL: www.acb.org.uk/whatwedo/events/national_meetings/focus-2018/abstracts/posterabstracts.
Searched one term at a time. Terms used: streptococcal or streptococcus or group a or throat or pharyn or tonsil (search looked for these within words as well as whole words) – screened online; none relevant.
In addition, the website was searched using Google Advanced Search with www.acb.org.uk in the domain.
Search strategy
strep* OR throat OR tonsil* OR pharyn* OR “group a”
URL: www.acb.org.uk – seven results; screened online; none relevant.
Royal College of Pathologists
Date searched: 12 March 2019.
The website was searched using Google Advanced Search with www.rcpath.org in the domain.
Search strategy
strep* OR throat OR tonsil* OR pharyn* OR “group a”.
URL: www.rcpath.org – 55 results; screened online; none relevant.
Included studies in relevant reviews
Reviews found in searches
Vachhani et al. 95 focuses on manufacturers’ package inserts. The report refers to the following systematic reviews in the background:
-
Lean et al. 69: we cross-checked the 14 articles (of the 48 included studies) that mention a test name within our scope; all 14 had already been picked up and sifted.
-
Stewart et al. 96: we cross-checked the 58 included studies against the database search results; 57 of the 58 had already been picked up and sifted. The one remaining (Parviainen et al. 97) includes tests not within scope [ReaScan Strep A test (Reagena International Ltd, Toivala, Finland) vs. standard culture vs. TestPack® Strep A test (Inverness Medical, Cranfield, UK)].
-
Ruiz-Aragon et al. 98 was not in English and was older (published in 2010).
The studies included in Cohen et al. 99 were scanned for test names within the scope and cross-checked against the results of the database searches.
Some studies were found that we had excluded because the test name was not given in the meeting abstract; the Cochrane reviewers had contacted the authors and been given more information. These were sent to reviewers.
Mlejnek et al. 100:
-
go to characteristics of included studies: www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD010502.pub2/references#characteristicStudies
-
and search using ctrl + f for ‘Mlejnek 2014’.
Pauchard et al. 57:
-
go to the same link above and search using ctrl + f for ‘Pauchard 2013’.
Pauchard et al. 101:
-
go to the same link above and search using ctrl + f for ‘Pauchard 2012’.
Not found in our searches:
Schwartz et al. 102:
-
letter.
Sedki et al. 103:
-
checked full text – not a test in our scope (Streptatest®, Dectra Pharm, Strasbourg, France).
Also checked:
Banerjee and Ford. 104
References of our included studies
Searched in March 2019 for studies.
Manufacturers’ websites
Searched in January 2019 for studies and data.
Rachel Court also checked manufacturers’ submissions for mention of studies; relevant details, abstracts, posters and package inserts were forwarded to reviewers.
Regulatory bodies
Food and Drug Administration, Clinical Laboratory Improvement Amendments database
Targeted searches undertaken on 6 March 2019.
Test system/manufacturer: NADAL. None found.
Test system/manufacturer: nal von minden. None found.
Test system/manufacturer: Cepheid AND Analyte Name: Streptococcus, group A. Two found, most relevant one sent to reviewers.
Test system/manufacturer: Cobas AND Analyte Name: Streptococcus, group A. Two found; earliest one sent to reviewers.
Test system/manufacturer: Biopanda. None found.
Test system/manufacturer: Biomerieux AND Analyte Name: Streptococcus, group A. None found.
Test system/manufacturer: Bionexia AND Analyte Name: Streptococcus, group A. None found.
Test system/manufacturer: Alere i AND Analyte Name: Streptococcus, group A. One found (Alere i) – sent to reviewers.
Test system/manufacturer: Abbott AND Analyte Name: Streptococcus, group A. One found (TestPack Plus) – sent to reviewers.
Test system/manufacturer: Clearview AND Analyte Name: Streptococcus, group A. None found for Abbott.
Test system/manufacturer: BD Veritor AND Analyte Name: Streptococcus, group A. One found – sent to reviewers.
Test system/manufacturer: OSOM AND Analyte Name: Streptococcus, group A. Several found, but no details of studies in summaries/statements.
Test system/manufacturer: Orion Diagnostica AND Analyte Name: Streptococcus, group A. None found for QuikRead Go.
Test system/manufacturer: QuikRead Go AND Analyte Name: Streptococcus, group A. None found.
Test system/manufacturer: Biosynex. None found.
Test system/manufacturer: Quidel AND Analyte Name: Streptococcus, group A. Several found; earliest one sent to reviewers.
European Commission medical devices
Checked on 6 March 2019. NB: the Eudamed database is not yet publicly available.
Health services research agencies
International Network of Agencies for Health Technology Assessment
Date searched: 8 March 2019. HTA database also searched (see Bibliographic databases).
Date searched: 27 February 2019.
Targeted search for NADAL Strep A scan using various terms. No study data found.
Appendix 2 Data extraction form for primary studies
Appendix 3 Excluded studies with reasons
Reference | Reason for exclusion |
---|---|
Uphoff TS, Buchan BW, Ledeboer NA, Granato PA, Daly JA, Marti TN. Multicenter evaluation of the solana group A Streptococcus assay: comparison with culture. J Clin Microbiol 2016;54:2388–90. https://doi.org/10.1128/JCM.01268-16 | Wrong test |
Abd El-Ghany SM, Abdelmaksoud AA, Saber SM, Abd El Hamid DH. Group A beta-hemolytic streptococcal pharyngitis and carriage rate among Egyptian children: a case-control study. Ann Saudi Med 2015;35:377–82. https://doi.org/10.5144/0256-4947.2015.377 | Wrong test |
Abu-Sabaah AH, Ghazi HO. Better diagnosis and treatment of throat infections caused by group A beta-haemolytic streptococci. Br J Biomed Sci 2006;63:155–8. https://doi.org/10.1080/09674845.2006.11732740 | Wrong test |
Agarwal M, Raghuwanshi SK, Asati DP. Antibiotic use in sore throat: are we judicious? Indian J Otolaryngol Head Neck Surg 2015;67:267–70. https://doi.org/10.1007/s12070-015-0864-1 | Wrong test |
Alper Z, Uncu Y, Akalin H, Ercan I, Sinirtas M, Bilgel NG. Diagnosis of acute tonsillopharyngitis in primary care: a new approach for low-resource settings. J Chemother 2013;25:148–55. https://doi.org/10.1179/1973947813Y.0000000071 | Wrong test |
Al-Tawfiq JA, Alawami AH. A multifaceted approach to decrease inappropriate antibiotic use in a pediatric outpatient clinic. Ann Thorac Med 2017;12:51–4. https://doi.org/10.4103/1817-1737.197779 | No specific RADT mentioned |
Amorim R, Filho AF, Abath A, Hatem T, Mourato F, Gomes R, Mattos S. Prevalence of positive rapid antigen group A Streptococcus test in children and adolescents in a state from Northeast Brazil. Cardiol Young 2017;27:s484 | Wrong test |
Anderson KB, Simasathien S, Watanaveeradej V, Weg AL, Ellison DW, Suwanpakdee D, et al. Clinical and laboratory predictors of influenza infection among individuals with influenza-like illness presenting to an urban Thai hospital over a five-year period. PLOS ONE 2018;13:e0193050. https://doi.org/10.1371/journal.pone.0193050 | Wrong test |
Anderson NW, Buchan BW, Mayne D, Mortensen JE, Mackey TL, Ledeboer NA. Multicenter clinical evaluation of the illumigene group A streptococcus DNA amplification assay for detection of group A streptococcus from pharyngeal swabs. J Clin Microbiol 2013;51:1474–7. https://doi.org/10.1128/JCM.00176-13 | Wrong test |
André M, Eriksson M, Mölstad S, Stålsbylundborg C, Jacobsson A, Odenholt I, Swedish Study Group on Antibiotic Use. The management of infections in children in general practice in Sweden: a repeated 1-week diagnosis-prescribing study in 5 counties in 2000 and 2002. Scand J Infect Dis 2005;37:863–9. https://doi.org/10.1080/00365540500335207 | No specific RADT mentioned |
Andrews D, Chetty Y, Cooper BS, Virk M, Glass SK, Letters A, et al. Multiplex PCR point of care testing versus routine, laboratory-based testing in the treatment of adults with respiratory tract infections: a quasi-randomised study assessing impact on length of stay and antimicrobial use. BMC Infect Dis 2017;17:671. https://doi.org/10.1186/s12879-017-2784-z | Wrong test |
Anonymous. Group A streptococcal pharyngitis: diagnosis and management. Drug Benefit Trends 2003;15:29–32 | Wrong test |
Aoki A, Ashizawa T, Ebata A, Nasu Y, Fujii T. Group A streptococcus pharyngitis outbreak among university students in a judo club. J Infect Chemother 2014;20:190–3. https://doi.org/10.1016/j.jiac.2013.10.004 | Wrong test |
Araujo Filho BC, Imamura R, Sennes LU, Sakae FA. Role of rapid antigen detection test for the diagnosis of group A beta-hemolytic Streptococcus in patients with pharyngotonsillitis. Braz J Otorhinolaryngol 2005;71:168–71 | Wrong test |
Araujo Filho BC, Imamura R, Sennes LU, Sakae FA. Role of rapid antigen detection test for the diagnosis of group-A beta-hemolytic Streptococcus in patients with pharyngotonsillitis. Braz J Otorhinolaryngol 2006;72:12–15 | Wrong test |
Arbefeville S, Nelson K, Thonen-Kerr E, Ferrieri P. Prospective postimplementation study of solana group A streptococcal nucleic acid amplification test vs conventional throat culture. Am J Clin Pathol 2018;150:333–7. https://doi.org/10.1093/ajcp/aqy051 | Wrong test |
Armengol CE, Hendley JO, Schlager TA. Could repetition of the rapid antigen detection test for group A streptococci on a second swab replace the backup throat culture? Pediatr Res 2004;55:341A | Meeting abstract could not be located |
Armengol CE, Schlager TA, Hendley JO. Sensitivity of a rapid antigen detection test for group A streptococci in a private pediatric office setting: answering the Red Book’s request for validation. Pediatrics 2004;113:924–6. https://doi.org/10.1542/peds.113.4.924 | Wrong test |
Atlas SJ, McDermott SM, Mannone C, Barry MJ. The role of point of care testing for patients with acute pharyngitis. J Gen Intern Med 2005;20:759–61 | Wrong test |
Ayanruoh S, Waseem M, Quee F, Humphrey A, Reynolds T. Impact of rapid streptococcal test on antibiotic use in a pediatric emergency department. Pediatr Emerg Care 2009;25:748–50. https://doi.org/10.1097/PEC.0b013e3181bec88c | Wrong test |
Balasubramanian S, Amperayani S, Dhanalakshmi K, Senthilnathan S, Chandramohan V. Rapid antigen diagnostic testing for the diagnosis of group A beta-haemolytic streptococci pharyngitis. Natl Med J India 2018;31:8–10. https://doi.org/10.4103/0970-258X.243433 | Wrong test |
Ba-Saddik IA, Munibari AA, Alhilali AM, Ismail SM, Murshed FM, Coulter JB, et al. Prevalence of Group A beta-haemolytic streptococcus isolated from children with acute pharyngotonsillitis in Aden, Yemen. Trop Med Int Health 2014;19:431–9. https://doi.org/10.1111/tmi.12264 | Wrong test |
Bergmark R, Bergmark B, Blander J, Fataki M, Janabi M. Burden of disease and barriers to the diagnosis and treatment of group A beta-hemolytic streptococcal pharyngitis for the prevention of rheumatic heart disease in Dar Es Salaam, Tanzania. Pediatr Infect Dis J 2010;29:1135–7. https://doi.org/10.1097/inf.0b013e3181edf475 | No specific RADT mentioned |
Bjerrum L, Cots JM, Llor C, Molist N, Munck A. Effect of intervention promoting a reduction in antibiotic prescribing by improvement of diagnostic procedures: a prospective, before and after study in general practice. Eur J Clin Pharmacol 2006;62:913–18. https://doi.org/10.1007/s00228-006-0187-y | No comparison with biological culture or clinical scores |
Brennan-Krohn T, Ozonoff A, Sandora TJ. Adherence to guidelines for testing and treatment of children with pharyngitis: a retrospective study. BMC Pediatr 2018;18:43. https://doi.org/10.1186/s12887-018-0988-z | No specific RADT mentioned |
Briel M, Young J, Tschudi P, Hersberger KE, Hugenschmidt C, Langewitz W, Bucher HC. Prevalence and influence of diagnostic tests for acute respiratory tract infections in primary care. Swiss Med Wkly 2006;136:248–53 | Wrong population |
Brittain-Long R, Westin J, Olofsson S, Lindh M, Andersson LM. Access to a polymerase chain reaction assay method targeting 13 respiratory viruses can reduce antibiotics: a randomised, controlled trial. BMC Med 2011;9:44. https://doi.org/10.1186/1741-7015-9-44 | No specific RADT mentioned |
Brook I, Gober AE. Concurrent influenza A and group A beta-hemolytic streptococcal pharyngotonsillitis. Ann Otol Rhinol Laryngol 2008;117:310–12. https://doi.org/10.1177/000348940811700412 | Wrong test |
Bursle E, Robson J. Non-culture methods for detecting infection. Aust Prescr 2016;39:171–5. https://doi.org/10.18773/austprescr.2016.059 | Review |
Camurdan AD, Camurdan OM, Ok I, Sahin F, Ilhan MN, Beyazova U. Diagnostic value of rapid antigen detection test for streptococcal pharyngitis in a pediatric population. Int J Pediatr Otorhinolaryngol 2008;72:1203–6. https://doi.org/10.1016/j.ijporl.2008.04.008 | Wrong test |
Cao C, Zhang F, Ji M, Pei F, Fan X, Shen H, et al. Development of a loop-mediated isothermal amplification method for rapid detection of streptococcal pyrogenic exotoxin B. Toxicon 2016;117:53–8. https://doi.org/10.1016/j.toxicon.2016.03.019 | No specific RADT mentioned |
Cardoso DM, Gilio AE, Hsin SH, Machado BM, de Paulis M, Lotufo JP, et al. Impact of the rapid antigen detection test in diagnosis and treatment of acute pharyngotonsillitis in a pediatric emergency room. Rev Paul Pediatr 2013;31:4–9 | No comparison with biological culture or clinical scores |
Chen FM. Culture confirmation of negative rapid strep test results. J Fam Pract 2000;49:371–2 | Wrong test |
Cheng C, Han B, Smoot B, Chen Y, Exner MM. Real-time PCR detection of group A Streptococcus using the 3M integrated cycler. J Mol Diagn 2010;12:883 | Wrong test |
Cohen JF, Chalumeau M, Levy C, Bidet P, Benani M, Koskas M, et al. Effect of clinical spectrum, inoculum size and physician characteristics on sensitivity of a rapid antigen detection test for group A streptococcal pharyngitis. Eur J Clin Microbiol Infect Dis 2013;32:787–93. https://doi.org/10.1007/s10096-012-1809-1 | Wrong test |
Cohen JF, Chalumeau M, Levy C, Bidet P, Thollot F, Wollner A, et al. Spectrum and inoculum size effect of a rapid antigen detection test for group A Streptococcus in children with pharyngitis. PLOS ONE 2012;7:e39085. https://doi.org/10.1371/journal.pone.0039085 | Wrong test |
Cohen JF, Cohen R, Bidet P, Elbez A, Levy C, Bossuyt PM, Chalumeau M. Efficiency of a clinical prediction model for selective rapid testing in children with pharyngitis: a prospective, multicenter study. PLOS ONE 2017;12:e0172871. https://doi.org/10.1371/journal.pone.0172871 | Wrong test |
Cohen JF, Cohen R, Bidet P, Levy C, Deberdt P, d’Humières C, et al. Rapid-antigen detection tests for group A streptococcal pharyngitis: revisiting false-positive results using polymerase chain reaction testing. J Pediatr 2013;162:1282–4, 1284.e1. https://doi.org/10.1016/j.jpeds.2013.01.050 | Wrong test |
Cohen R, Levy C, Ovetchkine P, Boucherat M, Weil-Olivier C, Gaudelus J, et al. Evaluation of streptococcal clinical scores, rapid antigen detection tests and cultures for childhood pharyngitis. Eur J Pediatr 2004;163:281–2. https://doi.org/10.1007/s00431-004-1416-y | Wrong test |
Dagnelie CF, Bartelink ML, van der Graaf Y, Goessens W, de Melker RA. Towards a better diagnosis of throat infections (with group A beta-haemolytic Streptococcus) in general practice. Br J Gen Pract 1998;48:959–62 | Wrong test |
Dale JC, Novak R, Higgens P, Wahl E. Testing for group A streptococci. Arch Pathol Lab Med 2002;126:1467–70. https://doi.org/10.1043/0003-9985(2002)126<1467:TFGAS>2.0.CO;2 | No specific RADT mentioned |
Dawson ED, Taylor AW, Smagala JA, Rowlen KL. Molecular detection of Streptococcus pyogenes and Streptococcus dysgalactiae subsp. equisimilis. Mol Biotechnol 2009;42:117–27. https://doi.org/10.1007/s12033-009-9143-2 | Wrong test |
Demoré B, Tebano G, Gravoulet J, Wilcke C, Ruspini E, Birge J, et al. Rapid antigen test use for the management of group A streptococcal pharyngitis in community pharmacies. Eur J Clin Microbiol Infect Dis 2018;06:06 | Wrong test
Deniz R, Aktaş E, Barış A, Bayraktar B. The use of rapid antigen testing and matrix-assisted laser desorption/ionization-time of flight mass spectrometry in the diagnosis of group A beta-hemolytic streptococci in throat swab samples. Turk J Med Sci 2018;48:939–44. https://doi.org/10.3906/sag-1712-101 | Wrong test
Dodd M, Adolphe A, Parada A, Brett M, Culbreath K, Mercier RC. Clinical impact of a rapid streptococcal antigen test on antibiotic use in adult patients. Diagn Microbiol Infect Dis 2018;91:339–44 | Wrong test |
Donato LJ, Myhre NK, Murray MA, McDonah MR, Myers JF, Maxson JA, et al. Assessment of test performance and potential for environmental contamination associated with a point-of-care molecular assay for group A Streptococcus in an end user setting. J Clin Microbiol 2019;57:e01629–18 | No comparison with culture or clinical scores |
Dulaney K, Hohmeier K, Fisher C, Cardosi L, Wasson M. Exploring pharmacists’ perceptions regarding influenza and streptococcal testing within a chain pharmacy. J Am Pharm Assoc 2018;58:438–41.e1 | No specific RADT mentioned. No comparison with culture or clinical scores |
Dut R, Kocagoz S. Use of streptococcal tonsillopharyngitis diagnostic tests in children. J Pediatric Infect Dis 2016;11:126–30 | Wrong test |
Dut R, Kocagöz S. Clinical signs and diagnostic tests in acute respiratory infections. Indian J Pediatr 2016;83:380–5. https://doi.org/10.1007/s12098-015-1943-8 | No specific RADT mentioned |
Edin A, Granholm S, Koskiniemi S, Allard A, Sjöstedt A, Johansson A. Development and laboratory evaluation of a real-time PCR assay for detecting viruses and bacteria of relevance for community-acquired pneumonia. J Mol Diagn 2015;17:315–24. https://doi.org/10.1016/j.jmoldx.2015.01.005 | No specific RADT mentioned |
Edmonson MB, Farwell KR. Relationship between the clinical likelihood of group A streptococcal pharyngitis and the sensitivity of a rapid antigen-detection test in a pediatric practice. Pediatrics 2005;115:280–5 | Wrong type of test |
Edmonson MB, Weix KR. Relationship of pre-test likelihood of group A streptococcal (GAS) pharyngitis and sensitivity of a rapid antigen detection test (RADT) in pediatric practice. Pediatr Res 2003;53:180A | Abstract could not be located |
Ehrlich JE, Demopoulos BP, Daniel KR, Ricarte MC, Glied S. Cost-effectiveness of treatment options for prevention of rheumatic heart disease from Group A streptococcal pharyngitis in a pediatric population. Prev Med 2002;35:250–7 | Wrong test name |
Ehsanipour F, Mirghorbani M, Masoumi Asl H, Harandi NV, Khanaliha K. Comparison of clinical findings and rapid streptococcal antigen detection test in the diagnosis of group A streptococcal (GAS) pharyngitis. Arch Clin Infect Dis 2016;11 | Wrong test |
Elf S, Olli J, Hirvonen S, Auvinen P, Eboigbodin KE. Molecular detection of Streptococcus pyogenes by strand invasion based amplification assay. Mol Diagn Ther 2018;22:595–602. https://doi.org/10.1007/s40291-018-0346-8 | Wrong type of test |
Elmas B, Köroğlu M, Terzi HA, Aslan FG, Menekşe E, Kösecik M, Altindiş M. Performance of clinical features, acute phase reactants and group A streptococcus rapid test in evaluation of the etiologic agents for tonsillopharyngitis in children. Clin Lab 2017;63:1223–31. https://doi.org/10.7754/Clin.Lab.2017.170124 | Wrong test |
Engström S, Mölstad S, Lindström K, Nilsson G, Borgquist L. Excessive use of rapid tests in respiratory tract infections in Swedish primary health care. Scand J Infect Dis 2004;36:213–18. https://doi.org/10.1080/00365540310018842 | No comparison with clinical score or throat culture |
Enright K, Taheri S, Beattie T. Emergency department testing for streptococcus in children with sore throats. Emerg Med J 2009;26:310. https://doi.org/10.1136/emj.2008.058628 | No specific RADT mentioned |
Fakih MG, Berschback J, Juzych NS, Massanari RM. Compliance of resident and staff physicians with IDSA guidelines for the diagnosis and treatment of streptococcal pharyngitis. Infect Dis Clin Pract 2006;14:84–8 | Wrong test |
FDA. FDA Decision Summary: Substantial Equivalence Determination for the Sofia® Strep A+ FIA. K141775. Silver Spring, MD: FDA; 2014. URL: www.accessdata.fda.gov/cdrh_docs/reviews/K141775.pdf (accessed 6 March 2019) | Wrong test |
FDA. FDA Decision Summary: Substantial Equivalence Determination for Cobas Liat Strep A. K141338. Silver Spring, MD: FDA; 2015. URL: www.accessdata.fda.gov/cdrh_docs/reviews/K141338.pdf (accessed 6 March 2019) | Data identical to those provided by the manufacturer in response to a request for information by NICE |
FDA. FDA Decision Summary: Substantial Equivalence Determination for TestPack Plus Strep A. K971522. Silver Spring, MD: FDA; 1997. URL: www.accessdata.fda.gov/cdrh_docs/pdf/K971522.pdf (accessed 6 March 2019) | Insufficient information |
Felsenstein S, Faddoul D, Sposto R, Batoon K, Polanco CM, Dien Bard J. Molecular and clinical diagnosis of group A streptococcal pharyngitis in children. J Clin Microbiol 2014;52:3884–9. https://doi.org/10.1128/JCM.01489-14 | Wrong test |
Fierro JL, Prasad PA, Localio AR, Grundmeier RW, Wasserman RC, Zaoutis TE, Gerber JS. Variability in the diagnosis and treatment of group A streptococcal pharyngitis by primary care pediatricians. Infect Control Hosp Epidemiol 2014;35(Suppl. 3):79–85. https://doi.org/10.1086/677820 | No specific RADT mentioned |
Fontes MJ, Bottrel FB, Fonseca MT, Lasmar LB, Diamante R, Camargos PA. Early diagnosis of streptococcal pharyngotonsillitis: assessment by latex particle agglutination test. J Pediatr 2007;83:465–70. https://doi.org/10.2223/JPED.1695 | Wrong type of test |
Forward K. Just the berries. Diagnosing and managing group A Streptococcus pharyngitis. Can Fam Physician 2002;48:47–8 | Review |
Forward KR, Haldane D, Webster D, Mills C, Brine C, Aylward D. A comparison between the Strep A Rapid Test Device and conventional culture for the diagnosis of streptococcal pharyngitis. Can J Infect Dis Med Microbiol 2006;17:221–3 | Wrong test |
Fox JW, Cohen DM, Marcon MJ, Cotton WH, Bonsu BK. Performance of rapid streptococcal antigen testing varies by personnel. J Clin Microbiol 2006;44:3918–22 | Wrong test |
Fox JW, Marcon MJ, Bonsu BK. Diagnosis of streptococcal pharyngitis by detection of Streptococcus pyogenes in posterior pharyngeal versus oral cavity specimens. J Clin Microbiol 2006;44:2593–4 | Wrong test |
Gazewood J. Negative antigen test misses < 5% of strep pharyngitis. J Fam Pract 2003;52:761–2 | Review |
Gazzano V, Berger A, Benito Y, Freydiere AM, Tristan A, Boisset S, et al. Reassessment of the role of rapid antigen detection tests in diagnosis of invasive group A streptococcal infections. J Clin Microbiol 2016;54:994–9. https://doi.org/10.1128/JCM.02516-15 | Wrong population |
Gieseker KE, Roe MH, MacKenzie T, Todd JK. Evaluating the American Academy of Pediatrics diagnostic standard for Streptococcus pyogenes pharyngitis: backup culture versus repeat rapid antigen testing. Pediatrics 2003;111:e666–70 | Wrong test |
Gieseker KE, Mackenzie T, Roe MH, Todd JK. Comparison of two rapid Streptococcus pyogenes diagnostic tests with a rigorous culture standard. Pediatr Infect Dis J 2002;21:922–7. https://doi.org/10.1097/00006454-200210000-00007 | Wrong test |
Giraldez-Garcia C, Rubio B, Gallegos-Braun JF, Imaz I, Gonzalez-Enriquez J, Sarria-Santamera A. Diagnosis and management of acute pharyngitis in a paediatric population: a cost-effectiveness analysis. Eur J Pediatr 2011;170:1059–67. https://doi.org/10.1007/s00431-011-1410-0 | No specific RADT mentioned |
Gonsu HK, Bomki CM, Djomou F, Toukam M, Ndze VN, Lyonga EE, et al. A comparative study of the diagnostic methods for group A streptococcal sore throat in two reference hospitals in Yaounde, Cameroon. Pan Afr Med J 2015;20:139. https://doi.org/10.11604/pamj.2015.20.139.4810 | Wrong test |
Greiver M. Practice tips. Incorporating a rapid group A streptococcus assay with the sore throat score. Can Fam Physician 1999;45:1181–2 | Review |
Gröndal H, Hedin K, Strandberg EL, André M, Brorsson A. Near-patient tests and the clinical gaze in decision-making of Swedish GPs not following current guidelines for sore throat - a qualitative interview study. BMC Fam Pract 2015;16:81. https://doi.org/10.1186/s12875-015-0285-y | No specific RADT mentioned |
Gurol Y, Akan H, Izbirak G, Tekkanat ZT, Gunduz TS, Hayran O, Yilmaz G. The sensitivity and the specifity of rapid antigen test in streptococcal upper respiratory tract infections. Int J Pediatr Otorhinolaryngol 2010;74:591–3. https://doi.org/10.1016/j.ijporl.2010.02.020 | Wrong test |
Haldrup S, Thomsen RW, Bro F, Skov R, Bjerrum L, Søgaard M. Microbiological point of care testing before antibiotic prescribing in primary care: considerable variations between practices. BMC Fam Pract 2017;18:9. https://doi.org/10.1186/s12875-016-0576-y | No specific RADT mentioned |
Hall MC, Kieke B, Gonzales R, Belongia EA. Spectrum bias of a rapid antigen detection test for group A beta-hemolytic streptococcal pharyngitis in a pediatric population. Pediatrics 2004;114:182–6. https://doi.org/10.1542/peds.114.1.182 | Wrong test |
Hammond-Collins K, Strauss B, Barnes K, Demczuk W, Domingo MC, Lamontagne MC, et al. Group A streptococcus outbreak in a Canadian armed forces training facility. Mil Med 2018;21:21 | Wrong test |
Herranz B, Rodriguez-Salinas E, Orden B. [From laboratory to clinic: usefulness of rapid diagnostic techniques for the diagnosis of Streptococcus pyogenes.] An Pediatr Contin 2007;5:92–5 | Foreign-language paper
Hinfey P, Nicholls BH, Garcia F, Ripper J, Cameron Y, Joshi S. Sensitivity of a rapid antigen detection test for the diagnosis of group A streptococcal pharyngitis in the emergency department. Ann Emerg Med 2010;56:S132 | Wrong test |
Hoffmann K, Reichardt B, Zehetmayer S, Maier M. Evaluation of the implementation of a rapid streptococcal antigen test in a routine primary health care setting: from recommendations to practice. Wien Klin Wochenschr 2012;124:633–8. https://doi.org/10.1007/s00508-012-0225-y | No specific RADT mentioned |
Homme JH, Greenwood CS, Cronk LB, Nyre LM, Uhl JR, Weaver AL, Patel R. Duration of group A streptococcus PCR positivity following antibiotic treatment of pharyngitis. Diagn Microbiol Infect Dis 2018;90:105–8 | Wrong test |
Honkanen PO, Rautakorpi UM, Huovinen P, Klaukka T, Palva E, Roine R, et al. Diagnostic tools in respiratory tract infections: use and comparison with Finnish guidelines. Scand J Infect Dis 2002;34:827–30. https://doi.org/10.1080/0036554021000026939 | No specific RADT mentioned |
Humair JP, Revaz SA, Stalder H. Antibiotic prescription in strategies using a clinical score and a rapid streptococcal test for acute pharyngitis. J Gen Intern Med 2002;17:125 | Wrong test |
Igarashi H, Nago N, Kiyokawa H, Fukushi M. Abdominal pain and nausea in the diagnosis of streptococcal pharyngitis in boys. Int J Gen Med 2017;10:311–18. https://doi.org/10.2147/IJGM.S144310 | Wrong test |
Jayaratne P, Rutherford C. Detection of group A streptococci (GAS) by loop-mediated isothermal amplification (LAMP) directly from specimens: a rapid, simple and cost-effective alternative to culture. Can J Infect Dis Med Microbiol 2015;26:e19 | Wrong test |
Joachim L, Campos D, Smeesters PR. Pragmatic scoring system for pharyngitis in low-resource settings. Pediatrics 2010;126:e608–14. https://doi.org/10.1542/peds.2010-0569 | Wrong test |
Kato Y, Suzuki K, Shibai Y, Iwamoto H. Development of simultaneous detection lateral flow immunoassay kit for GAS and ADV. Clin Chem 2015;1:S150 | Wrong test |
Keahey L, Bulloch B, Jacobson R, Tenenbein M, Kabani A. Diagnostic accuracy of a rapid antigen test for GABHS performed by nurses in a pediatric ED. Am J Emerg Med 2002;20:128–30 | Wrong test |
Khattak MH, Khan MA, Shafiullah Orakzi UK. Incidence of acute streptococcal pharyngitis. J Med Sci 2015;23:118–20 | No specific RADT mentioned |
Kivi N, Vanhanen AR, Nissinen A. Assessment of strep a point-of-care testing performance through external quality assurance (EQA) scheme: results of a 6-year study period (2009-2015). Clin Chem Lab Med 2016;54:eA277 | No specific RADT mentioned |
Klepser DG, Klepser ME, Dering-Anderson AM, Morse JA, Smith JK, Klepser SA. Community pharmacist-physician collaborative streptococcal pharyngitis management program. J Am Pharm Assoc 2016;56:323–9.e1. https://doi.org/10.1016/j.japh.2015.11.013 | Wrong test |
Klepser DG, Klepser ME, Smith JK, Dering-Anderson AM, Nelson M, Pohren LE. Utilization of influenza and streptococcal pharyngitis point-of-care testing in the community pharmacy practice setting. Res Social Adm Pharm 2018;14:356–9 | No specific RADT mentioned |
Klepser D, Grismer SE, Klepser ME. Cost-effectiveness of pharmacist provided care for the treatment of adult pharyngitis. J Manag Care Pharm 2011;17:241 | No specific RADT mentioned |
Klepser DG, Bisanz SE, Klepser ME. Cost-effectiveness of pharmacist-provided treatment of adult pharyngitis. Am J Manag Care 2012;18:e145–54 | No specific RADT mentioned |
Kose E, Sirin Kose S, Akca D, Yildiz K, Elmas C, Baris M, Anil M. The effect of rapid antigen detection test on antibiotic prescription decision of clinicians and reducing antibiotic costs in children with acute pharyngitis. J Trop Pediatr 2016;62:308–15. https://doi.org/10.1093/tropej/fmw014 | Wrong test |
Kreher NE, Hickner JM, Barry HC, Messimer SR. Do gastrointestinal symptoms accompanying sore throat predict streptococcal pharyngitis? An UPRNet study. Upper Peninsula Research Network. J Fam Pract 1998;46:159–64 | Wrong test |
Küçük O, Biçer S, Giray T, Cöl D, Erdağ GC, Gürol Y, et al. Validity of rapid antigen detection testing in group A beta-hemolytic streptococcal tonsillopharyngitis. Indian J Pediatr 2014;81:138–42. https://doi.org/10.1007/s12098-013-1067-y | Wrong test |
Kulkarni T, Aikawa C, Nozawa T, Murase K, Maruyama F, Nakagawa I. DNA-based culture-independent analysis detects the presence of group A Streptococcus in throat samples from healthy adults in Japan. BMC Microbiol 2016;16:237. https://doi.org/10.1186/s12866-016-0858-5 | Wrong type of test |
Kurtz B, Kurtz M, Roe M, Todd J. Importance of inoculum size and sampling effect in rapid antigen detection of Streptococcus pyogenes pharyngitis. Pediatr Res 1999;45:166A | Not enough information |
Lasseter G, McNulty C, Hobbs FDR, Mant D, PRISM Investigators. In vitro evaluation of five rapid antigen detection tests for group A beta-haemolytic streptococcal sore throat infections. Health Technol Assess 2014;18(6) | Wrong population |
Lasseter GM, McNulty CA, Hobbs FD, Mant D, Little P, PRImary care Streptococcal Management (PRISM) Investigators Group. Effect of swab type on the analytical sensitivity of five point-of-care tests for group A streptococci. Br J Biomed Sci 2011;68:91–4. https://doi.org/10.1080/09674845.2011.11978232 | Wrong population |
Lasseter GM, McNulty CA, Richard Hobbs FD, Mant D, Little P, PRISM Investigators. In vitro evaluation of five rapid antigen detection tests for group A beta-haemolytic streptococcal sore throat infections. Fam Pract 2009;26:437–44. https://doi.org/10.1093/fampra/cmp054 | Wrong population |
Lathia N, Sullivan K, Tam K, Brna M, MacNeil P, Saltmarche D, Agro K. Cost-minimization analysis of community pharmacy-based point-of-care testing for strep throat in 5 Canadian provinces. Can Pharm J 2018;151:322–31. https://doi.org/10.1177/1715163518790993 | No specific RADT mentioned |
Leydon G, McDermott L, Moore M, Williamson I, Hobbs FDR, Little P, PRISM Investigators. A qualitative study of general practitioner, nurse practitioner and patient views about the use of rapid Streptococcus antigen detection tests in primary care: ‘swamped with sore throats?’. Health Technol Assess 2014;18(6) | Duplicate |
Leydon GM, McDermott L, Moore M, Williamson I, Hobbs FD, Lambton T, et al. A qualitative study of GP, NP and patient views about the use of rapid streptococcal antigen detection tests (RADTs) in primary care: ‘swamped with sore throats?’. BMJ Open 2013;3:e002460. https://doi.org/10.1136/bmjopen-2012-002460 | No specific RADT mentioned |
Li Y, Kim HJ, Kong H, Ranalli TA, Olivo PD, Stenzel TT. Detection of group A Streptococcus with a rapid, non-instrumented, isothermal molecular assay on throat swab specimens. J Mol Diagn 2013;15:890 | No specific RADT mentioned |
Little P, Hobbs FD, Moore M, Mant D, Williamson I, McNulty C, et al. PRImary care Streptococcal Management (PRISM) study: in vitro study, diagnostic cohorts and a pragmatic adaptive randomised controlled trial with nested qualitative study and cost-effectiveness study. Health Technol Assess 2014;18(6). https://doi.org/10.3310/hta18060 | Wrong population in study 1. Wrong test in study 3 |
Little P, Moore M, Hobbs FDR, Mant D, McNulty C, Williamson I, et al. Randomised controlled trial of a clinical score and rapid antigen detection test for sore throats. Health Technol Assess 2014;18(6) | Duplicate |
Llor C, Bjerrum L, Munck A, Cots JM, Hernández S, Moragas A, HAPPY AUDIT Investigators. Access to point-of-care tests reduces the prescription of antibiotics among antibiotic-requesting subjects with respiratory tract infections. Respir Care 2014;59:1918–23. https://doi.org/10.4187/respcare.03275 | No specific RADT mentioned |
Llor C, Hernández S, Sierra N, Moragas A, Hernández M, Bayona C. Association between use of rapid antigen detection tests and adherence to antibiotics in suspected streptococcal pharyngitis. Scand J Prim Health Care 2010;28:12–17. https://doi.org/10.3109/02813431003669301 | Wrong population and outcome |
Llor C, Moragas A, Cots JM, López-Valcárcel BG, Happy Audit Study Group. Estimated saving of antibiotics in pharyngitis and lower respiratory tract infections if general practitioners used rapid tests and followed guidelines. Aten Primaria 2017;49:319–25 | No specific RADT mentioned |
Lotufo JPB, Cardoso DM, Gilio AE, Hsin SH, Machado BM, De Paulis M, et al. Impact of the use of rapid antigen detection test in the diagnosis and treatment of acute pharyngitis in pediatric emergency room. Paediatr Respir Rev 2012;13(Suppl. 1):S70 | No specific RADT mentioned |
Luo R, Sickler J, Vahidnia F, Lee YC, Frogner B, Thompson M. Diagnosis and management of group A streptococcal pharyngitis in the United States, 2011–2015. BMC Infect Dis 2019;19:193. https://doi.org/10.1186/s12879-019-3835-4 | No specific RADT mentioned |
Madurell J, Balague M, Gomez M, Cots JM, Llor C. Impact of rapid antigen detection testing on antibiotic prescription in acute pharyngitis in adults. FARINGOCAT STUDY: a multicentric randomized controlled trial. BMC Fam Pract 2010;11:25 | Protocol only. No results |
Makri A, Tzanakaki G, Kalimeratzi S, Iliadou H, Xiroyianni A, Kremastinou J, Voyatzi A. Evaluation of polymerase chain reaction as rapid diagnostic tool compared to culture in respiratory tract infections. Clin Microbiol Infect 2010;2:S513 | No specific RADT mentioned |
Malecki M, Mazur A, Sobolewski M, Binkowska-Bury M, Marc M, Januszewicz P. Rapid strip tests as a decision-making tool about antibiotic treatment in children – a prospective study. Pediatr Pol 2017;92:149–55 | No comparator |
Maltezou HC, Tsagris V, Antoniadou A, Galani L, Douros C, Katsarolis I, et al. Evaluation of a rapid antigen detection test in the diagnosis of streptococcal pharyngitis in children and its impact on antibiotic prescription. J Antimicrob Chemother 2008;62:1407–12. https://doi.org/10.1093/jac/dkn376 | Wrong test |
Mayes T, Pichichero ME. Are follow-up throat cultures necessary when rapid antigen detection tests are negative for group A streptococci? Clin Pediatr 2001;40:191–5 | Wrong test |
Mazur E, Bochyńska E, Juda M, Kozioł-Montewka M. Empirical validation of Polish guidelines for the management of acute streptococcal pharyngitis in children. Int J Pediatr Otorhinolaryngol 2014;78:102–6. https://doi.org/10.1016/j.ijporl.2013.10.064 | Wrong test |
Messina A, Bottaro G, Morselli I. Utility of rapid antigen detection test for group A streptococci in a family paediatrician office setting. Acta Med Mediterr 2010;26:101–5 | No outcomes by test |
Michel-Lepage A, Ventelou B, Nebout A, Verger P, Pulcini C. Cross-sectional survey: risk-averse French GPs use more rapid-antigen diagnostic tests in tonsillitis in children. BMJ Open 2013;3:e003540. https://doi.org/10.1136/bmjopen-2013-003540 | No specific RADT mentioned |
Michel-Lepage A, Ventelou B, Verger P, Pulcini C. Factors associated with the use of rapid antigen diagnostic tests in children presenting with acute pharyngitis among French general practitioners. Eur J Clin Microbiol Infect Dis 2014;33:723–8. https://doi.org/10.1007/s10096-013-2003-9 | No specific RADT mentioned |
Mirjat KA, Fatima I, Mustafa F. Prevalence of pharyngitis and tonsilitis among children. Med Forum Mon 2012;23:64–7 | Not enough information
Mirjat KA, ValiRam P, Fatima I. Role of rapid antigen detection test (RADT) and throat culture in the diagnosis of streptococcal pharyngotonsillitis. Med Forum Mon 2012;23:60–3 | Not enough information
Mirza A, Wludyka P, Chiu TT, Rathore MH. Throat culture is necessary after negative rapid antigen detection tests. Clin Pediatr 2007;46:241–6 | Wrong test |
Miyashita N, Kawai Y, Kato T, Tanaka T, Akaike H, Teranishi H, et al. Rapid diagnostic method for the identification of Mycoplasma pneumoniae respiratory tract infection. J Infect Chemother 2016;22:327–30. https://doi.org/10.1016/j.jiac.2016.02.005 | Wrong test |
Mlejnek JR, Almulhem K, Spadafore S. Utility and cost effectiveness of throat culture in the treatment of patients with negative rapid strep screens. Acad Emerg Med 2014;1:S51 | Unclear population |
Moore N. Rapid and sensitive isothermal molecular amplification of group A Streptococcus (GAS) with Alere i Molecular Platform. J Molec Diagnos 2017;19:987 | Wrong outcome |
Morandi PA, Deom A, Mauris A, Rohner P. External quality control of direct antigen tests to detect group A streptococcal antigen. Eur J Clin Microbiol Infect Dis 2003;22:670–4. https://doi.org/10.1007/s10096-003-1027-y | Wrong population |
Murphy ML, Pichichero ME. Prospective identification and treatment of children with pediatric autoimmune neuropsychiatric disorder associated with group A streptococcal infection (PANDAS). Arch Pediatr Adolesc Med 2002;156:356–61 | Wrong test |
Nakhoul GN, Hickner J. Management of adults with acute streptococcal pharyngitis: minimal value for backup strep testing and overuse of antibiotics. J Gen Intern Med 2013;28:830–4. https://doi.org/10.1007/s11606-012-2245-8 | Wrong test |
Nazgul O, Yoshihisa Y, Guli S, Toshihiro N, Mayramkan A. Prevalence of group A beta-hemolytic Streptococcus among children with tonsillopharyngitis in Kyrgyzstan: the difficulty of diagnostics and therapy. Int J Rheum Dis 2010;1:212–13 | No specific RADT mentioned
NCT. Performance of Ellume·Lab Group A Strep Test Versus Culture for the Rapid Detection of Group A Streptococcus in Patients with Acute Pharyngitis. 2017. URL: https://clinicaltrials.gov/show/nct03171350 (accessed 20 February 2019) | No study publication. No results |
NCT. Comparison of Two Rapid Antigen Detection Tests for the Detection of Group-A Streptococcal Pharyngitis in Children. 2017. URL: https://clinicaltrials.gov/show/nct03099018 (accessed 20 February 2019) | No outcome data |
Neumark T, Brudin L, Molstad S. Use of rapid diagnostic tests and choice of antibiotics in respiratory tract infections in primary healthcare – a 6-y follow-up study. Scand J Infect Dis 2010;42:90–6 | No specific RADT mentioned |
Nissinen A, Strandén P, Myllys R, Takkinen J, Björkman Y, Leinikki P, Siitonen A. Point-of-care testing of group A streptococcal antigen: performance evaluated by external quality assessment. Eur J Clin Microbiol Infect Dis 2009;28:17–20. https://doi.org/10.1007/s10096-008-0580-9 | No comparison with culture or clinical score |
Noorbakhsh S, Tabatabaei A, Farhadi M, Ebrahimi TF. Immunoasssay chromatographic antigen test for rapid diagnosis of Group A beta hemolytic streptococcus pharyngitis in children: a cross/sectional study. Iran J Microbiol 2011;3:99–103 | Wrong test |
Orda U, Mitra B, Orda S, Fitzgerald M, Gunnarsson R, Rofe G, Dargan A. Point of care testing for group A streptococci in patients presenting with pharyngitis will improve appropriate antibiotic prescription. Emerg Med Australas 2016;28:199–204. https://doi.org/10.1111/1742-6723.12567 | Wrong reference standard |
Orda U, Gunnarsson R, Orda S, Fitzgerald M, Rofe G, Dargan A. Etiologic predictive value of a rapid immunoassay for the detection of group A streptococcus antigen from throat swabs in patients presenting with a sore throat. Int J Infect Dis 2016;45:32–5. https://doi.org/10.1016/j.ijid.2016.02.002 | No comparison with culture or clinical score |
Ouchi K, Hasegawa K, Nonaka Y, Matsushima H, Komura H, Maki T, Nakazawa T. Rapid diagnosis of adenovirus respiratory tract infections by immunochromatography. J Infect Chemother 1999;5:220–2. https://doi.org/10.1007/s101560050040 | Wrong type of test |
Papastergiou J, Diamantouros A, Davidson S, Saltmarche D. Community pharmacist-directed point-of-care group A strep testing: results of a Canadian pilot program. Int J Clin Pharm 2017;39:208 | No comparison with culture or clinical score |
Papastergiou J, Trieu CR, Saltmarche D, Diamantouros A. Community pharmacist-directed point-of-care group A streptococcus testing: evaluation of a Canadian program. J Am Pharm Assoc 2018;58:450–6 | No comparison with culture or clinical score |
Park SY, Gerber MA, Tanz RR, Hickner JM, Galliher JM, Chuang I, Besser RE. Clinicians’ management of children and adolescents with acute pharyngitis. Pediatrics 2006;117:1871–8 | No specific RADT mentioned |
Pauchard JY, Verga ME, Bersier J, Prod’Hom G, Gehri M, Vaudaux B. Spectrum bias of rapid antigen diagnostic test for group A beta-haemolytic streptococcal pharyngitis in a tertiary paediatric emergency department. Swiss Med Wkly 2012;142:9S–10S | No specific RADT mentioned |
Pauchard JY, Verga ME, Bersier J, Prod’Hom G, Gehri M, Vaudaux B. Performance of rapid antigen diagnostic test for group A beta-haemolytic streptococcal pharyngitis in a tertiary paediatric emergency department. Swiss Med Wkly 2012;142:35S | No specific RADT mentioned |
Peralta NV, Alcaraz LE. Frequency of isolates of Streptococcus pyogenes in patients with clinical diagnosis of acute pharyngotonsillitis in a private laboratory in the city of San Luis. Biocell 2018;3:26–7 | No specific RADT mentioned |
Phung E, Mirzaian E, Arouchanova D. Utilization of pharmacist-performed rapid influenza and group A Streptococcus testing and treatment in the community pharmacy setting: economic value and patient satisfaction. J Am Pharm Assoc 2018;58:e129 | Abstract only. No extractable data |
Pitetti RD, Drenning SD, Wald ER. Evaluation of a new rapid antigen detection kit for group A beta-hemolytic streptococci. Pediatr Emerg Care 1998;14:396–8. https://doi.org/10.1097/00006565-199812000-00004 | Wrong test |
Plainvert C, Duquesne I, Touak G, Dmytruk N, Poyart C. In vitro evaluation and comparison of 5 rapid antigen detection tests for the diagnosis of beta-hemolytic group A streptococcal pharyngitis. Diagn Microbiol Infect Dis 2015;83:105–11. https://doi.org/10.1016/j.diagmicrobio.2015.06.012 | Wrong population |
Pulcini C, Pauvif L, Paraponaris A, Verger P, Ventelou B. Perceptions and attitudes of French general practitioners towards rapid antigen diagnostic tests in acute pharyngitis using a randomised case-vignette study: a cross-sectional study. Clin Microbiol Infect 2012;3:494 | No specific RADT mentioned |
Pulcini C, Pauvif L, Paraponaris A, Verger P, Ventelou B. Perceptions and attitudes of French general practitioners towards rapid antigen diagnostic tests in acute pharyngitis using a randomized case vignette study. J Antimicrob Chemother 2012;67:1540–6. https://doi.org/10.1093/jac/dks073 | No specific RADT mentioned |
Ramos JL, Fraile MT, Chanza M, Tormo N, Lurbe A, Gimeno C. Rapid detection of Streptococcus pyogenes in peripheral medical centres. A pilot custody assay. Clin Microbiol Infect 2011;4:S250 | Wrong test |
Rao A, Berg B, Quezada T, Fader R, Walker K, Tang S, et al. Diagnosis and antibiotic treatment of group A streptococcal pharyngitis in children in a primary care setting: impact of point-of-care polymerase chain reaction. BMC Pediatr 2019;19:24. https://doi.org/10.1186/s12887-019-1393-y | Wrong reference standard and no comparison with clinical score |
Rathi SK, Ahmed R. Pakistan prevalence survey in acute pharyngitis. J Pak Med Assoc 2014;64:928–31 | Wrong test |
Rimoin AW, Vince A, Hamza H, da Cunha ALA, Chitale R, Oazi S, Steinhoff MC. Evaluation of a rapid test for streptococcal pharyngitis in children in 3 countries. Pediatr Res 2004;55:279A | Meeting abstract could not be located |
Rimoin AW, Walker CL, Hamza HS, Elminawi N, Ghafar HA, Vince A, et al. The utility of rapid antigen detection testing for the diagnosis of streptococcal pharyngitis in low-resource settings. Int J Infect Dis 2010;14:e1048–53. https://doi.org/10.1016/j.ijid.2010.02.2269 | Wrong test |
Russo ME, Kline J, Jaggi P, Leber AL, Cohen DM. The challenge of patient notification and the work of follow-up generated by a 2-step testing protocol for group A streptococcal pharyngitis in the pediatric emergency department. Pediatr Emerg Care 2017;30:30 | No comparison with throat score or culture
Sancho A, Diaz-Almiron M, Yebra J, Hawkins M. S. pyogenes reviewed in a paediatric population: age and predictive models. Arch Dis Child 2014;2:A325 | No specific RADT mentioned |
Sarikaya S, Aktaş C, Ay D, Cetin A, Celikmen F. Sensitivity and specificity of rapid antigen detection testing for diagnosing pharyngitis in the emergency department. Ear Nose Throat J 2010;89:180–2 | Wrong test |
Sayyahfar S, Fahimzad A, Naddaf A, Tavassoli S. Antibiotic susceptibility evaluation of group A streptococcus isolated from children with pharyngitis: a study from Iran. Infect Chemother 2015;47:225–30. https://doi.org/10.3947/ic.2015.47.4.225 | Wrong type of test |
Scheel A, DeWyer A, Sarnacki R, Kamarembo J, Okello E, Beaton A. The utility of existing clinical decision rules for streptococcal pharyngitis in Ugandan school children. Global Heart 2018;13:508–9 | No specific RADT mentioned |
Schwartz RH, Kim D, Martin M, Pichichero ME. A Reappraisal of the minimum duration of antibiotic treatment before approval of return to school for children with streptococcal pharyngitis. Pediatr Infect Dis J 2015;34:1302–4. https://doi.org/10.1097/INF.0000000000000883 | Wrong test |
Schwartz K, Monsur J, Northrup J, West P, Neale AV. Pharyngitis clinical prediction rules: effect of interobserver agreement: a MetroNet study. J Clin Epidemiol 2004;57:142–6. https://doi.org/10.1016/S0895-4356(03)00249-X | Wrong test |
Shapiro DJ, Lindgren CE, Neuman MI, Fine AM. Viral features and testing for streptococcal pharyngitis. Pediatrics 2017;139:e20163403 | No specific RADT mentioned |
Sheeler RD, Houston MS, Radke S, Dale JC, Adamson SC. Accuracy of rapid strep testing in patients who have had recent streptococcal pharyngitis. J Am Board Fam Pract 2002;15:261–5 | Wrong population |
Singh S, Dolan JG, Centor RM. Optimal management of adults with pharyngitis – a multi-criteria decision analysis. BMC Med Inform Decis Mak 2006;6:14 | No specific RADT mentioned |
Skoog G, Edlund C, Giske CG, Mölstad S, Norman C, Sundvall PD, Hedin K. A randomized controlled study of 5 and 10 days treatment with phenoxymethylpenicillin for pharyngotonsillitis caused by Streptococcus group A – a protocol study. BMC Infect Dis 2016;16:484. https://doi.org/10.1186/s12879-016-1813-7 | No specific RADT mentioned |
Slinger R, Goldfarb D, Rajakumar D, Moldovan I, Barrowman N, Tam R, Chan F. Rapid PCR detection of group A streptococcus from flocked throat swabs: a retrospective clinical study. Ann Clin Microbiol Antimicrob 2011;10:33. https://doi.org/10.1186/1476-0711-10-33 | Wrong type of test |
St Sauver JL, Weaver AL, Orvidas LJ, Jacobson RM, Jacobsen SJ. Population-based prevalence of repeated group A beta-hemolytic streptococcal pharyngitis episodes. Mayo Clin Proc 2006;81:1172–6 | Wrong test |
Subashini B, Anandan S, Balaji V. Evaluation of a rapid antigen detection test for the diagnosis of group-A beta-hemolytic streptococcus in pharyngotonsillitis. J Glob Infect Dis 2015;7:91–2. https://doi.org/10.4103/0974-777X.154447 | Letter |
Sultan AM, Seliem WA. Evaluating the use of dedicated swab for rapid antigen detection testing in group A streptococcal pharyngitis in children. Afr J Clin Exp Microbiol 2018;19:24–9 | Wrong test |
Supon PA, Tunnell S, Greene M, Ostroff RM. Rapid detection of group A streptococcal antigen with a new optical immunoassay. Pediatr Infect Dis J 1998;17:349–51. https://doi.org/10.1097/00006454-199804000-00019 | Wrong test |
Syriopoulou T, Konstantelos D, Papoula M, Karachanidi E, Maggana I, Straka K, et al. Laboratory methods for diagnosing streptococcal pharyngitis: predictive value, usefulness. Clin Biochem 2011;44:534–5 | No specific RADT mentioned |
Tanz RR, Gerber MA, Kabat W, Rippe J, Seshadri R, Shulman ST. Performance of a rapid antigen-detection test and throat culture in community pediatric offices: implications for management of pharyngitis. Pediatrics 2009;123:437–44. https://doi.org/10.1542/peds.2008-0488 | Wrong test |
Tanz RR, Zheng XT, Carter DM, Steele MC, Shulman ST. Caution needed: molecular diagnosis of pediatric group A streptococcal pharyngitis. J Pediatric Infect Dis Soc 2018;7:e145–e147. https://doi.org/10.1093/jpids/pix086 | Wrong test |
Teratani Y, Hagiya H, Koyama T, Ohshima A, Zamami Y, Tatebe Y, et al. Association between rapid antigen detection tests and antibiotics for acute pharyngitis in Japan: a retrospective observational study. J Infect Chemother 2019;25:267–72 | No specific RADT mentioned |
Thamlikitkul V, Rachata T, Popum S, Chinswangwatanakul P, Srisomnuek A, Seenama C, et al. Accuracy and utility of rapid antigen detection tests for group A beta-hemolytic Streptococcus on ambulatory adult patients with sore throat associated with acute respiratory infections at Siriraj Hospital. J Med Assoc Thai 2018;101 | Wrong test |
Toepfner N, Henneke P, Berner R, Hufnagel M. Impact of technical training on rapid antigen detection tests (RADT) in group A streptococcal tonsillopharyngitis. Eur J Clin Microbiol Infect Dis 2013;32:609–11. https://doi.org/10.1007/s10096-012-1783-7 | Wrong test |
Tsevat J, Kotagal UR. Management of sore throats in children: a cost-effectiveness analysis. Arch Pediatr Adolesc Med 1999;153:681–8. https://doi.org/10.1001/archpedi.153.7.681 | Wrong test |
Tsung LY, Choi KC, Nelson EA, Chan PK, Sung RY. Factors associated with length of hospital stay in children with respiratory disease. Hong Kong Med J 2010;16:440–6 | Wrong type of test |
Tsutsumi H, Ouchi K, Ohsaki M, Yamanaka T, Kuniya Y, Takeuchi Y, et al. Immunochromatography test for rapid diagnosis of adenovirus respiratory tract infections: comparison with virus isolation in tissue culture. J Clin Microbiol 1999;37:2007–9 | Wrong type of test |
Upton A, Lowe C, Stewart J, Taylor S, Lennon D. In vitro comparison of four rapid antigen tests for group A Streptococcus detection. N Z Med J 2014;127:77–83 | Wrong population. No comparison with culture or clinical score |
Vachhani R, Patel T, Centor RM, Estrada CA. Sensitivity for diagnosing group A streptococcal pharyngitis from manufacturers is 10% higher than reported in peer-reviewed publications. South Med J 2017;110:59–64. https://doi.org/10.14423/SMJ.0000000000000597 | Review |
Van Howe RS, Kusnier LP. Diagnosis and management of pharyngitis in a pediatric population based on cost-effectiveness and projected health outcomes. Pediatrics 2006;117:609–19 | No specific RADT mentioned |
Van Limbergen J, Kalima P, Taheri S, Beattie TF. Streptococcus A in paediatric accident and emergency: are rapid streptococcal tests and clinical examination of any help? Emerg Med J 2006;23:32–4 | Wrong test |
Vedia C, Garcia JA, Valles R, Franzi A, Morales C, Prat N. Is it possible to decrease antibiotic prescription in pediatrics? Basic Clin Pharmacol Toxicol 2016;119(Suppl. 1):47 | No specific RADT mentioned |
Waseem M, Ayanruoh S, Humphrey A, Reynolds T. Impact of rapid streptococcal test on antibiotic use in a pediatric emergency department. Ann Emerg Med 2009;1:S41 | No specific RADT mentioned |
Webb KH, Needham CA, Kurtz SR. Use of a high-sensitivity rapid strep test without culture confirmation of negative results: 2 years’ experience. J Fam Pract 2000;49:34–8 | Wrong test |
Williams KM, Jackson MA, Hamilton M. Rapid diagnostic testing for URIs in children: impact on physician decision making and costs. Infect Med 2002;19:109–17 | No empirical data |
Wong MC, Chung CH. Group A streptococcal infection in patients presenting with a sore throat at an accident and emergency department: prospective observational study. Hong Kong Med J 2002;8:92–8 | Wrong test |
Woodburn JD, Smith KL, Nelson GD. Quality of care in the retail health care setting using national clinical guidelines for acute pharyngitis. Am J Med Qual 2007;22:457–62 | Wrong test |
Wright M, Williams G, Ludeman L. Comparison of two rapid tests for detecting group A streptococcal pharyngitis in the pediatric population at Wright-Patterson Air Force Base. Mil Med 2007;172:644–6. https://doi.org/10.7205/milmed.172.6.644 | Wrong test |
Xu J, Schwartz K, Monsur J, Northrup J, Neale AV. Patient-clinician agreement on signs and symptoms of ‘strep throat’: a MetroNet study. Fam Pract 2004;21:599–604 | Wrong test |
Yang JH, Huang PY, Shie SS, Yang S, Tsao KC, Wu TL, et al. Diagnostic performance of the Sofia® influenza A + B fluorescent immunoassay in adult outpatients in Northern Taiwan. J Med Virol 2018;90:1010–18. https://doi.org/10.1002/jmv.25043 | Wrong test |
Yoon J, Yun SG, Nam J, Choi SH, Lim CS. The use of saliva specimens for detection of influenza A and B viruses by rapid influenza diagnostic tests. J Virol Methods 2017;243:15–19 | Wrong test |
Appendix 4 The QUADAS-2 tailored guidance notes and form
Modified QUADAS-2 and guidance notes for strep A
Risk of bias should be classed as 'low' for a domain only if all signalling questions can be answered with 'yes'. If one or more signalling questions are answered with 'no', the risk of bias should be classed as 'high'; similarly, if at least one question is answered with 'unclear', the risk of bias should be judged 'unclear'.
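This rating logic can be expressed compactly. The following is a minimal sketch only (the function name is ours, and it assumes that a 'no' answer takes precedence over an 'unclear' answer when both occur in the same domain):

```python
def rate_domain(answers):
    """Rate QUADAS-2 risk of bias for one domain from its signalling-question answers.

    'answers' is a list containing 'yes', 'no' or 'unclear' for each question.
    Assumes a 'no' answer takes precedence over 'unclear' when both are present.
    """
    if any(a == "no" for a in answers):
        return "high"
    if any(a == "unclear" for a in answers):
        return "unclear"
    return "low"  # every signalling question answered 'yes'


# Example: no 'no' answers but one 'unclear' answer gives an 'unclear' rating
print(rate_domain(["yes", "unclear", "yes"]))  # prints: unclear
```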
Domain 1: patient selection
Test measurement ratings will differ depending on whether or not antibiotics have been previously prescribed.
A. Risk of bias
Guidance
Was a consecutive or random sample of patients enrolled?
This question should only be answered with ‘yes’ if the study clearly states that children/adults were recruited consecutively or randomly. Case–control or two-gate studies should be answered no.
Was a case–control design avoided?
There is increased bias in a case–control (two-gate) study compared with a cohort (one-gate) study.
Were selection criteria clearly described (age limits and Centor/FeverPAIN scores)?
All inclusion criteria should be clearly specified. Lack of clear selection criteria, or differing selection criteria, introduces bias through unclear adherence to consecutive or random sampling, and because there is a recognised bias whereby the reference standard detects strep A carriage (rather than strep A infection), which is exacerbated if a greater proportion of less symptomatic patients is included.
Did the study avoid inappropriate exclusions?
Patients who meet the inclusion criteria should be given the index test. If > 5% of patients who meet the inclusion criteria are not given the test, this is an inappropriate exclusion. If < 5% are not given the test and no reasons are provided, this is also an inappropriate exclusion.
All patients who received the index test should have their results reported. If the results of > 5% are not reported, this is an inappropriate exclusion. If the results of < 5% are not reported and no reasons are provided, this is also an inappropriate exclusion.
We would expect the whole cohort to receive a rapid test (from our included list: Clearview Exact, BD Veritor Plus, Strep A Rapid Test, NADAL Strep A, OSOM Strep A test, QuikRead Go Strep A test kit, Alere TestPack +Plus Strep A, bioNexia Strep A, Biosynex or Sofia Strep A FIA) or a molecular test (from our included list: Alere i, cobas Liat Strep A Assay or Xpert Xpress Strep A), together with a comparator [Centor (modified Centor or McIsaac) or FeverPAIN], where included in the study design, and a biological culture as the reference standard. Very small numbers of exclusions (< 5%) may be acceptable if accompanied by reasonable explanations.
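The 5% rule that recurs throughout this guidance can be illustrated as follows (a hedged sketch only; the function and example counts are hypothetical):

```python
def is_inappropriate_exclusion(n_eligible, n_excluded, reasons_given):
    """Apply the 5% rule described above: excluding more than 5% of eligible
    patients is inappropriate, and excluding 5% or fewer is acceptable only
    when reasonable explanations are provided."""
    if n_excluded / n_eligible > 0.05:
        return True
    return not reasons_given


# Example: 12 of 300 eligible patients (4%) excluded without any explanation
print(is_inappropriate_exclusion(300, 12, reasons_given=False))  # prints: True
```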
Were patients seen in an ambulatory care setting?
Patients seen as inpatients may vary in severity and have comorbidities affecting their diagnosis.
B. Concerns regarding applicability
Guidance
Patients aged < 5 years do not meet our inclusion criteria. If more than 10% of the sample are aged < 5 years, this should be rated as a high concern.
In the UK, the test would be given following an assessment using Centor or FeverPAIN. The rapid test would be given only to people with a Centor score of > 2 points or a FeverPAIN score of > 1 point. If the study does not mention these tools, or no assessment was undertaken, it should be rated as a high concern. If the study included people with scores of ≤ 2 points on Centor or ≤ 1 point on FeverPAIN, it can be classed as a low concern only if test accuracy is reported separately for Centor scores of > 2 points and FeverPAIN scores of > 1 point. If test accuracy for the low- and high-scoring Centor/FeverPAIN groups is ONLY reported in combination, this should be rated as a high concern for applicability.
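The UK testing pathway described in the paragraph above can be sketched as a simple decision rule (illustrative only; the thresholds are those stated above and the function is hypothetical):

```python
def offer_rapid_test(centor=None, feverpain=None):
    """Return True when a rapid test would be offered under the pathway described
    above: a Centor score of > 2 points or a FeverPAIN score of > 1 point."""
    if centor is not None:
        return centor > 2
    if feverpain is not None:
        return feverpain > 1
    raise ValueError("A Centor or FeverPAIN score is needed before testing")


print(offer_rapid_test(centor=3))     # prints: True
print(offer_rapid_test(feverpain=1))  # prints: False
```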
Domain 2: index test
The main sources of bias introduced by conducting and interpreting the index test are blinding and defining the threshold. If the reference standard is carried out before the index test (e.g. in case–control studies), it is important to blind personnel to the results of the reference standard.
The QUADAS-2 tool requires a threshold to be prespecified in the methods to avoid adjustment of the threshold according to the test outcome. In manufactured tests the threshold has been predetermined. There is some subjectivity in how RADTs are read; if the operator claimed to follow the product insert then this subjectivity is reduced, although a bias still exists. There is no subjectivity in the molecular tests, which display on screen whether or not strep A is present. In studies of test development, the threshold must be reported and must be prespecified.
A. Risk of bias
Were the index test results interpreted without knowledge of the results of the reference standard?
In cohort designs where the reference standard was given after or at the same time as the index test, answer yes. This is because the reference standard is read after a longer time period than the rapid test. If timing is unclear or the study has a case–control design then this is a yes only if blinding is specifically mentioned or if the index test is fully automated with no human interpretation.
Was a separate swab undertaken for the index test?
Manufacturers’ specifications require separate swabs be taken for the index and reference standard. Using one swab for multiple purposes may reduce the amount of the sample and affect the accuracy of the test.
Was a threshold explicitly prespecified?
All manufactured rapid tests have an inbuilt threshold; therefore, the answer should be 'yes' (low risk of bias). If the threshold is not prespecified then this must be rated as a high risk of bias. In test development studies, the study must explicitly state that the threshold was prespecified and what the threshold is.
Is the test reading objective?
Molecular tests provide the result on the screen, so this should always be answered 'yes' (low risk of bias). All rapid tests are read subjectively against the inbuilt threshold, except for the BD Veritor Plus system, NADAL Strep A scan test, QuikRead Go Strep A test and Sofia Strep A FIA, which use analysers/readers to display results digitally. Any test that requires a subjective reading carries a high risk of bias and should be answered 'no'.
B. Concerns about applicability
If the study does not specify that the test was carried out to the manufacturer’s specification the rating should be noted as unclear. Previous versions of included tests should be rated as high.
Domain 3: reference standard
The reference standard should be throat culture. FeverPAIN or Centor are appropriate comparator screening tests but not a reference standard.
The reference standard should be undertaken using staphylococcal or streptococcal selective agar or simple blood agar. Cultures using blood agar should be incubated in an anaerobic atmosphere at 35–37 °C for 18–24 hours and read after > 18 hours; alternatively, blood agar can be incubated in 5–10% CO2 at 35–37 °C for 18–24 hours. Cultures using staphylococcal or streptococcal selective agar should be incubated at 35–37 °C in aerobic conditions for 18–48 hours and read after > 24 hours. Current guidance advises re-examining at 48 hours any plates that yield negative results at 24 hours. 29 If the culture is not incubated in the correct manner then there will be a high risk of bias.
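For reviewers checking reported methods against these criteria, the acceptable incubation conditions can be tabulated and tested mechanically; the sketch below is illustrative only (the data structure and field names are ours, not part of the guidance):

```python
# Acceptable reference-standard culture conditions summarised from the guidance above.
ACCEPTABLE_PROTOCOLS = [
    {"agar": "blood", "atmosphere": "anaerobic", "hours": (18, 24), "read_after": 18},
    {"agar": "blood", "atmosphere": "5-10% CO2", "hours": (18, 24), "read_after": 18},
    {"agar": "selective", "atmosphere": "aerobic", "hours": (18, 48), "read_after": 24},
]


def protocol_acceptable(agar, atmosphere, temp_c, incubation_h, read_at_h):
    """Return True if a reported culture protocol matches one of the acceptable sets
    (all require incubation at 35-37 degrees Celsius)."""
    if not 35 <= temp_c <= 37:
        return False
    for p in ACCEPTABLE_PROTOCOLS:
        if (agar == p["agar"] and atmosphere == p["atmosphere"]
                and p["hours"][0] <= incubation_h <= p["hours"][1]
                and read_at_h > p["read_after"]):
            return True
    return False


# Example: blood agar, anaerobic, 36 degrees C, incubated for 20 hours, read at 20 hours
print(protocol_acceptable("blood", "anaerobic", 36, 20, 20))  # prints: True
```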
Investigators will not be blinded to the clinical scoring tool but should be blinded to the reference standard.
A. Risk of bias
Was a separate swab taken for throat culture testing?
The American Academy of Pediatrics recommends separate swabs be taken for the index and reference standard testing (Mitul Patel, Birmingham Women’s and Children’s NHS Foundation Trust, Birmingham, UK, personal communication). Using one swab for multiple purposes may reduce the amount of the sample and affect the accuracy of the test.
Is the reference standard likely to correctly classify the target condition?
If the reference standard used was throat culture and this was performed appropriately then the answer should be 'yes'. This should be a laboratory culture on a staphylococcal, streptococcal or blood agar plate incubated for up to 48 hours. Were the culture medium, atmosphere, duration of incubation and GAS-confirmation technique described?
Were the reference standard results interpreted without knowledge of the results of the index test?
This can be rated as low providing the operator in the laboratory is competency assessed and follows the standard operating procedure. This is applicable to all types of laboratory cultures.
B. Concerns about applicability
The concern about the applicability of the reference standard will be 'high' if any measure other than a throat culture is used. The culture should be carried out on a staphylococcal, streptococcal or simple blood agar plate, incubated as described above and then serotyped. If any of these measures differ then the concern is high; if they are not reported then the concern should be noted as unclear.
Domain 4: flow and timing
The index test should be carried out prior to the reference standard and to antibiotic prescribing.
A. Risk of bias
Was there an appropriate interval between index test(s) and reference standard?
The swab for throat culture should be taken at the same time as the swab for the RADT and should be processed within 48 hours. Consider the following:
- Were both index test(s) and reference standard (and comparator where included) all carried out at the same appointment?
- Were all swabs processed within 48 hours?
If the answer to any of these is no then this is high risk of bias.
Were both index test(s) and reference standard (and comparator where included) all carried out prior to commencement of antibiotics?
Patients should not have been treated with antibiotics prior to receiving the index test(s) and/or reference standard.
Did all patients receive a reference standard?
All should receive both the index test and reference standard. Very small numbers of exclusions (< 5%) may be acceptable, if accompanied by reasonable explanations.
Did all patients receive the same reference standard?
This question should be answered with ‘no’ if patients received different reference standards or if positive cases on the index test received a different reference standard to negative subjects.
Were all patients included in the analysis?
All patients should be included in the analysis. If inconclusive or intermediate results are not considered in the analysis the question should be answered with ‘no’. Very small numbers of exclusions (< 5%) may be acceptable, if accompanied by reasonable explanations. If patients lost to follow-up were not included in the analysis or > 5% of patients were lost to follow-up (even if considered in the analysis) the question should be answered with ‘no’. (The actual proportion of patients lost to follow-up needs to be recorded for each study.) In both cases the risk of bias should be classed as ‘high’.
Appendix 5 Antibiotic-prescribing behaviours
Study (first author and year of publication) | Country | Index test | Study details | Antibiotic-prescribing behaviour |
---|---|---|---|---|
Little 20136 | UK | Alere TestPack Plus (IMI TestPack) |
Three-armed trial with a delayed antibiotics arm (clinical assessment without a tool), a clinical tool arm and a rapid test following clinical tool arm. Clinicians given guidance to follow on prescribing Arm 1: delayed antibiotics control arm – depending on severity of presentation patients were given antibiotics, given no antibiotics or given a delayed prescription to collect after 3–5 days if symptoms did not improve or worsened Arm 2: clinical score arm – patients assessed using FeverPAIN. Patients with scores of 0 or 1 were not offered antibiotics. Immediate antibiotics were offered for patients with scores of ≥ 4 and for scores of 2 or 3 delayed antibiotics were offered Arm 3: RADT arm – those patients with a clinical score of 0 or 1 were not offered antibiotics or a RADT, those with a score of 2 were offered delayed antibiotics and those with scores of ≥ 3 were given a RADT. All those with negative RADTs were not offered antibiotics |
Antibiotics offered immediately or a delayed prescription to 89% (185/207) in delayed prescription control arm, to 59% (124/211) in the clinical score arm and 40% (86/213) in the clinical score plus RADT arm Use of antibiotics ascertained from the patients with incomplete responses as follows: 46% (75/164) used antibiotics in the delayed prescription arm compared with 37% (60/161) in the clinical score arm and 35% (58/164) in the clinical score plus RADT arm |
Llor 201145 | Spain | OSOM Strep A test | Two-arm cluster randomised trial. Health-care centres randomised to intervention (RADT) arm or control arm (management with clinical criteria only) |
Control arm GPs prescribed antibiotics in 64% (168/262) of patients compared with 44% (123/281) in the RADT arm. Of the 60 test-positive cases, 59 were given antibiotics (98%). In those for whom the test was negative, 69/225 were given antibiotics (31%) Across both trial arms, antibiotic treatment was ‘inappropriate’ (as culture was negative) in 40% (210/526) of patients, and in 3% (16/526) of patients antibiotics were not prescribed when culture was positive; 153 of these cases were in the control arm and 73 were in the RADT arm. Category of inappropriate decision (overprescribing or underprescribing) is not reported by trial arm |
Worrall 200755 | Canada | Clearview Exact Strep A | Four-armed trial: control arm using clinician’s independent decisions as usual practice, arm using STDR (≤ 1, no need for antibiotics; 2, decisions made by the clinician; 3 or 4, antibiotics needed), arm using a rapid test (RADT) and arm using both STDR and RADT (≤ 1, no need for antibiotics; 2, RADT; 3 or 4, antibiotics needed). Clinicians were recommended to follow the guidance but it was not enforced | 46.7% (247/533) of patients received antibiotics. 58% (82/141) usual practice, 55% (94/170) with Centor score alone compared with 27% (32/120) with rapid antigen testing alone and 38% (39/102) with combined rapid antigen testing and Centor score |
Study (first author and year of publication) | Country | Index test | Study details | Antibiotic-prescribing behaviour |
---|---|---|---|---|
Bird 201835 | UK | bioNexia Strep A | Prospective cohort before-and-after study. Baseline antibiotic-prescribing data were collected retrospectively from October to November 2014 (method of diagnosis in this phase is not reported) and compared (following introduction of a new algorithm, RADT for those with a McIsaac score of > 3) with rates in August to November 2015 and September to November 2016. Only positive RADT given antibiotics but clinicians could prescribe if they still had a high level of clinical suspicion of strep A pharyngitis | Following implementation of an algorithm combining McIsaac scores and bioNexia Strep A Rapid Testing, antibiotic-prescribing rates fell steeply from 79% (166/210) at baseline to 24% (51/214) in year 1 and 28.2% (51/181) for the second year |
Appendix 6 Record of searches: cost-effectiveness
Sore throat/group A Streptococcus with economic evaluations/quality of life/cost and resource use
Bibliographic databases
Summary of bibliographic database searches
Database | Date of search | Number of records from targeted search results (to screen first) + other results picked up by broader search = total number of records (+ update search results) |
---|---|---|
MEDLINE (via OvidSP) | 22 January 2019 (updated 13 March 2019) | 304 + 1728 = 2032 (+ 36) |
EMBASE (via OvidSP) | 22 January 2019 (updated 13 March 2019) | 434 + 2673 = 3107 (+ 67) |
NHS EED and HTA database (via CRD) | 22 January 2019 (not updated as no new records added) | 13 + 42 = 55 |
Science Citation Index and Conference Proceedings Citation Index – Science (via the Web of Science) | 29 January 2019 (updated 13 March 2019) | 260 + 1397 = 1657 (+ 17) |
Cost-Effectiveness Analysis (CEA) Registry | 29 January 2019 (updated 13 March 2019) | 3 (+ 0) |
EconPapers (RePEc) | 29 January 2019 (updated 13 March 2019) | 6 (+ 0) |
ScHARRHUD | 29 January 2019 (updated 13 March 2019) | 0 (+ 0) |
Total number of records from database searches: (1011 + 5849 = 6860) + 120 from 2019 update search = 6980.
Total number of records after deduplication: (522 + 2175 = 2697) + 58 from 2019 update search = 2755.
MEDLINE (via OvidSP)
Databases: Ovid MEDLINE and Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Daily and Versions.
Date searched: 22 January 2019 (updated on 13 March 2019; see at the end of this search record).
Date range searched: 1946 to 21 January 2019.
Original search: 22 January 2019
1. exp Pharyngitis/ (15,095)
2. pharyngit*.ti,ab,kf. (5487)
3. (nasophyryngit* or rhinopharyngit* or epipharyngit*).ti,ab,kf. (178)
4. (tonsillit* or tonsilit*).ti,ab,kf. (5615)
5. ((sore or pain* or ache* or aching or inflam* or infect*) adj3 (pharyn* or throat* or tonsil* or nasopharyn* or rhinopharyn* or epipharyn*)).ti,ab,kf. (9975)
6. or/1-5 (25,268)
7. Streptococcal Infections/di, mi (13,421)
8. Streptococcus pyogenes/im, ip (5463)
9. 7 or 8 (16,691)
10. ((strep or streptococcal or group) adj2 A).ti,ab,kf. (564,113)
11. 9 and 10 (4859)
12. (strep* adj5 (throat* or pharyn* or tonsil*)).ti,ab,kf. (3410)
13. streptoco* A.ti,ab,kf. (480)
14. (group A adj5 streptoco*).ti,ab,kf. (9515)
15. ((streptococcus or strep or staphylococcus) adj1 (pyogenes or pyogenic)).ti,ab,kf. (7726)
16. ((streptococcus or strep) adj1 (epidemicus or erysipelatis or erysipelatos or hemolyticus or haemolyticus or scarlatinae or lancefield)).ti,ab,kf. (240)
17. (s pyogenes or pyogenes s or micrococcus scarlatinae).ti,ab,kf. (2497)
18. lancefield group.ti,ab,kf. (476)
19. gabhs.ti,ab,kf. (394)
20. or/11-19 (18,885)
21. exp Economics/ (571,394)
22. exp "Costs and Cost Analysis"/ (221,362)
23. Health Status/ (75,366)
24. exp "Quality of Life"/ (171,033)
25. exp Quality-Adjusted Life Years/ (10,672)
26. (pharmacoeconomic* or pharmaco-economic* or economic* or cost* or price or prices or pricing).ti,ab,kf. (752,907)
27. (expenditure$ not energy).ti,ab,kf. (27,109)
28. (value adj1 money).ti,ab,kf. (32)
29. budget*.ti,ab,kf. (26,932)
30. (health state* or health status).ti,ab,kf. (57,854)
31. (qaly* or ICER or utilit* or EQ5D or EQ-5D or euroqol or euro-qol or short-form 36 or shortform 36 or SF-36 or SF36 or SF-6D or SF6D or SF-12 or SF12 or health utilities index or HUI).ti,ab,kf. (224,115)
32. (markov or time trade off or TTO or standard gamble or SG or hrql or hrqol or disabilit* or disutilit* or net benefit or contingent valuation).ti,ab,kf. (215,735)
33. (quality adj2 life).ti,ab,kf. (248,124)
34. (decision adj2 model).ti,ab,kf. (6096)
35. (visual analog* scale* or discrete choice experiment* or health* year* equivalen* or (willing* adj2 pay)).ti,ab,kf. (54,743)
36. resource*.ti,ab,kf. (294,615)
37. (well-being or wellbeing).ti,ab,kf. (77,269)
38. 21 or 22 or 23 or 24 or 25 or 26 or 27 or 28 or 29 or 30 or 31 or 32 or 33 or 34 or 35 or 36 or 37 (2,072,673)
39. 6 and 38 (1622)
40. 20 and 38 (714)
41. 39 and 40 (304)
42. 39 or 40 (2032)
43. 42 not 41 (1728)
Updated search: 13 March 2019
Re-ran the above search with the following date limits:
44. limit 42 to ed=20190122-20190313 (8)
45. limit 42 to ep=20190122-20190313 (17)
46. 2019*.dt,ez. (265,815)
47. 42 and 46 (29)
48. 44 or 45 or 47 (36)
Total after removing duplicates with previous search: 27.
EMBASE (via OvidSP)
Databases: EMBASE Classic and EMBASE.
Date range searched: 1947 to 2019 week 3.
Date searched: 22 January 2019 (updated on 13 March 2019; see at the end of this search record).
Original search: 22 January 2019
1. *streptococcal pharyngitis/ or *pharyngitis/ or *rhinopharyngitis/ or *sore throat/ or *tonsillitis/ or *chronic tonsillitis/ or *palatine tonsillitis/ (12,255)
2. pharyngit*.ti,ab,kw. (7907)
3. (nasophyryngit* or rhinopharyngit* or epipharyngit*).ti,ab,kw. (381)
4. (tonsillit* or tonsilit*).ti,ab,kw. (8351)
5. ((sore or pain* or ache* or aching or inflam* or infect*) adj3 (pharyn* or throat* or tonsil* or nasopharyn* or rhinopharyn* or epipharyn*)).ti,ab,kw. (15,999)
6. or/1-5 (32,848)
7. Streptococcus infection/di (3828)
8. Streptococcus pyogenes/ or streptococcus group a/ or group A streptococcal infection/ (24,060)
9. 7 or 8 (27,010)
10. ((strep or streptococcal or group) adj2 A).ti,ab,kw. (799,616)
11. 9 and 10 (9653)
12. (strep* adj5 (throat* or pharyn* or tonsil*)).ti,ab,kw. (4855)
13. streptoco* A.ti,ab,kw. (636)
14. (group A adj5 streptoco*).ti,ab,kw. (12,259)
15. ((streptococcus or strep or staphylococcus) adj1 (pyogenes or pyogenic)).ti,ab,kw. (9749)
16. ((streptococcus or strep) adj1 (epidemicus or erysipelatis or erysipelatos or hemolyticus or haemolyticus or scarlatinae or lancefield)).ti,ab,kw. (391)
17. (s pyogenes or pyogenes s or micrococcus scarlatinae).ti,ab,kw. (3246)
18. lancefield group.ti,ab,kw. (566)
19. gabhs.ti,ab,kw. (507)
20. or/11-19 (24,568)
21. exp health economics/ (803,214)
22. exp health status/ (219,256)
23. exp "quality of life"/ (447,670)
24. exp quality adjusted life year/ (23,005)
25. (pharmacoeconomic* or pharmaco-economic* or economic* or cost* or price or prices or pricing).ti,ab,kw. (986,866)
26. (expenditure* not energy).ti,ab,kw. (37,545)
27. (value adj2 money).ti,ab,kw. (2246)
28. budget*.ti,ab,kw. (35,940)
29. (health state* or health status).tw. (75,069)
30. (qaly* or ICER or utilit* or EQ5D or EQ-5D or euroqol or euro-qol or short-form 36 or shortform 36 or SF-36 or SF36 or SF-6D or SF6D or SF-12 or SF12 or health utilities index or HUI).ti,ab,kw. (321,459)
31. (markov or time trade off or TTO or standard gamble or SG or hrql or hrqol or disabilit* or disutilit* or net benefit or contingent valuation).ti,ab,kw. (311,593)
32. (quality adj2 life).tw. (384,281)
33. (decision adj2 model).tw. (9229)
34. (visual analog* scale* or discrete choice experiment* or health* year* equivalen* or (willing* adj2 pay)).tw. (78,125)
35. resource*.ti,ab,kw. (375,642)
36. (well-being or wellbeing).tw. (99,946)
37. 21 or 22 or 23 or 24 or 25 or 26 or 27 or 28 or 29 or 30 or 31 or 32 or 33 or 34 or 35 or 36 (2,880,444)
38. 6 and 37 (2459)
39. 20 and 37 (1082)
40. 38 and 39 (434)
41. 38 or 39 (3107)
42. 41 not 40 (2673)
Updated search: 13 March 2019
Re-ran the above search with the following date limits:
43. limit 41 to dd=20190122-20190313 (16)
44. limit 41 to em=201901-201903 (25)
45. 43 or 44 (41)
46. limit 41 to dc=20190122-20190313 (42)
47. 45 or 46 (67)
Total after removing duplicates with other update and previous searches: 25.
NHS Economic Evaluation Database and Health Technology Assessment database (via Centre for Reviews and Dissemination)
Searched on 22 January 2019. (Not updated because no new records have been added to NHS EED since 31 March 2015 or to the HTA database since 31 March 2018. The INAHTA website was checked in March 2019 to see if a new platform for the HTA database was available.)
Original search: 22 January 2019
1. MeSH DESCRIPTOR Pharyngitis EXPLODE ALL TREES IN DARE,NHSEED,HTA (73)
2. (pharyngit*) (85)
3. (nasophyryngit*) OR (rhinopharyngit*) OR (epipharyngit*) (5)
4. (tonsillit* or tonsilit*) (43)
5. (((sore or pain* or ache* or aching or inflam* or infect*) adj3 (pharyn* or throat* or tonsil* or nasopharyn* or rhinopharyn* or epipharyn*))) (91)
6. #1 OR #2 OR #3 OR #4 OR #5 (163)
7. MeSH DESCRIPTOR Streptococcal Infections WITH QUALIFIERS DI, MI IN DARE,NHSEED,HTA (31)
8. MeSH DESCRIPTOR Streptococcus pyogenes WITH QUALIFIERS IM, IP IN DARE,NHSEED,HTA (13)
9. #7 OR #8 (36)
10. (((strep or streptococcal or group) adj2 A)) (2025)
11. #9 AND #10 (17)
12. ((strep* adj5 (throat* or pharyn* or tonsil*))) (39)
13. (streptoco* adj1 A) (10)
14. ((group A adj5 streptoco*)) (27)
15. (((streptococcus or strep or staphylococcus) adj1 (pyogenes or pyogenic))) (25)
16. (((streptococcus or strep) adj1 (epidemicus or erysipelatis or erysipelatos or hemolyticus or haemolyticus or scarlatinae or lancefield))) (0)
17. ((s pyogenes or pyogenes s or micrococcus scarlatinae)) (1)
18. (lancefield group) (0)
19. (gabhs) (8)
20. #12 OR #13 OR #14 OR #15 OR #16 OR #17 OR #18 OR #19 (51)
21. #6 AND #20 (43)
22. (#21) IN NHSEED, HTA (13)
23. #6 OR #20 (171)
24. (#23) IN NHSEED, HTA (55)
25. (#24 NOT #22) IN NHSEED, HTA (42)
Science Citation Index and Conference Proceedings Citation Index – Science (via the Web of Science)
Date of search: 29 January 2019 (updated on 13 March 2019; see at the end of this search record).
Original search: 29 January 2019
Note: search record reads from bottom to top.
# 20 | 1397 | #19 not #18 Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 19 | 1657 | #17 OR #16 Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 18 | 260 | #17 AND #16 Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 17 | 709 | #15 AND #14 Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 16 | 1208 | #15 AND #5 Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 15 | 3,164,661 | TS=("quality of life" or qol or hrql or hrqol or ("quality adjusted life" NEAR/0 year*) or qaly* or icer or cost* or economic* or pharmacoeconomic* or pharmaco-economic* or price or prices or pricing or (expenditure* not energy) or (value NEAR/1 money) or budget* or euro-qol or utilit* or disutilit* or (net NEAR/0 benefit*) or (contingent NEAR/0 valuation*) or euroqol or "euro qol" or eq5d or eq-5d or "short-form 36" or "shortform 36" or sf-36 or sf36 or sf-6d or sf6d or sf-12 or sf12 or "health utilities index" or hui or (time NEAR/0 trade*) or tto or "standard gamble" or sg or markov or (decision NEAR/1 model*) or (visual NEAR/0 analog*) or "discrete choice" or ((health* NEAR/0 year*) NEAR/0 equivalen*) or (health NEAR/0 stat*) or (willing* NEAR/1 pay) or resource* or wellbeing or well-being) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 14 | 17,381 | #6 OR #7 OR #8 OR #9 OR #10 OR #11 OR #12 OR #13 Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 13 | 308 | TS=gabhs Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 12 | 445 | TS="lancefield group" Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 11 | 2059 | TS=("s pyogenes" OR "pyogenes s" OR "micrococcus scarlatinae") Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 10 | 60 | TS=((strep*) NEAR/0 (epidemicus OR erysipelatis OR erysipelatos OR hemolyticus OR haemolyticus OR scarlatinae OR lancefield)) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 9 | 7163 | TS=((strep*) NEAR/0 (pyogenes OR pyogenic)) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 8 | 9682 | TS=("group A" NEAR/4 strep*) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 7 | 1165 | TS="strep* A" Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 6 | 2877 | TS=(strep* NEAR/4 (throat* OR pharyn* OR tonsil*)) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 5 | 12,508 | #1 OR #2 OR #3 OR #4 Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 4 | 7034 | TS=((sore OR pain* OR ache* OR aching OR inflam* OR infect*) NEAR/2 (pharyn* OR throat* OR tonsil* OR nasopharyn* OR rhinopharyn* OR epipharyn*)) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 3 | 2716 | TS=(tonsillit* OR tonsilit*) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 2 | 96 | TS=(nasophyryngit* OR rhinopharyngit* OR epipharyngit*) Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
# 1 | 4667 | TS=pharyngit* Indexes=SCI-EXPANDED, CPCI-S Timespan=1900-2019 |
Updated search: 13 March 2019
Re-ran the above search with the following date limits.
# 19 | 17 | #17 or #16 Indexes=SCI-EXPANDED, CPCI-S Timespan=2019-2019 |
Total after removing duplicates with other update and previous searches: 6.
Cost-Effectiveness Analysis (CEA) Registry
Searched on 29 January 2019 (updated on 13 March 2019; see at the end of this search record).
Original search: 29 January 2019
Single-term searches, deduplicated and screened online. Results (number selected):
- pharyngitis 6 (3)
- pharynx 4 (0)
- nasopharyngitis 0 (0)
- nasopharynx 0 (0)
- rhinopharyngitis 0 (0)
- rhinopharynx 0 (0)
- epipharyngitis 0 (0)
- epipharynx 0 (0)
- tonsillitis 0 (0)
- tonsilitis 0 (0)
- tonsil 3 (1, already got from search on pharyngitis above)
- throat 6 (2, both already got from search on pharyngitis above)
- streptococcus 22 (2, both already got from search on pharyngitis above)
- streptococcal 7 (2, both already got from search on pharyngitis above)
- strep 30 (3, all already got from search on pharyngitis above)
Potentially relevant results downloaded to EndNote: 3.
Updated search: 13 March 2019
Re-ran the above searches on 13 March 2019. No further records added.
EconPapers (RePEc)
Date searched: 30 January 2019 (updated on 13 March 2019; see at the end of this search record).
Original search: 30 January 2019; advanced search
#1 ((pharyn* | nasopharyn* | rhinopharyn* | epipharyn* | tonsil* | throat) + (strep* | “lancefield group” | pyogenes | micrococcus)) 6
#2 ((sore | pain* | ache* | aching | inflam* | infect*) + (throat | pharyn* | tonsil* | nasopharyn* | rhinopharyn* | epipharyn*)) 31
#3 pharyngitis | nasopharyngitis | rhinopharyngitis | epipharyngitis | tonsilitis | tonsillitis 28
Above three searches combined with OR (|):
#4 ((pharyn* | nasopharyn* | rhinopharyn* | epipharyn* | tonsil* | throat) + (strep* | “lancefield group” | pyogenes | micrococcus)) | ((sore | pain* | ache* | aching | inflam* | infect*) + (throat | pharyn* | tonsil* | nasopharyn* | rhinopharyn* | epipharyn*)) | (pharyngitis | nasopharyngitis | rhinopharyngitis | epipharyngitis | tonsilitis | tonsillitis) 52
Deduplicated and screened online, selecting all potentially relevant.
Potentially relevant results downloaded to EndNote: 6.
Updated search: 13 March 2019
Re-ran above combination search on 13 March 2019. One further record added, but this was not relevant.
School of Health and Related Research Health Utilities Database
Searched on 30 January 2019 (updated on 13 March 2019; see at the end of this search record).
Original search: 30 January 2019
Single-term searches, deduplicated and screened online. Results (number selected):
- pharynx* 0 (0)
- nasopharyn* 0 (0)
- rhinopharyn* 0 (0)
- epipharyn* 0 (0)
- tonsil* 0 (0)
- throat* 1 (0)
- strep* 0 (0)
- gabhs 0 (0)
- pyogene* 0 (0)
Potentially relevant results downloaded to EndNote: 0.
Updated search: 13 March 2019
Re-ran above searches on 13 March 2019. No further records added.
Other sources
In addition to these searches, any relevant cost-effectiveness studies identified during the clinical effectiveness review were brought to the attention of the reviewers.
Search engine
Google: searched 22 March 2019.
Search strategy
(HTA OR “health technology assessment”) AND (pharyngitis OR strep OR streptococcus OR streptococcal).
Checked first 20 records.
Appendix 7 Excluded studies after full-text papers received for group A Streptococcus economics search
Study (first author and year of publication) | Title | Reason for exclusion |
---|---|---|
Banerjee 2018104 | Rapid tests for the diagnosis of group A streptococcal infection: a review of diagnostic test accuracy, clinical utility, safety, and cost-effectiveness | The review provides information on two cost-effectiveness studies. One study has been included76 and the other study was excluded as it is not an economic evaluationa and the test is outside the NICE scope105 |
Benjamin 2000106 | The costs of testing for streptococcal pharyngitis in the office laboratory | Letter to the editor commenting on Tsevat and Kotagal;107 not an economic evaluationa
Boyler 2002108 | A cost-effectiveness analysis of recommended strategies for acute pharyngitis | Abstract; test outside NICE scope
Ehrlich 2002109 | Cost-effectiveness of treatment options for prevention of rheumatic heart disease from group A streptococcal pharyngitis in a pediatric population | No specific test stated |
Giraldez-Garcia 2011110 | Diagnosis and management of acute pharyngitis in a paediatric population: a cost-effectiveness analysis | No specific test stated |
Klepser 2011111 | Cost-effectiveness of pharmacist provided care for the treatment of adult pharyngitis | Abstract; no specific test stated
Klepser 201280 | Cost-effectiveness of pharmacist provided care for the treatment of adult pharyngitis | No specific test stated |
Komaroff 1983112 | A cost-effectiveness analysis of alternate strategies for management of sore throat | Abstract; no specific test stated
Lathia 2018113 | Cost-minimization analysis of community pharmacy-based point-of-care testing for strep throat in 5 Canadian provinces | No specific test stated |
Maizia 2012114 | Diagnostic strategies for acute tonsillitis in France: a cost-effectiveness study | Not in English (in French); no specific test stated
Malecki 2017115 | Rapid strip tests as a decision-making tool about antibiotic treatment in children – a prospective study | Not an economic evaluation;a no comparator
Meier 1990116 | Effects of a rapid antigen test for group A streptococcal pharyngitis on physician prescribing and antibiotic costs | No specific test stated |
Mlejnek 2014100 | Utility and cost effectiveness of throat culture in the treatment of patients with negative rapid strep screens | No specific test stated |
Neuner 200378 | Diagnosis and management of adults with pharyngitis. A cost-effectiveness analysis | Test outside NICE scope |
Polisena 2009117 | Point of care testing for streptococcal sore throat: a review of diagnostic accuracy, cost-effectiveness, and guidelines | The review provides information on one cost-effectiveness study that was excluded as it did not mention a specific test79 |
Tsevat 1999107 | Management of sore throats in children: a cost-effectiveness analysis | No specific test stated |
Van Howe 200679 | Diagnosis and management of pharyngitis in a pediatric population based on cost-effectiveness and projected health outcomes | No specific test stated |
Appendix 8 Data extraction for cost-effectiveness studies
Study details subheading | Description of study details |
---|---|
Study details | |
Study title | PRImary care Streptococcal Management (PRISM) study: in vitro study, diagnostic cohorts and a pragmatic adaptive RCT with nested qualitative study and cost-effectiveness study |
First author | Paul Little (Programme Director of Programme Grants for Applied Research, Editor-in-Chief for the Programme Grants for Applied Research journal and member of the NIHR Journals Library Board) |
Co-authors | Richard Hobbs, Michael Moore, David Mant, Ian Williamson, Cliodna McNulty, Gemma Lasseter, MY Edith Cheng, Geraldine Leydon, Lisa McDermott, David Turner, Rafael Pinedo-Villanueva, James Raftery [previously a member of the NIHR Journals Library Editorial Group (2012–14), current member of NIHR HTA and EME Editorial Board, previously Director of the Wessex Institute and Head of NIHR Evaluation, Trials and Studies Coordinating Centre], Paul Glasziou and Mark Mullee on behalf of the PRISM investigators |
Source of publication | Health Technology Assessment 2014, Volume 18, Issue 6 |
Language | English language |
Publication type | Original article |
Inclusion criteria/study eligibility/PICOS | |
Population (and subgroups) | Patients aged ≥ 3 years, who had acute sore throat |
Intervention(s) |
RADTs used with clinical score (FeverPAIN) All patients received the clinical scoring tool. Those with a score of 0 or 1 were not offered antibiotics or a RADT, those with a score of 2 were offered delayed antibiotics and those with scores of ≥ 3 were given a RADT. All those with negative RADTs were not offered antibiotics |
Comparator(s) |
Delayed antibiotics (control group) or clinical score only In the control group, depending on the severity of their presentation patients were given antibiotics, given no antibiotics or given a delayed prescription to collect after 3–5 days if symptoms did not improve or worsened In the clinical score group, patients were assessed using the FeverPAIN clinical scoring tool. Patients with scores of 0 or 1 were not offered antibiotics. Immediate antibiotics were offered to patients for scores of ≥ 4 and for scores of 2 or 3 delayed antibiotics were offered |
Outcome(s) |
Point change in symptom severity score (primary outcome measure in trial) and QALYs based on EQ-5D The symptom severity score is a two-item score (sore throat, difficulty swallowing); each symptom was scored 0 = no problem to 6 = as bad as it can be. A higher score indicates worse symptoms |
Study design | Economic analysis alongside a clinical trial |
Setting and location | GP clinics in south and central England |
Type of economic evaluation | Cost-effectiveness and cost–utility analysis |
Methods | |
Study perspective | NHS perspective |
Time horizon | 14 days and 28 days (1 month) after randomisation |
Discount rate | Not applicable |
Measurement of effectiveness | EQ-5D measure completed at baseline and 14 days after recruitment and recorded in a patient-completed diary |
Measurement and valuation of preference-based outcomes | EQ-5D values were scored using the standard UK tariff |
Resource use and costs |
Resource use data were obtained from GP case notes and from study clinicians. Data included GP and nurse practitioner visits; antibiotics; practice visits for complications of infections and antibiotic complications; and hospital admissions related to infections. Costs included test costs, staff time, medications, complications and hospital admissions. Unit costs were obtained from the Unit Costs of Health and Social Care, NHS Reference Costs and NHS drug tariff The costs associated with the clinical score plus the test comprised the additional time required to provide the intervention as well as the cost of the RADT (£3.25 per test; £65 for 20 tests) |
Currency, price date and conversion | Costs are in 2010/11 prices in Great British pounds |
Model type | None, as it was based on trial data |
Assumptions | EQ-5D results for the end of the 28-day follow-up period were not available; therefore, the values obtained at the end of the 14-day period were assumed to persist to the end of the study period; that is, the last value obtained was carried forward for 14 days |
Analytical methods | Incremental costs and outcomes presented |
Results | |
Study parameters | Means and 95% CIs were generated for use cost variables. Mean values (with 95% CI) for outcome variables (both symptom score and QALYs) were estimated using regression equations controlling for baseline characteristics (fever and baseline symptoms) |
Incremental costs and outcomes | Cost-effectiveness analysis
|
Characterising uncertainty |
Bootstrapping using 5000 samples was used to generate CEACs. Bootstrapping was also used to generate scatterplots on the cost-effectiveness plane. At a value of £30,000 per QALY, the probabilities that the three groups were cost-effective were 28%, 38% and 35% for the delayed prescribing, clinical score and RADT groups, respectively, for the 28-day QALY gain (a sketch of this type of calculation is given at the end of this appendix) |
Discussion | |
Study findings | The clinical scoring tool (FeverPAIN) was effective in helping to reduce symptoms, and the costs in all three groups were similar. The cost–utility analysis was less clear, as QALY differences were very small, generating wide CIs. The CEACs for the cost–utility study indicate that clinical score is most likely to be cost-effective over all values; however, they also indicate considerable uncertainty |
Limitations |
|
Generalisability | The generalisability of the analysis may be limited to the unit costs used in the analysis |
Other | |
Source of funding | The study was funded by the NIHR HTA programme |
Conflicts of interest | None declared |
Comments | None |
Authors conclusion | |
Using a clinical score appears to be an efficient use of health-care resources compared with either delayed antibiotic prescribing or the use of a RADT combined with a clinical score | |
Reviewer’s conclusion | |
The authors used appropriate economic methods for the study |
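The QALY and CEAC methods summarised in this extraction can be sketched as follows. This is a rough illustration under our own assumptions (simple area-under-the-curve QALYs with the day-14 EQ-5D value carried forward, a net-monetary-benefit bootstrap for a single CEAC point, and toy data), not a reproduction of the PRISM analysis:

```python
import numpy as np

rng = np.random.default_rng(0)


def qaly_28_days(eq5d_baseline, eq5d_day14):
    """Approximate 28-day QALYs as the area under the utility curve, carrying the
    day-14 EQ-5D value forward to day 28 (the assumption noted above)."""
    first_fortnight = (14 / 365.25) * (eq5d_baseline + eq5d_day14) / 2
    second_fortnight = (14 / 365.25) * eq5d_day14
    return first_fortnight + second_fortnight


def prob_cost_effective(costs_a, qalys_a, costs_b, qalys_b, wtp=30_000, n_boot=5000):
    """Bootstrap the probability that strategy B has a higher net monetary benefit
    than strategy A at a given willingness to pay (one point on a CEAC)."""
    wins = 0
    for _ in range(n_boot):
        ia = rng.integers(0, len(costs_a), len(costs_a))
        ib = rng.integers(0, len(costs_b), len(costs_b))
        nmb_a = wtp * qalys_a[ia].mean() - costs_a[ia].mean()
        nmb_b = wtp * qalys_b[ib].mean() - costs_b[ib].mean()
        wins += nmb_b > nmb_a
    return wins / n_boot


# Toy patient-level data (simulated, not trial data)
qalys_a = np.array([qaly_28_days(0.80, u) for u in rng.normal(0.92, 0.05, 200)])
qalys_b = np.array([qaly_28_days(0.80, u) for u in rng.normal(0.93, 0.05, 200)])
costs_a = rng.normal(40, 10, 200)
costs_b = rng.normal(42, 10, 200)
print(prob_cost_effective(costs_a, qalys_a, costs_b, qalys_b))
```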
Appendix 9 Adult primary care model: exploratory sensitivity analyses
Adult primary care model: prevalence of group A Streptococcus and clinical score threshold for starting antibiotics (usual-care arm) and testing (intervention arm)
In the base case, a cut-off score of 3 points on the Centor scale was used as the threshold for starting antibiotic treatment, with scores of ≥ 3 points indicating positive strep A infection. Changing this threshold to a score of ≥ 2 points had minimal impact on the base-case cost-effectiveness estimates. However, a threshold of ≥ 1 point for initiating point-of-care testing in primary care (equivalent to a test-all approach) favoured testing and changed the QALY difference from incremental QALY loss (–0.00396 per 1000 individuals) to incremental QALY gain (0.00346 per 1000 individuals) for Clearview Exact Strep A test cassette (Abbott Laboratories) and Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) compared with usual care (Table 40). The corresponding ICERs changed from these two tests being dominated in the base case to £7,071,480 and £6,875,048 per QALY gained for the cassette and dipstick versions, respectively, when compared with usual care.
Test | Base case | Sensitivity analysis | ||||
---|---|---|---|---|---|---|
Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | |
Sensitivity analysis 2 – changed Centor threshold for starting antibiotics from ≥ 3 to ≥ 1 points | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £24,462 | 0.00346 | £7,071,480 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £23,783 | 0.00346 | £6,875,048 |
Sensitivity analysis 5 – changed strep A prevalence from 22.6% (base case) to 10% (Neuner et al.78) | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £6092 | 0.00131 | £4,638,696 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £5923 | 0.00131 | £4,510,168 |
The cost-effectiveness estimates were also sensitive to the prevalence of strep A among adults presenting in primary care. Increasing the prevalence from 22.6% (base-case model) to 35.9% (the upper estimate from studies included in the systematic review of test accuracy studies) generally favoured usual care (results not shown here), whereas decreasing the prevalence to 10% (the value used in the Neuner et al. study78) favoured the intervention arm (i.e. testing). In the majority of cases, the ICERs did not change substantially enough to influence the interpretation of cost-effectiveness, but the ICERs for the Clearview Exact Strep A dipstick – test strip and the Clearview Exact Strep A cassette (Abbott Laboratories) changed from being dominated (less effective and more costly) to being more effective and more costly at a 10% prevalence rate (see Table 40).
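For orientation, the ICERs reported in these tables follow the standard definition; worked here with the sensitivity analysis 5 values for the Clearview Exact cassette (our arithmetic; the small difference from the reported £4,638,696 reflects rounding of the displayed inputs):

```latex
\[
\mathrm{ICER} \;=\; \frac{\Delta\,\text{cost}}{\Delta\,\text{QALYs}}
\;=\; \frac{\pounds 6092}{0.00131\ \text{QALYs (per 1000 individuals)}}
\;\approx\; \pounds 4.65 \text{ million per QALY gained.}
\]
% A test is 'dominated' (no ICER reported) when it is both more costly and less
% effective than usual care, i.e. the incremental cost is positive and the
% incremental QALYs are negative, as in the base case for these two tests.
```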
Adult primary care model: complication rates in treated and untreated group A streptococcal infection
Only the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and the Clearview Exact Strep A cassette (Abbott Laboratories) were sensitive to the modelled rates of complications (peritonsillar abscess, quinsy and cellulitis; the probabilities used in the model represented all of these complications, as shown in Table 41). In the base-case analysis, strep A-related complication rates were set to 1.5% for untreated infection and 1.3% for treated strep A infection, based on UK primary care data published by Little et al. 86 Halving and doubling the complication rates in the untreated group did not influence the ICERs substantially, but doubling the complication rate in treated infection to 2.6% favoured testing. The ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and the Clearview Exact Strep A cassette (Abbott Laboratories) changed from being dominated by usual care in the base case to £3,935,182 and £4,062,173 per QALY gained compared with usual care, respectively (see Table 41).
Test | Base case | Sensitivity analysis | ||||
---|---|---|---|---|---|---|
Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | |
Sensitivity analysis 10 – doubled complications in treated strep A to 2.6% | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £6399 | 0.00158 | £4,062,173 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £6199 | 0.00158 | £3,935,182 |
Adult primary care model: side effects of penicillin
Cost-effectiveness estimates were most sensitive to the modelled rate of penicillin-induced anaphylaxis. In the base case, penicillin-induced anaphylaxis was assigned a probability of 0.01% (see Table 23) and a utility decrement of 9 quality-adjusted life-days lost (see Table 24), based on figures reported in the Neuner et al. study,78 with £1744 in treatment costs (Hex et al. 93), reflecting the rare but serious nature of this event. Changing the rate of penicillin-induced anaphylaxis from 0.01% to 0.64%, as reported in Van Howe and Kusnier,79 favoured testing: the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and the Clearview Exact Strep A cassette (Abbott Laboratories) changed from being dominated by usual care in the base case to £288,702 and £299,305 per QALY gained compared with usual care, respectively. When the rate of mild penicillin rash was doubled from 2% to 4%, the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and the Clearview Exact Strep A cassette (Abbott Laboratories) changed from being dominated by usual care in the base case to £3,935,182 and £4,062,173 per QALY gained compared with usual care, respectively (Table 42). A worked illustration of why the anaphylaxis rate is so influential follows Table 42.
Test | Base case | Sensitivity analysis | ||||
---|---|---|---|---|---|---|
Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | |
Sensitivity analysis 16 – doubled rates of mild penicillin reaction (rash) to 4% | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £6399 | 0.00107 | £4,062,173 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £6199 | 0.00107 | £3,935,182 |
Sensitivity analysis 17 – changed rates of anaphylaxis from 0.01% (Neuner et al.78) to 0.64% (Van Howe and Kusnier79) | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £5647 | 0.01887 | £299,305 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £5447 | 0.01887 | £288,702 |
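The sensitivity of these results to the anaphylaxis rate can be seen from a rough expected-value calculation per antibiotic course, using the probabilities, utility decrement and treatment cost stated above (our arithmetic, assuming 365.25 days per year; not taken from the model outputs):

```latex
\[
\begin{aligned}
\text{Base case } (0.01\%):\quad & E[\text{QALY loss}] \approx 0.0001 \times \tfrac{9}{365.25} \approx 0.0000025, \qquad E[\text{cost}] \approx 0.0001 \times \pounds 1744 \approx \pounds 0.17 \\
\text{Scenario } (0.64\%):\quad & E[\text{QALY loss}] \approx 0.0064 \times \tfrac{9}{365.25} \approx 0.00016, \qquad E[\text{cost}] \approx 0.0064 \times \pounds 1744 \approx \pounds 11.16
\end{aligned}
\]
% Avoiding unnecessary antibiotic courses through testing therefore yields a far
% larger expected QALY gain when the anaphylaxis rate is 0.64% than at 0.01%.
```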
Adult primary care model: assume testing within standard general practice consultation time
The base-case analysis assumes that the typical general practice consultation, lasting 9.22 minutes on average,90 is not sufficient to administer and process the tests alongside usual consultation activities. Consequently, 5–12 minutes (depending on the test) of additional clinician time was added when calculating test costs to account for the longer consultation when testing in primary care. Excluding this additional clinician time favoured testing, but only the ICERs for the five NADAL tests fell below £100,000 per QALY gained compared with usual care (Table 43; a worked note follows the table).
Test | Base case | Sensitivity analysis | ||||
---|---|---|---|---|---|---|
Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | Incremental costs per 1000 individuals | Incremental QALYs per 1000 individuals | ICER | |
Sensitivity analysis 22 – assume testing within standard GP time | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £5248 | 0.00388 | £1,353,677 | £171 | 0.00388 | £44,184 |
NADAL Strep A – cassette (nal von minden GmbH) | £5298 | 0.00388 | £1,366,577 | £221 | 0.00388 | £57,085 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £5323 | 0.00388 | £1,373,029 | £246 | 0.00388 | £63,537 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £5272 | 0.00388 | £1,360,126 | £196 | 0.00388 | £50,636 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £5438 | 0.00388 | £1,402,700 | £361 | 0.00388 | £93,211 |
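As a rough reading of Table 43 for the NADAL Strep A test strip (our arithmetic, not reported in the text), essentially all of the change in incremental cost is attributable to the additional clinician time:

```latex
\[
\pounds 5248 - \pounds 171 \;=\; \pounds 5077 \text{ per 1000 individuals}
\;\approx\; \pounds 5.08 \text{ per individual in the modelled cohort.}
\]
```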
Adult primary care model: utility decrement, group A Streptococcus sore throat and related complications
The base-case estimates were sensitive to changes in the disutility associated with strep A sore throat and related complications. Halving the utility decrement associated with untreated strep A favoured testing, and doubling it favoured usual care (Table 44). Doubling the utility decrements for treated strep A infection and for penicillin-induced rash also produced ICERs favourable to testing (the key result changes are presented in Table 44).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 27 – halved the utility decrement, untreated strep A | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £7033 | 0.00667 | £1,054,577 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £6833 | 0.00667 | £1,024,581 |
Sensitivity analysis 28 – doubled utility decrement, untreated strep A | ||||||
Strep A Rapid Test – cassette (Biopanda Reagents) | £6295 | 0.00311 | £2,026,496 | £6295 | –0.0002 | Dominated |
Strep A Rapid Test – test strip (Biopanda Reagents) | £6250 | 0.00311 | £2,012,006 | £6250 | –0.0002 | Dominated |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £5634 | 0.00293 | £1,924,717 | £5634 | –0.0004 | Dominated |
Sensitivity analysis 30 – doubled utility decrement, treated strep A | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £7033 | 0.00879 | £799,685 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £6833 | 0.00879 | £776,939 |
Sensitivity analysis 36 – doubled utility decrement, penicillin-induced rash | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7033 | –0.00396 | Dominated | £7033 | 0.00107 | £6,554,023 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £6833 | –0.00396 | Dominated | £6833 | 0.00107 | £6,367,600 |
Appendix 10 Adult secondary care model: exploratory sensitivity analyses
Adults in secondary care: Centor threshold for starting antibiotics and testing
In the base-case secondary care model, a Centor score of ≥ 3 points was used as an indication for starting antibiotic treatment in the usual-care arm and to initiate testing using a point-of-care test in the intervention arm. Changing this threshold to a Centor score of ≥ 2 points favoured testing and produced ICERs for the NADAL tests ranging from £30,230 to £69,690 per QALY gained compared with usual care (Table 45). Using a threshold of ≥ 1 point also favoured testing: the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and Clearview Exact Strep A cassette (Abbott Laboratories) changed from being dominated by usual care to £1,890,627 and £2,087,056 per QALY gained in comparison with usual care, respectively (see Table 45), and the ICERs for the NADAL tests fell further, to between £22,220 and £56,190 per QALY gained. ICERs for the other tests remained well above £100,000 per QALY gained in these scenario analyses.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 1 – changed Centor threshold from ≥ 3 (base case) to ≥ 2 points | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £307 | 0.01015 | £30,230 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £412 | 0.01015 | £40,614 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £465 | 0.01015 | £45,807 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £359 | 0.01015 | £35,422 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £707 | 0.01015 | £69,690 |
Sensitivity analysis 2 – changed Centor threshold from ≥ 3 (base case) to ≥ 1 point | | | | | |
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £7220 | 0.00346 | £2,087,056 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £6540 | 0.00346 | £1,890,627 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £422 | 0.019 | £22,220 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £592 | 0.019 | £31,159 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £677 | 0.019 | £35,629 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £507 | 0.019 | £26,690 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £1068 | 0.019 | £56,190 |
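The threshold rule varied in these analyses can be stated simply: in the usual-care arm, a Centor score at or above the threshold triggers immediate antibiotics, whereas in the intervention arm the same threshold triggers a point-of-care test and antibiotics follow a positive result. The sketch below illustrates that rule only; the function names are illustrative and the full decision model is not reproduced.

```python
def usual_care_prescribes(centor_score, threshold=3):
    """Usual care: antibiotics offered when the Centor score meets the threshold."""
    return centor_score >= threshold

def intervention_prescribes(centor_score, test_positive, threshold=3):
    """Intervention arm: only patients at or above the threshold are tested,
    and antibiotics follow a positive test result."""
    return centor_score >= threshold and test_positive

# Lowering the threshold from >= 3 to >= 2 or >= 1 brings more patients into testing/treatment.
for threshold in (3, 2, 1):
    eligible = [score for score in range(5) if score >= threshold]  # Centor scores 0-4
    print(f"threshold >= {threshold}: eligible Centor scores = {eligible}")
```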
Adults in secondary care: prevalence of group A Streptococcus
Changing the prevalence of strep A infection in secondary care from the 22.6% base-case value to 35.9% (the upper value reported in studies included in the test accuracy systematic review) was less favourable to testing, with usual care dominating the QuikRead Go Strep A test kit (Orion Diagnostica) and Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) in comparison with the base-case results (Table 46). In contrast, a lower prevalence of disease was more favourable to testing, with the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and Clearview Exact Strep A cassette (Abbott Laboratories) changing from being dominated by usual care to £1,248,775 and £1,377,303 per QALY gained, respectively, in comparison with usual care (see Table 46). ICERs for the NADAL tests decreased to between £20,628 and £53,506 per QALY gained in comparison with usual care. ICERs for the other tests did not change substantially enough to alter the direction of the cost-effectiveness results in comparison with usual care.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 4 – changed strep A prevalence from 22.6% to 35.9% | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £246 | 0.00282 | £87,196 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £275 | 0.00282 | £97,522 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £2120 | –0.00241 | Dominated |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £711 | –0.00055 | Dominated |
Sensitivity analysis 5 – changed strep A prevalence from 22.6% to 10% | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £1809 | 0.00131 | £1,377,303 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £1640 | 0.00131 | £1,248,775 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £101 | 0.00488 | £20,628 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £143 | 0.00488 | £29,280 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £164 | 0.00488 | £33,606 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £122 | 0.00488 | £24,954 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £261 | 0.00488 | £53,506 |
Adults in secondary care: complication rates
In the base-case analysis, strep A-related complication rates were set to 1.5% for untreated infection and 1.3% for treated infection, based on UK primary care data published by Little et al.86 Halving the complication rate in the treated group to 0.65% and doubling the rate in the untreated group to 3% were less favourable to testing, with usual care dominating the QuikRead Go Strep A test kit (Orion Diagnostica) and Alere TestPack +Plus Strep A – cassette (Abbott Laboratories). In contrast, doubling the complication rate in the treated group to 2.6% and halving the rate in the untreated group to 0.75% favoured testing. The ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) and Clearview Exact Strep A cassette (Abbott Laboratories) changed from being dominated by usual care in the base case to £712,813 and £839,805 per QALY gained, respectively (Table 47). ICERs for the NADAL tests ranged between £31,184 and £83,041 per QALY gained in the scenarios that favoured testing. ICERs for all other tests were much lower than the base-case estimates but still remained well above £100,000 per QALY gained in comparison with usual care.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 9 – halved complications in treated infection to 0.65% | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £191 | 0.0037 | £51,597 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £241 | 0.0037 | £65,100 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £266 | 0.0037 | £71,853 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £216 | 0.0037 | £58,350 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £2119 | –0.00097 | Dominated |
Sensitivity analysis 10 – doubled complications in treated infection to 2.6% | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £1323 | 0.00158 | £839,805 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £1123 | 0.00158 | £712,813 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £132 | 0.00422 | £31,184 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £182 | 0.00422 | £43,028 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £207 | 0.00422 | £48,948 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £157 | 0.00422 | £37,104 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £322 | 0.00422 | £76,191 |
Sensitivity analysis 11 – halved complications in untreated infection to 0.75% | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £148 | 0.00408 | £36,415 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £198 | 0.00408 | £48,684 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £223 | 0.00408 | £54,820 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £173 | 0.00408 | £42,551 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £338 | 0.00408 | £83,041 |
Sensitivity analysis 12 – doubled complications in untreated infection to 3% | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £217 | 0.00348 | £62,404 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £267 | 0.00348 | £76,786 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £292 | 0.00348 | £83,978 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £242 | 0.00348 | £69,596 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £2287 | –0.00244 | Dominated |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £795 | –0.00031 | Dominated |
Adults in secondary care: adverse effects of penicillin
Cost-effectiveness estimates were most sensitive to the adverse effects of penicillin. Halving the probability of a mild/uncomplicated side effect of penicillin (rash) to 1.0% favoured usual care, and doubling it favoured testing (Table 48): the Clearview Exact Strep A cassette (Abbott Laboratories) and Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) were no longer dominated by usual care under this scenario, and ICERs for the NADAL tests ranged between £8913 and £32,557 per QALY gained compared with usual care. In the base case, penicillin-induced anaphylaxis was set to a 0.01% probability (see Table 23) and a utility decrement of 9 quality-adjusted life-days lost (see Table 24) based on figures reported in Neuner et al.,78 with £1744 in treatment costs (Hex et al.93). Changing the rate of penicillin-induced anaphylaxis from 0.01% to 0.64%, as reported in Van Howe and Kusnier,79 favoured testing, with the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) and all five NADAL tests dominating usual care (see Table 48). ICERs for the remaining tests ranged from £18 per QALY gained for the Strep A Rapid Test – test strip (Biopanda Reagents) to £57,598 per QALY gained for the QuikRead Go Strep A test kit (Orion Diagnostica).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 15 – halved probability of mild penicillin reaction (rash) to 1% | ||||||
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £2034 | –0.00169 | Dominated |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £618 | –0.00046 | Dominated |
Sensitivity analysis 16 – doubled rates of mild penicillin reaction (rash) to 4% | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £1836 | 0.00107 | £1,711,314 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £1636 | 0.00107 | £1,524,891 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £72 | 0.00804 | £8913 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £122 | 0.00804 | £15,136 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £147 | 0.00804 | £18,246 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £97 | 0.00804 | £12,024 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £262 | 0.00804 | £32,557 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £463 | 0.00599 | £77,328 |
Sensitivity analysis 17 – changed penicillin-induced anaphylaxis from 0.01% (Neuner et al.78) to 0.64% (Van Howe and Kusnier79) | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £571 | 0.01887 | £30,270 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £371 | 0.01887 | £19,668 |
Strep A Rapid Test – cassette (Biopanda Reagents) | £1219 | 0.00311 | £392,342 | £45 | 0.02243 | £2024 |
Strep A Rapid Test – test strip (Biopanda Reagents) | £1174 | 0.00311 | £377,852 | £0 | 0.02243 | £18 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | –£975 | 0.02275 | Dominant |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | –£925 | 0.02275 | Dominant |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | –£900 | 0.02275 | Dominant |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | –£950 | 0.02275 | Dominant |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | –£785 | 0.02275 | Dominant |
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £973 | 0.0169 | £57,598 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | –£618 | 0.0212 | Dominant |
Xpert Xpress Strep A (Cepheid) | £1994 | 0.00395 | £504,287 | £903 | 0.02192 | £41,202 |
Note that, of the tests with ICERs in the region of £30,000 per QALY, only the Alere TestPack Plus and QuikRead Go tests used test accuracy data from published peer-reviewed studies. See Table 15 for more information.
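A back-of-the-envelope calculation shows why this parameter moves the results so strongly. Using the figures quoted above (a 0.01% or 0.64% probability per antibiotic course, 9 quality-adjusted life-days lost and £1744 in treatment costs per event), the expected anaphylaxis burden per 1000 antibiotic courses is negligible in the base case but amounts to roughly 0.16 QALYs and about £11,000 at the higher rate. The sketch below is illustrative only; the per-1000-courses framing is ours, not the report's.

```python
DAYS_PER_YEAR = 365.25
QALY_LOSS_PER_EVENT = 9 / DAYS_PER_YEAR   # 9 quality-adjusted life-days lost per event
COST_PER_EVENT = 1744.0                   # anaphylaxis treatment cost in pounds

def anaphylaxis_burden(probability, antibiotic_courses=1000):
    """Expected events, QALY loss and cost per given number of antibiotic courses."""
    events = probability * antibiotic_courses
    return events, events * QALY_LOSS_PER_EVENT, events * COST_PER_EVENT

for p in (0.0001, 0.0064):   # base case vs. Van Howe and Kusnier
    events, qalys, cost = anaphylaxis_burden(p)
    print(f"p = {p:.4f}: {events:.1f} events, {qalys:.3f} QALYs lost, £{cost:,.0f}")
```

Because the testing strategies avert some antibiotic courses relative to usual care, a larger per-course anaphylaxis burden widens the cost and QALY differences in favour of testing, which is consistent with several tests dominating usual care in this scenario.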
Adults in secondary care: cost of testing in secondary care
In the base case, the cost of confirmatory throat culture following a negative test result was applied to 6 of the 14 tests considered in the analyses [Clearview Exact Strep A cassette (Abbott Laboratories), Clearview Exact Strep A dipstick – test strip (Abbott Laboratories), Strep A Rapid Test – cassette (Biopanda Reagents), Strep A Rapid Test – test strip (Biopanda Reagents), QuikRead Go Strep A test kit (Orion Diagnostica) and Xpert Xpress Strep A (Cepheid)]. Excluding confirmatory throat culture favoured testing. The ICERs for the Strep A Rapid Test – cassette and test strip (Biopanda Reagents) fell from £392,342 and £377,852 to £26,452 and £11,963 per QALY gained compared with usual care, respectively (Table 49).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 20 – assume no swab culture in those with a negative test result | ||||||
Strep A Rapid Test – cassette (Biopanda Reagents) | £1219 | 0.00311 | £392,342 | £82 | 0.00311 | £26,452 |
Strep A Rapid Test – test strip (Biopanda Reagents) | £1174 | 0.00311 | £377,852 | £37 | 0.00311 | £11,963 |
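Where confirmatory culture is applied, the expected per-patient cost of the testing strategy depends on the probability of a negative point-of-care result, which is the component that sensitivity analysis 20 removes. A minimal sketch with placeholder unit costs (not figures from the report):

```python
def expected_testing_cost(test_cost, prob_negative, culture_cost, confirm_negatives=True):
    """Expected per-patient testing cost when negative results may be sent for culture."""
    culture_component = prob_negative * culture_cost if confirm_negatives else 0.0
    return test_cost + culture_component

# Placeholder inputs: £6 rapid test, 70% of results negative, £10 per throat culture.
print(expected_testing_cost(6.0, 0.70, 10.0))                           # 13.0 with confirmatory culture
print(expected_testing_cost(6.0, 0.70, 10.0, confirm_negatives=False))  # 6.0, as in sensitivity analysis 20
```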
Adults in secondary care: utility decrement, group A Streptococcus sore throat and related complications
The base-case estimates were sensitive to changes in the disutility associated with strep A-related complications (Table 50). Halving the utility decrement associated with treated infection, halving the decrement for penicillin-induced rash, doubling the decrement associated with untreated infection and doubling the decrement for abscess each favoured usual care, producing ICERs that suggested usual care dominated testing (see Table 50 for the specific tests affected). Halving the utility decrement for untreated infection and doubling the decrements for treated infection and penicillin-induced rash all favoured testing. The Clearview Exact Strep A cassette (Abbott Laboratories) and Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) were no longer dominated by usual care when the utility decrement associated with penicillin-induced rash was doubled, and the NADAL tests produced ICERs ranging from £21,309 to £44,953 per QALY gained compared with usual care.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 27 – halved utility decrement, untreated infection | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £1957 | 0.00667 | £293,426 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £1757 | 0.00667 | £263,430 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £171 | 0.00454 | £37,720 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £221 | 0.00454 | £48,734 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £246 | 0.00454 | £54,242 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £196 | 0.00454 | £43,228 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £361 | 0.00454 | £79,575 |
Sensitivity analysis 28 – doubled utility decrement, untreated infection | ||||||
Strep A Rapid Test – cassette (Biopanda Reagents) | £1219 | 0.00311 | £392,342 | £1219 | –0.00022 | Dominated |
Strep A Rapid Test – test strip (Biopanda Reagents) | £1174 | 0.00311 | £377,852 | £1174 | –0.00022 | Dominated |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £171 | 0.00255 | £67,224 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £221 | 0.00255 | £86,852 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £246 | 0.00255 | £96,668 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £196 | 0.00255 | £77,040 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £1990 | –0.00848 | Dominated |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £566 | –0.00495 | Dominated |
Sensitivity analysis 29 – halved utility decrement, treated infection | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £171 | 0.00348 | £49,248 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £221 | 0.00348 | £63,627 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £246 | 0.00348 | £70,818 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £196 | 0.00348 | £56,439 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £1990 | –0.00243 | Dominated |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £566 | –0.0003 | Dominated |
Sensitivity analysis 30 – doubled utility decrement, treated infection | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £1957 | 0.00879 | £222,505 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £1757 | 0.00879 | £199,759 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £171 | 0.00467 | £36,648 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £221 | 0.00467 | £47,349 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £246 | 0.00467 | £52,700 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £196 | 0.00467 | £42,000 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £361 | 0.00467 | £77,313 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £566 | 0.00567 | £99,787 |
Sensitivity analysis 32 – doubled utility decrement, abscess | ||||||
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £1990 | –0.00019 | Dominated |
Sensitivity analysis 35 – halved utility decrement, penicillin-induced rash | ||||||
QuikRead Go Strep A test kit (Orion Diagnostica) | £1990 | 0.00016 | £12,700,432 | £1990 | –0.00169 | Dominated |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £566 | –0.00046 | Dominated |
Sensitivity analysis 36 – doubled utility decrement, penicillin-induced rash | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £1957 | –0.00396 | Dominated | £1957 | 0.00107 | £1,823,596 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1757 | –0.00396 | Dominated | £1757 | 0.00107 | £1,637,173 |
NADAL Strep A – test strip (nal von minden GmbH) | £171 | 0.00388 | £44,184 | £171 | 0.00804 | £21,309 |
NADAL Strep A – cassette (nal von minden GmbH) | £221 | 0.00388 | £57,085 | £221 | 0.00804 | £27,531 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £246 | 0.00388 | £63,537 | £246 | 0.00804 | £30,642 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £196 | 0.00388 | £50,636 | £196 | 0.00804 | £24,420 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £361 | 0.00388 | £93,211 | £361 | 0.00804 | £44,953 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £566 | 0.00169 | £335,358 | £566 | 0.00599 | £94,521 |
Appendix 11 Children’s primary care model: exploratory sensitivity analyses
Children’s primary care model: Centor threshold for starting antibiotics and testing
In the base-case children’s primary care model, a Centor score of ≥ 3 points was used as the cut-off score for starting antibiotic treatment in the usual-care arm and to initiate testing in the intervention arm. Lowering the threshold to a Centor score of ≥ 2 points favoured testing, with the ICER for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) changing from being dominated in the base case to £5,525,377 per QALY gained compared with usual care (Table 51). Lowering the threshold to a Centor score of ≥ 1 point also favoured testing: the ICERs for the QuikRead Go Strep A test kit (Orion Diagnostica) and the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) changed from being dominated in the base case to £7,367,395 and £2,163,678 per QALY gained, respectively, compared with usual care (see Table 51). ICERs for the other tests remained unchanged in comparison with the base-case ICERs.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 1 – changed Centor threshold for starting antibiotics from ≥ 3 to ≥ 2 points | ||||||
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.0008 | Dominated | £12,460 | 0.00226 | £5,525,377 |
Sensitivity analysis 2 – changed Centor threshold for starting antibiotics from ≥ 3 to ≥ 1 point | | | | | |
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £25,379 | 0.00344 | £7,367,395 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.0008 | Dominated | £19,273 | 0.00891 | £2,163,678 |
Children’s primary care model: prevalence of group A Streptococcus
Changing the prevalence of strep A infection among children presenting in primary care from 30.2% (the base-case value) to 40.1% (the upper value reported in studies included in the test accuracy systematic review) had minimal impact on the base-case cost-effectiveness results. Changing the prevalence to 10% favoured testing, but only the ICERs for the Clearview Exact Strep A cassette (Abbott Laboratories), Clearview Exact Strep A dipstick – test strip (Abbott Laboratories), QuikRead Go Strep A test kit (Orion Diagnostica) and Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) changed from being dominated in the base case, to values between £1,319,975 per QALY gained for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) and £4,635,543 per QALY gained for the Clearview Exact Strep A cassette (Abbott Laboratories) compared with usual care (Table 52).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 5 – changed strep A prevalence from 30.2% to 10% (Neuner et al.78) | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7588 | –0.00714 | Dominated | £6088 | 0.00131 | £4,635,543 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £7369 | –0.00714 | Dominated | £5919 | 0.00131 | £4,507,015 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £6328 | 0.00247 | £2,564,058 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.0008 | Dominated | £4707 | 0.00357 | £1,319,975 |
Children’s primary care model: complication rates in treated and untreated group A streptococcal infection
In the base-case analysis, strep A-related complication rates were set to 1.5% for untreated infection and 1.3% for treated infection, based on UK primary care data published by Little et al.86 Doubling the complication rate in the treated group to 2.6% favoured testing and changed the ICERs for the Clearview Exact Strep A cassette (Abbott Laboratories), Clearview Exact Strep A dipstick – test strip (Abbott Laboratories), QuikRead Go Strep A test kit (Orion Diagnostica) and Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) from being dominated to values between £2,412,772 per QALY gained for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) and £26,635,474 per QALY gained for the Clearview Exact Strep A cassette (Abbott Laboratories) compared with usual care (Table 53). Halving the complication rate in the untreated group to 0.75% favoured testing and changed the ICER for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) from being dominated in the base-case analysis to £5,652,302 per QALY gained compared with usual care. The ICERs for all other tests were much lower than the base-case estimates but remained well above £100,000 per QALY gained in comparison with usual care.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 10 – doubled complications in treated strep A infection to 2.6% | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7588 | –0.00714 | Dominated | £6822 | 0.00026 | £26,635,474 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £7369 | –0.00714 | Dominated | £6603 | 0.00026 | £25,780,890 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £7348 | 0.00144 | £5,111,532 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.0008 | Dominated | £5869 | 0.00243 | £2,412,772 |
Sensitivity analysis 11 – halved complications in untreated strep A infection to 0.75% | | | | | |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.0008 | Dominated | £6010 | 0.00106 | £5,652,302 |
Children’s primary care model: side effects of penicillin
Doubling the rate of mild penicillin rash from 2% to 4% favoured testing, with the ICERs for the QuikRead Go Strep A test kit (Orion Diagnostica) and Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) changing from being dominated by usual care in the base case to £6,823,310 and £1,718,859 per QALY gained, respectively (Table 54). Changing the rate of penicillin-induced anaphylaxis from 0.01% to 0.64%, as reported in Van Howe and Kusnier,79 also favoured testing: the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories), Clearview Exact Strep A cassette (Abbott Laboratories), QuikRead Go Strep A test kit (Orion Diagnostica) and Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) changed from being dominated by usual care in the base case to values ranging from £264,313 to £404,873 per QALY gained compared with usual care (see Table 54).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 16 – doubled rates of mild penicillin reaction (rash) to 4% | ||||||
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £7724 | 0.00113 | £6,823,310 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.00080 | Dominated | £6100 | 0.00355 | £1,718,859 |
Sensitivity analysis 17 – changed rates of anaphylaxis from 0.01% (Neuner et al.78) to 0.64% (Van Howe and Kusnier79) | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7588 | –0.00714 | Dominated | £6211 | 0.01554 | £399,674 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £7369 | –0.00714 | Dominated | £5992 | 0.01554 | £385,589 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £6638 | 0.01640 | £404,873 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.00080 | Dominated | £5005 | 0.01894 | £264,313 |
Note that, of the tests with ICERs in the region of £30,000 per QALY, only the Alere TestPack Plus used test accuracy data from published peer-reviewed studies. See Table 15 for more information.
Children’s primary care model: utility decrement, group A Streptococcus sore throat and related complications
As in the adult primary and secondary care models, halving the utility decrement associated with untreated strep A, doubling the utility decrement for treated strep A and doubling the utility decrement for penicillin-induced rash all favoured testing, whereas doubling the decrement associated with untreated infection favoured usual care (Table 55).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 27 – halved the utility decrement, untreated strep A | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7588 | –0.00714 | Dominated | £7588 | 0.00706 | £1,074,366 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £7369 | –0.00714 | Dominated | £7369 | 0.00706 | £1,043,375 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £7827 | 0.00569 | £1,375,142 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.00080 | Dominated | £6204 | 0.00541 | £1,146,652 |
Sensitivity analysis 28 – doubled utility decrement, untreated strep A | ||||||
Strep A Rapid Test – cassette (Biopanda Reagents) | £6715 | 0.00224 | £2,992,743 | £6715 | –0.00219 | Dominated |
Strep A Rapid Test – test strip (Biopanda Reagents) | £6665 | 0.00224 | £2,970,792 | £6665 | –0.00219 | Dominated |
Sensitivity analysis 30 – doubled utility decrement, treated strep A | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £7588 | –0.00714 | Dominated | £7588 | 0.00990 | £766,212 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £7369 | –0.00714 | Dominated | £7369 | 0.00990 | £744,109 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £7827 | 0.00747 | £1,048,198 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.00080 | Dominated | £6204 | 0.00665 | £932,465 |
Sensitivity analysis 36 – doubled utility decrement, penicillin-induced rash | ||||||
QuikRead Go Strep A test kit (Orion Diagnostica) | £7827 | –0.00318 | Dominated | £7827 | 0.00113 | £6,914,611 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.0008 | Dominated | £6204 | 0.00355 | £1,748,223 |
Children’s primary care model: lower and upper estimates of the accuracy for the clinical score and test
Changing the test accuracy data from the central estimates of test sensitivity and specificity to the lower confidence limits for all tests and the Centor score favoured testing, but only the ICER for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) changed from being dominated by usual care under the base-case assumptions, to £13,737,541 per QALY gained compared with usual care (Table 56). The upper limits of test sensitivity and specificity favoured testing (results not presented), but none of the ICERs changed substantially enough to suggest a different interpretation of the base-case cost-effectiveness results.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 39 – lower confidence limits of test accuracy | ||||||
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £6204 | –0.0008 | Dominated | £6987 | 0.00051 | £13,737,541 |
Appendix 12 Children’s secondary care model: exploratory sensitivity analyses
Children’s secondary care model: Centor threshold for starting antibiotics and testing
In the base-case model for children treated in secondary care, a threshold of a Centor score of ≥ 3 points plus clinical assessment was used as the basis for immediate antibiotic treatment in the usual-care arm and to initiate testing in the intervention arm. Changing this threshold to a Centor score of ≥ 2 points had minimal impact on the base-case cost-effectiveness of all tests included in the analysis [except the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories)]. Using a threshold of a Centor score of ≥ 1 point favoured testing and changed the ICERs for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) and the QuikRead Go Strep A test kit (Orion Diagnostica) from being dominated in the base case to £205,449 per QALY gained and £2,303,715 per QALY gained compared with usual care, respectively (Table 57).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 1 – changed Centor threshold for starting antibiotics from ≥ 3 (base case) to ≥ 2 points | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £371 | 0.00879 | £42,226 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £482 | 0.00879 | £54,800 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £537 | 0.00879 | £61,086 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £426 | 0.00879 | £48,513 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | £791 | 0.00879 | £90,007 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £1241 | 0.00226 | £550,135 |
Sensitivity analysis 2 – changed Centor threshold for starting antibiotics from ≥ 3 (base case) to ≥ 1 point | | | | | |
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £495 | 0.01670 | £29,604 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £666 | 0.01670 | £39,891 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £752 | 0.01670 | £45,035 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £580 | 0.01670 | £34,748 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | £1148 | 0.01670 | £68,697 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £7936 | 0.00344 | £2,303,715 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £1830 | 0.00891 | £205,449 |
Children’s secondary care model: prevalence of group A Streptococcus
Changing the prevalence of strep A infection among children presenting in secondary care from 30.2%24 to 40.1%48 (the upper value reported in studies included in the test accuracy systematic review) had minimal impact on the base-case ICERs in the children’s secondary care model. In contrast, a lower prevalence of disease of 10% was more favourable to testing (Table 58), with ICERs ranging from £20,575 per QALY gained for the NADAL Strep A – test strip (nal von minden GmbH) to £1,374,151 per QALY gained for the Clearview Exact Strep A cassette (Abbott Laboratories) compared with usual care. ICERs for all other tests did not change substantially enough to alter the direction of the base-case cost-effectiveness estimates.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 5 – changed strep A prevalence from 30.2% to 10% | | | | | |
Clearview Exact Strep A cassette (Abbott Laboratories) | £2034 | –0.00714 | Dominated | £1805 | 0.00131 | £1,374,151 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1815 | –0.00714 | Dominated | £1636 | 0.00131 | £1,245,623 |
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £100 | 0.00488 | £20,575 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £143 | 0.00488 | £29,227 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £164 | 0.00488 | £33,553 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £121 | 0.00488 | £24,901 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | £261 | 0.00488 | £53,453 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £2045 | 0.00247 | £828,590 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £424 | 0.00357 | £118,942 |
Children’s secondary care model: complication rates
Halving the complication rate in the untreated group to 0.75% was more favourable to testing for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories), whose ICER changed from being dominated by usual care in the base case to £428,791 per QALY gained (Table 59). Doubling the complication rate in the treated group to 2.6% also favoured testing; the ICERs for the Clearview Exact Strep A dipstick – test strip (Abbott Laboratories), Clearview Exact Strep A cassette (Abbott Laboratories), Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) and QuikRead Go Strep A test kit (Orion Diagnostica) changed from being dominated by usual care to between £129,172 [Alere TestPack +Plus Strep A – cassette (Abbott Laboratories)] and £4,949,827 [Clearview Exact Strep A cassette (Abbott Laboratories)] per QALY gained compared with usual care (see Table 59). ICERs for all other tests were much lower than the base-case estimates but still remained well above £100,000 per QALY gained in comparison with usual care.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 10 – doubled complications in treated infection to 2.6% | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £2034 | –0.00714 | Dominated | £1268 | 0.00026 | £4,949,827 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1815 | –0.00714 | Dominated | £1049 | 0.00026 | £4,095,204 |
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £165 | 0.00374 | £44,246 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £220 | 0.00374 | £58,899 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £247 | 0.00374 | £66,225 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £193 | 0.00374 | £51,574 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | £373 | 0.00374 | £99,924 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £1794 | 0.00144 | £1,247,882 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £314 | 0.00243 | £129,172 |
Sensitivity analysis 11 – halved complications in untreated infection to 0.75% | ||||||
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £456 | 0.00106 | £428,791 |
Children’s secondary care model: adverse effects of penicillin
Cost-effectiveness estimates were most sensitive to the adverse effects of penicillin. Halving the probability of a mild/uncomplicated penicillin reaction (rash) to 1.0% favoured usual care (results not shown here), and doubling it favoured testing (Table 60). Changing the rate of penicillin-induced anaphylaxis from 0.01% to 0.64% favoured testing, with the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) and the NADAL tests all dominating usual care. The Clearview Exact Strep A dipstick – test strip and cassette supplied by Abbott Laboratories produced ICERs of £28,181 and £42,266 per QALY gained, and the Strep A Rapid Test – test strip and cassette supplied by Biopanda Reagents produced ICERs of £1643 and £4105 per QALY gained, compared with usual care (see Table 60). The Xpert Xpress Strep A (Cepheid) and QuikRead Go Strep A test kit (Orion Diagnostica) produced ICERs of £51,637 and £66,111 per QALY gained, respectively. ICERs for the Alere i Strep A 2 (Abbott Laboratories) and cobas Liat Strep A Assay (Roche Diagnostics) remained above £100,000 per QALY gained compared with usual care (not displayed in Table 60).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 16 – doubled rates of mild penicillin reaction (rash) to 4% | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £123 | 0.00705 | £17,378 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £177 | 0.00705 | £25,134 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £205 | 0.00705 | £29,013 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £150 | 0.00705 | £21,256 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | £331 | 0.00705 | £46,855 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £2169 | 0.00113 | £1,916,392 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £545 | 0.00355 | £153,598 |
Sensitivity analysis 17 – changed penicillin-induced anaphylaxis from 0.01% (Neuner et al.78) to 0.64% (Van Howe and Kusnier79) | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £2034 | –0.00714 | Dominated | £657 | 0.01554 | £42,266 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1815 | –0.00714 | Dominated | £438 | 0.01554 | £28,181 |
Strep A Rapid Test – cassette (Biopanda Reagents) | £1160 | 0.00224 | £517,066 | £82 | 0.02000 | £4105 |
Strep A Rapid Test – test strip (Biopanda Reagents) | £1111 | 0.00224 | £495,115 | £33 | 0.02000 | £1643 |
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | –£828 | 0.02043 | Dominant |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | –£773 | 0.02043 | Dominant |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | –£746 | 0.02043 | Dominant |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | –£801 | 0.02043 | Dominant |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | –£620 | 0.02043 | Dominant |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £1084 | 0.0164 | £66,111 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | –£549 | 0.01894 | Dominant |
Xpert Xpress Strep A (Cepheid) | £2006 | 0.00349 | £574,900 | £1019 | 0.01974 | £51,637 |
Note that, of the tests with ICERs in the region of £30,000 per QALY, only the Alere TestPack Plus used test accuracy data from published peer-reviewed studies. See Table 15 for more information.
Children’s secondary care model: cost of testing in secondary care
Excluding confirmatory throat culture costs following a negative test result favoured testing and generated ICERs ranging from £29,702 per QALY gained for Strep A Rapid Test – test strip (Biopanda Reagents) to £51,653 per QALY gained for the Strep A Rapid Test – cassette (Biopanda Reagents) compared with usual care (Table 61).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 20 – assume no swab culture in those with a negative test result | ||||||
Strep A Rapid Test – cassette (Biopanda Reagents) | £1160 | 0.00224 | £517,066 | £116 | 0.00224 | £51,653 |
Strep A Rapid Test – test strip (Biopanda Reagents) | £1111 | 0.00224 | £495,115 | £67 | 0.00224 | £29,702 |
Children’s secondary care model: utility decrement and group A Streptococcus-related complications
The base-case estimates were sensitive to changes in the disutility associated with strep A-related complications (Table 62). Scenarios that favoured testing included halving the utility decrement for untreated infection, doubling the decrement for treated infection and doubling the decrement associated with a mild penicillin reaction. In these scenarios the Clearview Exact Strep A cassette and dipstick – test strip supplied by Abbott Laboratories were no longer dominated by usual care, and ICERs for the NADAL tests remained under £100,000 per QALY gained compared with usual care. In contrast, doubling the utility decrement for untreated infection was less favourable to testing and resulted in the Strep A Rapid Test – cassette (Biopanda Reagents) and Strep A Rapid Test – test strip (Biopanda Reagents) being dominated by usual care (see Table 62).
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 27 – halved utility decrement, untreated infection | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £2034 | –0.00714 | Dominated | £2034 | 0.00706 | £287,940 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1815 | –0.00714 | Dominated | £1815 | 0.00706 | £256,947 |
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £213 | 0.00416 | £51,228 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £268 | 0.00416 | £64,383 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £295 | 0.00416 | £70,959 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £240 | 0.00416 | £57,804 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £2273 | 0.00569 | £399,281 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £649 | 0.00541 | £120,004 |
Sensitivity analysis 28 – doubled utility decrement, untreated infection | ||||||
Strep A Rapid Test – cassette (Biopanda Reagents) | £1160 | 0.00224 | £517,066 | £1160 | –0.00219 | Dominated |
Strep A Rapid Test – test strip (Biopanda Reagents) | £1111 | 0.00224 | £495,115 | £1111 | –0.00219 | Dominated |
Sensitivity analysis 30 – doubled utility decrement, treated infection | ||||||
Clearview Exact Strep A cassette (Abbott Laboratories) | £2034 | –0.00714 | Dominated | £2034 | 0.00990 | £205,352 |
Clearview Exact Strep A dipstick – test strip (Abbott Laboratories) | £1815 | –0.00714 | Dominated | £1815 | 0.00990 | £183,248 |
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £213 | 0.00434 | £49,131 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £268 | 0.00434 | £61,748 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £295 | 0.00434 | £68,055 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £240 | 0.00434 | £55,438 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | £421 | 0.00434 | £97,068 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £2273 | 0.00747 | £304,351 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £649 | 0.00665 | £97,588 |
Sensitivity analysis 36 – doubled utility decrement, penicillin-induced rash | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £213 | 0.00705 | £30,212 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £268 | 0.00705 | £37,970 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £295 | 0.00705 | £41,848 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £240 | 0.00705 | £34,090 |
NADAL Strep A scan test – cassette (nal von minden GmbH) | £421 | 0.00327 | £128,662 | £421 | 0.00705 | £59,689 |
QuikRead Go Strep A test kit (Orion Diagnostica) | £2273 | –0.00318 | Dominated | £2273 | 0.00113 | £2,007,701 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £649 | 0.00355 | £182,962 |
Children’s secondary care model: lower and upper estimates of the accuracy for the clinical score and test
Changing the test accuracy data from the central estimates of test sensitivity and specificity to the lower confidence limits for all tests and the Centor score favoured testing, but only the ICER for the Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) changed from being dominated by usual care under the base-case assumptions, to £1,356,265 per QALY gained compared with usual care, whereas ICERs for the NADAL tests remained under £100,000 per QALY gained compared with usual care (Table 63). The upper limits of test sensitivity and specificity favoured testing (results not presented), but none of the ICERs changed substantially enough to suggest a different interpretation of the base-case cost-effectiveness results.
Test | Base case: incremental costs per 1000 individuals | Base case: incremental QALYs per 1000 individuals | Base case: ICER | Sensitivity analysis: incremental costs per 1000 individuals | Sensitivity analysis: incremental QALYs per 1000 individuals | Sensitivity analysis: ICER
---|---|---|---|---|---|---
Sensitivity analysis 39 – lower confidence limits of test accuracy | ||||||
NADAL Strep A – test strip (nal von minden GmbH) | £213 | 0.00327 | £65,122 | £208 | 0.00378 | £54,933 |
NADAL Strep A – cassette (nal von minden GmbH) | £268 | 0.00327 | £81,845 | £270 | 0.00378 | £71,352 |
NADAL Strep A plus – cassette (nal von minden GmbH) | £295 | 0.00327 | £90,205 | £301 | 0.00378 | £79,562 |
NADAL Strep A plus – test strip (nal von minden GmbH) | £240 | 0.00327 | £73,482 | £239 | 0.00378 | £63,143 |
Alere TestPack +Plus Strep A – cassette (Abbott Laboratories) | £649 | –0.0008 | Dominated | £690 | 0.00051 | £1,356,265 |
Appendix 13 Additional sensitivity analyses
Sensitivity analysis | Description of sensitivity analysis | Updated input parameter |
---|---|---|
0 | Base case | |
1 | Changed Centor threshold score for starting antibiotics from ≥ 3 (base case) to ≥ 2 points | 2 |
2 | Changed Centor threshold score for starting antibiotics from ≥ 3 (base case) to ≥ 1 point | 1 |
3 | Changed time horizon to 14 days | 14 |
4 | Changed strep A prevalence (adults) from 22.6% to 35.9% (upper value reported in studies included in the test accuracy systematic review) | 0.359 |
5 | Changed strep A prevalence (adults) from 22.6% to 10% (Neuner et al.78) | 0.1 |
6 | Delayed prescription rate set to 27.3% in both arms (RADT group, Little et al.6) | 0.273 |
7 | Delayed prescription rate set to 51% in both arms (clinical score group, Little et al.6) | 0.51 |
8 | Doubled proportion who use delayed antibiotics to 92% | 0.92 |
9 | Halved probability of strep A complications when given antibiotics from 0.013 (Little et al.6) to 0.0065 (analyst assumption) | 0.0065 |
10 | Doubled probability of strep A complications when given antibiotics from 0.013 (Little et al.6) to 0.026 (analyst assumption) | 0.026 |
11 | Halved probability of strep A complications when given no antibiotics from 0.015 (Little et al.6) to 0.0075 (analyst assumption) | 0.0075 |
12 | Doubled probability of strep A complications when given no antibiotics from 0.015 (Little et al.6) to 0.03 (analyst assumption) | 0.03 |
13 | Halved probability of rheumatic fever to 0.00005 | 0.00005 |
14 | Increased probability of rheumatic fever 10-fold to 0.001 | 0.001 |
15 | Halved mild penicillin reaction (rash) to 0.01 | 0.01 |
16 | Doubled mild penicillin reaction (rash) to 0.04 | 0.04 |
17 | Changed probability of anaphylaxis from 0.0001 (Neuner et al.78) to 0.0064 (Van Howe and Kusnier79) | 0.0064 |
18 | Changed cost of antibiotics from £0.74 (BNF,91 15 capsules of amoxicillin 500 mg) to £6.11 (NG51 costing report118) | 6.11 |
19 | Assume that the patient is seen by a practice nurse (£62/hour, PSSRU,90 section 10.1) instead of a doctor | 1.03
20 | Assume no swab culture in those with a negative test result | 0 |
21 | Double the cost of alternative antibiotic in those with penicillin-induced rash to £20 | 20 |
22 | Assume testing within standard GP time | Yes |
23 | Doubled cost of anaphylaxis to £3489.28 | £3489.28 |
24 | Doubled cost of abscess to £3142.56 | £3142.56 |
25 | Doubled cost of acute rheumatic fever to £3544.88 | £3544.88 |
26 | Changed baseline utility from 0.863 (UK norm) to 0.6305 (PRISM study, table 176) | 0.6305 |
27 | Halved utility decrement, untreated strep A | 0.125 |
28 | Doubled utility decrement, untreated strep A | 0.5 |
29 | Halved utility decrement, treated strep A | 0.075 |
30 | Doubled utility decrement, treated strep A | 0.3 |
31 | Halved utility decrement, strep A-related abscess | 2.5 |
32 | Doubled utility decrement, strep A-related abscess | 10 |
33 | Halved utility decrement, acute rheumatic fever | 38.25 |
34 | Doubled utility decrement, acute rheumatic fever | 153 |
35 | Halved utility decrement, penicillin-induced rash | 0.3125 |
36 | Doubled utility decrement, penicillin-induced rash | 1.25 |
37 | Halved utility decrement, strep A-related sepsis | 4.5 |
38 | Doubled utility decrement, strep A-related sepsis | 18 |
39 | Lower confidence limits of test accuracy | |
40 | Upper confidence limits of test accuracy |
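The analyses listed above are one-way (deterministic) sensitivity analyses: one input is changed at a time while all other parameters are held at their base-case values, and the model outputs are recalculated. The sketch below illustrates that mechanic in Python under stated assumptions: the decision-model function is a hypothetical placeholder rather than the report's model, and the scenario labels simply mirror a few rows of the table (analyses 4, 5 and 12).

```python
# Illustrative one-way sensitivity analysis loop. The parameter values below
# are taken from the table above, but run_decision_model() is a hypothetical
# placeholder, not the report's decision-tree model.
from copy import deepcopy

BASE_CASE = {
    "strep_a_prevalence_adults": 0.226,       # 22.6%, base case
    "p_complication_no_antibiotics": 0.015,
    "p_complication_antibiotics": 0.013,
}

# Each scenario overrides exactly one parameter (mirroring analyses 4, 5 and 12).
SCENARIOS = {
    "SA4: adult strep A prevalence 35.9%": {"strep_a_prevalence_adults": 0.359},
    "SA5: adult strep A prevalence 10%": {"strep_a_prevalence_adults": 0.10},
    "SA12: complication risk without antibiotics doubled": {"p_complication_no_antibiotics": 0.03},
}

def run_decision_model(params: dict) -> tuple:
    """Placeholder: would return (incremental cost, incremental QALYs)
    per 1000 individuals for testing versus usual care."""
    raise NotImplementedError("substitute the actual decision-tree model here")

for label, override in SCENARIOS.items():
    params = deepcopy(BASE_CASE)
    params.update(override)   # vary one input; hold the rest at base-case values
    print(label, "->", params)
    # incremental_cost, incremental_qalys = run_decision_model(params)
    # ...then recompute the ICER exactly as for the base case
```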
Appendix 14 Summary of manufacturers’ information
Biopanda Reagents
- Checklist of confidential information.
- Product insert: Strep A Rapid Test RAPG-STRA-001.
- Declaration of conformity DOCSTRA1826.
- Response to request for information.
Cepheid
- Package insert: Xpert Xpress Strep A XPRSTREPA-CE-10.
- CE declaration of conformity.
- The GeneXpert System. CE-IVD test menu 2.
- The GeneXpert System. CE-IVD test menu.
- Ferrieri et al. 85
- Matthys et al. 119
- Response to request for information.
- Xpert Xpress Strep A brochure CEIVD 3106-01.A.
- Xpert Xpress Strep A datasheet CEIVD 3105-01.
nal von minden GmbH
- Gazzano et al. 120
NADAL Strep A
- EC-declaration of conformity for product number 221002A – signed 30 January 2017.
- EC-declaration of conformity for product number 221002A – signed 9 February 2017.
- EC-declaration of conformity for product number 222008 – signed 28 July 2017.
- Instructions for use for NADAL Strep A Test (test strip), reference 221001A, version 2.2, 11 August 2017.
- Instructions for use for NADAL Strep A Test (test cassette), reference 222001A, version 2.3, 24 October 2017.
- Checklist of confidential information. For test strip.
- Checklist of confidential information. For cassette.
- Response to request for information NADAL Strep A cassette.
- Response to request for information NADAL Strep A test strip.
NADAL Strep A plus
- EC-declaration of conformity. Product number 221050N-50.
- Instructions for use for NADAL Strep A plus Test (test strip) 221050N-50.
- Instructions for use for NADAL Strep A plus Test (test cassette) 222007.
- Instructions for use for NADAL Strep A plus Test (test cassette) 222008.
- Checklist of confidential information. For test strip.
- Checklist of confidential information. For test cassette.
- Response to request for information NADAL Strep A+ cassette.
- Response to request for information NADAL Strep A+ test strip.
NADAL Strep A Scan
- EC-declaration of conformity. Product number 222049NBUL-20.
- Instructions for use for NADAL Strep A scan test (test cassette) 222049NBUL-20.
- Checklist of confidential information. For NADAL Strep A scan (cassette).
- Response to request for information NADAL Strep A scan (cassette).
Orion Diagnostica
- Shallcross and Davies. 121
- Checklist of confidential information. 10122018.
- Clinical impact of rapid POC test for acute sore throat poster ECCMID 2016. URL: www.oriondiagnostica.com/globalassets/documents-and-materials/quikread-go/quikread-go-strep-a/9031_clinical_impact_of_rapid_poc_tests_for_accute_sore_throat_eccmid_2016_a3_web.pdf (accessed 17 April 2019).
- Response to request for information.
- Declaration of conformity for QuikRead Go Strep A System and QuikRead Go Strep A cat. no 135883.
- Instructions for use QuikRead Go Strep A. 136262-3.
- Poster ESPID 2013.
- QuikRead Go Strep A – an evaluation of performance in comparison with Alere TestPack+Plus with OBC, by Oulun Työterveys laboratory.
- Evaluation of QuikRead Go Strep A test regarding the detection level of Streptococcus pyogenes, by Pia Karlsson at Microbiology laboratory of Medicinsk Diagnostik, Jönköping, Sweden.
- Stefaniuk et al. 52
- The report from Scandinavian evaluation of laboratory equipment for primary health care (SKUP) on QuikRead Go Strep A.
Roche Diagnostics
- Declaration of conformity DOC-2017-38.
- cobas Strep A – nucleic acid test for use on the cobas Liat system – package insert.
- Response to request for information.
- Checklist of confidential information.
List of abbreviations
- AMR: antimicrobial resistance
- CEAC: cost-effectiveness acceptability curve
- CENTRAL: Cochrane Central Register of Controlled Trials
- CHEERS: Consolidated Health Economic Evaluation Reporting Standards
- CI: confidence interval
- COBA: colistin and oxolinic acid
- CRD: Centre for Reviews and Dissemination
- DARE: Database of Abstracts of Reviews of Effects
- EAG: External Assessment Group
- ECCMID: European Congress of Clinical Microbiology and Infectious Diseases
- EQ-5D: EuroQol-5 Dimensions
- ESCMID: European Society of Clinical Microbiology and Infectious Diseases
- FDA: Food and Drug Administration
- FIA: fluorescent immunoassay
- GP: general practitioner
- HTA: Health Technology Assessment
- ICER: incremental cost-effectiveness ratio
- INAHTA: International Network of Agencies for Health Technology Assessment
- JBI: Joanna Briggs Institute
- MeSH: medical subject heading
- NG: NICE Guidance
- NHS EED: NHS Economic Evaluation Database
- NICE: National Institute for Health and Care Excellence
- NPV: negative predictive value
- PCR: polymerase chain reaction
- PHE: Public Health England
- PPV: positive predictive value
- PRISM: PRImary care Streptococcal Management
- PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
- PSA: probabilistic sensitivity analysis
- QALY: quality-adjusted life-year
- QUADAS-2: Quality Assessment of Diagnostic Accuracy Studies – 2
- RADT: rapid antigen detection test
- RCT: randomised controlled trial
- RePEc: Research Papers in Economics
- RTI: respiratory tract infection
- ScHARRHUD: School of Health and Related Research Health Utilities Database
- SE: standard error
- STDR: sore throat decision rules
- strep A: group A Streptococcus
- strep C: group C Streptococcus
- strep G: group G Streptococcus