27 results for Technology Assessment, Biomedical
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Military decision makers need to understand and assess the benefits and consequences of their decisions in order to make cost-efficient, timely, and successful choices. Technology selection is one such critical decision, especially when considering the design or retrofit of a complex system such as an aircraft. An integrated and systematic methodology that supports decision-making between technology alternatives and options, while assessing the consequences of such decisions, is a key enabler. This paper presents and demonstrates, through application to a notional medium-range short takeoff and landing (STOL) aircraft, one such enabler: the Technology Impact Forecasting (TIF) method. The goal of the TIF process is to explore both generic, undefined areas of technology and specific technologies, and to assess their potential impacts. This is achieved through the development and use of technology scenarios, which allow the designer to determine where to allocate resources for further technology definition and refinement, and which also provide useful design information. The paper particularly discusses the use of technology scenarios and demonstrates their use in the exploration of seven technologies of varying technology readiness levels.
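The abstract gives no implementation detail, but the scenario-driven exploration it describes can be sketched as a small Monte Carlo study in which each technology scenario perturbs system metrics through uncertain multiplicative impact factors ("k-factors" in the TIF literature). Everything below (the Breguet-style range surrogate, the scenario names, the factor bounds) is an invented toy for illustration, not the paper's model:

```python
# Illustrative sketch only: the response function, k-factor ranges, and
# scenario definitions below are invented and are not taken from the paper.
import math
import random

def range_nm(k_sfc: float, k_ld: float) -> float:
    """Toy Breguet-range surrogate for a notional STOL transport."""
    speed_kt, sfc, l_over_d, weight_ratio = 350.0, 0.65, 14.0, 1.25
    return (speed_kt / (sfc * k_sfc)) * (l_over_d * k_ld) * math.log(weight_ratio)

# Each technology scenario is a set of uncertain multiplicative impact
# factors ("k-factors"), expressed as (low, high) bounds.
scenarios = {
    "advanced engine cycle": {"k_sfc": (0.90, 0.98), "k_ld": (1.00, 1.00)},
    "laminar flow wing":     {"k_sfc": (1.00, 1.00), "k_ld": (1.05, 1.12)},
}

baseline = range_nm(1.0, 1.0)
for name, factors in scenarios.items():
    samples = sorted(
        range_nm(**{k: random.uniform(lo, hi) for k, (lo, hi) in factors.items()})
        for _ in range(5000)
    )
    p5, p95 = samples[249], samples[4749]   # empirical 5th/95th percentiles
    print(f"{name}: range {p5:.0f}-{p95:.0f} nm (baseline {baseline:.0f} nm)")
```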
Abstract:
Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmbH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi, and detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility, based on widespread evidence of deficiencies in study design and reporting, with a high risk of bias.
Objective: To determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, against standard microbiological culture.
Design: Prospective multicentre Phase III clinical diagnostic accuracy study using the standards for the reporting of diagnostic accuracy studies criteria.
Setting: Critical care departments within NHS hospitals in the north-west of England.
Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation.
Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard.
Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy.
Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improving the diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction and the capacity to detect a much broader range of pathogens and drug resistance genes, and the application of new statistical approaches able to assess test performance more reliably in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
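The post-test probabilities quoted above follow from standard Bayes arithmetic linking prevalence, sensitivity and specificity through likelihood ratios. A minimal sketch using the rounded point estimates reported in the abstract (confidence intervals omitted) reproduces the 26.3% and 5.6% figures:

```python
# Post-test probabilities from sensitivity, specificity, and prevalence,
# using the rounded point estimates quoted in the abstract above.
sens, spec, prev = 0.50, 0.858, 0.092

lr_pos = sens / (1 - spec)                 # positive likelihood ratio
lr_neg = (1 - sens) / spec                 # negative likelihood ratio

pre_odds = prev / (1 - prev)               # pre-test odds of infection
post_pos = (pre_odds * lr_pos) / (1 + pre_odds * lr_pos)   # P(infection | test+)
post_neg = (pre_odds * lr_neg) / (1 + pre_odds * lr_neg)   # P(infection | test-)

print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
print(f"Post-test probability, positive test: {post_pos:.1%}")   # ~26.3%
print(f"Post-test probability, negative test: {post_neg:.1%}")   # ~5.6%
```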
Abstract:
Objective: Within the framework of a health technology assessment and using an economic model, to determine the most clinically effective and cost-effective policy of scanning and screening for fetal abnormalities in early pregnancy.
Design: A discrete event simulation model of 50,000 singleton pregnancies.
Setting: Maternity services in Scotland.
Population: Women during the first 24 weeks of their pregnancy.
Methods: The mathematical model was populated with data on uptake of screening, prevalence, detection and false positive rates for eight fetal abnormalities, and with costs for ultrasound scanning and serum screening. Inclusion of abnormalities was based on the relative prevalence and clinical importance of conditions and the availability of data. Six strategies for the prenatal identification of abnormalities, including combinations of first- and second-trimester ultrasound scanning and first- and second-trimester screening for chromosomal abnormalities, were compared.
Main outcome measures: The number of abnormalities detected and missed, the number of iatrogenic losses resulting from invasive tests, the total cost of strategies and the cost per abnormality detected were compared between strategies.
Results: First-trimester screening for chromosomal abnormalities costs more than second-trimester screening but results in fewer iatrogenic losses. Strategies which include a second-trimester ultrasound scan result in more abnormalities being detected and have lower costs per anomaly detected.
Conclusions: The preferred strategy includes both first- and second-trimester ultrasound scans and a first-trimester screening test for chromosomal abnormalities. It has been recommended that this policy be offered to all women in Scotland.
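The headline outputs compared between strategies reduce to expected-value arithmetic over the simulated cohort. A minimal sketch, in which every rate and cost is an invented placeholder rather than the Scottish data used in the study, shows the shape of that calculation:

```python
# Expected-value arithmetic behind the strategy comparison; all rates and
# costs here are invented placeholders, not the study's data.
COHORT = 50_000          # singleton pregnancies, as in the simulation

def evaluate(prevalence, uptake, detection_rate, false_positive_rate,
             loss_rate_invasive, cost_per_woman):
    affected = COHORT * prevalence * uptake
    detected = affected * detection_rate
    missed = affected - detected
    false_pos = COHORT * uptake * (1 - prevalence) * false_positive_rate
    # Invasive confirmatory testing after a positive screen carries a small
    # risk of iatrogenic loss (simplified: every positive is confirmed).
    iatrogenic = (detected + false_pos) * loss_rate_invasive
    total_cost = COHORT * uptake * cost_per_woman
    return detected, missed, iatrogenic, total_cost / detected

strategies = {
    "1st-trimester screen": (0.002, 0.80, 0.85, 0.03, 0.009, 35.0),
    "2nd-trimester screen": (0.002, 0.80, 0.75, 0.05, 0.009, 25.0),
}
for name, params in strategies.items():
    d, m, i, cpd = evaluate(*params)
    print(f"{name}: detected {d:.0f}, missed {m:.0f}, "
          f"iatrogenic losses {i:.1f}, cost per detection £{cpd:,.0f}")
```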
Abstract:
Objectives: The Secondary Prevention of Heart disEase in geneRal practicE (SPHERE) trial has recently reported. This study examines the cost-effectiveness of the SPHERE intervention in both healthcare systems on the island of Ireland. Methods: Incremental cost-effectiveness analysis. A probabilistic model was developed to combine within-trial and beyond-trial impacts of treatment to estimate the lifetime costs and benefits of two secondary prevention strategies: Intervention - tailored practice and patient care plans; and Control - standardized usual care. Results: The intervention strategy resulted in mean cost savings per patient of €512.77 (95 percent confidence interval [CI], -€91.98 to €1086.46) and an increase in mean quality-adjusted life-years (QALYs) per patient of 0.0051 (95 percent CI, -0.0101 to 0.0200), when compared with the control strategy. The probability of the intervention being cost-effective was 94 percent if decision makers are willing to pay €45,000 per additional QALY. Conclusions: Decision makers in both settings must determine whether the level of evidence presented is sufficient to justify the adoption of the SPHERE intervention in clinical practice.
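The 94 percent figure is the kind of output a cost-effectiveness acceptability analysis produces: across probabilistic model runs, count the proportion in which the intervention has positive incremental net monetary benefit at the willingness-to-pay threshold. A minimal sketch, with normal draws standing in for the SPHERE model's probabilistic outputs (the spreads are assumptions; only the means come from the abstract):

```python
# Probability of cost-effectiveness at a willingness-to-pay threshold,
# as computed for a cost-effectiveness acceptability curve (CEAC).
# The normal draws are stand-ins for the SPHERE model's probabilistic
# outputs, centred on the point estimates reported above; the standard
# deviations are assumptions.
import random

WTP = 45_000                     # € per additional QALY
N = 10_000

count = 0
for _ in range(N):
    d_cost = random.gauss(-512.77, 300.0)    # incremental cost (saving => negative)
    d_qaly = random.gauss(0.0051, 0.0077)    # incremental QALYs
    nmb = WTP * d_qaly - d_cost              # incremental net monetary benefit
    if nmb > 0:
        count += 1

print(f"P(cost-effective at €{WTP:,}/QALY) = {count / N:.0%}")
```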
Abstract:
Background: Automated closed loop systems may improve adaptation of the mechanical support to a patient's ventilatory needs and facilitate systematic and early recognition of their ability to breathe spontaneously and the potential for discontinuation of ventilation.
Objectives: To compare the duration of weaning from mechanical ventilation for critically ill ventilated adults and children when managed with automated closed loop systems versus non-automated strategies. Secondary objectives were to determine differences in duration of ventilation, intensive care unit (ICU) and hospital length of stay (LOS), mortality, and adverse events.
Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 2); MEDLINE (OvidSP) (1948 to August 2011); EMBASE (OvidSP) (1980 to August 2011); CINAHL (EBSCOhost) (1982 to August 2011); and the Latin American and Caribbean Health Sciences Literature (LILACS). In addition we received and reviewed auto-alerts for our search strategy in MEDLINE, EMBASE, and CINAHL up to August 2012. Relevant published reviews were sought using the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessment Database (HTA Database). We also searched the Web of Science Proceedings; conference proceedings; trial registration websites; and reference lists of relevant articles.
Selection criteria: We included randomized controlled trials comparing automated closed loop ventilator applications to non-automated weaning strategies, including non-protocolized usual care and protocolized weaning, in patients over four weeks of age receiving invasive mechanical ventilation in an intensive care unit (ICU).
Data collection and analysis: Two authors independently extracted study data and assessed risk of bias. We combined data into forest plots using random-effects modelling. Subgroup and sensitivity analyses were conducted according to a priori criteria.
Main results: Pooled data from 15 eligible trials (14 adult, one paediatric) totalling 1173 participants (1143 adults, 30 children) indicated that automated closed loop systems reduced the geometric mean duration of weaning by 32% (95% CI 19% to 46%, P = 0.002); however, heterogeneity was substantial (I² = 89%, P < 0.00001). Reduced weaning duration was found with mixed or medical ICU populations (43%, 95% CI 8% to 65%, P = 0.02) and Smartcare/PS™ (31%, 95% CI 7% to 49%, P = 0.02), but not in surgical populations or using other systems. Automated closed loop systems reduced the duration of ventilation (17%, 95% CI 8% to 26%) and ICU length of stay (LOS) (11%, 95% CI 0% to 21%). There was no difference in mortality rates or hospital LOS. Overall the quality of evidence was high, with the majority of trials rated as low risk of bias.
Authors' conclusions: Automated closed loop systems may result in reduced duration of weaning, ventilation, and ICU stay. Reductions are more likely to occur in mixed or medical ICU populations. Due to the lack of, or limited, evidence on automated systems other than Smartcare/PS™ and Adaptive Support Ventilation, no conclusions can be drawn regarding their influence on these outcomes. Due to substantial heterogeneity in trials there is a need for an adequately powered, high-quality, multi-centre randomized controlled trial in adults that excludes 'simple to wean' patients. There is a pressing need for further technological development and research in the paediatric population.
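A "32% reduction in geometric mean duration" is obtained by pooling trial-level ratios of geometric means on the log scale and back-transforming. A minimal DerSimonian-Laird sketch with made-up trial ratios (not the review's data) shows the mechanics:

```python
# Random-effects pooling of log-ratios of geometric mean weaning duration
# (DerSimonian-Laird). The trial ratios and standard errors below are made
# up for illustration; a pooled ratio of 0.68 means a 32% reduction.
import math

# (ratio of geometric means, automated/usual care; SE of the log-ratio)
trials = [(0.55, 0.20), (0.80, 0.15), (0.60, 0.25), (0.95, 0.10), (0.50, 0.30)]

y = [math.log(r) for r, _ in trials]          # log-ratios
w = [1 / se**2 for _, se in trials]           # fixed-effect weights

y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
df = len(trials) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                 # between-trial variance
i2 = max(0.0, (q - df) / q)                   # I^2 heterogeneity

w_re = [1 / (se**2 + tau2) for _, se in trials]   # random-effects weights
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

lo, hi = math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re)
print(f"Pooled ratio {math.exp(y_re):.2f} (95% CI {lo:.2f} to {hi:.2f}); "
      f"reduction {1 - math.exp(y_re):.0%}; I^2 = {i2:.0%}")
```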
Abstract:
Background: Nursing homes for older people provide an environment likely to promote the acquisition and spread of meticillin-resistant Staphylococcus aureus (MRSA), putting residents at increased risk of colonisation and infection. It is recognised that infection control strategies are important in preventing and controlling MRSA transmission.
Objectives: The objective of this review was to determine the effects of infection control strategies for preventing the transmission of MRSA in nursing homes for older people.
Search strategy: We searched the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library 2009, Issue 2) and the Cochrane Wounds Group Specialised Register (searched May 29th, 2009). We also searched MEDLINE (from 1950 to May Week 4 2009), Ovid EMBASE (1980 to 2009 Week 21), EBSCO CINAHL (1982 to May Week 4 2009), British Nursing Index (1985 to May 2009), DARE (1992 to May 2009), Web of Science (1981 to May 2009), and the Health Technology Assessment (HTA) website (1988 to May 2009). Research in progress was sought through Current Controlled Trials (www.controlled-trials.com), the Medical Research Council Research portfolio, and HSRProj (current USA projects). SIGLE was also searched in order to identify atypical material which was not accessible through more conventional sources.
Selection criteria: All randomised and controlled clinical trials, controlled before and after studies and interrupted time series studies of infection control interventions in nursing homes for older people were eligible for inclusion.
Data collection and analysis: Two authors independently reviewed the results of the searches.
Main results: Since no studies met the selection criteria, neither a meta-analysis nor a narrative description of studies was possible.
Authors' conclusions: The lack of studies in this field is surprising. Nursing homes for older people provide an environment likely to promote the acquisition and spread of infection, with observational studies repeatedly reporting that being a resident of a nursing home increases the risk of MRSA colonisation. Much of the evidence for recently issued United Kingdom guidelines for the control and prevention of MRSA in health care facilities was generated in the acute care setting. It may not be possible to transfer such strategies directly to the nursing home environment, which serves as both a healthcare setting and a resident's home. Rigorous studies should be conducted in nursing homes to test interventions that have been specifically designed for this unique environment.
Abstract:
Background: There is growing interest in the potential utility of molecular diagnostics in improving the detection of life-threatening infection (sepsis). LightCycler® SeptiFast is a multipathogen probe-based real-time PCR system targeting DNA sequences of bacteria and fungi present in blood samples, with results available within a few hours. We report here the protocol of the first systematic review of published clinical diagnostic accuracy studies of this technology when compared with blood culture in the setting of suspected sepsis.
Methods/design: Data sources: the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA), the NHS Economic Evaluation Database (NHSEED), The Cochrane Library, MEDLINE, EMBASE, ISI Web of Science, BIOSIS Previews, MEDION and the Aggressive Research Intelligence Facility Database (ARIF). Study selection: diagnostic accuracy studies that compare the real-time PCR technology with standard culture results performed on a patient's blood sample during the management of sepsis. Data extraction: three reviewers, working independently, will determine the level of evidence, methodological quality and a standard data set relating to demographics and diagnostic accuracy metrics for each study. Statistical analysis/data synthesis: heterogeneity of studies will be investigated using a coupled forest plot of sensitivity and specificity and a scatter plot in Receiver Operating Characteristic (ROC) space. The bivariate model method will be used to estimate summary sensitivity and specificity. The authors will investigate reporting biases using funnel plots based on effective sample size and regression tests of asymmetry. Subgroup analyses are planned for adults, children and infection setting (hospital vs community) if sufficient data are uncovered.
Dissemination: Recommendations will be made to the Department of Health (as part of an open-access HTA report) as to whether the real-time PCR technology has sufficient clinical diagnostic accuracy potential to move forward to efficacy testing during the provision of routine clinical care.
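The planned synthesis starts from each study's 2x2 table against blood culture: every study contributes a (sensitivity, specificity) pair, inspected jointly in ROC space, and the bivariate model is fitted to the logit-transformed pairs. A minimal sketch of that first step, with hypothetical counts:

```python
# Per-study sensitivity/specificity and their logits, the raw inputs to a
# bivariate meta-analysis of diagnostic accuracy. Counts are hypothetical.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

# (TP, FP, FN, TN) against blood culture for each hypothetical study
studies = [(40, 30, 12, 300), (25, 18, 10, 150), (60, 55, 30, 480)]

for i, (tp, fp, fn, tn) in enumerate(studies, start=1):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # The bivariate model treats (logit sens, logit spec) as a pair drawn
    # from a bivariate normal, preserving their correlation across studies.
    print(f"study {i}: sens {sens:.2f} (logit {logit(sens):+.2f}), "
          f"spec {spec:.2f} (logit {logit(spec):+.2f}), "
          f"ROC point = ({1 - spec:.2f}, {sens:.2f})")
```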
Abstract:
OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.
DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.
SETTING: Primary and secondary care.
PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).
INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.
MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).
RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings by tonometry, true change was detected at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.
LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon, which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.
CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
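The "dominated" and "ICER > £30,000" verdicts above follow mechanically once each pathway has a mean cost and QALY estimate: rank by cost, drop pathways that are costlier yet no more effective than a cheaper one, and compute ICERs between neighbours on the efficiency frontier. A sketch with invented pathway values (not the study's outputs):

```python
# Incremental cost-effectiveness ranking of monitoring pathways.
# Costs and QALYs below are invented placeholders, not the study's outputs.
pathways = {
    "treat all":          (1_200.0, 12.10),
    "community biennial": (1_500.0, 12.13),
    "hospital biennial":  (1_700.0, 12.15),
    "NICE conservative":  (1_900.0, 12.14),
    "NICE intensive":     (2_600.0, 12.16),
}

ranked = sorted(pathways.items(), key=lambda kv: kv[1][0])   # order by cost
frontier = []                                                # (name, cost, qaly)
for name, (cost, qaly) in ranked:
    # Strictly dominated: a cheaper pathway already yields >= QALYs.
    if frontier and qaly <= frontier[-1][2]:
        print(f"{name}: dominated")
        continue
    if frontier:
        prev_name, prev_cost, prev_qaly = frontier[-1]
        icer = (cost - prev_cost) / (qaly - prev_qaly)
        print(f"{name}: ICER vs {prev_name} = £{icer:,.0f}/QALY")
    else:
        print(f"{name}: least costly (reference)")
    frontier.append((name, cost, qaly))
```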
Abstract:
Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research. Data sources: Major electronic databases were searched up to December 2005. Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored highly sensitive electronic searches were undertaken. Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, without treatment the mean time to blindness in at least one eye was approximately 23 years, compared to 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds with a screening interval of 10 years to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age, thus, appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification followed by assessment by a specialised optometrist, for test positives, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed. However, a highly specific test is required to reduce large numbers of false-positive referrals. The findings that population screening is unlikely to be cost-effective are based on an economic model whose parameter estimates have considerable uncertainty; in particular, if rate of progression and/or costs of visual impairment are higher than estimated, then screening could be cost-effective. Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be.
Procedures for identifying those at risk and for quality assuring the programme, as well as adequate service provision for those screened positive, would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination, and improving the performance of current testing by either refining practice or adding in a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing.
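The "mean time to blindness" figures come from cohort models of progression through severity states. A toy Markov sketch, with invented annual transition probabilities chosen only to land near the 23- and 35-year figures above, illustrates how a treatment that slows progression stretches the mean time to blindness:

```python
# Toy Markov cohort model of open angle glaucoma progression
# (mild -> moderate -> severe -> blind). The annual transition
# probabilities are invented, picked only so the means land near the
# 23-year (untreated) and 35-year (treated) figures in the abstract;
# progression accelerates with severity, as the abstract assumes.
def mean_years_to_blindness(p, horizon=400):
    state = [1.0, 0.0, 0.0]      # cohort shares: mild, moderate, severe
    mean = 0.0
    for year in range(1, horizon + 1):
        inflow_blind = state[2] * p[2]          # newly blind this year
        state = [state[0] * (1 - p[0]),
                 state[0] * p[0] + state[1] * (1 - p[1]),
                 state[1] * p[1] + state[2] * (1 - p[2])]
        # 'blind' is absorbing; its share is implicitly 1 - sum(state)
        mean += year * inflow_blind
    return mean

untreated = [0.09, 0.14, 0.22]                 # annual progression probabilities
treated = [prob * 0.65 for prob in untreated]  # treatment slows progression

print(f"mean time to blindness, untreated: {mean_years_to_blindness(untreated):.0f} years")
print(f"mean time to blindness, treated:   {mean_years_to_blindness(treated):.0f} years")
```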
Abstract:
Background: Automated closed loop systems may improve adaptation of mechanical support for a patient's ventilatory needs and facilitate systematic and early recognition of their ability to breathe spontaneously and the potential for discontinuation of ventilation. This review was originally published in 2013, with an update published in 2014.
Objectives: The primary objective for this review was to compare the total duration of weaning from mechanical ventilation, defined as the time from study randomization to successful extubation (as defined by study authors), for critically ill ventilated patients managed with an automated weaning system versus no automated weaning system (usual care). Secondary objectives were to determine differences in the duration of ventilation, intensive care unit (ICU) and hospital lengths of stay (LOS), mortality, and adverse events related to early or delayed extubation with the use of automated weaning systems compared to weaning in the absence of an automated weaning system.
Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 8); MEDLINE (OvidSP) (1948 to September 2013); EMBASE (OvidSP) (1980 to September 2013); CINAHL (EBSCOhost) (1982 to September 2013); and the Latin American and Caribbean Health Sciences Literature (LILACS). Relevant published reviews were sought using the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessment Database (HTA Database). We also searched the Web of Science Proceedings; conference proceedings; trial registration websites; and reference lists of relevant articles. The original search was run in August 2011, with database auto-alerts up to August 2012.
Selection criteria: We included randomized controlled trials comparing automated closed loop ventilator applications to non-automated weaning strategies, including non-protocolized usual care and protocolized weaning, in patients over four weeks of age receiving invasive mechanical ventilation in an ICU.
Data collection and analysis: Two authors independently extracted study data and assessed risk of bias. We combined data in forest plots using random-effects modelling. Subgroup and sensitivity analyses were conducted according to a priori criteria.
Main results: We included 21 trials (19 adult, two paediatric) totalling 1676 participants (1628 adults, 48 children) in this updated review. Pooled data from 16 eligible trials reporting weaning duration indicated that automated closed loop systems reduced the geometric mean duration of weaning by 30% (95% confidence interval (CI) 13% to 45%); however, heterogeneity was substantial (I² = 87%, P < 0.00001). Reduced weaning duration was found with mixed or medical ICU populations (42%, 95% CI 10% to 63%) and Smartcare/PS™ (28%, 95% CI 7% to 49%), but not in surgical populations or using other systems. Automated closed loop systems reduced the duration of ventilation (10%, 95% CI 3% to 16%) and ICU LOS (8%, 95% CI 0% to 15%). There was no strong evidence of an effect on mortality rates, hospital LOS, reintubation rates, self-extubation and use of non-invasive ventilation following extubation. Prolonged mechanical ventilation > 21 days and tracheostomy were reduced in favour of automated systems (relative risk (RR) 0.51, 95% CI 0.27 to 0.95 and RR 0.67, 95% CI 0.50 to 0.90, respectively). Overall the quality of the evidence was high, with the majority of trials rated as low risk of bias.
Authors' conclusions: Automated closed loop systems may result in reduced duration of weaning, ventilation and ICU stay. Reductions are more likely to occur in mixed or medical ICU populations. Due to the lack of, or limited, evidence on automated systems other than Smartcare/PS™ and Adaptive Support Ventilation no conclusions can be drawn regarding their influence on these outcomes. Due to substantial heterogeneity in trials there is a need for an adequately powered, high quality, multi-centre randomized controlled trial in adults that excludes 'simple to wean' patients. There is a pressing need for further technological development and research in the paediatric population.
Abstract:
Topic: To compare the accuracy of optical coherence tomography (OCT) with alternative tests for monitoring neovascular age-related macular degeneration (nAMD) and detecting disease activity among eyes previously treated for this condition.
Clinical Relevance: Traditionally, fundus fluorescein angiography (FFA) has been considered the reference standard to detect nAMD activity, but FFA is costly and invasive. Replacement of FFA by OCT can be justified if there is substantial agreement between tests.
Methods: Systematic review and meta-analysis. The index test was OCT. The comparator tests were visual acuity, clinical evaluation (slit lamp), Amsler chart, color fundus photographs, infrared reflectance, red-free images and blue reflectance, fundus autofluorescence imaging, indocyanine green angiography (ICGA), preferential hyperacuity perimetry, and microperimetry. We searched the following databases: MEDLINE, MEDLINE In-Process, EMBASE, Biosis, Science Citation Index, the Cochrane Library, Database of Abstracts of Reviews of Effects, MEDION, and the Health Technology Assessment database. The last literature search was conducted in March 2013. We used the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) to assess risk of bias.
Results: We included 8 studies involving more than 400 participants. Seven reported the performance of OCT (3 time-domain [TD] OCT, 3 spectral-domain [SD] OCT, 1 both types) and 1 reported the performance of ICGA in the detection of nAMD activity. We did not find studies directly comparing tests in the same population. The pooled sensitivity and specificity of TD OCT and SD OCT for detecting active nAMD were 85% (95% confidence interval [CI], 72%–93%) and 48% (95% CI, 30%–67%), respectively. One study reported ICGA with sensitivity of 75.9% and specificity of 88.0% for the detection of active nAMD. Half of the studies were considered to have a high risk of bias.
Conclusions: There is substantial disagreement between OCT and FFA findings in detecting active disease in patients with nAMD who are being monitored. Both methods may be needed to monitor patients with nAMD comprehensively.
Abstract:
BACKGROUND: Age-related macular degeneration is the most common cause of sight impairment in the UK. In neovascular age-related macular degeneration (nAMD), vision worsens rapidly (over weeks) owing to the development of abnormal blood vessels that leak fluid and blood at the macula.
OBJECTIVES: To determine the optimal role of optical coherence tomography (OCT) in diagnosing people newly presenting with suspected nAMD and monitoring those previously diagnosed with the disease.
DATA SOURCES: Databases searched: MEDLINE (1946 to March 2013), MEDLINE In-Process & Other Non-Indexed Citations (March 2013), EMBASE (1988 to March 2013), Biosciences Information Service (1995 to March 2013), Science Citation Index (1995 to March 2013), The Cochrane Library (Issue 2 2013), Database of Abstracts of Reviews of Effects (inception to March 2013), Medion (inception to March 2013), Health Technology Assessment database (inception to March 2013).
REVIEW METHODS: Types of studies: direct/indirect studies reporting diagnostic outcomes.
INDEX TEST: time domain optical coherence tomography (TD-OCT) or spectral domain optical coherence tomography (SD-OCT).
COMPARATORS: clinical evaluation, visual acuity, Amsler grid, colour fundus photographs, infrared reflectance, red-free images/blue reflectance, fundus autofluorescence imaging, indocyanine green angiography, preferential hyperacuity perimetry, microperimetry. Reference standard: fundus fluorescein angiography (FFA). Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool, version 2 (QUADAS-2). Meta-analysis models were fitted using hierarchical summary receiver operating characteristic curves. A Markov model was developed (65-year-old cohort, nAMD prevalence 70%), with nine strategies for diagnosis and/or monitoring, and cost-utility analysis conducted. An NHS and Personal Social Services perspective was adopted. Costs (2011/12 prices) and quality-adjusted life-years (QALYs) were discounted (3.5%). Deterministic and probabilistic sensitivity analyses were performed.
RESULTS: In pooled estimates of diagnostic studies (all TD-OCT), sensitivity and specificity [95% confidence interval (CI)] were 88% (46% to 98%) and 78% (64% to 88%), respectively. For monitoring, the pooled sensitivity and specificity (95% CI) were 85% (72% to 93%) and 48% (30% to 67%), respectively. The FFA for diagnosis and nurse-technician-led monitoring strategy had the lowest cost (£39,769; QALYs 10.473) and dominated all others except FFA for diagnosis and ophthalmologist-led monitoring (£44,649; QALYs 10.575; incremental cost-effectiveness ratio £47,768). The least costly strategy had a 46.4% probability of being cost-effective at a £30,000 willingness-to-pay threshold.
LIMITATIONS: Very few studies provided sufficient information for inclusion in meta-analyses. Only a few studies reported other tests; for some tests no studies were identified. The modelling was hampered by a lack of data on the diagnostic accuracy of strategies involving several tests.
CONCLUSIONS: Based on a small body of evidence of variable quality, OCT had high sensitivity and moderate specificity for diagnosis, and relatively high sensitivity but low specificity for monitoring. Strategies involving OCT alone for diagnosis and/or monitoring were unlikely to be cost-effective. Further research is required on (i) the performance of SD-OCT compared with FFA, especially for monitoring but also for diagnosis; (ii) the performance of strategies involving combinations/sequences of tests, for diagnosis and monitoring; (iii) the likelihood of active and inactive nAMD becoming inactive or active, respectively; and (iv) assessment of treatment-associated utility weights (e.g. decrements), through a preference-based study.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42012001930.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
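The incremental cost-effectiveness ratio quoted for ophthalmologist-led monitoring is simply the incremental cost divided by the incremental QALYs of the two undominated strategies. Recomputing it from the rounded figures in the results above (the small discrepancy from the published £47,768 reflects rounding in the reported QALYs):

```python
# ICER of FFA diagnosis + ophthalmologist-led monitoring versus the least
# costly strategy, from the rounded figures reported in the abstract.
cost_nurse, qaly_nurse = 39_769.0, 10.473   # nurse-technician-led monitoring
cost_ophth, qaly_ophth = 44_649.0, 10.575   # ophthalmologist-led monitoring

icer = (cost_ophth - cost_nurse) / (qaly_ophth - qaly_nurse)
print(f"ICER = £{icer:,.0f} per QALY")   # ~£47,800; abstract reports £47,768
```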