55 results for Health technology assessment
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmbH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi. It detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility, based on widespread evidence of deficiencies in study design and reporting and a high risk of bias.
Objective: To determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection against standard microbiological culture.
Design: Prospective multicentre Phase III clinical diagnostic accuracy study conducted according to the Standards for the Reporting of Diagnostic Accuracy Studies (STARD) criteria.
Setting: Critical care departments within NHS hospitals in the north-west of England.
Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation.
Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard.
Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy.
Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improved diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction and the capacity to detect a much broader range of pathogens and drug resistance genes, and the application of new statistical approaches able to assess test performance more reliably in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
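The post-test probabilities reported above follow directly from the quoted prevalence, sensitivity and specificity via Bayes' theorem. The short Python sketch below simply reproduces that arithmetic as an illustrative check; it is not part of the study's analysis code.

```python
# Illustrative check of the post-test probabilities quoted in the abstract above.
# Inputs are the figures reported there; the calculation is standard Bayes' theorem.

prevalence = 0.092    # prevalence of BC-proven pathogens (pre-test probability)
sensitivity = 0.50    # SeptiFast vs culture-proven BSI at species/genus level
specificity = 0.858

# Probability of bloodstream infection given a positive SeptiFast result
p_given_positive = (prevalence * sensitivity) / (
    prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
)

# Probability of bloodstream infection given a negative SeptiFast result
p_given_negative = (prevalence * (1 - sensitivity)) / (
    prevalence * (1 - sensitivity) + (1 - prevalence) * specificity
)

print(f"Post-test probability after a positive test: {p_given_positive:.1%}")  # ~26.3%
print(f"Post-test probability after a negative test: {p_given_negative:.1%}")  # ~5.6%
```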
Abstract:
Military decision makers need to understand and assess the benefits and consequences of their decisions in order to make cost-efficient, timely, and successful choices. Technology selection is one such critical decision, especially when considering the design or retrofit of a complex system, such as an aircraft. An integrated and systematic methodology that will support decision-making between technology alternatives and options while assessing the consequences of such decisions is a key enabler. This paper presents and demonstrates, through application to a notional medium-range short takeoff and landing (STOL) aircraft, one such enabler: the Technology Impact Forecasting (TIF) method. The goal of the TIF process is to explore both generic, undefined areas of technology, as well as specific technologies, and assess their potential impacts. This is actualized through the development and use of technology scenarios, and allows the designer to determine where to allocate resources for further technology definition and refinement, as well as to provide useful design information. The paper particularly discusses the use of technology scenarios and demonstrates their use in the exploration of seven technologies of varying technology readiness levels.
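Although the abstract gives no implementation details, a technology-scenario exploration of this kind can be illustrated with a simple Monte Carlo sketch: uncertain technology impact factors are sampled within scenario-specific ranges and propagated through a surrogate response. Everything below (the response function, factor names, ranges and the takeoff-distance metric) is a hypothetical placeholder, not the TIF environment used in the paper.

```python
# Minimal Monte Carlo sketch of scenario-based technology impact exploration.
# All functions, factor ranges, and values are illustrative assumptions.
import random

def takeoff_distance(drag_factor, weight_factor, thrust_factor, baseline=900.0):
    # Hypothetical surrogate: STOL takeoff distance scaled by impact factors.
    return baseline * drag_factor * weight_factor / thrust_factor

def evaluate_scenario(name, factor_ranges, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        k = {f: rng.uniform(lo, hi) for f, (lo, hi) in factor_ranges.items()}
        samples.append(takeoff_distance(k["drag"], k["weight"], k["thrust"]))
    samples.sort()
    return name, samples[len(samples) // 2], samples[int(0.9 * len(samples))]

# Two illustrative technology scenarios with different impact-factor ranges.
scenarios = {
    "baseline":            {"drag": (1.0, 1.0),  "weight": (1.0, 1.0),  "thrust": (1.0, 1.0)},
    "high-lift technology": {"drag": (0.90, 1.05), "weight": (1.0, 1.08), "thrust": (1.0, 1.15)},
}
for name, ranges in scenarios.items():
    label, median, p90 = evaluate_scenario(name, ranges)
    print(f"{label}: median takeoff distance {median:.0f} m, 90th percentile {p90:.0f} m")
```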
Abstract:
Objective: Within the framework of a health technology assessment and using an economic model, to determine the most clinically and cost-effective policy of scanning and screening for fetal abnormalities in early pregnancy. Design: A discrete event simulation model of 50,000 singleton pregnancies. Setting: Maternity services in Scotland. Population: Women during the first 24 weeks of their pregnancy. Methods: The mathematical model was populated with data on uptake of screening, prevalence, detection and false positive rates for eight fetal abnormalities, and with costs for ultrasound scanning and serum screening. Inclusion of abnormalities was based on the relative prevalence and clinical importance of conditions and the availability of data. Six strategies for the prenatal identification of abnormalities, including combinations of first and second trimester ultrasound scanning and first and second trimester screening for chromosomal abnormalities, were compared. Main outcome measures: The number of abnormalities detected and missed, the number of iatrogenic losses resulting from invasive tests, the total cost of strategies and the cost per abnormality detected were compared between strategies. Results: First trimester screening for chromosomal abnormalities costs more than second trimester screening but results in fewer iatrogenic losses. Strategies which include a second trimester ultrasound scan result in more abnormalities being detected and have lower costs per anomaly detected. Conclusions: The preferred strategy includes both first and second trimester ultrasound scans and a first trimester screening test for chromosomal abnormalities. It has been recommended that this policy be offered to all women in Scotland.
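As a rough illustration of the kind of cohort simulation described above, the sketch below samples a population of pregnancies, applies a screening strategy with a given detection rate, and reports detected and missed abnormalities together with cost per abnormality detected. The strategy definitions, detection rates, prevalence and costs are invented placeholders, not values from the study model.

```python
# Minimal sketch of a screening-strategy cohort simulation with made-up parameters.
import random

def simulate(strategy, n=50_000, prevalence=0.02, seed=1):
    rng = random.Random(seed)
    detected = missed = 0
    cost = 0.0
    for _ in range(n):
        cost += strategy["cost_per_woman"]
        if rng.random() < prevalence:                  # pregnancy affected
            if rng.random() < strategy["detection"]:   # abnormality picked up
                detected += 1
            else:
                missed += 1
    return detected, missed, cost, cost / max(detected, 1)

strategies = {  # illustrative strategies only, not those compared in the study
    "second trimester scan only":       {"detection": 0.55, "cost_per_woman": 40.0},
    "first + second trimester package": {"detection": 0.75, "cost_per_woman": 95.0},
}
for name, s in strategies.items():
    d, m, c, cpa = simulate(s)
    print(f"{name}: detected {d}, missed {m}, total cost £{c:,.0f}, £{cpa:,.0f} per abnormality")
```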
Abstract:
Background: Automated closed loop systems may improve adaptation of the mechanical support to a patient's ventilatory needs and facilitate systematic and early recognition of their ability to breathe spontaneously and the potential for discontinuation of ventilation.
Objectives: To compare the duration of weaning from mechanical ventilation for critically ill ventilated adults and children when managed with automated closed loop systems versus non-automated strategies. Secondary objectives were to determine differences in duration of ventilation, intensive care unit (ICU) and hospital length of stay (LOS), mortality, and adverse events.
Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 2); MEDLINE (OvidSP) (1948 to August 2011); EMBASE (OvidSP) (1980 to August 2011); CINAHL (EBSCOhost) (1982 to August 2011); and the Latin American and Caribbean Health Sciences Literature (LILACS). In addition, we received and reviewed auto-alerts for our search strategy in MEDLINE, EMBASE, and CINAHL up to August 2012. Relevant published reviews were sought using the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessment Database (HTA Database). We also searched the Web of Science Proceedings; conference proceedings; trial registration websites; and reference lists of relevant articles.
Selection criteria: We included randomized controlled trials comparing automated closed loop ventilator applications to non-automated weaning strategies, including non-protocolized usual care and protocolized weaning, in patients over four weeks of age receiving invasive mechanical ventilation in an intensive care unit (ICU).
Data collection and analysis: Two authors independently extracted study data and assessed risk of bias. We combined data into forest plots using random-effects modelling. Subgroup and sensitivity analyses were conducted according to a priori criteria.
Main results: Pooled data from 15 eligible trials (14 adult, one paediatric) totalling 1173 participants (1143 adults, 30 children) indicated that automated closed loop systems reduced the geometric mean duration of weaning by 32% (95% CI 19% to 46%, P = 0.002); however, heterogeneity was substantial (I² = 89%, P < 0.00001). Reduced weaning duration was found with mixed or medical ICU populations (43%, 95% CI 8% to 65%, P = 0.02) and Smartcare/PS™ (31%, 95% CI 7% to 49%, P = 0.02) but not in surgical populations or with other systems. Automated closed loop systems reduced the duration of ventilation (17%, 95% CI 8% to 26%) and ICU length of stay (LOS) (11%, 95% CI 0% to 21%). There was no difference in mortality rates or hospital LOS. Overall the quality of evidence was high, with the majority of trials rated as low risk of bias.
Authors' conclusions: Automated closed loop systems may result in reduced duration of weaning, ventilation, and ICU stay. Reductions are more likely to occur in mixed or medical ICU populations. Due to the lack of, or limited, evidence on automated systems other than Smartcare/PS™ and Adaptive Support Ventilation, no conclusions can be drawn regarding their influence on these outcomes. Due to substantial heterogeneity between trials, there is a need for an adequately powered, high-quality, multi-centre randomized controlled trial in adults that excludes 'simple to wean' patients. There is a pressing need for further technological development and research in the paediatric population.
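The headline result above, a 32% reduction in the geometric mean duration of weaning, corresponds to pooling ratios of geometric means on the log scale. The sketch below shows how such a figure can be produced with a DerSimonian-Laird random-effects model; the per-trial ratios and standard errors are invented for illustration and are not the review's data.

```python
# Minimal sketch of random-effects pooling of ratios of geometric means.
# Trial-level inputs below are illustrative assumptions, not the review's data.
import math

# Hypothetical per-trial ratios of geometric mean weaning time (automated / control)
# with standard errors on the log scale.
trials = [(0.60, 0.15), (0.75, 0.12), (0.55, 0.20), (0.90, 0.10)]

log_ratios = [math.log(r) for r, _ in trials]
weights_fixed = [1 / se**2 for _, se in trials]

# DerSimonian-Laird estimate of between-trial variance tau^2
fixed_mean = sum(w * y for w, y in zip(weights_fixed, log_ratios)) / sum(weights_fixed)
q = sum(w * (y - fixed_mean) ** 2 for w, y in zip(weights_fixed, log_ratios))
c = sum(weights_fixed) - sum(w**2 for w in weights_fixed) / sum(weights_fixed)
tau2 = max(0.0, (q - (len(trials) - 1)) / c)

# Random-effects pooled estimate, back-transformed to a percentage reduction
weights_re = [1 / (se**2 + tau2) for _, se in trials]
pooled_log = sum(w * y for w, y in zip(weights_re, log_ratios)) / sum(weights_re)
pooled_se = math.sqrt(1 / sum(weights_re))

reduction = 1 - math.exp(pooled_log)
lo, hi = (1 - math.exp(pooled_log + 1.96 * pooled_se),
          1 - math.exp(pooled_log - 1.96 * pooled_se))
i2 = max(0.0, (q - (len(trials) - 1)) / q) if q > 0 else 0.0
print(f"Pooled reduction in geometric mean weaning duration: {reduction:.0%} "
      f"(95% CI {lo:.0%} to {hi:.0%}); Q = {q:.1f}, I² = {i2:.0%}")
```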
Abstract:
Background: Nursing homes for older people provide an environment likely to promote the acquisition and spread of meticillin-resistant Staphylococcus aureus (MRSA), putting residents at increased risk of colonisation and infection. It is recognised that infection control strategies are important in preventing and controlling MRSA transmission.
Objectives: The objective of this review was to determine the effects of infection control strategies for preventing the transmission of MRSA in nursing homes for older people.
Search strategy: We searched the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library 2009, Issue 2), the Cochrane Wounds Group Specialised Register (searched May 29th, 2009). We also searched MEDLINE (from 1950 to May Week 4 2009), Ovid EMBASE (1980 to 2009 Week 21), EBSCO CINAHL (1982 to May Week 4 2009), British Nursing Index (1985 to May 2009), DARE (1992 to May 2009), Web of Science (1981 to May 2009), and the Health Technology Assessment (HTA) website (1988 to May 2009). Research in progress was sought through Current Controlled Trials (www.controlled-trials.com), the Medical Research Council Research Portfolio, and HSRProj (current USA projects). SIGLE was also searched in order to identify atypical material which was not accessible through more conventional sources.
Selection criteria: All randomised and controlled clinical trials, controlled before and after studies and interrupted time series studies of infection control interventions in nursing homes for older people were eligible for inclusion.
Data collection and analysis: Two authors independently reviewed the results of the searches.
Main results: Since no studies met the selection criteria, neither a meta-analysis nor a narrative description of studies was possible.
Authors' conclusions: The lack of studies in this field is surprising. Nursing homes for older people provide an environment likely to promote the acquisition and spread of infection, with observational studies repeatedly reporting that being a resident of a nursing home increases the risk of MRSA colonisation. Much of the evidence for recently issued United Kingdom guidelines for the control and prevention of MRSA in health care facilities was generated in the acute care setting. It may not be possible to transfer such strategies directly to the nursing home environment, which serves as both a healthcare setting and a resident's home. Rigorous studies should be conducted in nursing homes, to test interventions that have been specifically designed for this unique environment.
Abstract:
Background: There is growing interest in the potential utility of molecular diagnostics in improving the detection of life-threatening infection (sepsis). LightCycler® SeptiFast is a multipathogen probe-based real-time PCR system targeting DNA sequences of bacteria and fungi present in blood samples, with results available within a few hours. We report here the protocol of the first systematic review of published clinical diagnostic accuracy studies of this technology when compared with blood culture in the setting of suspected sepsis.
Methods/design: Data sources: the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA), the NHS Economic Evaluation Database (NHSEED), The Cochrane Library, MEDLINE, EMBASE, ISI Web of Science, BIOSIS Previews, MEDION and the Aggressive Research Intelligence Facility Database (ARIF). Study selection: diagnostic accuracy studies that compare the real-time PCR technology with standard culture results performed on a patient's blood sample during the management of sepsis. Data extraction: three reviewers, working independently, will determine the level of evidence, methodological quality and a standard data set relating to demographics and diagnostic accuracy metrics for each study. Statistical analysis/data synthesis: heterogeneity of studies will be investigated using a coupled forest plot of sensitivity and specificity and a scatter plot in Receiver Operating Characteristic (ROC) space. The bivariate model method will be used to estimate summary sensitivity and specificity. The authors will investigate reporting biases using funnel plots based on effective sample size and regression tests of asymmetry. Subgroup analyses are planned for adults, children and infection setting (hospital vs community) if sufficient data are uncovered.
Dissemination: Recommendations will be made to the Department of Health (as part of an open-access HTA report) as to whether the real-time PCR technology has sufficient clinical diagnostic accuracy potential to move forward to efficacy testing during the provision of routine clinical care.
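As a small illustration of the per-study quantities the planned coupled forest plot would display, the sketch below derives sensitivity and specificity with 95% Wilson confidence intervals from 2x2 counts. The counts are invented; fitting the bivariate summary model itself would normally be done with dedicated meta-analysis software and is not attempted here.

```python
# Minimal sketch of per-study sensitivity/specificity with 95% CIs from 2x2 counts.
# Study names and counts are illustrative assumptions only.
import math

def proportion_ci(successes, total):
    # Wilson score interval for a binomial proportion.
    if total == 0:
        return float("nan"), float("nan"), float("nan")
    z = 1.96
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, centre - half, centre + half

studies = {            # (TP, FP, FN, TN) against blood culture; made-up counts
    "Study A": (30, 12, 10, 150),
    "Study B": (18, 25, 15, 240),
    "Study C": (42, 30,  8, 310),
}
for name, (tp, fp, fn, tn) in studies.items():
    sens, s_lo, s_hi = proportion_ci(tp, tp + fn)
    spec, c_lo, c_hi = proportion_ci(tn, tn + fp)
    print(f"{name}: sensitivity {sens:.2f} ({s_lo:.2f}-{s_hi:.2f}), "
          f"specificity {spec:.2f} ({c_lo:.2f}-{c_hi:.2f})")
```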
Abstract:
OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.
DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.
SETTING: Primary and secondary care.
PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).
INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.
MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).
RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings by tonometry, true change was detected at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.
LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.
CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
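The pathway comparison described in the results follows the usual incremental cost-effectiveness logic: order strategies by cost, discard dominated options, and compare each ICER with the £30,000 per QALY threshold. The sketch below illustrates this with placeholder costs and QALYs chosen only to mirror the qualitative pattern reported above; it omits extended dominance and is not the study's model.

```python
# Minimal sketch of an incremental cost-effectiveness comparison across pathways.
# Costs and QALYs are illustrative placeholders, not study outputs.
pathways = {                       # name: (mean cost per patient, mean QALYs)
    "treat all":          (1_200.0, 11.200),
    "biennial community": (2_000.0, 11.220),
    "biennial hospital":  (2_150.0, 11.226),
    "NICE conservative":  (2_600.0, 11.224),
    "NICE intensive":     (3_400.0, 11.230),
}

ordered = sorted(pathways.items(), key=lambda kv: kv[1][0])   # order by cost
frontier = [ordered[0]]                                       # cheapest option anchors the frontier
for name, (cost, qaly) in ordered[1:]:
    # Dominated: more costly and no more effective than the last frontier option.
    if qaly <= frontier[-1][1][1]:
        print(f"{name}: dominated")
        continue
    prev_cost, prev_qaly = frontier[-1][1]
    icer = (cost - prev_cost) / (qaly - prev_qaly)
    flag = " (above £30,000 threshold)" if icer > 30_000 else ""
    print(f"{name}: ICER £{icer:,.0f} per QALY vs {frontier[-1][0]}{flag}")
    frontier.append((name, (cost, qaly)))
```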
Abstract:
Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research.
Data sources: Major electronic databases were searched up to December 2005.
Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored highly sensitive electronic searches were undertaken.
Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, without treatment the mean time to blindness in at least one eye was approximately 23 years, compared with 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds with a screening interval of 10 years to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age thus appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification followed by assessment by a specialised optometrist for test positives was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence, assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed. However, a highly specific test is required to reduce large numbers of false-positive referrals. The finding that population screening is unlikely to be cost-effective is based on an economic model whose parameter estimates have considerable uncertainty; in particular, if the rate of progression and/or costs of visual impairment are higher than estimated, then screening could be cost-effective.
Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be. Procedures for identifying those at risk and for quality assuring the programme, as well as adequate service provision for those screened positive, would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination and by improving the performance of current testing, either by refining practice or by adding a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
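The Markov submodels mentioned in the review methods can be illustrated with a minimal cohort model: a population is moved each year between health states, accumulating costs and QALYs. The states, transition probabilities, utilities and most costs below are placeholders (only the £669 annual NHS cost of visual impairment echoes a figure quoted above); discounting and screening effects are omitted.

```python
# Minimal Markov cohort model sketch: annual cycles over illustrative OAG states.
# All parameter values are assumptions except the £669 visual impairment cost,
# which echoes the NHS cost figure quoted in the abstract above.
states = ["no OAG", "OAG", "visually impaired", "dead"]

# Annual transition probabilities (each row sums to 1); illustrative only.
transition = {
    "no OAG":            {"no OAG": 0.985, "OAG": 0.005, "visually impaired": 0.000, "dead": 0.010},
    "OAG":               {"no OAG": 0.000, "OAG": 0.950, "visually impaired": 0.038, "dead": 0.012},
    "visually impaired": {"no OAG": 0.000, "OAG": 0.000, "visually impaired": 0.980, "dead": 0.020},
    "dead":              {"no OAG": 0.000, "OAG": 0.000, "visually impaired": 0.000, "dead": 1.000},
}
annual_cost = {"no OAG": 20.0, "OAG": 350.0, "visually impaired": 669.0, "dead": 0.0}
utility = {"no OAG": 0.95, "OAG": 0.85, "visually impaired": 0.60, "dead": 0.0}

cohort = {s: 0.0 for s in states}
cohort["no OAG"] = 1.0                      # start the whole cohort free of OAG
total_cost = total_qalys = 0.0
for year in range(30):                      # 30 annual cycles, undiscounted
    total_cost += sum(cohort[s] * annual_cost[s] for s in states)
    total_qalys += sum(cohort[s] * utility[s] for s in states)
    cohort = {t: sum(cohort[s] * transition[s][t] for s in states) for t in states}

print(f"Expected cost per person: £{total_cost:,.0f}; expected QALYs: {total_qalys:.2f}")
```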