996 results for Implementation accuracy
Abstract:
Objectives: To analyze mortality rates of children with severe sepsis and septic shock in relation to time-sensitive fluid resuscitation and treatments received and to define barriers to the implementation of the American College of Critical Care Medicine/Pediatric Advanced Life Support guidelines in a pediatric intensive care unit in a developing country. Methods: Retrospective chart review and prospective analysis of septic shock treatment in a pediatric intensive care unit of a tertiary care teaching hospital. Ninety patients with severe sepsis or septic shock admitted between July 2002 and June 2003 were included in this study. Results: Of the 90 patients, 83% had septic shock and 17% had severe sepsis; 80 patients had preexisting severe chronic diseases. Patients with septic shock who received less than a 20-mL/kg dose of resuscitation fluid in the first hour of treatment had a mortality rate of 73%, whereas patients who received more than a 40-mL/kg dose in the first hour of treatment had a mortality rate of 33% (P < 0.05). Patients treated less than 30 minutes after diagnosis of severe sepsis and septic shock had a significantly lower mortality rate (40%) than patients treated more than 60 minutes after diagnosis (P < 0.05). Controlling for the risk of mortality, early fluid resuscitation was associated with a 3-fold reduction in the odds of death (odds ratio, 0.33; 95% confidence interval, 0.13-0.85). The most important barriers to achieving adequate severe sepsis and septic shock treatment were lack of adequate vascular access, lack of recognition of early shock, shortage of health care providers, and nonuse of goals and treatment protocols. Conclusions: The mortality rate was higher for children older than years, for those who received less than 40 mL/kg in the first hour, and for those whose treatment was not initiated in the first 30 minutes after the diagnosis of septic shock. Acknowledging the existing barriers to timely fluid administration and establishing objectives to overcome these barriers may lead to a more successful implementation of the American College of Critical Care Medicine guidelines and to reduced mortality rates for children with septic shock in the developing world.
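A minimal sketch (using hypothetical 2x2 counts, not the study's raw data) of how an odds ratio and its Wald 95% confidence interval, like the reported 0.33 (0.13-0.85), are typically computed; the "3-fold reduction" is simply the reciprocal of the odds ratio (1/0.33 ≈ 3).

```python
import math

# Hypothetical 2x2 table (illustration only):
# rows = early fluid resuscitation (yes/no), columns = died / survived
a, b = 8, 22   # early resuscitation: died, survived
c, d = 20, 18  # late resuscitation:  died, survived

odds_ratio = (a * d) / (b * c)

# Wald 95% CI, computed on the log-odds scale
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```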
Abstract:
PURPOSE. To evaluate the effect of disease severity and optic disc size on the diagnostic accuracies of optic nerve head (ONH), retinal nerve fiber layer (RNFL), and macular parameters with RTVue (Optovue, Fremont, CA) spectral domain optical coherence tomography (SDOCT) in glaucoma. METHODS. 110 eyes of 62 normal subjects and 193 eyes of 136 glaucoma patients from the Diagnostic Innovations in Glaucoma Study underwent ONH, RNFL, and macular imaging with RTVue. Severity of glaucoma was based on visual field index (VFI) values from standard automated perimetry. Optic disc size was based on disc area measurement using the Heidelberg Retina Tomograph II (Heidelberg Engineering, Dossenheim, Germany). The influence of disease severity and disc size on the diagnostic accuracy of RTVue was evaluated by receiver operating characteristic (ROC) curve and logistic regression analyses. RESULTS. Areas under the ROC curve (AUCs) of all scanning areas increased (P < 0.05) as disease severity increased. For a VFI value of 99%, indicating early damage, AUCs for rim area, average RNFL thickness, and ganglion cell complex-root mean square were 0.693, 0.799, and 0.779, respectively. For a VFI of 70%, indicating severe damage, the corresponding AUCs were 0.828, 0.985, and 0.992, respectively. Optic disc size did not influence the AUCs of any of the SDOCT scanning protocols of RTVue (P > 0.05). Sensitivity of the rim area increased and specificity decreased in large optic discs. CONCLUSIONS. Diagnostic accuracies of RTVue scanning protocols for glaucoma were significantly influenced by disease severity. Sensitivity of the rim area increased in large optic discs at the expense of specificity. (Invest Ophthalmol Vis Sci. 2011;92:1290-1296) DOI:10.1167/iovs.10-5516
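As an illustration of the ROC analysis described above, the following sketch (synthetic measurements, not RTVue data) estimates an area under the ROC curve for one continuous parameter using the Mann-Whitney formulation: the AUC is the probability that a randomly chosen glaucomatous eye has a lower value than a randomly chosen normal eye.

```python
import random

random.seed(0)
# Synthetic RNFL-thickness-like values; glaucomatous eyes assumed thinner on average.
normal = [random.gauss(100, 10) for _ in range(110)]    # hypothetical normal eyes
glaucoma = [random.gauss(80, 12) for _ in range(193)]   # hypothetical glaucoma eyes

def auc(diseased, healthy):
    """Mann-Whitney estimate of AUC: P(diseased value < healthy value), ties counted as 0.5."""
    wins = sum((d < h) + 0.5 * (d == h) for d in diseased for h in healthy)
    return wins / (len(diseased) * len(healthy))

print(f"AUC = {auc(glaucoma, normal):.3f}")
```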
Abstract:
One of the challenges in screening for dementia in developing countries is related to performance differences due to educational and cultural factors. This study evaluated the accuracy of single screening tests as well as combined protocols including the Mini-Mental State Examination (MMSE), Verbal Fluency animal category (VF), Clock Drawing test (CDT), and Pfeffer Functional Activities Questionnaire (PFAQ) to discriminate illiterate elderly with and without Alzheimer's disease (AD) in a clinical sample. This was a cross-sectional study of 66 illiterate outpatients diagnosed with mild or moderate AD and 40 illiterate normal controls. Diagnosis of AD was based on the NINCDS-ADRDA criteria. All patients underwent a diagnostic protocol including a clinical interview based on the CAMDEX sections. ROC curve area analyses were carried out to compare the sensitivity and specificity of the cognitive tests in differentiating the two groups (each test separately and in two-by-two combinations). Scores for all cognitive (MMSE, CDT, VF) and functional assessments (PFAQ) were significantly different between the two groups (p < 0.001). The best screening instruments for this sample of illiterate elderly were the MMSE and the PFAQ. The cut-off scores for the MMSE, VF, CDT, and PFAQ were 17.5, 7.5, 2.5, and 11.5, respectively. The most sensitive combination came from the MMSE and PFAQ (94.1%), and the best specificity was observed with the combination of the MMSE and CDT (89%). Illiterate patients can be successfully screened for AD using well-known screening instruments, especially in combined protocols.
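Cut-off scores such as those reported above are typically chosen from the ROC curve; a minimal sketch (synthetic scores, not the study's data) of one common approach, selecting the threshold that maximizes the Youden index (sensitivity + specificity - 1):

```python
import random

random.seed(1)
# Hypothetical MMSE-like scores: lower scores indicate greater impairment.
ad_scores = [random.gauss(14, 4) for _ in range(66)]        # patients with AD
control_scores = [random.gauss(22, 3) for _ in range(40)]   # normal controls

def youden_cutoff(cases, controls):
    """Sweep observed values as thresholds; test is positive when score <= threshold."""
    best = (None, -1.0)
    for t in sorted(set(cases + controls)):
        sens = sum(s <= t for s in cases) / len(cases)
        spec = sum(s > t for s in controls) / len(controls)
        j = sens + spec - 1
        if j > best[1]:
            best = (t, j)
    return best

cutoff, j = youden_cutoff(ad_scores, control_scores)
print(f"best cut-off ~{cutoff:.1f}, Youden index {j:.2f}")
```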
Abstract:
Introduction: Two hundred ten patients with newly diagnosed Hodgkin's lymphoma (HL) were consecutively enrolled in this prospective trial to evaluate the cost-effectiveness of fluorine-18 (18F) fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET) scanning in the initial staging of patients with HL. Methods: All 210 patients were staged with conventional clinical staging (CCS) methods, including computed tomography (CT), bone marrow biopsy (BMB), and laboratory tests. Patients also underwent metabolic staging (MS) with whole-body FDG-PET scanning before the beginning of treatment. A standard of reference for staging was determined from all staging procedures, histologic examination, and follow-up examinations. The accuracy of CCS was compared with that of MS. Local unit costs of procedures and tests were evaluated. The incremental cost-effectiveness ratio (ICER) was calculated for both strategies. Results: In the 210 patients with HL, the sensitivity of FDG-PET for initial staging was higher than that of CT and BMB (97.9% vs. 87.3%, P < 0.001, and 94.2% vs. 71.4%, P < 0.003, respectively). The incorporation of FDG-PET in the staging procedure upstaged disease in 50 (24%) patients and downstaged disease in 17 (8%) patients. Changes in treatment would be seen in 32 (15%) patients. The cumulative cost of staging procedures was $3751 per patient for CCS, compared with $5081 for CCS + PET and $4588 for PET/CT. The ICER of the PET/CT strategy was $16,215 per patient with modified treatment. PET/CT costs at the beginning and end of treatment would increase the total costs of HL staging and first-line treatment by only 2%. Conclusion: FDG-PET is more accurate than CT and BMB in HL staging. Given the observed probabilities, FDG-PET is highly cost-effective in the public health care program in Brazil.
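A minimal sketch of the ICER logic referenced above. The per-patient staging costs and the number of treatment changes are taken from the abstract, but the choice of denominator and the rest of the cost model are assumptions made for illustration, so the resulting figure is not expected to reproduce the published $16,215 estimate.

```python
# A sketch of the ICER formula: incremental cost divided by incremental effect.
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per additional unit of effect
    (here, per additional patient with modified treatment)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

patients = 210
total_cost_ccs = patients * 3751.0   # per-patient CCS cost reported in the abstract
total_cost_pet = patients * 5081.0   # per-patient CCS + PET cost reported in the abstract
modified_ccs = 0                     # assumption: no treatment modifications without PET
modified_pet = 32                    # treatment changes reported with PET

value = icer(total_cost_pet, total_cost_ccs, modified_pet, modified_ccs)
print(f"illustrative ICER ~ ${value:,.0f} per patient with modified treatment")
```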
Abstract:
OBJECTIVE. The purpose of the study was to investigate patient characteristics associated with image quality and their impact on the diagnostic accuracy of MDCT for the detection of coronary artery stenosis. MATERIALS AND METHODS. Two hundred ninety-one patients with a coronary artery calcification (CAC) score of ≤ 600 Agatston units (214 men and 77 women; mean age, 59.3 ± 10.0 years [SD]) were analyzed. An overall image quality score was derived using an ordinal scale. The accuracy of quantitative MDCT to detect significant (≥ 50%) stenoses was assessed against quantitative coronary angiography (QCA) per patient and per vessel using a modified 19-segment model. The effects of CAC, obesity, heart rate, and heart rate variability on image quality and accuracy were evaluated by multiple logistic regression. Image quality and accuracy were further analyzed in subgroups of significant predictor variables. Diagnostic accuracy was assessed across image quality strata using receiver operating characteristic (ROC) curves. RESULTS. Increasing body mass index (BMI) (odds ratio [OR] = 0.89, p < 0.001), increasing heart rate (OR = 0.90, p < 0.001), and the presence of breathing artifact (OR = 4.97, p = 0.001) were associated with poorer image quality, whereas sex, CAC score, and heart rate variability were not. Compared with examinations of white patients, studies of black patients had significantly poorer image quality (OR = 0.58, p = 0.04). At the vessel level, CAC score (per 10 Agatston units; OR = 1.03, p = 0.012) and patient age (OR = 1.02, p = 0.04) were significantly associated with the diagnostic accuracy of quantitative MDCT compared with QCA. A trend was observed in differences in the areas under the ROC curves across image quality strata at the vessel level (p = 0.08). CONCLUSION. Image quality is significantly associated with patient ethnicity, BMI, mean scan heart rate, and the presence of breathing artifact, but not with CAC score, at the patient level. At the vessel level, CAC score and age were associated with reduced diagnostic accuracy.
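A minimal sketch (simulated data, assuming NumPy and scikit-learn are available) of how predictors of image quality can be screened with multiple logistic regression and summarized as odds ratios, the general approach described above; the simulated coefficients are arbitrary and do not reproduce the study's estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 291  # cohort size taken from the abstract; all values below are simulated

bmi = rng.normal(28, 5, n)                  # hypothetical body mass index
heart_rate = rng.normal(60, 8, n)           # hypothetical mean scan heart rate (beats/min)
breathing_artifact = rng.integers(0, 2, n)  # hypothetical presence of breathing artifact (0/1)

# Simulate a binary "diagnostic-quality image" outcome whose odds fall with BMI,
# heart rate, and breathing artifact (arbitrary coefficients, for illustration only).
logit = 10.0 - 0.12 * bmi - 0.10 * heart_rate - 1.6 * breathing_artifact
good_quality = rng.random(n) < 1 / (1 + np.exp(-logit))

# Multiple logistic regression; exponentiated coefficients are odds ratios.
X = np.column_stack([bmi, heart_rate, breathing_artifact])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, good_quality)

for name, coef in zip(["BMI (per unit)", "heart rate (per beat/min)", "breathing artifact"],
                      model.coef_[0]):
    print(f"OR for {name}: {np.exp(coef):.2f}")
```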
Abstract:
Aims We conducted a meta-analysis to evaluate the accuracy of quantitative stress myocardial contrast echocardiography (MCE) in coronary artery disease (CAD). Methods and results A database search was performed through January 2008. We included studies evaluating the accuracy of quantitative stress MCE for the detection of CAD compared with coronary angiography or single-photon emission computed tomography (SPECT) and measuring the reserve parameters A, beta, and A beta. Data from the studies were verified and supplemented by the authors of each study. Using random effects meta-analysis, we estimated weighted mean differences (WMDs), likelihood ratios (LRs), diagnostic odds ratios (DORs), and summary areas under the curve (AUCs), all with 95% confidence intervals (CIs). Of 1443 studies, 13 including 627 patients (age range, 38-75 years) and comparing MCE with angiography (n = 10), SPECT (n = 1), or both (n = 2) were eligible. Reserve parameters were significantly lower in the CAD group than in the no-CAD group, with WMDs (95% CI) of 0.12 (0.06-0.18) (P < 0.001), 1.38 (1.28-1.52) (P < 0.001), and 1.47 (1.18-1.76) (P < 0.001) for the A, beta, and A beta reserves, respectively. Pooled positive-test LRs were 1.33 (1.13-1.57), 3.76 (2.43-5.80), and 3.64 (2.87-4.78), and negative-test LRs were 0.68 (0.55-0.83), 0.30 (0.24-0.38), and 0.27 (0.22-0.34) for the A, beta, and A beta reserves, respectively. Pooled DORs were 2.09 (1.42-3.07), 15.11 (7.90-28.91), and 14.73 (9.61-22.57), and AUCs were 0.637 (0.594-0.677), 0.851 (0.828-0.872), and 0.859 (0.842-0.750) for the A, beta, and A beta reserves, respectively. Conclusion The evidence supports the use of quantitative MCE as a non-invasive test for the detection of CAD. Standardizing MCE quantification analysis and adherence to reporting standards for diagnostic tests could enhance the quality of evidence in this field.
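A minimal sketch (made-up study estimates, not the 13 included studies) of how per-study diagnostic odds ratios can be pooled on the log scale with a DerSimonian-Laird random-effects model, the general approach behind the pooled DORs reported above:

```python
import math

# Hypothetical per-study (log DOR, variance of log DOR) pairs, for illustration only.
studies = [(math.log(12.0), 0.30), (math.log(18.0), 0.25),
           (math.log(9.0), 0.40), (math.log(20.0), 0.35)]

def pool_random_effects(estimates):
    """DerSimonian-Laird random-effects pooling of (estimate, variance) pairs."""
    y = [e for e, _ in estimates]
    v = [var for _, var in estimates]
    w = [1 / vi for vi in v]                              # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)               # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]                  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

log_dor, lo, hi = pool_random_effects(studies)
print(f"pooled DOR = {math.exp(log_dor):.1f} (95% CI {math.exp(lo):.1f}-{math.exp(hi):.1f})")
```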
Abstract:
Background: Traffic accidents constitute the main cause of death in the first decades of life. Traumatic brain injury (TBI) is the injury most responsible for the severity of these accidents. In 1995, the SBN started an educational program for the prevention of traffic accidents, adapting the American "Think First" model to the Brazilian environment, with special effort devoted to the prevention of TBI through the use of seat belts and motorcycle helmets. The objective of the present study was to set up a traffic accident prevention program based on the adapted Think First and to evaluate its impact by comparing epidemiological variables before and after the start of the program. Methods: The program was carried out in the city of Maringa from September 2004 to August 2005, with educational actions targeting the entire population, especially teenagers and young adults. It was implemented by building a network of information facilitators and multipliers within organized civil society, with widespread dissemination to the population. To measure the impact of the program, specific software was developed for the storage and processing of the epidemiological variables. Results: The results showed a reduction in the severity of trauma due to traffic accidents, mainly TBI, after the execution of the program. Conclusions: The adapted Think First was systematically implemented and its impact measured for the first time in Brazil, revealing the usefulness of the program for reducing trauma and TBI severity in traffic accidents through public education and representing a standardized model of implementation in a developing country. © 2009 Elsevier Inc. All rights reserved.
Abstract:
The aim of this investigation was to assess the diagnostic accuracy of intraoperative cultures for the early identification of patients who are at risk of infection after primary total hip arthroplasty. Four or six swabs were obtained immediately before wound closure in 263 primary total hip replacements. Patients with a maximum of one positive culture were denoted as patients with a normal profile and did not receive any treatment. Patients with two or more positive cultures identifying the same organism were denoted as patients with a risk profile and received six weeks of treatment with a specific antibiotic as determined by the antibiogram. Follow-up ranged from a minimum of one year to five years and eleven months and focused on the presence or absence of infection, which was defined as discharge of pus through the surgical wound or as a fistula at any time after surgery. The accuracy of this procedure (the number of cases correctly identified relative to the total number of cases) was 96% in the group of 152 arthroplasties in which 4 swabs per patient were collected. In the group of 111 arthroplasties in which 6 swabs per patient were collected, the accuracy was 95.5%. We conclude that the collection of swabs under the conditions described is a method of high accuracy (above 95%) for the evaluation of the risk of infection after primary total hip arthroplasty.
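A minimal worked example of the accuracy measure defined above (cases correctly identified divided by the total number of cases), with hypothetical counts for the 4-swab group:

```python
# Hypothetical counts (illustration only); accuracy = correct / total.
correct = 146   # arthroplasties whose culture profile correctly predicted infection status
total = 152     # arthroplasties in which 4 swabs per patient were collected

accuracy = correct / total
print(f"accuracy = {accuracy:.1%}")   # on the order of the 96% reported above
```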
Abstract:
Purpose: The aim of this research was to assess the dimensional accuracy of orbital prostheses based on reversed images generated by computer-aided design/computer-assisted manufacturing (CAD/CAM) using computed tomography (CT) scans. Materials and Methods: CT scans of the faces of 15 adults (men and women older than 25 years of age without any congenital or acquired craniofacial defects) were processed using CAD software to produce 30 reversed three-dimensional models of the orbital region. These models were then processed using the CAM system by means of selective laser sintering to generate surface prototypes of the volunteers' orbital regions. Two moulage impressions of the face of each volunteer were taken to manufacture 15 pairs of casts. Orbital defects were created on the right or left side of each cast. The surface prototypes were adapted to the casts and then flasked to fabricate silicone prostheses. The establishment of anthropometric landmarks on the orbital region and facial midline allowed for the collection of 31 linear measurements, which were used to assess the dimensional accuracy of the orbital prostheses and their location on the face. Results: The comparative analyses of the linear measurements taken from the orbital prostheses and from the contralateral sides that originated the surface prototypes demonstrated that the orbital prostheses presented similar vertical, transversal, and oblique dimensions, as well as similar depth. There was no transverse or oblique displacement of the prostheses. Conclusion: From a clinical perspective, the small differences observed after analyzing all 31 linear measurements did not indicate facial asymmetry. The dimensional accuracy of the orbital prostheses suggested that the CAD/CAM system assessed herein may be applicable for clinical purposes. Int J Prosthodont 2010;23:271-276.