959 results for Diagnostic Test Accuracy
Abstract:
One of the challenges in screening for dementia in developing countries is the performance difference due to educational and cultural factors. This study evaluated the accuracy of single screening tests as well as combined protocols including the Mini-Mental State Examination (MMSE), Verbal Fluency animal category (VF), Clock Drawing Test (CDT), and Pfeffer Functional Activities Questionnaire (PFAQ) to discriminate illiterate elderly with and without Alzheimer's disease (AD) in a clinical sample. Cross-sectional study with 66 illiterate outpatients diagnosed with mild and moderate AD and 40 illiterate normal controls. Diagnosis of AD was based on the NINCDS-ADRDA criteria. All patients underwent a diagnostic protocol including a clinical interview based on the CAMDEX sections. ROC curve area analyses were carried out to compare the sensitivity and specificity of the cognitive tests in differentiating the two groups (each test separately and in two-by-two combinations). Scores for all cognitive (MMSE, CDT, VF) and functional assessments (PFAQ) were significantly different between the two groups (p < 0.001). The best screening instruments for this sample of illiterate elderly were the MMSE and the PFAQ. The cut-off scores for the MMSE, VF, CDT, and PFAQ were 17.5, 7.5, 2.5, and 11.5, respectively. The most sensitive combination was the MMSE and PFAQ (94.1%), and the best specificity was observed with the combination of the MMSE and CDT (89%). Illiterate patients can be successfully screened for AD using well-known screening instruments, especially in combined protocols.
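As a rough illustration of why combining screening tests in parallel raises sensitivity, the "positive if either test is positive" rule can be sketched as follows. This is not the study's method of combining scores; it assumes the tests are conditionally independent, and the sensitivities and specificities below are hypothetical:

```python
# Illustrative sketch (not the study's data): expected performance of two
# screening tests combined in parallel, assuming conditional independence.

def parallel_combination(sens1, spec1, sens2, spec2):
    """OR-rule: combined test is positive if either component is positive."""
    sens = 1 - (1 - sens1) * (1 - sens2)   # a case is missed only if both tests miss it
    spec = spec1 * spec2                   # a control is negative only on both tests
    return sens, spec

# Hypothetical sensitivities/specificities for two screening instruments
sens, spec = parallel_combination(0.85, 0.80, 0.75, 0.90)
print(f"combined sensitivity = {sens:.4f}, combined specificity = {spec:.4f}")
```

The sketch shows the usual trade-off the abstract reports: the parallel combination gains sensitivity at the cost of specificity.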
Abstract:
OBJECTIVE. The purpose of the study was to investigate patient characteristics associated with image quality and their impact on the diagnostic accuracy of MDCT for the detection of coronary artery stenosis. MATERIALS AND METHODS. Two hundred ninety-one patients with a coronary artery calcification (CAC) score of <= 600 Agatston units (214 men and 77 women; mean age, 59.3 +/- 10.0 years [SD]) were analyzed. An overall image quality score was derived using an ordinal scale. The accuracy of quantitative MDCT to detect significant (>= 50%) stenoses was assessed against quantitative coronary angiography (QCA) per patient and per vessel using a modified 19-segment model. The effect of CAC, obesity, heart rate, and heart rate variability on image quality and accuracy was evaluated by multiple logistic regression. Image quality and accuracy were further analyzed in subgroups of significant predictor variables. Diagnostic accuracy was determined across image quality strata using receiver operating characteristic (ROC) curves. RESULTS. Increasing body mass index (BMI) (odds ratio [OR] = 0.89, p < 0.001), increasing heart rate (OR = 0.90, p < 0.001), and the presence of breathing artifact (OR = 4.97, p = 0.001) were associated with poorer image quality, whereas sex, CAC score, and heart rate variability were not. Compared with examinations of white patients, studies of black patients had significantly poorer image quality (OR = 0.58, p = 0.04). At a vessel level, CAC score (per 10 Agatston units) (OR = 1.03, p = 0.012) and patient age (OR = 1.02, p = 0.04) were significantly associated with the diagnostic accuracy of quantitative MDCT compared with QCA. A trend was observed in differences in the areas under the ROC curves across image quality strata at the vessel level (p = 0.08). CONCLUSION. Image quality is significantly associated with patient ethnicity, BMI, mean scan heart rate, and the presence of breathing artifact, but not with CAC score, at a patient level. At a vessel level, CAC score and age were associated with reduced diagnostic accuracy.
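The odds ratios above come from multiple logistic regression, where each OR is the exponential of a model coefficient per one-unit change in the predictor. A minimal sketch of that relationship; the per-unit ORs mirror those reported in the abstract, but the multi-unit extrapolation is our illustration, not a result from the study:

```python
import math

# Sketch: OR = exp(beta) per one-unit increase in a logistic-regression
# predictor, so the cumulative OR over several units is exp(beta * units).

def odds_ratio_over(or_per_unit, units):
    """Cumulative odds ratio across `units` one-unit increases of a predictor."""
    beta = math.log(or_per_unit)        # coefficient implied by the per-unit OR
    return math.exp(beta * units)

# e.g. implied odds of good image quality across a 5 kg/m^2 higher BMI,
# using the abstract's per-unit OR of 0.89 (illustrative extrapolation)
print(round(odds_ratio_over(0.89, 5), 3))
```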
Abstract:
Objectives: The diagnosis of caries lesions is still a matter of concern in dentistry. The diagnosis of dental caries by digital radiography has a number of advantages over conventional radiography; however, this method has not been explored fully in the field of paediatric dentistry. This in vitro research evaluated the accuracy of direct digital radiography compared with visual inspection and conventional radiography in the diagnosis of occlusal caries lesions in primary molars. Methods: 50 molars were selected and evaluated under standardized conditions by 2 previously calibrated examiners according to 3 diagnostic methods (visual inspection, conventional radiography and direct digital radiography). Direct digital radiographs were obtained with the Dixi3 system (Planmeca, Helsinki, Finland) and the conventional radiographs with InSight film (Kodak Eastman Co., Rochester, NY). The images were scored and a reference standard was obtained histologically. The interexaminer reliability was calculated using Cohen's kappa test and the specificity, sensitivity and accuracy of the methods were calculated. Results: Examiner reliability was good. For lesions limited to the enamel, visual inspection showed significantly higher sensitivity and accuracy than both radiographic methods, but no significant difference was found in specificity. For teeth with dentinal caries, no significant differences were found for any parameter when comparing visual and radiographic evaluation. Conclusions: Although less accurate than the visual method for detecting caries lesions confined to the enamel, the direct digital radiographic method is as effective as conventional radiographic examination and visual inspection of primary teeth with occlusal caries when the dentine is involved. Dentomaxillofacial Radiology (2010) 39, 362-367. doi: 10.1259/dmfr/22865872
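Cohen's kappa, used above for interexaminer reliability, corrects observed agreement for agreement expected by chance. A minimal sketch; the ratings below are hypothetical, not the study's data:

```python
# Minimal sketch of Cohen's kappa for two raters scoring the same items
# with categorical labels (ratings below are hypothetical).

def cohens_kappa(ratings_a, ratings_b):
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items both raters scored identically
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement, from each rater's marginal label frequencies
    p_e = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

a = [0, 0, 1, 1, 2, 2, 0, 1]   # hypothetical caries scores, examiner 1
b = [0, 0, 1, 2, 2, 2, 0, 1]   # hypothetical caries scores, examiner 2
print(round(cohens_kappa(a, b), 3))
```

By the usual convention, values above roughly 0.6 indicate "good" agreement, which is the sense of the abstract's "examiner reliability was good."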
Abstract:
Introduction: The aim of this study was to compare the influence of preflaring on the accuracy of 4 electronic apex locators (EALs): Root ZX, Elements Diagnostic Unit and Apex Locator, Mini Apex Locator, and Apex DSP. Methods: Forty extracted teeth were preflared by using S1 and SX ProTaper instruments. The working length was established by reducing 1 mm from the total length (TL). The ability of the EALs to detect precise (-1 mm from TL) and acceptable (-1 +/- 0.5 mm from TL) measurements in unflared and preflared canals was determined. Results: The precise and acceptable (P/A) readings in unflared canals for Root ZX, Elements Diagnostic Unit and Apex Locator, Mini Apex Locator, and Apex DSP were 50%/97.5%, 47.5%/95%, 50%/97.5%, and 45%/67.5%, respectively. For preflared canals, the readings were 75%/97.5%, 55%/95%, 75%/97.5%, and 60%/87.5%, respectively. For the precise criterion, preflaring increased the percentage of accurate electronic readings for the Root ZX and the Mini Apex Locator (P < .05). For the acceptable criterion, no differences were found among Root ZX, Elements Diagnostic Unit and Apex Locator, and Mini Apex Locator (P > .05). The Fisher test indicated the lowest accuracy for the Apex DSP (P < .05). Conclusions: The Root ZX and the Mini Apex Locator significantly increased the precision in determining the real working length after the preflaring procedure. All the EALs showed an acceptable determination of the working length within the range of +/- 0.5 mm except for the Apex DSP device, which had the lowest accuracy. (J Endod 2009;35:1300-1302)
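The Fisher test mentioned above compares proportions of accurate readings between devices on a 2x2 table. A minimal sketch of a two-sided Fisher's exact test; the counts are illustrative (e.g. acceptable vs. not acceptable readings out of 40 canals for two hypothetical devices), not the study's raw data:

```python
from math import comb

# Sketch: two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
# summing the probabilities of all tables as or less likely than the observed
# one under fixed margins (hypergeometric distribution).

def fisher_exact_two_sided(a, b, c, d):
    row1, col1, n = a + b, a + c, a + b + c + d

    def hyper(k):  # P(top-left cell == k) given the table margins
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    probs = [hyper(k) for k in range(lo, hi + 1)]
    p_obs = hyper(a)
    return sum(p for p in probs if p <= p_obs + 1e-12)

# Hypothetical: 39/40 acceptable readings for device A vs 27/40 for device B
p = fisher_exact_two_sided(39, 1, 27, 13)
print(f"p = {p:.4f}")
```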
Abstract:
Diagnosis involves a complex and overlapping series of steps, each of which may be a source of error and of variability between clinicians. This variation may arise in the ability to elicit relevant information from the client or animal, in the accuracy, objectivity and completeness of relevant memory stores, and in psychological attributes including tolerance for uncertainty and willingness to engage in constructive self-criticism. The diagnostic acumen of an individual clinician may not be constant, varying with external and personal factors, with different clients and cases, and with the use made of tests. In relation to clients, variations may occur in the ability to gain their confidence, to ask appropriate questions and to evaluate accurately both verbal and nonverbal responses. Tests may introduce problems of accuracy, validity, sensitivity, specificity, interpretation and general appropriateness for the case. Continuing effectiveness as a diagnostician therefore requires constant attention to the maintenance of adequate and up-to-date skills and knowledge relating to the animals and their diseases and to tests, and of sensitive interpersonal skills.
Abstract:
Objective: To assess the diagnostic error rate among echocardiograms undertaken by individuals other than paediatric cardiologists in our referral area. Methodology: External group: The charts and echocardiographic results of all patients who had undergone outside echocardiograms between January 1996 and December 1999 were reviewed (110 patients). Age at echocardiography, diagnostic complexity, presence of any diagnostic errors and the severity of any diagnostic errors were identified. Internal group: To assess our own error rate, the initial echocardiographic diagnoses of 100 patients undergoing cardiac catheterisation or corrective surgery were compared with the post-catheterisation or postoperative diagnoses. Age and diagnostic complexity were also assessed in the control group. Results: Diagnostic errors occurred in 47/110 patients (44%) of the externally studied group (of which 24% were either major or life threatening) as opposed to 3/100 of the internally studied group, despite the internally studied group being of increased diagnostic complexity. Errors were more common and of increased severity in infants less than 1 month of age but extended throughout all age groups. Major and life threatening errors increased with increasing diagnostic complexity. In the externally studied group, 8/47 errors were patients inappropriately designated as normal. Four of these patients required cardiac surgery or interventional cardiac catheterisation. Conclusions: This study suggests an unacceptably high error rate in paediatric echocardiographic diagnoses by non-paediatric cardiologists throughout all age groups. Such errors are more likely in younger infants and with increasing diagnostic complexity.
Abstract:
This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly. It is a study of diagnostic performance. The diagnostic accuracy of 5 visual tests (near convergence point, near accommodation point, stereopsis, contrast sensitivity, and Amsler grid) was evaluated by means of receiver operating characteristic (ROC) curves, sensitivity, specificity, and positive and negative likelihood ratios (LR+/LR−). Visual acuity was used as the reference standard. A sample of 44 institutionalized elderly, with a mean age of 76.7 (±9.32) years, was collected. The contrast sensitivity and stereopsis curves were the most accurate (areas under the curve 0.814, p = 0.001, 95% CI [0.653; 0.975], and 0.713, p = 0.027, 95% CI [0.540; 0.887], respectively). The scores with the best diagnostic validity for the stereopsis test were 0.605 (sensitivity 0.87, specificity 0.54; LR+ 1.89, LR− 0.24) and 0.610 (sensitivity 0.81, specificity 0.54; LR+ 1.75, LR− 0.36). The score with the highest diagnostic validity for the contrast sensitivity test was 0.530 (sensitivity 0.94, specificity 0.69; LR+ 3.04, LR− 0.09). The contrast sensitivity and stereopsis tests proved to be clinically useful in detecting vision loss in the elderly.
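The likelihood ratios reported above follow directly from sensitivity and specificity: LR+ = sens / (1 − spec) and LR− = (1 − sens) / spec. A minimal sketch using the contrast sensitivity cut-off values from the abstract (small rounding differences from the reported LR+ are expected):

```python
# Sketch: positive and negative likelihood ratios from sensitivity and
# specificity, using the abstract's contrast-sensitivity cut-off (0.530).

def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)          # how much a positive result raises the odds
    lr_neg = (1 - sens) / spec          # how much a negative result lowers the odds
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.94, 0.69)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
```

This reproduces the abstract's LR− of 0.09 exactly and its LR+ of 3.04 up to rounding.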
Abstract:
The aging of the Portuguese population is characterized by an increase in the number of individuals older than 65 years. Preventable visual loss in older persons is an important public health problem. Tests used for vision screening should have a high degree of diagnostic validity, confirmed by means of clinical trials. The primary aim of a screening program is the early detection of visual diseases. Between 20% and 50% of older people in the UK have undetected reduced vision, and in most cases it is correctable. Elderly patients do not receive a systematic eye examination unless a problem arises with their glasses or vision loss is suspected. This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly. Furthermore, it aims to define the ability to identify the subjects affected with vision loss as positive and the subjects not affected as negative. The ideal vision screening method should have high sensitivity and specificity for early detection of risk factors. It should also be low-cost and easy to implement in all geographic and socioeconomic regions. Sensitivity is the ability of an examination to identify the presence of a given disease, and specificity is the ability of the examination to identify the absence of a given disease. It was not an aim of this study to detect abnormalities that affect visual acuity. The aim of this study was to find out which test is best for identifying any vision loss.
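The definitions of sensitivity and specificity given above reduce to two ratios over a 2x2 confusion matrix. A minimal sketch; the counts below are hypothetical:

```python
# Sketch: sensitivity and specificity from a 2x2 confusion matrix
# (hypothetical screening counts, not study data).

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # diseased subjects correctly flagged positive
    specificity = tn / (tn + fp)   # healthy subjects correctly flagged negative
    return sensitivity, specificity

# Hypothetical outcome: 18 true positives, 2 false negatives,
# 20 true negatives, 4 false positives
sens, spec = sens_spec(18, 2, 20, 4)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```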
Abstract:
The study was carried out to evaluate the diagnostic performance of the ICT Malaria Pf/Pv™ test for vivax malaria diagnosis in Belém, Amazon region, Brazil. The results of blood examination for malaria parasites using the immunochromatographic test were compared with thick blood film (TBF) examination. The performance of the test after storage at three different temperatures (25°C, 30°C, and 37°C) for 24 hours before use was also evaluated. Overall sensitivity of the ICT Pf/Pv™ was 61.8%, with a specificity of 100%, positive and negative predictive values of 100% and 71.8%, respectively, and accuracy of 80.6%. The test sensitivity was independent of the parasite density. This test needs to be further reviewed in order to achieve better performance for P. vivax malaria diagnosis.
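Predictive values depend on prevalence via Bayes' rule. A sketch using the reported sensitivity and specificity, with a prevalence back-calculated from the reported accuracy (the ~50.8% prevalence is our assumption, not a figure stated in the abstract); it reproduces the reported PPV, NPV, and accuracy:

```python
# Sketch: PPV, NPV, and accuracy from sensitivity, specificity, and prevalence
# (Bayes' rule). Sens/spec are the abstract's; prevalence is back-calculated.

def predictive_values(sens, spec, prev):
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    accuracy = sens * prev + spec * (1 - prev)
    return ppv, npv, accuracy

ppv, npv, acc = predictive_values(0.618, 1.0, 0.508)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}, accuracy = {acc:.3f}")
```

With specificity at 100%, PPV is necessarily 100%, as the abstract reports: no false positives are possible.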
Abstract:
INTRODUCTION: The treatment of individuals with active tuberculosis (TB) and the identification and treatment of latent tuberculosis infection (LTBI) in contacts are the two most important strategies for the control of TB. The objective of this study was to compare the performance of tuberculin skin testing (TST) with the QuantiFERON-TB Gold In-Tube® in the diagnosis of LTBI in contacts of patients with active TB. METHODS: Cross-sectional analytical study with 60 contacts of patients with active pulmonary TB. A blood sample was taken from each contact for an interferon-gamma release assay (IGRA), and the TST was subsequently performed. A receiver operating characteristic curve was generated to assess the cutoff points, and the sensitivity, predictive values, and accuracy were calculated. The agreement between IGRA and TST results was evaluated by the Kappa coefficient. RESULTS: A sensitivity of 67.9%, specificity of 84.4%, PPV of 79.1%, NPV of 75%, and accuracy of 76.7% were observed for the 5 mm cutoff point. The prevalence of LTBI determined by TST and IGRA was 40% and 46.7%, respectively. CONCLUSIONS: Both the QuantiFERON-TB Gold In-Tube® and the TST showed good performance in LTBI diagnosis. Diagnostic methods specific for LTBI, with higher sensitivity and specificity, are still needed, preferably at low cost and without requiring a return visit for reading, because early treatment of latent forms can prevent active TB.
Abstract:
OBJECTIVE - To assess the diagnostic value, the characteristics, and feasibility of tilt-table testing in children and adolescents. METHODS - From August 1991 to June 1997, we retrospectively assessed 94 patients under the age of 18 years who had a history of recurring syncope and presyncope of unknown origin and who were referred for tilt-table testing. These patients were divided into 2 groups: group I (children) - 36 patients with ages ranging from 3 to 12 (mean of 9.19±2.31) years; group II (adolescents) - 58 patients with ages ranging from 13 to 18 (mean of 16.05±1.40) years. We compared the positivity rate, the type of hemodynamic response, and the time period required for the test to become positive in the 2 groups. RESULTS - The positivity rates were 41.6% and 50% for groups I and II, respectively. The pattern of positive hemodynamic response that predominated in both groups was the mixed response. The mean time period required for the test to become positive was shorter in group I (11.0±7.23 min) than in group II (18.44±7.83 min). No patient experienced technical difficulty or complications. CONCLUSION - No difference was observed in regard to feasibility, positivity rate, and pattern of positive response for the tilt-table test in children and adolescents. Pediatric patients had earlier positive responses.
Abstract:
Background: Studies have demonstrated the diagnostic accuracy and prognostic value of physical stress echocardiography in coronary artery disease. However, the prediction of mortality and major cardiac events in patients with exercise test positive for myocardial ischemia is limited. Objective: To evaluate the effectiveness of physical stress echocardiography in the prediction of mortality and major cardiac events in patients with exercise test positive for myocardial ischemia. Methods: This is a retrospective cohort in which 866 consecutive patients with exercise test positive for myocardial ischemia, and who underwent physical stress echocardiography were studied. Patients were divided into two groups: with physical stress echocardiography negative (G1) or positive (G2) for myocardial ischemia. The endpoints analyzed were all-cause mortality and major cardiac events, defined as cardiac death and non-fatal acute myocardial infarction. Results: G2 comprised 205 patients (23.7%). During the mean 85.6 ± 15.0-month follow-up, there were 26 deaths, of which six were cardiac deaths, and 25 non-fatal myocardial infarction cases. The independent predictors of mortality were: age, diabetes mellitus, and positive physical stress echocardiography (hazard ratio: 2.69; 95% confidence interval: 1.20 - 6.01; p = 0.016). The independent predictors of major cardiac events were: age, previous coronary artery disease, positive physical stress echocardiography (hazard ratio: 2.75; 95% confidence interval: 1.15 - 6.53; p = 0.022) and absence of a 10% increase in ejection fraction. All-cause mortality and the incidence of major cardiac events were significantly higher in G2 (p < 0.001 and p = 0.001, respectively). Conclusion: Physical stress echocardiography provides additional prognostic information in patients with exercise test positive for myocardial ischemia.
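As a side note on the hazard ratios above, the standard error of a log hazard ratio (and hence an approximate Wald statistic) can be recovered from its reported 95% confidence interval. A sketch using the mortality HR of 2.69 (95% CI 1.20 - 6.01), which lands close to the reported p = 0.016:

```python
import math

# Sketch: SE of a log hazard ratio recovered from a reported 95% CI,
# SE = (ln(upper) - ln(lower)) / (2 * 1.96); values from the abstract's
# mortality hazard ratio.

def se_from_ci(lower, upper, z=1.96):
    return (math.log(upper) - math.log(lower)) / (2 * z)

se = se_from_ci(1.20, 6.01)
z_stat = math.log(2.69) / se           # approximate Wald statistic
print(f"SE(log HR) = {se:.3f}, z = {z_stat:.2f}")
```

A Wald z of roughly 2.4 corresponds to a two-sided p near 0.016, consistent with the reported value.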
Abstract:
Background: Myocardial perfusion scintigraphy (MPS) in patients not reaching 85% of the maximum predicted heart rate (MPHR) has reduced sensitivity. Objectives: In an attempt to maintain diagnostic sensitivity without losing functional exercise data, a new exercise and dipyridamole combined protocol (EDCP) was developed. Our aim was to evaluate the feasibility and safety of this protocol and to compare its diagnostic sensitivity against standard exercise and dipyridamole protocols. Methods: In patients not reaching a sufficient exercise (SE) test and with no contraindications, 0.56 mg/kg of dipyridamole was administered IV over 1 minute simultaneously with exercise, followed by 99mTc-MIBI injection. Results: Of 155 patients, 41 had MPS with the EDCP, 47 had an SE test (≥ 85% MPHR), and 67 underwent the dipyridamole-alone test (DIP). They all underwent coronary angiography within 3 months. The sensitivities of the three stress methods for the diagnosis of coronary lesions were compared. For stenosis ≥ 70%, the EDCP yielded 97% sensitivity, SE 90%, and DIP 95% (p = 0.43). For lesions ≥ 50%, the sensitivities were 94%, 88%, and 95%, respectively (p = 0.35). Side effects of the EDCP were present in only 12% of the patients, significantly fewer than with DIP (p < 0.001). Conclusions: The proposed combined protocol is a valid and safe method that yields adequate diagnostic sensitivity, keeping exercise prognostic information in patients unable to reach the target heart rate, with fewer side effects than DIP.
Abstract:
OBJECTIVE: To elucidate the diagnostic accuracy of granulocyte colony-stimulating factor (G-CSF), interleukin-8 (IL-8), and interleukin-1 receptor antagonist (IL-1ra) in identifying patients with sepsis among critically ill pediatric patients with suspected infection. DESIGN AND SETTING: Nested case-control study in a multidisciplinary neonatal and pediatric intensive care unit (PICU). PATIENTS: PICU patients during a 12-month period with suspected infection and plasma available from the time of clinical suspicion (254 episodes, 190 patients). MEASUREMENTS AND RESULTS: Plasma levels of G-CSF, IL-8, and IL-1ra were measured. Episodes were classified on the basis of clinical and bacteriological findings into culture-confirmed sepsis, probable sepsis, localized infection, viral infection, and no infection. Plasma levels were significantly higher in episodes of culture-confirmed sepsis than in episodes with ruled-out infection. The area under the receiver operating characteristic curve was higher for IL-8 and G-CSF than for IL-1ra. Combining IL-8 and G-CSF improved the diagnostic performance, particularly for the detection of Gram-negative sepsis. Sensitivity was low (<50%) in detecting Staphylococcus epidermidis bacteremia or localized infections. CONCLUSIONS: In this heterogeneous population of critically ill children with suspected infection, a model combining plasma levels of IL-8 and G-CSF identified patients with sepsis. Negative results do not rule out S. epidermidis bacteremia or locally confined infectious processes. The model requires validation in an independent dataset.
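The areas under the ROC curves compared above have a rank-based interpretation: the AUC equals the probability that a randomly chosen case scores higher on the marker than a randomly chosen control. A minimal sketch with hypothetical marker levels, not the study's data:

```python
# Sketch: ROC AUC via its Mann-Whitney interpretation, i.e. the fraction of
# (case, control) pairs in which the case has the higher marker level
# (ties count as half). Marker values below are hypothetical.

def roc_auc(positives, negatives):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

sepsis = [120, 95, 80, 150, 60]      # hypothetical marker levels for septic episodes
no_sepsis = [30, 55, 80, 20, 45]     # hypothetical levels for non-septic episodes
print(round(roc_auc(sepsis, no_sepsis), 3))
```

An AUC of 0.5 means the marker is uninformative; values near 1.0 mean cases almost always outrank controls.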