Abstract:
Hepatitis C virus (HCV) infection has a very high prevalence in the prison system, reaching rates of up to 40%. This survey aimed to estimate the prevalence of HCV infection and evaluate risk factors for this exposure among male inmates at the Ribeirão Preto Prison, State of São Paulo, Brazil, between May and August 2003. A total of 333 participants were interviewed using a standardized questionnaire and underwent immunoenzymatic assaying to investigate anti-HCV. The prevalence of HCV infection among the inmates was 8.7% (95% CI: 5.7-11.7). The participants' mean age was 30.1 years, and infection was found predominantly among individuals over 30 years of age. Multivariate analysis showed that the variables independently associated with HCV infection were age > 30 years, tattooing, history of previous hepatitis, previous injection drug use, and previous needle-sharing.
Abstract:
INTRODUCTION: Leptospirosis is often mistaken for other acute febrile illnesses because of its nonspecific presentation. Bacteriologic, serologic, and molecular methods have several limitations for early diagnosis: technical complexity, low availability, low sensitivity in early disease, or high cost. This study aimed to validate a case definition, based on simple clinical and laboratory tests, intended for bedside diagnosis of leptospirosis among hospitalized patients. METHODS: Adult patients admitted to two reference hospitals in Recife, Brazil, with a febrile illness of less than 21 days and a clinical suspicion of leptospirosis were included to test a case definition comprising ten clinical and laboratory criteria. Leptospirosis was confirmed or excluded by a composite reference standard (microscopic agglutination test, ELISA, and blood culture). Test properties were determined for each cutoff number of criteria from the case definition. RESULTS: Ninety-seven patients were included; 75 had confirmed leptospirosis and 22 did not. The mean number of criteria fulfilled was 7.8±1.2 for confirmed leptospirosis and 5.9±1.5 for non-leptospirosis patients (p<0.0001). The best combination of sensitivity (85.3%) and specificity (68.2%) was found at a cutoff of 7 or more criteria, with positive and negative predictive values of 90.1% and 57.7%, respectively; accuracy was 81.4%. CONCLUSIONS: At a cutoff of at least 7 criteria, the case definition reached only moderate sensitivity and specificity, but with a high positive predictive value. Its simplicity and low cost make it useful for rapid bedside diagnosis of leptospirosis in Brazilian hospitalized patients with acute severe febrile disease.
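The test properties reported above all follow from a single 2x2 confusion table. A minimal Python sketch; the exact counts are not given in the abstract, so the 64/7/11/15 split below is reconstructed from the reported percentages and the 75/22 case split:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts reconstructed from the abstract (cutoff of >= 7 criteria,
# 75 confirmed and 22 non-leptospirosis patients):
m = diagnostic_metrics(tp=64, fp=7, fn=11, tn=15)
print({k: round(v * 100, 1) for k, v in m.items()})
# -> sensitivity 85.3, specificity 68.2, ppv 90.1, npv 57.7, accuracy 81.4
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence in the sample (here 75/97), which is why a high PPV coexists with a modest NPV.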
Abstract:
INTRODUCTION: Operational classification of leprosy based on the number of skin lesions was conceived to screen patients presenting with severe forms of the disease so that they receive a more intensive multidrug regimen without having to undergo lymph smear testing. We evaluated the concordance between operational classification and bacilloscopy in defining multibacillary and paucibacillary leprosy. METHODS: We selected 1,213 records of untreated (new-case) individuals with leprosy admitted to a dermatology clinic in Recife, Brazil, from 2000 to 2005, who underwent bacteriological examination at diagnosis for confirmation of the operational classification. RESULTS: Compared to bacilloscopy, operational classification demonstrated 88.6% sensitivity, 76.9% specificity, a positive predictive value of 61.8%, and a negative predictive value of 94.1%, with 80% accuracy and a moderate kappa index. Among the bacilloscopy-negative cases, 23% had more than 5 skin lesions. Additionally, 11% of the bacilloscopy-positive cases had up to 5 lesions, which would have led to multibacillary cases being treated as paucibacillary leprosy had the operational classification not been confirmed by bacilloscopy. CONCLUSIONS: Operational classification has limitations that are more obvious in borderline cases, suggesting that in these cases lymph smear testing is advisable to enable the selection of true multibacillary cases for more intensive treatment, thereby helping to minimize the selection of resistant strains and possible relapse.
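The "moderate kappa index" above refers to Cohen's kappa, the chance-corrected agreement between the two classifications. A minimal sketch; the 2x2 counts below are purely illustrative, since the paper's full agreement table is not given in the abstract:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both methods positive, b = first method only,
    c = second method only, d = both methods negative."""
    n = a + b + c + d
    po = (a + d) / n  # observed agreement
    # expected chance agreement, from the marginal totals of each method
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (po - pe) / (1 - pe)

# Illustrative table only (not the study's counts):
print(round(cohen_kappa(40, 10, 10, 40), 3))  # -> 0.6
```

On the commonly used Landis-Koch scale, kappa values between 0.41 and 0.60 are read as moderate agreement.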
Abstract:
INTRODUCTION: Surgical site infections (SSIs) often manifest after patients are discharged and are therefore missed by hospital-based surveillance. METHODS: We conducted a case-reference study nested in a prospective cohort of patients from six surgical specialties in a teaching hospital. The factors related to SSI were compared between cases identified during the hospital stay and after discharge. RESULTS: Among 3,427 patients, 222 (6.4%) acquired an SSI; in 138 of these patients, the onset of the SSI occurred after discharge. Neurological surgery and the use of steroids were independently associated with a greater likelihood of SSI diagnosis during the hospital stay. CONCLUSIONS: Our results support the idea of a specialty-based strategy for post-discharge SSI surveillance.
Abstract:
INTRODUCTION: To evaluate predictive indices for candidemia in an adult intensive care unit (ICU) and to propose a new index. METHODS: A prospective cohort study was conducted between January 2011 and December 2012. This study was performed in an ICU in a tertiary care hospital at a public university and included 114 patients staying in the adult ICU for at least 48 hours. The association of patient variables with candidemia was analyzed. RESULTS: There were 18 (15.8%) proven cases of candidemia and 96 (84.2%) cases without candidemia. Univariate analysis revealed the following risk factors: parenteral nutrition, severe sepsis, surgical procedure, dialysis, pancreatitis, acute renal failure, and an APACHE II score higher than 20. For the Candida score index, the odds ratio was 8.50 (95% CI, 2.57 to 28.09); the sensitivity, specificity, positive predictive value, and negative predictive value were 0.78, 0.71, 0.33, and 0.94, respectively. With respect to the clinical predictor index, the odds ratio was 9.45 (95% CI, 2.06 to 43.39); the sensitivity, specificity, positive predictive value, and negative predictive value were 0.89, 0.54, 0.27, and 0.96, respectively. The proposed candidemia index cutoff was 8.5; the sensitivity, specificity, positive predictive value, and negative predictive value were 0.77, 0.70, 0.33, and 0.94, respectively. CONCLUSIONS: The Candida score and clinical predictor index excluded candidemia satisfactorily. The effectiveness of the candidemia index was comparable to that of the Candida score.
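The odds ratio and confidence interval reported for the Candida score can be recovered from a 2x2 table with a Woolf (log-normal) interval. A sketch, assuming counts of 14/28/4/68 reconstructed by applying the reported sensitivity and specificity to the 18 candidemia and 96 non-candidemia patients:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-normal) confidence interval for a
    2x2 table: a = test-positive cases, b = test-positive non-cases,
    c = test-negative cases, d = test-negative non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Counts reconstructed from the reported Candida score figures:
or_, lo, hi = odds_ratio_ci(14, 28, 4, 68)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # -> 8.5 2.57 28.09
```

These reconstructed counts reproduce the abstract's OR of 8.50 (95% CI, 2.57 to 28.09) exactly, which supports the reconstruction but is not a substitute for the study's own data.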
Abstract:
INTRODUCTION: The treatment of individuals with active tuberculosis (TB) and the identification and treatment of latent tuberculosis infection (LTBI) in contacts are the two most important strategies for the control of TB. The objective of this study was to compare the performance of tuberculin skin testing (TST) with that of QuantiFERON-TB Gold In-Tube® in the diagnosis of LTBI in contacts of patients with active TB. METHODS: Cross-sectional analytical study of 60 contacts of patients with active pulmonary TB. A blood sample was taken from each contact for an interferon-gamma release assay (IGRA), and the TST was subsequently performed. A receiver operating characteristic curve was generated to assess cutoff points, and the sensitivity, predictive values, and accuracy were calculated. Agreement between IGRA and TST results was evaluated with the kappa coefficient. RESULTS: Sensitivity of 67.9%, specificity of 84.4%, PPV of 79.1%, NPV of 75%, and accuracy of 76.7% were observed for the 5 mm cutoff point. The prevalence of LTBI determined by TST and IGRA was 40% and 46.7%, respectively. CONCLUSIONS: Both QuantiFERON-TB Gold In-Tube® and TST showed good performance in LTBI diagnosis. Specific diagnostic methods with higher sensitivity and specificity are still needed for LTBI, preferably at low cost and without requiring a return visit for reading, since early treatment of latent infection can prevent active TB.
Abstract:
INTRODUCTION: In Brazil, culling of seropositive dogs is one of the recommended strategies to control visceral leishmaniasis. Since infectiousness is correlated with clinical signs, control measures targeting symptomatic dogs could be more effective. METHODS: A cross-sectional study was carried out among 1,410 dogs, and predictive models were developed based on clinical signs and an indirect immunofluorescence antibody test. RESULTS: The validated predictive model showed a sensitivity and specificity of 86.5% and 70.0%, respectively. CONCLUSIONS: Predictive models could be used as tools to help control programs focus on the smaller fraction of dogs that contributes most to infection dissemination.
Abstract:
The purpose of this study was to determine whether the ankle-brachial index (ABI) could be used to predict the prognosis of patients with intermittent claudication (IC). We studied 611 patients prospectively over 28 months of follow-up. We analyzed the predictive power of various ABI levels - 0.30 to 0.70, at 0.05 increments - in terms of the measure's specificity (association with a favorable outcome after exercise rehabilitation therapy) and sensitivity (association with a poor outcome after exercise rehabilitation therapy). We found that an ABI cut-off of 0.30 produced the lowest overall margin of error, but its predictive power was still low for identifying patients with a poor prognosis after non-aggressive therapeutic treatment. Further study is needed to identify a second factor that could perhaps increase the sensitivity of the test.
Abstract:
OBJECTIVE: To evaluate the efficiency of a systematic diagnostic approach to patients with chest pain in the emergency room, in relation to the diagnosis of acute coronary syndrome (ACS) and the rate of hospitalization in high-cost units. METHODS: One thousand and three consecutive patients with chest pain were screened according to a pre-established process of diagnostic investigation based on the pre-test probability of ACS, determined by chest pain type and ECG changes. RESULTS: Of the 1003 patients, 224 were immediately discharged home because of no suspicion of ACS (route 5), and 119 were immediately transferred to the coronary care unit because of ST elevation or left bundle-branch block (LBBB) (route 1); 74% of these had a final diagnosis of acute myocardial infarction (AMI). Of the 660 patients who remained in the emergency room under observation, 77 (12%) had AMI without ST-segment elevation and 202 (31%) had unstable angina (UA). In route 2 (high probability of ACS), 17% of patients had AMI and 43% had UA, whereas in route 3 (low probability), 2% had AMI and 7% had UA. The admission ECG was confirmed to have poor sensitivity for the diagnosis of AMI (49%), with a positive predictive value that was only satisfactory (79%). CONCLUSION: A systematic diagnostic strategy, as used in this study, is essential in managing patients with chest pain in the emergency room in order to obtain high diagnostic accuracy, lower cost, and optimal use of coronary care unit beds.
Abstract:
OBJECTIVE: To evaluate the calibration accuracy of sphygmomanometers and the physical condition of their cuff-bladders, bulbs, pumps, and valves. METHODS: Six hundred and forty-five aneroid sphygmomanometers were evaluated, 521 used in private practice and 124 used in hospitals. Aneroid manometers were tested against a properly calibrated mercury manometer and were considered calibrated when the error was ≤3 mm Hg. The physical condition of the cuff-bladders, bulbs, pumps, and valves was also evaluated. RESULTS: Of the aneroid sphygmomanometers tested, 51% of those used in private practice and 56% of those used in hospitals were found to be inaccurately calibrated. Among these, the magnitude of inaccuracy ranged from 4 to 8 mm Hg in 70% and 51% of the devices, respectively. The problems found in the cuff-bladders, bulbs, pumps, and valves of the private practice and hospital devices were bladder damage (34% vs. 21%, respectively), holes/leaks in the bulbs (22% vs. 4%, respectively), and rubber aging (15% vs. 12%, respectively). Of the devices tested, 72% revealed at least one problem interfering with blood pressure measurement accuracy. CONCLUSION: Most of the manometers evaluated, whether used in private practice or in hospitals, were found to be inaccurate and unreliable, and their use may jeopardize the diagnosis and treatment of arterial hypertension.
Abstract:
OBJECTIVE: Risk stratification of patients with nonsustained ventricular tachycardia (NSVT) and chronic chagasic cardiomyopathy (CCC). METHODS: Seventy-eight patients with CCC and NSVT were consecutively and prospectively studied. All patients underwent 24-hour Holter monitoring, radioisotopic ventriculography, left ventricular angiography, and electrophysiologic study with programmed ventricular stimulation. RESULTS: Sustained monomorphic ventricular tachycardia (SMVT) was induced in 25 patients (32%), NSVT in 20 (25.6%), and ventricular fibrillation in 4 (5.1%). In 29 patients (37.2%), no arrhythmia was inducible. During a 55.7-month follow-up, 22 (28.2%) patients died: 16 of sudden death, 2 of nonsudden cardiac death, and 4 of noncardiac death. Logistic regression analysis showed that induction was the main independent variable predicting the occurrence of subsequent events and cardiac death (probability of 2.56 and 2.17, respectively). The Mantel-Haenszel chi-square test showed that survival probability was significantly lower in the inducible group than in the noninducible group. The percentage of patients free of events was significantly higher in the noninducible group. CONCLUSION: Induction of SMVT during programmed ventricular stimulation was a predictor of arrhythmia occurrence, cardiac death, and general mortality in patients with CCC and NSVT.
Abstract:
OBJECTIVE: To determine, in arrhythmogenic right ventricular cardiomyopathy, the value of QT interval dispersion for identifying the induction of sustained ventricular tachycardia in the electrophysiological study or the risk of sudden cardiac death. METHODS: We assessed QT interval dispersion in the 12-lead electrocardiogram of 26 patients with arrhythmogenic right ventricular cardiomyopathy and in 16 controls of similar age and sex, and analyzed its association with sustained ventricular tachycardia and sudden cardiac death. RESULTS (mean ± SD): QT interval dispersion was 53.8±14.1 ms in patients and 35.0±10.6 ms in the control group (p=0.001); it was 52.5±13.8 ms in patients with induction of ventricular tachycardia and 57.5±12.8 ms in those without induction (p=0.420). Over a mean follow-up of 41±11 months, five sudden cardiac deaths occurred; QT interval dispersion was 62.0±17.8 ms in this group and 51.9±12.8 ms in the others (p=0.852). Using a cutoff of ≥60 ms to define increased QT interval dispersion, we identified patients at risk of sudden cardiac death with a sensitivity of 60%, a specificity of 57%, and positive and negative predictive values of 25% and 85%, respectively. CONCLUSION: Patients with arrhythmogenic right ventricular cardiomyopathy have a significant increase in QT interval dispersion compared with the healthy population. However, it did not identify patients with induction of ventricular tachycardia in the electrophysiological study and showed a very low predictive value for defining the risk of sudden cardiac death in the population studied.
Abstract:
OBJECTIVE: To compare the accuracy of 4 indices of cardiac risk currently used for predicting perioperative cardiac complications. METHODS: We studied 119 patients at a university-affiliated hospital for whom cardiac assessment had been requested before noncardiac surgery. Predictive factors of high risk for perioperative cardiac complications were assessed through clinical history and physical examination, and the patients were followed up until the 4th postoperative day to assess the occurrence of cardiac events. All patients were classified according to 4 indices of cardiac risk: the Goldman risk-factor index, the Detsky modified risk index, the Larsen index, and the American Society of Anesthesiologists' physical status classification; their accuracies were compared by examining the areas under their respective receiver operating characteristic (ROC) curves. RESULTS: Cardiac complications occurred in 16% of the patients. The areas under the ROC curves were equal for the Goldman risk-factor index, the Larsen index, and the American Society of Anesthesiologists' physical status classification: 0.48 (SEM ± 0.03). For the Detsky index, the value was 0.38 (SEM ± 0.03). The difference between these values was not statistically significant. CONCLUSION: The cardiac risk indices currently in use did not show better accuracy than would be obtained by chance, and none proved significantly better than the others. Studies to improve our ability to predict such complications are still required.
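The ROC-curve areas compared above have a direct probabilistic reading: the AUC is the probability that a randomly chosen patient who had a complication receives a higher risk score than one who did not, which is why an area near 0.5 is no better than chance. A minimal rank-based (Mann-Whitney) sketch; the scores are purely illustrative, since the study's individual index values are not reported:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (event, non-event) score pairs ranked correctly,
    with ties counted as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative scores only (3 patients with events, 3 without):
print(round(roc_auc([3, 2, 2], [1, 2, 3]), 3))  # -> 0.611
```

For larger samples this O(n*m) pairwise loop would be replaced by a rank-based computation, but the pairwise form makes the probabilistic interpretation explicit.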
Abstract:
OBJECTIVE: To investigate preoperative predictive factors of severe perioperative intercurrent events and in-hospital mortality in coronary artery bypass graft (CABG) surgery and to develop specific models of risk prediction for these events, mainly those that can undergo changes in the preoperative period. METHODS: We prospectively studied 453 patients who had undergone CABG. Factors independently associated with the events of interest were determined with multiple logistic regression and the Cox proportional hazards regression model. RESULTS: The mortality rate was 11.3% (51/453), and 21.2% of the patients had 1 or more perioperative intercurrent events. In the final model, the following variables remained associated with the risk of intercurrent events: age ≥70 years, female sex, hospitalization via SUS (Sistema Único de Saúde - the Brazilian public health system), cardiogenic shock, ischemia, and dependence on dialysis. Using multiple logistic regression for in-hospital mortality, the following variables participated in the model of risk prediction: age ≥70 years, female sex, hospitalization via SUS, diabetes, renal dysfunction, and cardiogenic shock. According to the Cox regression model for death within the 7 days following surgery, the following variables remained associated with mortality: age ≥70 years, female sex, cardiogenic shock, and hospitalization via SUS. CONCLUSION: Aspects linked to the structure of the Brazilian health system emerged as factors with great impact on the results, indicating that the events investigated also depend on factors unrelated to the patient's intrinsic condition.
Abstract:
OBJECTIVE: To determine the value of the radiological study of the thorax for diagnosing left ventricular dilation and left ventricular systolic dysfunction in patients with Chagas' disease. METHODS: A cross-sectional study of 166 consecutive patients with Chagas' disease and no other associated diseases. The patients underwent cardiac assessment with chest radiography and Doppler echocardiography. Sensitivity, specificity, and positive and negative predictive values of chest radiography for detecting left ventricular dysfunction were calculated, and the accuracy of the cardiothoracic ratio in the diagnosis of left ventricular dysfunction was assessed using the area under the ROC curve. The cardiothoracic ratio was correlated with the left ventricular ejection fraction and the left ventricular diastolic diameter. RESULTS: An abnormal chest radiograph had a sensitivity of 50%, specificity of 80.5%, and positive and negative predictive values of 51.2% and 79.8%, respectively, in the diagnosis of left ventricular dysfunction. The cardiothoracic ratio showed a weak correlation with left ventricular ejection fraction (r=-0.23) and left ventricular diastolic diameter (r=0.30). The area under the ROC curve was 0.734. CONCLUSION: The radiological study of the thorax is not an accurate indicator of left ventricular dysfunction; its use as a screening method in the initial approach to patients with Chagas' disease should be reevaluated.