971 results for Immunologic Tests -- methods
Abstract:
PURPOSE. To evaluate the effect of disease severity on the diagnostic accuracy of the Cirrus Optical Coherence Tomograph (Cirrus HD-OCT; Carl Zeiss Meditec, Inc., Dublin, CA) for glaucoma detection. METHODS. One hundred thirty-five glaucomatous eyes of 99 patients and 79 normal eyes of 47 control subjects were recruited from the longitudinal Diagnostic Innovations in Glaucoma Study (DIGS). Disease severity was graded based on the visual field index (VFI) from standard automated perimetry. Imaging of the retinal nerve fiber layer (RNFL) was obtained using the optic disc cube protocol available on the Cirrus HD-OCT. Pooled receiver operating characteristic (ROC) curves were initially obtained for each parameter of the Cirrus HD-OCT. The effect of disease severity on diagnostic performance was evaluated by fitting an ROC regression model with VFI as a covariate and calculating the areas under the ROC curve (AUCs) for different levels of disease severity. RESULTS. The largest pooled AUCs were for average thickness (0.892), inferior quadrant thickness (0.881), and superior quadrant thickness (0.874). Disease severity had a significant influence on the detection of glaucoma. For the average RNFL thickness parameter, AUCs were 0.962, 0.932, 0.886, and 0.822 for VFIs of 70%, 80%, 90%, and 100%, respectively. CONCLUSIONS. Disease severity had a significant effect on the diagnostic performance of the Cirrus HD-OCT and should therefore be considered when interpreting results from this device and when weighing its potential applications for diagnosing glaucoma in various clinical settings. (Invest Ophthalmol Vis Sci. 2010;51:4104-4109) DOI:10.1167/iovs.094716
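The ROC analysis described above lends itself to a short illustration. The sketch below is not the study's actual model: it uses simulated RNFL thickness values and a made-up VFI covariate simply to show how a pooled AUC and severity-stratified AUCs can be computed; sample sizes echo the abstract, but every number is fabricated for illustration.

```python
# A minimal sketch (simulated data, not the study's): pooled AUC for a single
# OCT parameter, plus AUCs within hypothetical disease-severity strata.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 1 = glaucoma, 0 = control; RNFL thickness tends to be lower in glaucoma.
y = np.concatenate([np.ones(135), np.zeros(79)]).astype(int)
rnfl = np.concatenate([rng.normal(70, 12, 135),   # glaucomatous eyes (thinner)
                       rng.normal(95, 10, 79)])   # normal eyes

# Pooled AUC: roc_auc_score expects higher scores for the positive class,
# so we negate thickness (thinner RNFL -> more likely glaucoma).
print("pooled AUC:", round(roc_auc_score(y, -rnfl), 3))

# Hypothetical severity covariate (VFI, % of normal field); controls set to 100.
vfi = np.concatenate([rng.uniform(60, 100, 135), np.full(79, 100.0)])
for lo, hi in [(60, 80), (80, 90), (90, 101)]:
    keep = (y == 0) | ((vfi >= lo) & (vfi < hi))   # all controls + cases in stratum
    print(f"AUC, cases with VFI in [{lo},{hi}):",
          round(roc_auc_score(y[keep], -rnfl[keep]), 3))
```

A formal ROC regression with VFI as a covariate, as in the study, models the ROC curve directly as a function of severity rather than stratifying, but the stratified AUCs convey the same qualitative point.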
Abstract:
Objective. Chronic rhinosinusitis (CRS) is a risk factor for asthma exacerbations and is associated with greater clinical severity. Discrepancies may exist between the clinical diagnosis of CRS and data from paranasal sinus (PS) X-ray or computed tomography (CT) scans. The objective was to compare PS involvement assessed by low-dose CT and plain X-ray in allergic asthmatic patients with rhinitis. Methods. Patients underwent PS radiography in the frontal and mentonian positions and low-dose CT consisting of six to eight coronal scans performed on the central region of the sphenoidal, ethmoidal, maxillary, and frontal sinuses. Possible results for each sinus were a normal aspect or the presence of mucosal thickening, opacification, and/or an air-fluid level. Results. Eighty-five (93.4%) of 91 study patients had radiological changes on radiography or CT; in only six (6.6%) were both tests normal. The maxillary sinus was the most frequently involved by both methods. Simultaneous PS abnormalities were observed in 40.5% on X-ray and 56.7% on CT. For the frontal, ethmoidal, and sphenoidal sinuses, the proportion of normal results differed significantly between X-ray and CT: 80.2% versus 89%, 76.9% versus 63.7%, and 96.7% versus 70.3%, respectively (p < .05). Agreement was over 70% for the maxillary and frontal sinuses. CT also detected air-fluid level changes better than X-ray. Conclusions. Low-dose CT showed a significantly larger number of normal PS results and also diagnosed more severe PS lesions. As determining the true severity of sinus lesions has an impact on asthma control, low-dose CT may replace PS plain X-ray and conventional CT to support better clinical decisions.
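Because each patient contributes a paired X-ray/CT reading for the same sinus, a paired comparison of "normal" versus "abnormal" calls is the natural analysis. The sketch below uses hypothetical counts, not the study's raw data, to show McNemar's test on the discordant pairs together with the simple percent agreement the abstract reports.

```python
# A minimal sketch with hypothetical counts: paired comparison of "normal" vs
# "abnormal" calls by X-ray and low-dose CT for the same sinus in the same
# patients, using McNemar's test, plus simple percent agreement.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: X-ray (normal, abnormal); columns: CT (normal, abnormal).
# Counts are illustrative only.
table = np.array([[60, 13],
                  [ 4, 14]])

result = mcnemar(table, exact=True)          # exact binomial test on the discordant pairs
agreement = (table[0, 0] + table[1, 1]) / table.sum()

print(f"McNemar p-value: {result.pvalue:.3f}")
print(f"percent agreement: {100 * agreement:.1f}%")
```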
Abstract:
Purpose of review This review discusses ovarian reserve tests for ovulation induction and their application in determining fertility capacity, as well as their current use to assess the risk of natural ovarian failure and to estimate ovarian function after cancer treatment. Recent findings The current arsenal of ovarian reserve tests comprises hormonal markers [basal follicle stimulating hormone, estradiol, inhibin-B, antimullerian hormone (AMH)] and ultrasonographic markers [ovarian volume, antral follicle counts (AFCs)]. These markers have limitations regarding which test(s) should be used to reliably predict ovarian reserve in terms of accuracy, invasiveness, cost, convenience, and utility. Several studies have correlated sonographic AFCs with serum AMH levels for predicting the ovarian response to ovulation induction protocols during assisted reproduction treatments. Summary Serum AMH levels and AFC are reliable tests for predicting the ovarian response to ovulation induction. However, none of the currently employed tests of ovarian reserve can reliably predict pregnancy after assisted conception. Further, ovarian reserve tests cannot predict the onset of reproductive and hormonal menopause; thus, they should be used with caution for reproductive life-programming counseling. Moreover, there is no evidence to support the use of ovarian reserve tests to estimate the risk of ovarian insufficiency after cancer treatments.
Abstract:
Introduction: Two hundred ten patients with newly diagnosed Hodgkin's lymphoma (HL) were consecutively enrolled in this prospective trial to evaluate the cost-effectiveness of fluorine-18 ((18)F)-fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET) in the initial staging of patients with HL. Methods: All 210 patients were staged with conventional clinical staging (CCS) methods, including computed tomography (CT), bone marrow biopsy (BMB), and laboratory tests. Patients were also submitted to metabolic staging (MS) with whole-body FDG-PET scan before the beginning of treatment. A standard of reference for staging was determined from all staging procedures, histologic examination, and follow-up examinations. The accuracy of CCS was compared with that of MS. Local unit costs of procedures and tests were evaluated, and the incremental cost-effectiveness ratio (ICER) was calculated for both strategies. Results: In the 210 patients with HL, the sensitivity of FDG-PET for initial staging was higher than that of CT and BMB (97.9% vs. 87.3%, P < .001, and 94.2% vs. 71.4%, P < .003, respectively). The incorporation of FDG-PET in the staging procedure upstaged disease in 50 (24%) patients and downstaged disease in 17 (8%) patients. Changes in treatment would be seen in 32 (15%) patients. The cumulative cost of staging procedures was $3751/patient for CCS, compared with $5081 for CCS + PET and $4588 for PET/CT. The ICER of the PET/CT strategy was $16,215 per patient with modified treatment. PET/CT costs at the beginning and end of treatment would increase the total costs of HL staging and first-line treatment by only 2%. Conclusion: FDG-PET is more accurate than CT and BMB in HL staging. Given the observed probabilities, FDG-PET is highly cost-effective in the public health care program in Brazil.
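The ICER mentioned above is simply the incremental cost divided by the incremental effect. The sketch below shows that arithmetic using the per-patient staging costs quoted in the abstract and an assumed effect measure (the fraction of patients whose treatment is modified); it deliberately omits the study's full cost model, so it will not reproduce the reported $16,215 figure.

```python
# A minimal sketch of the incremental cost-effectiveness ratio (ICER) logic:
# (extra cost of the new strategy) / (extra effect). The effect here is the
# proportion of patients whose treatment is modified by the new staging
# information; the inputs below are simplified placeholders, not the study's
# cost model.
def icer(cost_new: float, cost_old: float, effect_new: float, effect_old: float) -> float:
    """Incremental cost per additional unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Per-patient staging costs from the abstract; effect values are assumptions
# (CCS alone is taken as modifying no treatments).
cost_ccs, cost_pet_ct = 3751.0, 4588.0
effect_ccs, effect_pet_ct = 0.0, 0.15

print(f"ICER: ${icer(cost_pet_ct, cost_ccs, effect_pet_ct, effect_ccs):,.0f} "
      "per patient with modified treatment")
```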
Abstract:
Introduction: The association between serological markers and the need for biological therapy in early rheumatoid arthritis (ERA) is not known, with few available data addressing this question. Objectives: To prospectively evaluate a cohort of patients with ERA (less than 12 months of symptoms) in order to determine the possible association between serological markers (rheumatoid factor (RF), anti-cyclic citrullinated peptide antibodies (anti-CCP), and anti-citrullinated vimentin antibodies (anti-Sa)) and therapeutic outcome (the latter defined by the need to introduce biological therapy). Patients and methods: Forty patients with early RA were evaluated at the time of diagnosis and followed for 3 years while receiving standardized treatment. Demographic and clinical data were recorded, as were serology tests (ELISA) for RF (IgM, IgG and IgA), anti-CCP (CCP2, CCP3 and CCP3.1) and anti-Sa at the initial evaluation and at 3, 6, 12, 18, 24 and 36 months of follow-up. The need for biological therapy during the follow-up period was considered the outcome of RA development. Comparisons were made using the Student t test, mixed-effects regression analysis and analysis of variance (significance level of 5%). Results: The mean age was 45 (+/- 12) years, with a female predominance (90%). At the time of diagnosis, RF was observed in 50% of cases (RF IgA - 42%, RF IgG - 30% and RF IgM - 50%), anti-CCP in 50% (no difference between CCP2, CCP3 and CCP3.1) and anti-Sa in 10%. After 3 years, no change in the prevalence of RF or anti-CCP was observed, but anti-Sa increased to 17.5% (p = 0.001). Biological therapy was necessary in 22.5% of patients. The mean RF IgA and anti-CCP2 levels during the 3 years were higher among patients who needed biological therapy (p < 0.05 for both). Conclusion: Higher titers of RF and anti-CCP over time were associated with the need for biological therapy.
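A mixed-effects regression over repeated serology measurements, as named in the methods above, can be sketched as follows. The data are simulated (40 hypothetical patients, the abstract's visit schedule, and an assumed difference in anti-CCP2 level between patients who do and do not go on to need biological therapy); only the model structure is the point.

```python
# A minimal sketch (simulated data, not the cohort's): a linear mixed-effects
# model of log anti-CCP2 level over follow-up, with a random intercept per
# patient and a fixed effect for whether the patient later required
# biological therapy.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = [0, 3, 6, 12, 18, 24, 36]          # visit schedule from the abstract
rows = []
for pid in range(40):
    needs_bio = int(pid < 9)                      # roughly 22.5% of 40 patients
    base = rng.normal(3.0 + 0.8 * needs_bio, 0.5) # assumed higher baseline if biologic needed
    for m in months:
        rows.append({"patient": pid,
                     "month": m,
                     "needs_bio": needs_bio,
                     "log_anti_ccp2": base + rng.normal(0, 0.3)})

df = pd.DataFrame(rows)
model = smf.mixedlm("log_anti_ccp2 ~ needs_bio + month", df, groups=df["patient"])
print(model.fit().summary())
```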
Abstract:
Purpose To assess the cost-effectiveness of fluorine-18-fluorodeoxyglucose positron emission tomography (FDG-PET) in patients with Hodgkin's lymphoma (HL) with unconfirmed complete remission (CRu) or partial remission (PR) after first-line treatment. Patients and Methods One hundred thirty patients with HL were prospectively studied. After treatment, all patients with CRu/PR were evaluated with FDG-PET. PET-negative patients were followed with standard follow-up, and PET-positive patients were evaluated with biopsies of the positive lesions. Local unit costs of procedures and tests were evaluated. Cost-effectiveness was determined by evaluating the projected annual economic impact of strategies without and with FDG-PET on HL management. Results After treatment, CRu/PR was observed in 50 (40.0%) of the 127 patients; the sensitivity, specificity, and positive and negative predictive values of FDG-PET were 100%, 92.0%, 92.3%, and 100%, respectively (accuracy of 95.9%). Local restaging costs were $350,050 without PET compared with $283,262 with PET, a 19% decrease. The incremental cost-effectiveness ratio was -$3,268 to detect one true case. PET costs represented 1% of the total costs of HL treatment. Simulated costs for the 974 patients registered in the 2008 Brazilian public health care database showed that the strategy including restaging PET would have a total program cost of $56,498,314, which is $516,942 less than without restaging PET, resulting in a 1% cost saving. Conclusion FDG-PET demonstrated 95.9% accuracy in restaging patients with HL with CRu/PR after first-line therapy. Given the observed probabilities, FDG-PET is highly cost-effective and would reduce costs for the public health care program in Brazil.
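The accuracy figures quoted above all derive from a single 2x2 confusion matrix. The sketch below shows the arithmetic with hypothetical counts chosen only to give values of the same order as those reported; the counts themselves are not taken from the trial.

```python
# A minimal sketch of the diagnostic-accuracy metrics quoted above, computed
# from a 2x2 confusion matrix. The counts are hypothetical.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical example: 24 true positives, 2 false positives,
# 0 false negatives, 23 true negatives.
for name, value in diagnostic_metrics(tp=24, fp=2, fn=0, tn=23).items():
    print(f"{name}: {value:.3f}")
```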
Abstract:
Introduction. Orthotopic heart transplantation renders the recipient's heart denervated. This remodeling of the intrinsic cardiac nervous system should be taken into account during functional evaluation for allograft coronary artery disease. Dobutamine stress echocardiography (DSE) has been used to detect patients at greater risk. The aim of this study was to determine whether patients with various autonomic response levels, and presumed reinnervation patterns, show the same response to DSE. Methods. We studied 20 patients who had survived more than 5 years after orthotopic heart transplantation. All patients underwent a Holter evaluation. Patients with less than a 40-bpm variation from the lowest to the highest heart rate were considered to have low variability, so-called "noninnervated" (group NI). Patients who had a variation of 40 bpm or more were considered to show high variability and were called "reinnervated" (group RI). All patients then underwent an ergometric test and DSE. Results. Groups were defined as NI (n = 9) and RI (n = 11). Ergometric tests confirmed this response, with NI patients showing less variability than RI patients (P = .0401). During DSE, patients showed similar median heart rate responses according to the dobutamine dose. Spearman correlation showed r = 1.0 (P = .016). Conclusions: DSE was effective in reaching higher heart rates, probably related to catecholamine infusion. These findings may justify a better response when evaluating cardiac allograft vasculopathy in heart transplant patients.
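Two small pieces of the methods above can be made concrete: the 40-bpm Holter heart-rate-range rule used to label patients NI or RI, and a Spearman correlation between the groups' median heart-rate responses across dobutamine doses. All numbers below are hypothetical, so the printed correlation is illustrative only.

```python
# A minimal sketch with hypothetical numbers, not the study's recordings.
from scipy.stats import spearmanr

def classify(holter_min_bpm: float, holter_max_bpm: float) -> str:
    """'RI' if the Holter heart-rate range is 40 bpm or more, else 'NI'."""
    return "RI" if (holter_max_bpm - holter_min_bpm) >= 40 else "NI"

print(classify(55, 110))   # -> 'RI'
print(classify(60, 92))    # -> 'NI'

# Hypothetical median heart rates (bpm) at increasing dobutamine doses.
ni_median_hr = [72, 80, 95, 110, 121]
ri_median_hr = [75, 86, 101, 130, 118]
rho, p = spearmanr(ni_median_hr, ri_median_hr)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```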
Abstract:
PURPOSE: To evaluate the safety and efficacy of corneal collagen crosslinking (CXL) in patients with painful pseudophakic bullous keratopathy (PBK). SETTING: University of Sao Paulo, Sao Paulo, and Sadalla Amin Ghanem Eye Hospital, Joinville, Santa Catarina, Brazil. METHODS: This prospective study included consecutive eyes with PBK that had CXL. After a 9.0 mm epithelial removal, riboflavin 0.1% with dextran 20% was applied for 30 minutes, followed by ultraviolet-A irradiation (370 nm, 3 mW/cm(2)). Therapeutic contact lenses were placed for 1 week. Corneal transparency, central corneal thickness (CCT), and ocular pain were assessed preoperatively and 1 and 6 months postoperatively. Statistical analysis was by paired t tests. RESULTS: Fourteen patients (14 eyes) with a mean age of 71.14 +/- 11.70 (SD) years (range 53 to 89 years) were enrolled. Corneal transparency was better in all eyes 1 month after surgery. At 6 months, corneal transparency was similar to preoperative levels (P = .218). The mean CCT was 747 μm preoperatively and 623 μm at 1 month; the decrease was statistically significant (P < .001). At 6 months, the mean CCT increased to 710 μm, still significantly thinner than preoperatively (P = .006). Pain scores at 6 months were not significantly different from preoperative scores (P = .066). CONCLUSIONS: Corneal CXL significantly improved corneal transparency, corneal thickness, and ocular pain 1 month postoperatively. However, it did not seem to have a long-lasting effect in decreasing pain and maintaining corneal transparency in patients with PBK.
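The pre- versus postoperative CCT comparison above is a paired t test. The sketch below runs that test on simulated paired measurements (assumed means roughly matching the abstract's 747 μm and 623 μm); it is an illustration of the test, not a re-analysis.

```python
# A minimal sketch (simulated values, not the study's measurements): a paired
# t test comparing central corneal thickness before CXL and at 1 month.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n = 14
cct_pre = rng.normal(747, 60, n)              # preoperative CCT, micrometers
cct_1mo = cct_pre - rng.normal(124, 30, n)    # assumed thinning at 1 month

t_stat, p_value = ttest_rel(cct_pre, cct_1mo)
print(f"mean pre = {cct_pre.mean():.0f} um, mean 1 mo = {cct_1mo.mean():.0f} um")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4g}")
```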
Abstract:
Background: Verbal fluency (VF) tasks are simple and efficient clinical tools to detect executive dysfunction and lexico-semantic impairment. VF tasks are widely used in patients with suspected dementia, but their accuracy for the detection of mild cognitive impairment (MCI) is still under investigation. Schooling in particular may influence a subject's performance. The aim of this study was to compare the accuracy of two semantic categories (animals and fruits) in discriminating controls, MCI patients and Alzheimer's disease (AD) patients. Methods: 178 subjects, comprising 70 controls (CG), 70 MCI patients and 38 AD patients, were tested on two semantic VF tasks. The sample was divided into two schooling groups: those with 4-8 years of education and those with 9 or more years. Results: Both VF tasks - animal fluency (VFa) and fruit fluency (VFf) - adequately discriminated CG from AD in the total sample (AUC = 0.88 +/- 0.03, p < 0.0001) and in both education groups, and discriminated highly educated MCI from AD (VFa: AUC = 0.82 +/- 0.05, p < 0.0001; VFf: AUC = 0.85 +/- 0.05, p < 0.0001). Both tasks were moderately accurate in discriminating CG from MCI (VFa: AUC = 0.68 +/- 0.04, p < 0.0001; VFf: AUC = 0.73 +/- 0.04, p < 0.0001) regardless of schooling level, and MCI from AD in the total sample (VFa: AUC = 0.74 +/- 0.05, p < 0.0001; VFf: AUC = 0.76 +/- 0.05, p < 0.0001). Neither of the two tasks differentiated low-educated MCI from AD. In the total sample, fruit fluency best discriminated CG from MCI and MCI from AD, and a combination of the two improved the discrimination between CG and AD. Conclusions: Both categories were similar in discriminating CG from AD, and the combination of both categories improved the accuracy of this distinction. Both tasks were less accurate in discriminating CG from MCI, and MCI from AD.
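The "combination of the two" categories mentioned above can be implemented as a logistic-regression score built from both fluency counts, whose AUC is then compared with each task alone. The sketch below does this on simulated scores (group sizes from the abstract, score distributions assumed), so the AUCs it prints are illustrative only.

```python
# A minimal sketch (simulated scores, not the study's data): AUC of each
# verbal-fluency task alone for separating controls from AD, and of a simple
# logistic-regression combination of the two.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n_cg, n_ad = 70, 38
y = np.concatenate([np.zeros(n_cg), np.ones(n_ad)])          # 1 = AD
vfa = np.concatenate([rng.normal(17, 4, n_cg), rng.normal(10, 4, n_ad)])  # animals
vff = np.concatenate([rng.normal(13, 3, n_cg), rng.normal(8, 3, n_ad)])   # fruits

print("AUC animals alone:", round(roc_auc_score(y, -vfa), 3))
print("AUC fruits alone: ", round(roc_auc_score(y, -vff), 3))

X = np.column_stack([vfa, vff])
combo = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]  # in-sample, for illustration
print("AUC combined:     ", round(roc_auc_score(y, combo), 3))
```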
Abstract:
Introduction: Glossodynia, or burning mouth syndrome (BMS), is a common and poorly understood disorder, and its treatment is uncertain. There is, however, some evidence of the importance of psychological factors in the genesis of this disease. Objectives: To verify the usefulness of group psychotherapy as an adjuvant therapeutic method in the treatment of BMS. Patients and Methods: The study group consisted of 64 consecutive patients with a clinical diagnosis of BMS seen at the Stomatology Outpatient Clinic, ENT Department, Sao Paulo University Medical School, between May 2002 and May 2007. All patients underwent physical examination, laboratory screening tests and psychological assessment (Crown-Crisp Experimental Inventory), and answered a short form of the McGill Pain Questionnaire. Only the 44 patients who did not show any abnormality in the protocol exams entered the study. Twenty-four of them underwent group psychotherapy, and twenty patients received placebo. The chi-square test was applied to compare the results of treatment with and without psychotherapy. Results: There were 15 men and 29 women in the study group. Tongue burning was the main complaint. Improvement of symptoms was reported by 17 (70.8%) of the patients undergoing psychotherapy, while eight (40%) of those who did not undergo psychotherapy improved (P = .04). Conclusion: Psychological assessment demonstrated a close correlation between symptoms and psychological factors, suggesting that group psychotherapy is an important alternative to conventional treatment methods. (C) 2009 Elsevier Inc. All rights reserved.
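The chi-square comparison above can be reproduced from the counts given in the abstract (17 of 24 improved with psychotherapy versus 8 of 20 without). Without a continuity correction the test gives a p-value close to the reported P = .04; the abstract does not say which variant of the test was used.

```python
# A minimal sketch of the 2x2 chi-square comparison reported above, using the
# counts stated in the abstract.
import numpy as np
from scipy.stats import chi2_contingency

#                 improved  not improved
table = np.array([[17, 7],      # group psychotherapy
                  [ 8, 12]])    # no psychotherapy

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```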
Abstract:
Objectives To assess the prevalence of alcoholism among the elderly living in the city of Sao Paulo (Brazil) and investigate associated risk factors. Methods A total of 1,563 individuals aged 60 years or older, of both genders, from three districts of different socioeconomic classes (high, medium and low) in the city of Sao Paulo (Brazil) were interviewed. The CAGE screening test for alcoholism was applied, and a structured interview was used to assess associated sociodemographic and clinical factors. The Mini Mental State Examination, the Fuld Object Memory Evaluation, the Informant Questionnaire on Cognitive Decline in the Elderly and the Bayer-Activities of Daily Living Scale were used for cognitive and functional assessment. Results The prevalence of alcoholism was 9.1%. Multivariate regression analysis showed that alcoholism was associated with male gender, 'mulatto' ethnicity, smoking, and cognitive and functional impairment. In addition, the younger the individual and the lower the schooling level, the higher the risk for alcoholism. Conclusions The results show that alcoholism is highly frequent among community-dwelling elderly living in Sao Paulo and that it is associated with sociodemographic and clinical risk factors similar to those reported in the literature. This suggests that alcoholism in the elderly of a developing country shares the same basic characteristics seen in developed countries. These findings suggest that it is essential for health services and professionals to be prepared to meet this demand, which will grow significantly in the coming years, especially in developing countries, where rates of population aging are higher than in developed countries. Copyright (C) 2009 John Wiley & Sons, Ltd.
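For completeness, the multivariate analysis named above is typically a logistic regression of the CAGE screening result on the candidate factors. The sketch below fits such a model on simulated data; the variable names, effect sizes and the whole dataset are assumptions, not the survey's data.

```python
# A minimal sketch (simulated data): a multivariable logistic regression of a
# CAGE-positive screen on a few of the covariates mentioned above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1563
df = pd.DataFrame({
    "male":      rng.integers(0, 2, n),
    "smoker":    rng.integers(0, 2, n),
    "age":       rng.integers(60, 95, n),
    "schooling": rng.integers(0, 16, n),
})
# Assumed effects, only so the simulated outcome has some structure.
logit_p = -3.2 + 1.1 * df.male + 0.9 * df.smoker - 0.03 * (df.age - 60) - 0.05 * df.schooling
df["cage_positive"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("cage_positive ~ male + smoker + age + schooling", df).fit(disp=False)
print(np.exp(model.params))   # odds ratios
```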
Abstract:
Aims We conducted a meta-analysis to evaluate the accuracy of quantitative stress myocardial contrast echocardiography (MCE) in coronary artery disease (CAD). Methods and results A database search was performed through January 2008. We included studies evaluating the accuracy of quantitative stress MCE for the detection of CAD compared with coronary angiography or single-photon emission computed tomography (SPECT) and measuring reserve parameters of A, beta, and A beta. Data from the studies were verified and supplemented by the authors of each study. Using random-effects meta-analysis, we estimated weighted mean differences (WMDs), likelihood ratios (LRs), diagnostic odds ratios (DORs), and the summary area under the curve (AUC), all with 95% confidence intervals (CIs). Of 1443 studies, 13 including 627 patients (age range, 38-75 years) and comparing MCE with angiography (n = 10), SPECT (n = 1), or both (n = 2) were eligible. Reserve parameters were significantly lower in the CAD group than in the no-CAD group, with WMDs (95% CI) of 0.12 (0.06-0.18) (P < 0.001), 1.38 (1.28-1.52) (P < 0.001), and 1.47 (1.18-1.76) (P < 0.001) for the A, beta, and A beta reserves, respectively. Pooled LRs for a positive test were 1.33 (1.13-1.57), 3.76 (2.43-5.80), and 3.64 (2.87-4.78), and LRs for a negative test were 0.68 (0.55-0.83), 0.30 (0.24-0.38), and 0.27 (0.22-0.34) for the A, beta, and A beta reserves, respectively. Pooled DORs were 2.09 (1.42-3.07), 15.11 (7.90-28.91), and 14.73 (9.61-22.57), and AUCs were 0.637 (0.594-0.677), 0.851 (0.828-0.872), and 0.859 (0.842-0.750) for the A, beta, and A beta reserves, respectively. Conclusion The evidence supports the use of quantitative MCE as a non-invasive test for the detection of CAD. Standardizing MCE quantification analysis and adherence to reporting standards for diagnostic tests could enhance the quality of evidence in this field.
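The random-effects pooling used above can be illustrated with a small DerSimonian-Laird routine on the log-DOR scale. The per-study DORs and confidence intervals below are placeholders, not the 13 included studies; the point is the pooling arithmetic (fixed-effect weights, the Q statistic, the between-study variance tau^2, then random-effects weights).

```python
# A minimal sketch of DerSimonian-Laird random-effects pooling of diagnostic
# odds ratios on the log scale, with made-up study-level inputs.
import numpy as np

def dersimonian_laird(y, v):
    """Pool effect estimates y with within-study variances v (random effects)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical study-level DORs with 95% CIs, as (dor, lower, upper).
dors = [(12.0, 4.0, 36.0), (20.0, 6.0, 66.0), (9.0, 3.0, 27.0), (18.0, 5.0, 64.0)]
log_dor = np.log([d[0] for d in dors])
# Back out the SE of log DOR from the CI width: (ln(upper) - ln(lower)) / (2 * 1.96).
se_log = np.array([(np.log(u) - np.log(l)) / (2 * 1.96) for _, l, u in dors])

pooled, se, tau2 = dersimonian_laird(log_dor, se_log ** 2)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled DOR = {np.exp(pooled):.1f} (95% CI {np.exp(lo):.1f}-{np.exp(hi):.1f}), tau^2 = {tau2:.3f}")
```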
Abstract:
Introduction: A delay in the diagnosis of infections can be deleterious in renal transplant recipients; thus, laboratory tests leading to an earlier diagnosis are very useful for these patients. Purpose: To assess the behavior of C-reactive protein (CRP) in renal transplant recipients with a diagnosis of cytomegalovirus (CMV) infection, tuberculosis (TB) or bacterial infection (BI). Methods: A retrospective analysis of 129 patients admitted to our hospital from 2006 to 2008 because of CMV, TB or BI was carried out. Appropriate statistical analysis was done, and values were expressed as medians (range). Results: When CRP levels were compared among the groups with CMV disease, TB or BI, the group with CMV disease presented lower levels of CRP (18.4 mg/L, 0.28-44 mg/L) than the TB and BI groups (p < 0.05). The area under the receiver-operating characteristic curve for distinguishing CMV disease from TB/BI was 0.96 (p < 0.0001), resulting in 100% sensitivity and 90.63% specificity for detecting CMV disease when CRP < 44.5 mg/L. The subgroup analysis of CMV infection showed increasing levels of CRP (0.28, 16 and 29.5 mg/L) in the asymptomatic, symptomatic and invasive disease subgroups, respectively (p < 0.05). Conclusion: The measurement of CRP levels may be a useful tool for differentiating CMV infection from the other types of infection (bacterial or TB) in kidney transplant recipients.
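The CRP cutoff above is the kind of threshold obtained from an ROC curve, for example by maximizing Youden's J (sensitivity + specificity - 1). The sketch below does this on simulated CRP values (lower in CMV disease, as in the abstract); the cutoff and AUC it prints are illustrative, not the study's.

```python
# A minimal sketch (simulated CRP values): an ROC curve for CRP as a
# discriminator between CMV disease and TB/bacterial infection, choosing the
# cutoff that maximizes Youden's J. CMV is coded as the "positive" class, so
# lower CRP must score higher (hence the negation).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
crp_cmv = rng.gamma(shape=2.0, scale=9.0, size=40)      # assumed lower CRP in CMV disease
crp_tb_bi = rng.gamma(shape=4.0, scale=30.0, size=89)   # assumed higher CRP in TB / BI

crp = np.concatenate([crp_cmv, crp_tb_bi])
y = np.concatenate([np.ones(40), np.zeros(89)])         # 1 = CMV disease

fpr, tpr, thresholds = roc_curve(y, -crp)
j = tpr - fpr                                           # Youden's J at each threshold
best = np.argmax(j)
print(f"AUC = {roc_auc_score(y, -crp):.3f}")
print(f"best cutoff: CRP < {-thresholds[best]:.1f} mg/L "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```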