27 results for Clinical methods
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: Mild cognitive impairment (MCI) has been defined as a transitional state between normal aging and dementia. In many cases, MCI represents an early stage of developing cognitive impairment. Patients diagnosed with MCI do not meet the criteria for dementia as their general intellect and everyday activities are preserved, although minor changes in instrumental activities of daily living (ADL) may occur. However, they may exhibit significant behavioral and psychological signs and symptoms (BPS), also frequently observed in patients with Alzheimer's disease (AD). Hence, we wondered to what extent specific BPS are associated with cognitive decline in participants with MCI or AD. METHODS: Our sample consisted of 164 participants, including 46 patients with amnestic (single or multi-domain) MCI and 54 patients with AD, as well as 64 control participants without cognitive disorders. Global cognitive performance, BPS, and ADL were assessed using validated clinical methods at baseline and at two-year follow-up. RESULTS: The BPS variability over the follow-up period was more pronounced in the MCI group than in patients with AD: some BPS improved, others newly appeared or worsened, while still others remained unchanged. Moreover, specific changes in BPS were associated with a rapid deterioration of the global cognitive level in MCI patients. In particular, an increase of euphoria, eating disorders, and aberrant motor behavior, as well as worsened sleep quality, predicted a decline in cognitive functioning. CONCLUSIONS: Our findings confirm a higher variability of BPS over time in the MCI group than in AD patients. Moreover, our results provide evidence of associations between specific BPS and cognitive decline in the MCI group that might suggest a risk of conversion of individuals with amnestic MCI to AD.
Abstract:
PURPOSE: To evaluate a diagnostic strategy for pulmonary embolism that combined clinical assessment, plasma D-dimer measurement, lower limb venous ultrasonography, and helical computed tomography (CT). METHODS: A cohort of 965 consecutive patients presenting to the emergency departments of three general and teaching hospitals with clinically suspected pulmonary embolism underwent sequential noninvasive testing. Clinical probability was assessed by a prediction rule combined with implicit judgment. All patients were followed for 3 months. RESULTS: A normal D-dimer level (<500 microg/L by a rapid enzyme-linked immunosorbent assay) ruled out venous thromboembolism in 280 patients (29%), and finding a deep vein thrombosis by ultrasonography established the diagnosis in 92 patients (9.5%). Helical CT was required in only 593 patients (61%) and showed pulmonary embolism in 124 patients (12.8%). Pulmonary embolism was considered ruled out in the 450 patients (46.6%) with a negative ultrasound and CT scan and a low-to-intermediate clinical probability. The 8 patients with a negative ultrasound and CT scan despite a high clinical probability proceeded to pulmonary angiography (positive: 2; negative: 6). Helical CT was inconclusive in 11 patients (pulmonary embolism: 4; no pulmonary embolism: 7). The overall prevalence of pulmonary embolism was 23%. Patients classified as not having pulmonary embolism were not anticoagulated during follow-up and had a 3-month thromboembolic risk of 1.0% (95% confidence interval: 0.5% to 2.1%). CONCLUSION: A noninvasive diagnostic strategy combining clinical assessment, D-dimer measurement, ultrasonography, and helical CT yielded a diagnosis in 99% of outpatients suspected of pulmonary embolism, and appeared to be safe, provided that CT was combined with ultrasonography to rule out the disease.
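The sequential rule-out strategy described above can be expressed as a small decision function. The sketch below is an illustrative reconstruction of the published decision flow, not software from the study itself; the function name, argument names, and return strings are all hypothetical.

```python
def pe_workup(d_dimer_ug_l, dvt_on_ultrasound, ct_shows_pe, clinical_probability):
    """Sketch of the sequential noninvasive strategy for suspected PE.

    d_dimer_ug_l: plasma D-dimer in microg/L (rapid ELISA)
    dvt_on_ultrasound: True if lower limb ultrasonography shows a DVT
    ct_shows_pe: True, False, or None (inconclusive helical CT)
    clinical_probability: 'low', 'intermediate', or 'high'
    """
    # Step 1: a normal D-dimer (<500 microg/L) rules out venous thromboembolism.
    if d_dimer_ug_l < 500:
        return "ruled out (normal D-dimer)"
    # Step 2: a deep vein thrombosis on ultrasonography establishes the diagnosis.
    if dvt_on_ultrasound:
        return "confirmed (DVT on ultrasound)"
    # Step 3: helical CT, performed only when steps 1-2 are not decisive.
    if ct_shows_pe is None:
        return "inconclusive CT: further testing"
    if ct_shows_pe:
        return "confirmed (CT)"
    # Step 4: negative ultrasound and CT -- interpretation depends on the
    # pretest clinical probability; high probability proceeds to angiography.
    if clinical_probability in ("low", "intermediate"):
        return "ruled out (negative ultrasound and CT)"
    return "pulmonary angiography"
```

In the cohort, step 1 alone resolved 29% of patients and step 2 another 9.5%, so CT was needed in only 61% — the ordering of the checks mirrors that triage.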
Abstract:
OBJECTIVE. The purpose of this study was to evaluate the prevalence of mesenteric venous thrombosis (MVT) in the Swiss Inflammatory Bowel Disease Cohort Study and to correlate MVT with clinical outcome. MATERIALS AND METHODS. Abdominal portal phase CT was used to examine patients with inflammatory bowel disease (IBD). Two experienced abdominal radiologists retrospectively analyzed the images, focusing on the superior and inferior mesenteric vein branches and looking for signs of acute or chronic thrombosis. The location of abnormalities was registered. The presence of MVT was correlated with IBD-related radiologic signs and complications. RESULTS. The cases of 160 patients with IBD (89 women, 71 men; Crohn disease [CD], 121 patients; ulcerative colitis [UC], 39 patients; median age at diagnosis, 27 years for patients with CD, 32 years for patients with UC) were analyzed. MVT was detected in 43 patients with IBD (26.8%). One of these patients had acute MVT; 38, chronic MVT; and four, both. The prevalence of MVT did not differ between CD (35/121 [28.9%]) and UC (8/39 [20.5%]) (p = 0.303). The location of thrombosis differed between CD and UC (CD, jejunal or ileal veins only [p = 0.005]; UC, rectocolic veins only [p = 0.001]). Almost all (41/43) cases of thrombosis were peripheral. MVT in CD patients was more frequently associated with bowel wall thickening (p = 0.013), mesenteric fat hypertrophy (p = 0.005), ascites (p = 0.002), and mesenteric lymph node enlargement (p = 0.036) and was associated with a higher rate of bowel stenosis (p < 0.001) and more intestinal IBD-related surgery (p = 0.016) at outcome. Statistical analyses for patients with UC were not relevant because of the limited population (n = 8). CONCLUSION. MVT is frequently found in patients with IBD. Among patients with CD, MVT is associated with bowel stenosis and CD-related intestinal surgery.
Abstract:
RATIONALE: A dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis is a well-documented neurobiological finding in major depression. Moreover, clinically effective therapy with antidepressant drugs may normalize the HPA axis activity. OBJECTIVE: The aim of this study was to test whether citalopram (R/S-CIT) affects the function of the HPA axis in patients with major depression (DSM IV). METHODS: Twenty depressed patients (11 women and 9 men) were challenged with a combined dexamethasone (DEX) suppression and corticotropin-releasing hormone (CRH) stimulation test (DEX/CRH test) following a placebo week and after 2, 4, and 16 weeks of 40 mg/day R/S-CIT treatment. RESULTS: The results show a time-dependent reduction of adrenocorticotrophic hormone (ACTH) and cortisol response during the DEX/CRH test both in treatment responders and nonresponders within 16 weeks. There was a significant relationship between post-DEX baseline cortisol levels (measured before administration of CRH) and severity of depression at pretreatment baseline. Multiple linear regression analyses were performed to identify the impact of psychopathology and hormonal stress responsiveness and R/S-CIT concentrations in plasma and cerebrospinal fluid (CSF). The magnitude of decrease in cortisol responsivity from pretreatment baseline to week 4 on drug [delta-area under the curve (AUC) cortisol] was a significant predictor (p<0.0001) of the degree of symptom improvement following 16 weeks on drug (i.e., decrease in HAM-D21 total score). The model demonstrated that the interaction of CSF S-CIT concentrations and clinical improvement was the most powerful predictor of AUC cortisol responsiveness. CONCLUSION: The present study shows that decreased AUC cortisol was highly associated with S-CIT concentrations in plasma and CSF. 
Therefore, our data suggest that the CSF or plasma S-CIT concentrations rather than the R/S-CIT dose should be considered as an indicator of the selective serotonergic reuptake inhibitors (SSRIs) effect on HPA axis responsiveness as measured by AUC cortisol response.
Abstract:
OBJECTIVE: To assess the impact of hypertrophy of the future liver remnant (FLR) induced by preoperative portal vein embolization (PVE) on immediate postoperative complications after a standardized major liver resection. SUMMARY BACKGROUND DATA: PVE is usually indicated when the FLR is estimated to be too small for major liver resection. However, few data exist regarding the exact quantification of the minimal functional hepatic volume required to avoid postoperative complications in patients with or without chronic liver disease. METHODS: All consecutive patients in whom an elective right hepatectomy was feasible and who fulfilled the inclusion and exclusion criteria between 1998 and 2000 were alternately assigned to either immediate surgery or surgery after PVE. Among 55 patients (25 liver metastases, 2 cholangiocarcinomas, and 28 hepatocellular carcinomas), 28 underwent right hepatectomy after PVE and 27 underwent immediate surgery. Twenty-eight patients had chronic liver disease. FLR and estimated rate of functional future liver remnant (%FFLR) volumes were assessed by computed tomography. RESULTS: The mean increases of FLR and %FFLR 4 to 8 weeks after PVE were respectively 44 ± 19% and 16 ± 7% for patients with normal liver and 35 ± 28% and 9 ± 3% for those with chronic liver disease. All patients with normal liver and 86% of those with chronic liver disease experienced hypertrophy after PVE. The postoperative course of patients with normal liver who underwent PVE before right hepatectomy was similar to that of those who had immediate surgery. In contrast, PVE in patients with chronic liver disease significantly decreased the incidence of postoperative complications as well as the intensive care unit stay and total hospital stay after right hepatectomy. CONCLUSIONS: Before elective right hepatectomy, the hypertrophy of the FLR induced by PVE had no beneficial effect on the postoperative course in patients with normal liver.
In contrast, in patients with chronic liver disease, the hypertrophy of the FLR induced by PVE significantly decreased the rate of postoperative complications.
Abstract:
OBJECTIVES: To evaluate the outcome after Hartmann's procedure (HP) versus primary anastomosis (PA) with diverting ileostomy for perforated left-sided diverticulitis. BACKGROUND: The surgical management of left-sided colonic perforation with purulent or fecal peritonitis remains controversial. PA with ileostomy seems to be superior to HP; however, results in the literature are affected by a significant selection bias. No randomized clinical trial has yet compared the 2 procedures. METHODS: Sixty-two patients with acute left-sided colonic perforation (Hinchey III and IV) from 4 centers were randomized to HP (n = 30) and to PA (with diverting ileostomy, n = 32), with a planned stoma reversal operation after 3 months in both groups. Data were analyzed on an intention-to-treat basis. The primary end point was the overall complication rate. The study was discontinued following an interim analysis that found significant differences in relevant secondary end points as well as a decreasing accrual rate (NCT01233713). RESULTS: Patient demographics were equally distributed in both groups (Hinchey III: 76% vs 75% and Hinchey IV: 24% vs 25%, for HP vs PA, respectively). The overall complication rate for both resection and stoma reversal operations was comparable (80% vs 84%, P = 0.813). Although the outcome after the initial colon resection did not show any significant differences (mortality 13% vs 9% and morbidity 67% vs 75% in HP vs PA), the stoma reversal rate after PA with diverting ileostomy was higher (90% vs 57%, P = 0.005), and serious complications (Grades IIIb-IV: 0% vs 20%, P = 0.046), operating time (73 minutes vs 183 minutes, P < 0.001), hospital stay (6 days vs 9 days, P = 0.016), and in-hospital costs (US $16,717 vs US $24,014) were all significantly lower in the PA group. CONCLUSIONS: This is the first randomized clinical trial favoring PA with diverting ileostomy over HP in patients with perforated diverticulitis.
Abstract:
STUDY DESIGN: Prospective, controlled, observational outcome study using clinical, radiographic, and patient/physician-based questionnaire data, with patient outcomes at 12 months follow-up. OBJECTIVE: To validate appropriateness criteria for low back surgery. SUMMARY OF BACKGROUND DATA: Most surgical treatment failures are attributed to poor patient selection, but no widely accepted consensus exists on detailed indications for appropriate surgery. METHODS: Appropriateness criteria for low back surgery have been developed by a multispecialty panel using the RAND appropriateness method. Based on panel criteria, a prospective study compared outcomes of patients appropriately and inappropriately treated at a single institution with 12 months follow-up assessment. Included were patients with low back pain and/or sciatica referred to the neurosurgical department. Information about symptoms, neurologic signs, the health-related quality of life (SF-36), disability status (Roland-Morris), and pain intensity (VAS) was assessed at baseline, at 6 months, and at 12 months follow-up. The appropriateness criteria were administered prospectively to each clinical situation and outside of the clinical setting, with the surgeon and patients blinded to the results of the panel decision. The patients were further stratified into 2 groups: appropriate treatment group (ATG) and inappropriate treatment group (ITG). RESULTS: Overall, 398 patients completed all forms at 12 months. Treatment was considered appropriate for 365 participants and inappropriate for 33 participants. The mean improvement in the SF-36 physical component score at 12 months was significantly higher in the ATG (mean: 12.3 points) than in the ITG (mean: 6.8 points) (P = 0.01), as well as the mean improvement in the SF-36 mental component score (ATG mean: 5.0 points; ITG mean: -0.5 points) (P = 0.02). 
Improvement was also significantly higher in the ATG for the mean VAS back pain score (ATG mean: 2.3 points; ITG mean: 0.8 points; P = 0.02) and the Roland-Morris disability score (ATG mean: 7.7 points; ITG mean: 4.2 points; P = 0.004). The ATG also had a greater improvement in mean VAS for sciatica (4.0 points) than the ITG (2.8 points), but the difference was not significant (P = 0.08). The SF-36 General Health score declined in both groups after 12 months; however, the decline was worse in the ITG (mean decline: 8.2 points) than in the ATG (mean decline: 1.2 points) (P = 0.04). Overall, in comparison to ITG patients, ATG patients had significantly higher improvement at 12 months, both statistically and clinically. CONCLUSION: Compared with the previously reported literature, our study is the first to assess the utility of appropriateness criteria for low back surgery at 1-year follow-up with multiple outcome dimensions. Our results confirm the hypothesis that application of appropriateness criteria can significantly improve patient outcomes.
Abstract:
Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been introduced in diagnostic microbiology laboratories for the identification of bacterial and yeast strains isolated from clinical samples. In the present study, we prospectively compared MALDI-TOF MS to the conventional phenotypic method for the identification of routine isolates. Colonies were analyzed by MALDI-TOF MS either by direct deposition on the target plate or after a formic acid-acetonitrile extraction step if no valid result was initially obtained. Among 1,371 isolates identified by conventional methods, 1,278 (93.2%) were putatively identified to the species level by MALDI-TOF MS and 73 (5.3%) were identified to the genus level, but no reliable identification was obtained for 20 (1.5%). Among the 1,278 isolates identified to the species level by MALDI-TOF MS, 63 (4.9%) discordant results were initially identified. Most discordant results (42/63) were due to systematic database-related taxonomical differences, 14 were explained by poor discrimination of the MALDI-TOF MS spectra obtained, and 7 were due to errors in the initial conventional identification. An extraction step was required to obtain a valid MALDI-TOF MS identification for 25.6% of the 1,278 valid isolates. In conclusion, our results show that MALDI-TOF MS is a fast and reliable technique which has the potential to replace conventional phenotypic identification for most bacterial strains routinely isolated in clinical microbiology laboratories.
Abstract:
OBJECTIVE: We examined the correlation between clinical wear rates of restorative materials and enamel (TRAC Research Foundation, Provo, USA) and the results of six laboratory test methods (ACTA, Alabama (generalized, localized), Ivoclar (vertical, volumetric), Munich, OHSU (abrasion, attrition), Zurich). METHODS: Individual clinical wear data were available from clinical trials that were conducted by TRAC Research Foundation (formerly CRA) together with general practitioners. For each of the n=28 materials (21 composite resins for intra-coronal restorations [20 direct and 1 indirect], 5 resin materials for crowns, 1 amalgam, enamel) a minimum of 30 restorations had been placed in posterior teeth, mainly molars. The recall intervals were up to 5 years, although the majority of materials (n=27) were monitored only for up to 2 years. For the laboratory data, the databases MEDLINE and IADR abstracts were searched for wear data on materials which were also clinically tested by TRAC Research Foundation. Only those data for which the same test parameters (e.g. number of cycles, loading force, type of antagonist) had been published were included in the study. A different amount of data was available for each laboratory method: Ivoclar (n=22), Zurich (n=20), Alabama (n=17), OHSU and ACTA (n=12), Munich (n=7). The clinical results were summed up in an index, and a linear mixed model was fitted to the log wear measurements including the following factors: material, time (0.5, 1, 2 and 3 years), tooth (premolar/molar) and gender (male/female) as fixed effects, and patient as random effect. Relative ranks were created for each material and method; the same was performed with the clinical results. RESULTS: The mean age of the subjects was 40 (±12) years. The materials had been mostly applied in molars (81%) and 95% of the intracoronal restorations were Class II restorations. The mean number of individual wear data per material was 25 (range 14-42).
The mean coefficient of variation of clinical wear data was 53%. The only significant correlation was reached by OHSU (abrasion) with a Spearman r of 0.86 (p=0.001). Zurich, ACTA, Alabama generalized wear and Ivoclar (volume) had correlation coefficients between 0.3 and 0.4. For Zurich, Alabama generalized wear and Munich, the correlation coefficient improved if only composites for direct use were taken into consideration. The combination of different laboratory methods did not significantly improve the correlation. SIGNIFICANCE: The clinical wear of composite resins is mainly dependent on differences between patients and less on the differences between materials. Laboratory methods to test conventional resins for wear are therefore less important, especially since most of them do not reflect the clinical wear.
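The comparison above rests on Spearman rank correlation between the relative ranks of each laboratory method and the clinical index. As a minimal illustration of that statistic (a pure-Python sketch, not the study's statistical software), Spearman's r is simply the Pearson correlation computed on ranks, with tied values receiving their average rank:

```python
def ranks(xs):
    """Assign 1-based ranks to values; ties get the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A value of 0.86, as reported for OHSU abrasion, indicates that the laboratory ranking of materials largely reproduces the clinical ranking; values of 0.3-0.4 indicate only weak agreement.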
Impact of low-level viremia on clinical and virological outcomes in treated HIV-1-infected patients.
Abstract:
BACKGROUND: The goal of antiretroviral therapy (ART) is to reduce HIV-related morbidity and mortality by suppressing HIV replication. The prognostic value of persistent low-level viremia (LLV), particularly for clinical outcomes, is unknown. OBJECTIVE: To assess the association of different levels of LLV with virological failure, AIDS events, and death among HIV-infected patients receiving combination ART. METHODS: We analyzed data from 18 cohorts in Europe and North America contributing to the ART Cohort Collaboration. Eligible patients achieved a viral load below 50 copies/ml within 3-9 months after ART initiation. LLV50-199 was defined as two consecutive viral loads between 50 and 199 copies/ml, and LLV200-499 as two consecutive viral loads between 50 and 499 copies/ml with at least one between 200 and 499 copies/ml. We used Cox models to estimate the association of LLV with virological failure (two consecutive viral loads of at least 500 copies/ml, or one viral load of at least 500 copies/ml followed by a modification of ART) and AIDS event/death. RESULTS: Among 17 902 patients, 624 (3.5%) experienced LLV50-199 and 482 (2.7%) LLV200-499. Median follow-up was 2.3 and 3.1 years for virological and clinical outcomes, respectively. There were 1903 virological failures, 532 AIDS events, and 480 deaths. LLV200-499 was strongly associated with virological failure [adjusted hazard ratio (aHR) 3.97, 95% confidence interval (CI) 3.05-5.17]. LLV50-199 was weakly associated with virological failure (aHR 1.38, 95% CI 0.96-2.00). LLV50-199 and LLV200-499 were not associated with AIDS event/death (aHR 1.19, 95% CI 0.78-1.82; and aHR 1.11, 95% CI 0.72-1.71, respectively). CONCLUSION: LLV200-499 was strongly associated with virological failure, but not with AIDS event/death. Our results support the US guidelines, which define virological failure as a confirmed viral load above 200 copies/ml.
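The LLV definitions above amount to a small classification rule over a pair of consecutive viral loads. A minimal sketch (function name and return labels are illustrative, not from the study):

```python
def classify_llv(vl1, vl2):
    """Classify two consecutive viral loads (copies/ml) per the LLV definitions.

    Returns 'LLV50-199', 'LLV200-499', or None when the pair does not meet
    either definition of persistent low-level viremia.
    """
    # Both measurements must fall in the 50-499 copies/ml band.
    if not (50 <= vl1 <= 499 and 50 <= vl2 <= 499):
        return None
    # LLV200-499 requires at least one value between 200 and 499 copies/ml.
    if max(vl1, vl2) >= 200:
        return "LLV200-499"
    # Otherwise both values lie between 50 and 199 copies/ml.
    return "LLV50-199"
```

Note that a pair such as (120, 350) counts as LLV200-499 even though one value is below 200, which is exactly why the definition reads "with at least one between 200 and 499 copies/ml".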
Abstract:
BACKGROUND: The aim of this study was to evaluate the efficacy and tolerability of fulvestrant, an estrogen receptor antagonist, in postmenopausal women with hormone-responsive tumors progressing after aromatase inhibitor (AI) treatment. PATIENTS AND METHODS: This is a phase II, open, multicenter, noncomparative study. Two patient groups were prospectively considered: group A (n=70) with AI-responsive disease and group B (n=20) with AI-resistant disease. Fulvestrant 250 mg was administered as an intramuscular injection every 28 (±3) days. RESULTS: All patients were pretreated with AI and 84% also with tamoxifen or toremifene; 67% had bone metastases and 45% liver metastases. Fulvestrant administration was well tolerated and yielded a clinical benefit (CB; defined as objective response or stable disease [SD] for ≥24 weeks) in 28% (90% confidence interval [CI] 19% to 39%) of patients in group A and 37% (90% CI 19% to 58%) of patients in group B. Median time to progression (TTP) was 3.6 (95% CI 3.0 to 4.8) months in group A and 3.4 (95% CI 2.5 to 6.7) months in group B. CONCLUSIONS: Overall, 30% of patients who had progressed following prior AI treatment gained CB with fulvestrant, thereby delaying the indication to start chemotherapy. Prior response to an AI did not appear to be predictive for benefit with fulvestrant.
Abstract:
PURPOSE: Investigation of the incidence and distribution of congenital structural cardiac malformations among the offspring of mothers with type 1 diabetes, and of the influence of periconceptional glycemic control. METHODS: Multicenter retrospective clinical study, literature review, and meta-analysis. The incidence and pattern of congenital heart disease in our study population and in the literature on the offspring of type 1 diabetic mothers were compared with the incidence and spectrum of the various cardiovascular defects in the offspring of nondiabetic mothers as registered by EUROCAT Northern Netherlands. Medical records were, in addition, reviewed for HbA1c during the 1st trimester. RESULTS: The distribution of congenital heart anomalies in our diabetic study population was in accordance with the distribution encountered in the literature. This distribution differed considerably from that in the nondiabetic population. Approximately half the cardiovascular defects were conotruncal anomalies. Our study demonstrated a remarkable increase in the likelihood of visceral heterotaxia and variants of single ventricle among these patients. As expected, elevated HbA1c values during the 1st trimester were associated with fetal cardiovascular defects in the offspring. CONCLUSION: This study shows an increased likelihood of specific heart anomalies, namely transposition of the great arteries, persistent truncus arteriosus, visceral heterotaxia, and single ventricle, among offspring of diabetic mothers. This suggests a profound teratogenic effect at a very early stage in cardiogenesis. The study emphasizes the frequency with which the offspring of diabetes-complicated pregnancies suffer from complex forms of congenital heart disease. Pregnancies with poor 1st-trimester glycemic control are more prone to the presence of fetal heart disease.
Abstract:
BACKGROUND: The Adolescent Drug Abuse Diagnosis (ADAD) and the Health of the Nation Outcome Scales for Children and Adolescents (HoNOSCA) are both outcome measures for adolescent mental health services. AIMS: To compare the ADAD with HoNOSCA and to examine their clinical usefulness. METHODS: Comparison of the ADAD and HoNOSCA outcome measures of 20 adolescents attending a psychiatric day care unit. RESULTS: ADAD change was positively correlated with HoNOSCA change. HoNOSCA assesses the clinic's day-care programme more positively than the ADAD. The ADAD detects a group for which the mean score remains unchanged, whereas HoNOSCA does not. CONCLUSIONS: A good convergent validity emerges between the two assessment tools. The ADAD allows an evidence-based assessment and generally enables better subject discrimination than HoNOSCA. HoNOSCA gives a less refined evaluation but is more economical in time and possibly more sensitive to change. Both assessment tools give useful information and enabled the Day-care Unit for Adolescents to rethink the process of care and of outcome, which benefited both the institution and the patients.
Abstract:
The technique of sentinel lymph node (SLN) dissection is a reliable predictor of metastatic disease in the lymphatic basin draining the primary melanoma. Reverse transcription-polymerase chain reaction (RT-PCR) is emerging as a highly sensitive technique to detect micrometastases in SLNs, but its specificity has been questioned. A prospective SLN study in melanoma patients was undertaken to compare in detail immunopathological versus molecular detection methods. Sentinel lymphadenectomy was performed on 57 patients, with a total of 71 SLNs analysed. SLNs were cut in slices, which were alternatively subjected to parallel multimarker analysis by microscopy (haematoxylin and eosin and immunohistochemistry for HMB-45, S100, tyrosinase and Melan-A/MART-1) and RT-PCR (for tyrosinase and Melan-A/MART-1). Metastases were detected by both methods in 23% of the SLNs (28% of the patients). The combined use of Melan-A/MART-1 and tyrosinase amplification increased the sensitivity of PCR detection of microscopically proven micrometastases. Of the 55 immunopathologically negative SLNs, 25 were found to be positive on RT-PCR. Notably, eight of these SLNs contained naevi, all of which were positive for tyrosinase and/or Melan-A/MART-1, as detected at both mRNA and protein level. The remaining 41% of the SLNs were negative on both immunohistochemistry and RT-PCR. Analysis of a series of adjacent non-SLNs by RT-PCR confirmed the concept of orderly progression of metastasis. Clinical follow-up showed disease recurrence in 12% of the RT-PCR-positive immunopathology-negative SLNs, indicating that even an extensive immunohistochemical analysis may underestimate the presence of micrometastases. However, molecular analyses, albeit more sensitive, need to be further improved in order to attain acceptable specificity before they can be applied diagnostically.