9 results for Patrimonial value
at Université de Lausanne, Switzerland
Abstract:
PURPOSE: To determine the value of applying finger trap distraction during direct MR arthrography of the wrist to assess intrinsic ligament and triangular fibrocartilage complex (TFCC) tears. MATERIALS AND METHODS: Twenty consecutive patients were prospectively investigated by three-compartment wrist MR arthrography. Imaging was performed with 3-T scanners using a three-dimensional isotropic (0.4 mm) T1-weighted gradient-recalled echo sequence, with and without finger trap distraction (4 kg). In a blind and independent fashion, two musculoskeletal radiologists measured the width of the scapholunate (SL), lunotriquetral (LT) and ulna-TFC (UTFC) joint spaces. They evaluated the amount of contrast medium within these spaces using a four-point scale, and assessed SL, LT and TFCC tears, as well as the disruption of Gilula's carpal arcs. RESULTS: With finger trap distraction, both readers found a significant increase in width of the SL space (mean Δ = +0.1 mm, p ≤ 0.040), and noticed more contrast medium therein (p ≤ 0.035). In contrast, the differences in width of the LT (mean Δ = +0.1 mm, p ≥ 0.057) and UTFC (mean Δ = 0 mm, p ≥ 0.728) spaces, as well as the amount of contrast material within these spaces, were not statistically significant (p = 0.607 and ≥ 0.157, respectively). Both readers detected more SL (Δ = +1, p = 0.157) and LT (Δ = +2, p = 0.223) tears, although statistical significance was not reached, and Gilula's carpal arcs were more frequently disrupted during finger trap distraction (Δ = +5, p = 0.025). CONCLUSION: The application of finger trap distraction during direct wrist MR arthrography may enhance both detection and characterisation of SL and LT ligament tears by widening the SL space and increasing the amount of contrast within the SL and LT joint spaces.
Abstract:
Although not specific, an increase in peripheral blood eosinophils may contribute substantially to the diagnosis of numerous infectious, allergic and inflammatory diseases. The scope of this article is to detail the pathologies associated with peripheral eosinophilia in order of frequency and to guide further investigations.
Abstract:
AIM: To confirm the accuracy of the sentinel node biopsy (SNB) procedure and its morbidity, and to investigate predictive factors for SN status and prognostic factors for disease-free survival (DFS) and disease-specific survival (DSS). MATERIALS AND METHODS: Between October 1997 and December 2004, 327 consecutive patients in one centre with clinically node-negative primary skin melanoma underwent an SNB by the triple technique, i.e. lymphoscintigraphy, blue dye and gamma probe. Multivariate logistic regression analyses as well as Kaplan-Meier survival analyses were performed. RESULTS: Twenty-three percent of the patients had at least one metastatic SN, which was significantly associated with Breslow thickness (p<0.001). The success rate of SNB was 99.1% and its morbidity was 7.6%. With a median follow-up of 33 months, the 5-year DFS/DSS were 43%/49% for patients with positive SN and 83.5%/87.4% for patients with negative SN, respectively. The false-negative rate of SNB was 8.6% and sensitivity 91.4%. On multivariate analysis, DFS was significantly worsened by Breslow thickness (RR=5.6, p<0.001), positive SN (RR=5.0, p<0.001) and male sex (RR=2.9, p=0.001). The presence of a metastatic SN (RR=8.4, p<0.001), male sex (RR=6.1, p<0.001), Breslow thickness (RR=3.2, p=0.013) and ulceration (RR=2.6, p=0.015) were significantly associated with a poorer DSS. CONCLUSION: SNB is a reliable procedure with high sensitivity (91.4%) and low morbidity. Breslow thickness was the only statistically significant parameter predictive of SN status. DFS was worsened, in decreasing order, by Breslow thickness, metastatic SN and male gender. Similarly, DSS was significantly worsened by a metastatic SN, male gender, Breslow thickness and ulceration. These data reinforce the value of SN status as a powerful staging tool.
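Survival figures such as the 5-year DFS/DSS above are conventionally derived with the Kaplan-Meier method mentioned in the abstract. A minimal sketch on hypothetical toy data (times in months, event = 1 for recurrence, 0 for censoring; not the study's cohort):

```python
# Kaplan-Meier estimator: at each event time t, multiply the running
# survival probability by (1 - deaths_at_t / number_at_risk_just_before_t).
# Censored subjects leave the risk set without lowering the curve.

def kaplan_meier(times, events):
    """Return [(event_time, survival_probability)] pairs."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        total = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= total
        i += total  # records at time t are consecutive after sorting
    return curve
```

For example, `kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 0, 1])` steps the curve down only at times 1, 3 and 5, while the censored subjects at times 2 and 4 shrink the risk set silently.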
Abstract:
Attrition in longitudinal studies can lead to biased results. The study is motivated by the unexpected observation that alcohol consumption decreased despite increased availability, which may be due to sample attrition of heavy drinkers. Several imputation methods have been proposed, but they have rarely been compared in longitudinal studies of alcohol consumption. The imputation of consumption-level measurements is computationally particularly challenging because alcohol consumption is a semi-continuous variable (dichotomous drinking status and continuous volume among drinkers) and the data in the continuous part are non-normal. Data come from a longitudinal study in Denmark with four waves (2003-2006) and 1771 individuals at baseline. Five techniques for missing data are compared: last value carried forward (LVCF) was used as a single imputation method, and Hotdeck, Heckman modelling, multivariate imputation by chained equations (MICE), and a Bayesian approach as multiple imputation methods. Predictive mean matching was used to account for non-normality: instead of imputing regression estimates, "real" observed values from similar cases are imputed. Methods were also compared by means of a simulated dataset. The simulation showed that the Bayesian approach yielded the least biased estimates for imputation. The finding of no increase in consumption levels despite higher availability remained unaltered.
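The predictive mean matching idea described above can be sketched in a few lines: predict the missing outcome from a covariate, then impute the observed value of the "donor" whose prediction lies closest, so only genuinely observed values enter the data. A minimal single-covariate sketch on hypothetical toy data (not the Danish study's variables, which used a full chained-equations setup):

```python
# Predictive mean matching (PMM), single covariate, nearest-donor version.

def _ols(xs, ys):
    # ordinary least squares fit of y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def pmm_impute(x, y):
    """Fill None entries of y by predictive mean matching on x."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    a, b = _ols([xi for xi, _ in obs], [yi for _, yi in obs])
    donors = [(a + b * xi, yi) for xi, yi in obs]  # (prediction, observed y)
    filled = []
    for xi, yi in zip(x, y):
        if yi is None:
            pred = a + b * xi
            # impute the observed value whose prediction is closest
            yi = min(donors, key=lambda d: abs(d[0] - pred))[1]
        filled.append(yi)
    return filled
```

Because the imputed value is always one of the observed ones, PMM never produces implausible values (e.g. negative drinking volumes), which is why it suits the non-normal continuous part described in the abstract. Production analyses would typically use a multiple-imputation implementation such as MICE rather than this single-draw sketch.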
Abstract:
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for deciding on the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debates in the literature. More recently, controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
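The frequentist/Bayesian contrast discussed above can be made concrete with a toy example: for the same data (k successes in n Bernoulli trials), compute a one-sided p-value and a Bayes factor. All names and data below are illustrative, not from the paper:

```python
# One-sided binomial p-value vs. Bayes factor for k successes in n trials.
from math import comb

def p_value_one_sided(k, n, p0=0.5):
    """P(X >= k) under the null H0: success probability = p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

def bayes_factor(k, n, p0=0.5):
    """BF10 for H1: p ~ Uniform(0, 1) against the point null H0: p = p0.
    Under the uniform prior, the marginal likelihood of the data
    integrates to exactly 1/(n + 1)."""
    marginal_h1 = 1 / (n + 1)
    likelihood_h0 = comb(n, k) * p0**k * (1 - p0)**(n - k)
    return marginal_h1 / likelihood_h0
```

For 8 successes in 10 trials this gives p ≈ 0.055, borderline at the conventional 0.05 level, while BF10 ≈ 2.1 amounts to only weak evidence against the null under this prior; the two frameworks can thus suggest quite different conclusions from identical data, which is the crux of the debate the abstract refers to.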