38 results for Statistical analysis techniques
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
OBJECTIVES In dental research, multiple site observations within patients, or observations taken at various time intervals, are commonplace. These clustered observations are not independent; statistical analysis should be adjusted accordingly. This study aimed to assess whether adjustment for clustering effects during statistical analysis was undertaken in five specialty dental journals. METHODS Thirty recent consecutive issues of Orthodontics (OJ), Periodontology (PJ), Endodontology (EJ), Maxillofacial (MJ) and Paediatric Dentistry (PDJ) journals were hand searched. Articles requiring adjustment for clustering effects were identified, and the statistical techniques used were scrutinized. RESULTS Of 559 studies considered to have inherent clustering effects, adjustment was made in the statistical analysis in 223 (39.1%). Studies published in the Periodontology specialty accounted for clustering effects in the statistical analysis more often than articles published in the other journals (OJ vs. PJ: OR=0.21, 95% CI: 0.12, 0.37, p<0.001; MJ vs. PJ: OR=0.02, 95% CI: 0.00, 0.07, p<0.001; PDJ vs. PJ: OR=0.14, 95% CI: 0.07, 0.28, p<0.001; EJ vs. PJ: OR=0.11, 95% CI: 0.06, 0.22, p<0.001). A positive correlation was found between the prevalence of clustering effects in individual specialty journals and correct statistical handling of clustering (r=0.89). CONCLUSIONS The majority (60.9%) of the studies examined in the five dental specialty journals failed to account for clustering effects in the statistical analysis where indicated, raising the possibility of inappropriately decreased p-values and the risk of inappropriate inferences.
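The core problem this abstract describes can be illustrated with a minimal simulation (hypothetical numbers, not data from the study): when observations cluster within patients, a naïve standard error that treats every tooth site as independent comes out too small, which is exactly what drives the inappropriately decreased p-values the authors warn about.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 50 patients, 6 tooth sites each; all sites of a
# patient share a patient-level random effect, so observations are clustered.
n_patients, n_sites = 50, 6
patient_effect = rng.normal(0.0, 1.0, size=(n_patients, 1))
values = 5.0 + patient_effect + rng.normal(0.0, 0.5, size=(n_patients, n_sites))

n_total = values.size

# Naive SE treats all 300 site observations as independent.
se_naive = values.std(ddof=1) / np.sqrt(n_total)

# Cluster-aware SE: collapse to one mean per patient first, then use the
# number of patients (the true number of independent units).
patient_means = values.mean(axis=1)
se_cluster = patient_means.std(ddof=1) / np.sqrt(n_patients)

print(f"naive SE   = {se_naive:.3f}")
print(f"cluster SE = {se_cluster:.3f}")  # larger: the naive SE is anti-conservative
```

The cluster-aware standard error is noticeably larger, so tests based on the naïve one would be anti-conservative, as the study argues.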
Abstract:
The aim of the study was to examine clinical forensic findings of strangulation with respect to their ability to differentiate between life-threatening and non-life-threatening strangulation, to compare clinical and MRI findings of the neck, and to discuss a simple score for life-threatening strangulation (SLS).
Abstract:
A method for quantifying nociceptive withdrawal reflex receptive fields in human volunteers and patients is described. The reflex receptive field (RRF) for a specific muscle denotes the cutaneous area from which a muscle contraction can be evoked by a nociceptive stimulus. The method is based on random stimulations presented in a blinded sequence to 10 stimulation sites. The sensitivity map is derived by interpolating the reflex responses evoked from the 10 sites. A set of features describing the size and location of the RRF, based on statistical analysis of the sensitivity map within every subject, is presented. The features include RRF area, volume, peak location and center of gravity. The method was applied to 30 healthy volunteers. Electrical stimuli were applied to the sole of the foot, evoking reflexes in the ankle flexor tibialis anterior. The RRF area covered a fraction of 0.57 ± 0.06 (S.E.M.) of the foot and was located on the medial, distal part of the sole. An intramuscular injection of capsaicin into the flexor digitorum brevis was performed in one spinal cord injured subject to attempt modulation of the reflex receptive field. The RRF area, RRF volume and location of the peak reflex response appear to be the most sensitive measures for detecting modulation of spinal nociceptive processing. This new method has important potential applications for exploring aspects of central plasticity in volunteers and patients. It may be utilized as a new diagnostic tool for central hypersensitivity and for quantification of therapeutic interventions.
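The mapping idea can be sketched with hypothetical site coordinates and responses (not the study's data or exact algorithm): interpolate the 10 per-site reflex responses onto a grid to form a sensitivity map, then derive area-fraction, peak and center-of-gravity features of the kind the abstract lists.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Hypothetical layout: 10 stimulation sites on the sole of the foot, in
# normalized coordinates, each with one reflex response magnitude.
sites = rng.uniform(0.0, 1.0, size=(10, 2))
responses = np.exp(-8.0 * ((sites[:, 0] - 0.3) ** 2 + (sites[:, 1] - 0.2) ** 2))

# Interpolate the 10 responses onto a regular grid (the sensitivity map).
gx, gy = np.mgrid[0:1:50j, 0:1:50j]
sens_map = griddata(sites, responses, (gx, gy), method="linear")

valid = ~np.isnan(sens_map)              # griddata leaves NaN outside the hull
threshold = 0.5 * np.nanmax(sens_map)
active = valid & (sens_map > threshold)  # grid pixels belonging to the RRF

# RRF features: area fraction, volume, peak location, center of gravity.
rrf_area = active.sum() / valid.sum()
rrf_volume = float(np.nansum(sens_map[active]))
peak_idx = np.unravel_index(np.nanargmax(sens_map), sens_map.shape)
cog_x = float(np.nansum(gx[active] * sens_map[active]) / rrf_volume)
cog_y = float(np.nansum(gy[active] * sens_map[active]) / rrf_volume)

print(f"RRF area fraction = {rrf_area:.2f}, CoG = ({cog_x:.2f}, {cog_y:.2f})")
```

The choice of a half-maximum threshold and a 50×50 grid is arbitrary here; the paper's own feature definitions would replace these assumptions.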
Abstract:
Exposimeters are increasingly applied in bioelectromagnetic research to determine personal radiofrequency electromagnetic field (RF-EMF) exposure. The main advantages of exposimeter measurements are their convenient handling for study participants and the large amount of personal exposure data, which can be obtained for several RF-EMF sources. However, the large proportion of measurements below the detection limit is a challenge for data analysis. With the robust ROS (regression on order statistics) method, summary statistics can be calculated by fitting an assumed distribution to the observed data. We used a preliminary sample of 109 weekly exposimeter measurements from the QUALIFEX study to compare summary statistics computed by robust ROS with a naïve approach, where values below the detection limit were replaced by the value of the detection limit. For the total RF-EMF exposure, differences between the naïve approach and the robust ROS were moderate for the 90th percentile and the arithmetic mean. However, exposure contributions from minor RF-EMF sources were considerably overestimated with the naïve approach. This results in an underestimation of the exposure range in the population, which may bias the evaluation of potential exposure-response associations. We conclude from our analyses that summary statistics of exposimeter data calculated by robust ROS are more reliable and more informative than estimates based on a naïve approach. Nevertheless, estimates of source-specific medians or even lower percentiles depend on the assumed data distribution and should be considered with caution.
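The contrast the abstract draws can be made concrete on simulated (hypothetical) lognormal exposure data: robust ROS fits a distribution to the detected values via their normal plotting positions and imputes the censored part, while the naïve approach substitutes the detection limit itself and so inflates the contribution of weak sources.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical exposimeter readings: lognormal field strengths, censored
# below a detection limit (LOD).
true = rng.lognormal(mean=-1.0, sigma=0.8, size=200)
lod = 0.3
detected = np.sort(true[true >= lod])
n_total, n_cens = true.size, int((true < lod).sum())

# Robust ROS: regress log(detected) on the normal quantiles of their
# plotting positions, using only the uncensored observations.
pp = (n_cens + np.arange(1, detected.size + 1) - 0.5) / n_total
slope, intercept, *_ = stats.linregress(stats.norm.ppf(pp), np.log(detected))

# Impute the censored observations from the fitted distribution, then
# combine imputed and detected values for summary statistics.
pp_cens = (np.arange(1, n_cens + 1) - 0.5) / n_total
imputed = np.exp(intercept + slope * stats.norm.ppf(pp_cens))
ros_mean = float(np.concatenate([imputed, detected]).mean())

# Naive approach: replace every censored value by the LOD itself.
naive_mean = float(np.where(true < lod, lod, true).mean())

print(f"ROS mean = {ros_mean:.3f}, naive mean = {naive_mean:.3f}")
```

As in the abstract, the naïve substitution overstates the mean because every sub-LOD value is pushed up to the detection limit.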
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but will mainly increase the accuracy with which these processes are represented. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction; in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies, the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and treated as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g., in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural “atomic” analysis entity of EEG and ERP data is the scalp electric field.
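The redundancy argument can be illustrated with a toy simulation (hypothetical numbers: 64 channels mixing 5 sources, nothing EEG-specific): although the recording has 64 channels, nearly all of its variance lives in a handful of components, so the effective degrees of freedom are far fewer than the channel count.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical demonstration: 64 "electrodes" record linear mixtures of only
# 5 underlying sources, mimicking how volume conduction spreads each source
# across many channels.
n_sources, n_channels, n_samples = 5, 64, 2000
sources = rng.normal(size=(n_sources, n_samples))
mixing = rng.normal(size=(n_channels, n_sources))
eeg = mixing @ sources + 0.05 * rng.normal(size=(n_channels, n_samples))

# Eigen-spectrum of the channel covariance: the data are highly redundant,
# so only about n_sources eigenvalues carry appreciable variance.
eigvals = np.linalg.eigvalsh(np.cov(eeg))[::-1]   # descending order
explained = eigvals / eigvals.sum()
rank_95 = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1

print(f"channels = {n_channels}, components for 95% variance = {rank_95}")
```

Treating the 64 channels as independent repeated measures would vastly overstate the information actually present in such data.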
Abstract:
We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years, from 2009 to 2011. The interstellar neutral (ISN) O&Ne gas flow was found in the first-year heavy neutral map at 601 eV, and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps, to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to exclude background from areas where the heavy neutral signal is statistically significant, and they enable the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O&Ne gas flow. We call this emission the extended tail. It may be an imprint of secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.
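A minimal pixel-significance sketch in the spirit of the first two methods named above, the signal-to-noise filter and the confidence limit method (all numbers hypothetical, not IBEX data): flag map pixels whose Poisson counts are inconsistent with a uniform background.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical sky map: Poisson counts per pixel over a uniform background,
# with extra signal injected into one block of pixels (an "extended tail").
background_rate = 4.0
counts = rng.poisson(background_rate, size=(30, 60))
signal = np.zeros((30, 60), dtype=bool)
signal[10:15, 20:30] = True
counts[signal] += rng.poisson(16.0, size=int(signal.sum()))

# Signal-to-noise filter: flag pixels exceeding the background by > 3 sigma,
# where sqrt(background_rate) is the Poisson noise level.
snr = (counts - background_rate) / np.sqrt(background_rate)
significant = snr > 3.0

# Confidence-limit view: one-sided Poisson p-value per pixel under the
# background-only hypothesis; small p-values mark significant pixels.
p_values = stats.poisson.sf(counts - 1, background_rate)

frac_signal = float(significant[signal].mean())
frac_bg = float(significant[~signal].mean())
print(f"flagged in signal block: {frac_signal:.2f}, elsewhere: {frac_bg:.2f}")
```

Both views agree on which pixels stand out; the cluster analysis method would additionally group flagged pixels into contiguous structures.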
Abstract:
Statistical shape analysis techniques commonly employed in the medical imaging community, such as active shape models or active appearance models, rely on principal component analysis (PCA) to decompose shape variability into a reduced set of interpretable components. In this paper we propose principal factor analysis (PFA) as an alternative and complementary tool to PCA, providing a decomposition into modes of variation that can be more easily interpreted, while still being an efficient linear technique that performs dimensionality reduction (as opposed to independent component analysis, ICA). The key difference between PFA and PCA is that PFA models the covariance between variables, rather than the total variance in the data. The added value of PFA is illustrated on 2D landmark data of corpora callosa outlines. Then, a study of the 3D shape variability of the human left femur is performed. Finally, we report results on vector-valued 3D deformation fields resulting from non-rigid registration of ventricles in MRI of the brain.
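The key difference stated above, PFA modeling shared covariance while PCA models total variance, can be sketched numerically (hypothetical data, plain NumPy rather than any specific shape-analysis toolkit). One common principal-factor construction replaces the diagonal of the correlation matrix with communality estimates before eigendecomposition, so variable-specific variance is excluded.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical landmark-style data: 6 variables driven by 2 common factors
# plus variable-specific ("unique") noise of differing size.
n = 500
factors = rng.normal(size=(n, 2))
loadings = np.array([[.9, 0], [.8, 0], [.7, 0], [0, .9], [0, .8], [0, .7]])
unique_sd = np.array([.3, .4, .5, .3, .4, .5])
X = factors @ loadings.T + rng.normal(size=(n, 6)) * unique_sd

R = np.corrcoef(X, rowvar=False)

# PCA: eigendecomposition of the full correlation matrix (total variance).
pca_vals = np.linalg.eigvalsh(R)[::-1]

# PFA: replace the diagonal with communality estimates (squared multiple
# correlations), so only variance shared between variables is modeled.
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
R_reduced = R.copy()
np.fill_diagonal(R_reduced, smc)
pfa_vals = np.linalg.eigvalsh(R_reduced)[::-1]

print("PCA eigenvalues:", np.round(pca_vals, 2))
print("PFA eigenvalues:", np.round(pfa_vals, 2))
```

In this toy setup both methods recover two dominant modes, but the PFA spectrum drops off faster because unique variance has been removed from the diagonal.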
Abstract:
BACKGROUND: Harvesting techniques can affect cellular parameters of autogenous bone grafts in vitro. Whether these differences translate to in vivo bone formation, however, remains unknown. OBJECTIVE: The purpose of this study was to assess the impact of different harvesting techniques on bone formation and graft resorption in vivo. MATERIAL AND METHODS: Four harvesting techniques were used: (i) corticocancellous blocks particulated by a bone mill; (ii) bone scraper; (iii) piezosurgery; and (iv) bone slurry collected from a filter device upon drilling. The grafts were placed into bone defects in the mandibles of 12 minipigs. The animals were sacrificed after 1, 2, 4 and 8 weeks of healing. Histological and histomorphometrical analyses were performed to assess bone formation and graft resorption. An exploratory statistical analysis was performed. RESULTS: The amount of new bone increased, while the amount of residual bone graft decreased, over time with all harvesting techniques. At all given time points, no significant advantage of any harvesting technique on bone formation was observed. The harvesting technique, however, affected bone formation and the amount of residual graft within the overall healing period. The Friedman test revealed an impact of the harvesting technique on residual bone graft after 2 and 4 weeks. At the later time point, post hoc testing showed more newly formed bone in association with bone graft processed by bone mill than with graft harvested by bone scraper or piezosurgery. CONCLUSIONS: Transplantation of autogenous bone particles harvested with the four techniques in the present model resulted in moderate differences in terms of bone formation and graft resorption.
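As a sketch of the repeated-measures testing named in the results (a Friedman test across techniques within the same animals, followed by a post hoc pairwise test), on fabricated illustrative numbers rather than the study's data:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(6)

# Hypothetical histomorphometry data: % new bone in 12 animals, each with
# defects grafted by all four harvesting techniques (repeated measures).
n_animals = 12
base = rng.normal(30.0, 5.0, size=n_animals)          # animal-level baseline
bone_mill = base + rng.normal(6.0, 2.0, size=n_animals)  # assumed advantage
scraper = base + rng.normal(0.0, 2.0, size=n_animals)
piezo = base + rng.normal(0.0, 2.0, size=n_animals)
slurry = base + rng.normal(2.0, 2.0, size=n_animals)

# Friedman test: nonparametric repeated-measures comparison across the four
# techniques, ranking within each animal.
stat, p = friedmanchisquare(bone_mill, scraper, piezo, slurry)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Post hoc comparison of one hypothetical pair (Wilcoxon signed-rank).
_, p_post = wilcoxon(bone_mill, scraper)
print(f"bone mill vs scraper: p = {p_post:.4f}")
```

Ranking within each animal removes the animal-level baseline, which is why the Friedman test suits this paired design.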