Abstract:
The beta decay of free neutrons is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables, some of which are sensitive to physics beyond the SM. Among these are the correlation coefficients of the particles involved. The spectrometer aSPECT was designed to measure the shape of the proton energy spectrum precisely and to extract from it the electron-antineutrino angular correlation coefficient "a". A first test period (2005/2006) provided the proof of principle. However, uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient "a" (Baessler et al., 2008, Eur. Phys. J. A, 38, 17-26). A second measurement cycle (2007/2008) aimed to improve on the relative accuracy da/a = 5% of previous experiments (Stratowa et al., 1978; Byrne et al., 2002). The analysis of the data taken there is the emphasis of this doctoral thesis, with background studies as a central point. The systematic impact of background on "a" was reduced to da/a(syst.) = 0.61%; the statistical accuracy of the analyzed measurements is da/a(stat.) = 1.4%. In addition, saturation effects of the detector electronics, observed early in the campaign, were investigated; these turned out not to be correctable at a sufficient level. A practicable way to avoid these saturation effects is discussed in the last chapter.
Abstract:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS), which can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Owing to the fast decay of the evanescent wave, fluorescence occurs only for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two observation volumes, laterally shifted along the flow direction and created by two confocal pinholes, are measured and recorded independently. The cross-correlation of these signals provides information on the tracers' motion and hence their flow velocity. Owing to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing over hydrophilic and hydrophobic surfaces. Extracting these parameters from the measured correlation curves, however, requires a quantitative data analysis, which is not straightforward: the complexity of the problem makes it impossible to derive the analytical expressions for the correlation functions needed to fit the experimental data. Therefore, to process and interpret the experimental results, I also describe a new numerical method for analyzing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data.
I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions in order to establish a fully quantitative method to retrieve the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments, providing the optimum parameter values together with their statistical error bars. The approach is well suited both to modern desktop PCs and to massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. In general, no slip is expected on a hydrophilic surface, while some slippage may exist on a hydrophobic surface. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
Abstract:
Coastal sand dunes are valuable first of all as a defense against storm waves and saltwater ingression; moreover, these morphological elements constitute a unique transitional ecosystem between the marine and terrestrial environments. Research on dune systems has been a strong branch of coastal science since the last century. Nowadays this branch has become even more important, for two reasons: on one side, the emergence of new technologies, especially in Remote Sensing, has expanded researchers' possibilities; on the other, intense urbanization has strongly limited the dunes' capacity to develop and has fragmented what remained from the last century. This is particularly true in the Ravenna area, where industrialization, combined with the tourist economy and intense subsidence, has left only a few residual dune ridges still active. In this work, three foredune ridges along the Ravenna coast were studied with Laser Scanner technology. The research was not limited to analyzing volumetric or spatial differences, but also sought new ways and new features for monitoring this environment. Moreover, the author planned a series of tests to validate data from the Terrestrial Laser Scanner (TLS), with the additional aim of finalizing a methodology to assess 3D survey accuracy. Data acquired by TLS were then applied, on one hand, to test some new applications, such as the Digital Shoreline Analysis System (DSAS) and Computational Fluid Dynamics (CFD), to prove their efficacy in this field; on the other hand, the author used TLS data to look for correlations with meteorological indices (forcing factors) linked to sea and wind (Fryberger's method), applying statistical tools such as Principal Component Analysis (PCA).
Abstract:
One of the fundamental interactions in the Standard Model of particle physics is the strong force, which can be formulated as a non-abelian gauge theory called Quantum Chromodynamics (QCD). In the low-energy regime, where the QCD coupling becomes strong and quarks and gluons are confined to hadrons, a perturbative expansion in the coupling constant is not possible. However, the introduction of a four-dimensional Euclidean space-time lattice allows for an ab initio treatment of QCD and provides a powerful tool to study the low-energy dynamics of hadrons. Some hadronic matrix elements of interest receive contributions from diagrams including quark-disconnected loops, i.e. disconnected quark lines from one lattice point back to the same point. The calculation of such quark loops is computationally very demanding, because it requires knowledge of the all-to-all propagator. In this thesis we use stochastic sources and a hopping parameter expansion to estimate such propagators. We apply this technique to study two problems which rely crucially on the calculation of quark-disconnected diagrams, namely the scalar form factor of the pion and the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon. The scalar form factor of the pion describes the coupling of a charged pion to a scalar particle. We calculate the connected and the disconnected contributions to the scalar form factor for three different momentum transfers. The scalar radius of the pion is extracted from the momentum dependence of the form factor. The use of several different pion masses and lattice spacings allows for an extrapolation to the physical point. The chiral extrapolation is done using chiral perturbation theory (χPT).
We find that the pion-mass dependence of the scalar radius is consistent with χPT at next-to-leading order. Additionally, we are able to extract the low-energy constant ℓ4 from the extrapolation, and our result is in agreement with other lattice determinations. Furthermore, our result for the scalar pion radius at the physical point is consistent with a value extracted from ππ-scattering data. The hadronic vacuum polarization (HVP) is the leading-order hadronic contribution to the anomalous magnetic moment a_μ of the muon. The HVP can be estimated from the correlation of two vector currents in the time-momentum representation. We explicitly calculate the corresponding disconnected contribution to the vector correlator and find it to be consistent with zero within its statistical errors. This result can be converted into an upper limit on the contribution of the disconnected diagram to a_μ by using the expected time dependence of the correlator and comparing it to the corresponding connected contribution. We find the disconnected contribution to be smaller than ≈5% of the connected one. This value can be used as an estimate of the systematic error that arises from neglecting the disconnected contribution.
Abstract:
Qualitative assessment of spontaneous motor activity in early infancy is widely used in clinical practice. It enables the description of maturational changes in motor behavior in both healthy infants and infants at risk for later neurological impairment. These assessments are, however, time-consuming and dependent on professional experience. A simple physiological method that describes the complex behavior of spontaneous movements (SMs) in infants would therefore be helpful. In this methodological study, we aimed to determine whether time series of motor acceleration measurements at 40-44 weeks and 50-55 weeks gestational age in healthy infants exhibit fractal-like properties, and whether this self-affinity of the acceleration signal is sensitive to maturation. A healthy motor state was ensured by General Movement assessment. We assessed statistical persistence in the acceleration time series by calculating the scaling exponent α via detrended fluctuation analysis of the time series. In hand trajectories of SMs in infants we found a mean α value of 1.198 (95% CI 1.167-1.230) at 40-44 weeks. Alpha changed significantly (p = 0.001) at 50-55 weeks, to a mean of 1.102 (1.055-1.149). Complementary multilevel regression analysis confirmed a decreasing trend of α with increasing age. Statistical persistence of fluctuations in hand trajectories of SMs is thus sensitive to neurological maturation and can be characterized by a single parameter, α, in an automated and observer-independent fashion. Future studies including children at risk for neurological impairment should evaluate whether this method could serve as an early clinical screening tool for later neurological compromise.
Abstract:
PURPOSE: To test the hypothesis that the extension of areas with increased fundus autofluorescence (FAF) outside atrophic patches correlates with the rate of spread of geographic atrophy (GA) over time in eyes with age-related macular degeneration (AMD). METHODS: The database of the multicenter longitudinal natural history Fundus Autofluorescence in AMD (FAM) Study was reviewed for patients with GA recruited through the end of August 2003, with follow-up examinations within at least 1 year. Only eyes with sufficient image quality and with diffuse patterns of increased FAF surrounding atrophy were chosen. In standardized digital FAF images (excitation, 488 nm; emission, >500 nm), the total size and spread of GA were measured. The convex hull (CH) of increased FAF, defined as the minimum polygon encompassing the entire area of increased FAF surrounding the central atrophic patches, was quantified at baseline. Statistical analysis was performed with Spearman's rank correlation coefficient (rho). RESULTS: Thirty-nine eyes of 32 patients were included (median age, 75.0 years; interquartile range [IQR], 67.8-78.9; median follow-up, 1.87 years; IQR, 1.43-3.37). At baseline, the median total size of atrophy was 7.04 mm² (IQR, 4.20-9.88). The median size of the CH was 21.47 mm² (IQR, 15.19-28.26). The median rate of GA progression was 1.72 mm² per year (IQR, 1.10-2.83). The area of increased FAF around the atrophy (difference between the CH and the total GA size at baseline) showed a positive correlation with GA enlargement over time (rho=0.60; P=0.0002). CONCLUSIONS: FAF characteristics that are not identified by fundus photography or fluorescein angiography may serve as a prognostic determinant in advanced atrophic AMD. As the FAF signal originates from lipofuscin (LF) in postmitotic RPE cells, and since increased FAF indicates excessive LF accumulation, these findings underscore the pathophysiological role of RPE-LF in AMD pathogenesis.
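Spearman's rho, used above, is simply the Pearson correlation computed on the ranks of the data, which makes it robust to monotonic nonlinearity. A minimal sketch (assuming no tied values; real measurements would need average ranks for ties):

```python
import math

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Assumes no tied values (ties would require average ranks)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

Because only ranks enter, any strictly increasing relationship between CH area and GA progression yields rho = 1, regardless of its shape.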
Abstract:
This clinical study prospectively evaluated the healing outcome 1 year after apical surgery in relation to bony crypt dimensions measured intraoperatively. The study cohort included 183 teeth in an equal number of patients. For statistical analysis, results were dichotomized (healed versus non-healed cases). The overall success rate was 83% (healed cases). Healing outcome was not significantly related to the level and height of the facial bone plate. In contrast, a significant difference was found for the mean size of the bony crypt when healed cases (395 mm³) were compared with non-healed cases (554 mm³). In addition, healed cases had a significantly shorter mean distance (4.30 mm) from the facial bone surface to the root canal (horizontal access) compared with non-healed cases (5.13 mm). With logistic regression, however, the only parameter found to be significantly related to healing outcome was the length of the access window to the bony crypt.
Abstract:
Knowledge of the time interval since death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. With the aim of finding an objective method for determining PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs from the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with the known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties on the order of days to weeks, without mathematically defined confidence information. By contrast, a single 1H-MRS measurement of brain tissue in situ yields PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs.
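Pooling the per-metabolite PMI estimates weighted by their inverse variances, as described, is the standard fixed-effects combination: precise metabolites dominate, and the pooled variance shrinks below any individual one. A minimal sketch (the numbers in the example are hypothetical, not the study's data):

```python
def pool_inverse_variance(estimates, variances):
    """Combine independent estimates of the same quantity,
    weighting each by the inverse of its variance.
    Returns the pooled estimate and its (reduced) variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var
```

For instance, hypothetical PMI estimates of 100 h (variance 25 h²) and 120 h (variance 100 h²) pool to 104 h with variance 20 h: the result sits closer to the more precise estimate and is more precise than either alone.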
Abstract:
PURPOSE: To evaluate the diagnostic accuracy of in situ postmortem multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) in the detection of primary traumatic extra-axial hemorrhage. MATERIALS AND METHODS: Thirty forensic neurotrauma cases and 10 nontraumatic controls who underwent both in situ postmortem cranial MSCT and MR imaging before autopsy were retrospectively reviewed. Both imaging modalities were analyzed with regard to their accuracy, sensitivity, and specificity in the detection of extra-axial hemorrhage. Statistical significance was calculated using the McNemar test. Kappa values for interobserver agreement were calculated for extra-axial hemorrhage types and to quantify the agreement between the two modalities as well as between MRI, CT, and forensics, respectively. RESULTS: Analysis of the detection of hemorrhagic localizations showed an accuracy, sensitivity, and specificity of 89%, 82%, and 92% using CT, and 90%, 83%, and 94% using MRI, respectively. MRI was more sensitive than CT in the detection of subarachnoid hemorrhagic localizations (P = 0.001), whereas no significant difference was found for the detection of epidural and subdural hemorrhagic findings (P = 0.248 and P = 0.104, respectively). Interobserver agreement for all extra-axial hemorrhage types was substantial (CT kappa = 0.76; MRI kappa = 0.77). The agreement between the two modalities was almost perfect (readers 1 and 2: kappa = 0.88). CONCLUSION: CT and MRI are of comparable potential as forensic diagnostic tools for traumatic extra-axial hemorrhage. Of not only forensic but also clinical interest is the observation that most thin blood layers escape radiological evaluation.
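The interobserver agreement above is quantified with Cohen's kappa, which corrects the observed agreement for the agreement expected by chance from the raters' marginal rates. A minimal sketch for a binary finding (the counts in the example are hypothetical, not the study's):

```python
def cohens_kappa(both_pos, both_neg, only_a, only_b):
    """Cohen's kappa for two raters on a binary finding.
    both_pos/both_neg: cases where the raters agree (positive/negative);
    only_a/only_b: cases where exactly one rater scores positive."""
    n = both_pos + both_neg + only_a + only_b
    p_observed = (both_pos + both_neg) / n
    # marginal probability of a 'positive' call for each rater
    a_pos = (both_pos + only_a) / n
    b_pos = (both_pos + only_b) / n
    p_chance = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return (p_observed - p_chance) / (1 - p_chance)
```

On the conventional Landis-Koch scale, kappa in 0.61-0.80 (as for the CT and MRI readings) is "substantial" agreement and above 0.80 "almost perfect".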
Abstract:
PURPOSE: To evaluate multislice spiral computed tomography (MSCT) and magnetic resonance imaging (MRI) findings in hanging and manual strangulation cases and compare them with forensic autopsy results. MATERIALS AND METHODS: Postmortem MSCT and MRI of nine persons who died from hanging or manual strangulation were performed. The neck findings were compared with those discovered during forensic autopsy. In addition, two living patients underwent imaging and clinical examination following severe manual strangulation and near-hanging, respectively. For evaluation, the findings were divided into "primary" (strangulation mark and subcutaneous desiccation (i.e., soft-tissue thinning as a result of tissue fluids being driven out by mechanical compression) in hanging, and subcutaneous and intramuscular hemorrhage in manual strangulation) and "collateral" signs. The Wilcoxon two-tailed test was used for statistical analysis of the lymph node and salivary gland findings. RESULTS: In hanging, the primary and most frequent collateral signs were revealed by imaging. In manual strangulation, the primary findings were accurately depicted, with the exception of one slight hemorrhage. Apart from a vocal cord hemorrhage, all frequent collateral signs could be diagnosed radiologically. Traumatic lymph node hemorrhage (P = 0.031) was found in all of the manual strangulation cases. CONCLUSION: MSCT and MRI revealed strangulation signs concordantly with forensic pathology findings. Imaging offers a great potential for the forensic examination of lesions due to strangulation in both clinical and postmortem settings.
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but mainly increase the accuracy of the representation of these processes. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction; in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and treated as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g. in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field.
Abstract:
OBJECTIVES In dental research, multiple site observations within patients, or observations taken at various time intervals, are commonplace. These clustered observations are not independent; statistical analysis should be adjusted accordingly. This study aimed to assess whether adjustment for clustering effects during statistical analysis was undertaken in five specialty dental journals. METHODS Thirty recent consecutive issues of Orthodontics (OJ), Periodontology (PJ), Endodontology (EJ), Maxillofacial (MJ) and Paediatric Dentistry (PDJ) journals were hand searched. Articles requiring adjustment for clustering effects were identified and the statistical techniques used were scrutinized. RESULTS Of 559 studies considered to have inherent clustering effects, adjustment was made in the statistical analysis in 223 (39.1%). Studies published in the Periodontology specialty accounted for clustering effects in the statistical analysis more often than articles published in the other journals (OJ vs. PJ: OR=0.21, 95% CI: 0.12, 0.37, p<0.001; MJ vs. PJ: OR=0.02, 95% CI: 0.00, 0.07, p<0.001; PDJ vs. PJ: OR=0.14, 95% CI: 0.07, 0.28, p<0.001; EJ vs. PJ: OR=0.11, 95% CI: 0.06, 0.22, p<0.001). A positive correlation was found between the prevalence of clustering effects in individual specialty journals and correct statistical handling of clustering (r=0.89). CONCLUSIONS The majority (60.9%) of the studies examined in the 5 dental specialty journals failed to account for clustering effects in statistical analysis where indicated, raising the possibility of inappropriately decreased p-values and the risk of inappropriate inferences.
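The journal comparisons above are reported as odds ratios with 95% confidence intervals; for a 2×2 table these follow from the log odds ratio and its Wald standard error. A minimal sketch (the counts in the example are hypothetical, not the study's data; the Wald interval is one of several possible choices):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
        a = group 1, adjusted;   b = group 1, not adjusted
        c = group 2, adjusted;   d = group 2, not adjusted
    Assumes all four cells are nonzero."""
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(odds_ratio) - z * se_log)
    upper = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lower, upper
```

An OR well below 1 with an upper confidence limit below 1, as in the OJ/MJ/PDJ/EJ versus PJ comparisons, indicates that those journals adjusted for clustering significantly less often than Periodontology.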
Abstract:
Finite element (FE) analysis is an important computational tool in biomechanics. However, its adoption into clinical practice has been hampered by its computational complexity and the high technical competence it requires of clinicians. In this paper we propose a supervised learning approach to predict the outcome of FE analysis. We demonstrate our approach on clinical CT and X-ray femur images for FE predictions (FEP), with features extracted, respectively, from a statistical shape model and from 2D-based morphometric and density information. Using leave-one-out experiments and sensitivity analysis on a database of 89 clinical cases, our method is capable of predicting the distribution of stress values for a walking loading condition with average correlation coefficients of 0.984 and 0.976 for CT and X-ray images, respectively. These findings suggest that supervised learning approaches have the potential to facilitate the clinical integration of mechanical simulations for the treatment of musculoskeletal conditions.
Abstract:
Genetic anticipation is defined as a decrease in the age of onset, or an increase in severity, as a disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy and Fragile X Syndrome, was shown to be caused by the expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia) and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. Therefore, we have developed family-based likelihood modeling approaches to model the underlying transmission of the disease gene and the penetrance function, and hence to detect anticipation. These methods can be applied in extended families, thus improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model and models anticipation by a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies on data simulated under different conditions to assess the effectiveness of the algorithms in detecting genetic anticipation.
Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. Application of this method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.