7 results for Markov-chain Monte Carlo
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from the arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov Chain Monte Carlo sampling method (RJ-MCMC). It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. I show that our new code overcomes major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm produces a robust and reliable tomography without the need for subjective a priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties in the model parameters. It is therefore well suited to investigating the velocity structure of regions that lack accurate a priori information. Synthetic tests also reveal that the absence of regularization constraints allows more information to be extracted from the observed data, and that the velocity structure can be recovered even in regions where the ray density is low and standard linearized codes fail. I also present high-resolution Vp and Vp/Vs models of two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in data fit compared with models obtained from the same data sets with linearized inversion codes.
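To illustrate the trans-dimensional idea, here is a minimal sketch (not the thesis code) of reversible-jump MCMC for a 1-D velocity model whose number of nodes is itself sampled. The toy forward model, prior bounds, and all names are illustrative assumptions; with birth proposals drawn from the uniform prior and a uniform prior on the number of nodes, the proposal and prior terms cancel and the acceptance ratio reduces to the likelihood ratio (the general case requires the full Green acceptance ratio).

```python
# Illustrative reversible-jump MCMC sketch for a variable-dimension 1-D velocity model.
import numpy as np

rng = np.random.default_rng(0)
DEPTHS = np.linspace(0.0, 10.0, 50)        # km, evaluation grid (toy setup)
V_MIN, V_MAX, K_MAX = 3.0, 8.0, 20         # uniform prior bounds on velocity / number of nodes

def velocity_profile(nodes):
    """Nearest-node (Voronoi) interpolation of velocity over depth."""
    z = np.array([n[0] for n in nodes]); v = np.array([n[1] for n in nodes])
    return v[np.abs(DEPTHS[:, None] - z[None, :]).argmin(axis=1)]

def log_likelihood(nodes, data, sigma=0.1):
    """Gaussian misfit; slowness stands in for travel times in this toy forward model."""
    pred = 1.0 / velocity_profile(nodes)
    return -0.5 * np.sum((data - pred) ** 2) / sigma**2

def rjmcmc_step(nodes, data):
    """One birth/death/perturb move; dimension changes with birth and death."""
    move = rng.choice(["birth", "death", "perturb"])
    proposal = [list(n) for n in nodes]
    if move == "birth" and len(nodes) < K_MAX:
        proposal.append([rng.uniform(0, 10), rng.uniform(V_MIN, V_MAX)])  # draw from prior
    elif move == "death" and len(nodes) > 1:
        proposal.pop(rng.integers(len(proposal)))
    else:
        i = rng.integers(len(proposal))
        proposal[i][1] = np.clip(proposal[i][1] + rng.normal(0, 0.2), V_MIN, V_MAX)
    log_alpha = log_likelihood(proposal, data) - log_likelihood(nodes, data)
    return proposal if np.log(rng.uniform()) < log_alpha else nodes

# Synthetic "observed" data from a two-layer truth, then sample.
truth = [[2.0, 4.0], [7.0, 6.5]]
data = 1.0 / velocity_profile(truth) + rng.normal(0, 0.1, DEPTHS.size)
model = [[5.0, 5.0]]
for _ in range(5000):
    model = rjmcmc_step(model, data)
```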
Abstract:
The aim of the thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-response questionnaire and relaxing the hypothesis of a normally distributed latent variable. The new model combines two models already presented in the literature: a latent trait model for mixed responses and an IRT model with a skew-normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item for measuring HRQoL in children, the EQ-5D-Y questionnaire. A large sample of children recruited in schools was used. Compared with a model for discrete responses only and a model for mixed responses with a normal latent variable, the new model performs better in terms of deviance information criterion (DIC), chain convergence times, and precision of the estimates.
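The following is a minimal sketch of the core ingredient: Metropolis sampling of a latent trait with a skew-normal, rather than normal, prior in a Rasch-type IRT model. It is not the thesis model (which handles mixed discrete/continuous responses); item difficulties, the skewness parameter ALPHA, and all names are illustrative assumptions.

```python
# Metropolis sampling of one respondent's latent trait under a skew-normal prior.
import numpy as np
from scipy.stats import skewnorm
from scipy.special import expit   # logistic function

rng = np.random.default_rng(1)
ALPHA = 3.0                                              # skewness of the latent-trait prior (assumed)
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])     # five discrete items (assumed values)

def log_posterior(theta, responses):
    """Skew-normal log prior plus Bernoulli log likelihood for one respondent."""
    p = expit(theta - difficulties)
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return skewnorm.logpdf(theta, ALPHA) + loglik

def sample_theta(responses, n_iter=2000, step=0.5):
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step)
        if np.log(rng.uniform()) < log_posterior(prop, responses) - log_posterior(theta, responses):
            theta = prop
        chain.append(theta)
    return np.array(chain)

chain = sample_theta(np.array([1, 1, 0, 1, 0]))
print("posterior mean of theta:", chain[len(chain)//2:].mean())   # discard burn-in
```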
Abstract:
The aim of the thesis is to propose Bayesian estimation through Markov chain Monte Carlo of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures, considering that the first is widely used and represents a classical approach in multidimensional item response analysis, while the second is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, due to the model complexity and the high number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well reproduced. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
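For reference, below is a minimal sketch of the likelihood underlying graded response models of this kind: Samejima's graded response model for one item, where the probability of each ordered category is the difference of adjacent cumulative logistic probabilities. Parameter values are illustrative, and the multidimensional structure (several correlated traits) is omitted for brevity.

```python
# Category probabilities in Samejima's graded response model (single trait).
import numpy as np
from scipy.special import expit

def grm_probs(theta, a, b):
    """P(Y = k) for one item: a = discrimination, b = ordered thresholds (len K-1).

    Cumulative probabilities P(Y >= k) = expit(a * (theta - b_k)); category
    probabilities are differences of adjacent cumulative probabilities.
    """
    cum = np.concatenate(([1.0], expit(a * (theta - np.asarray(b))), [0.0]))
    return -np.diff(cum)    # length K, sums to 1

# One item with discrimination a = 1.5 and four response categories (three thresholds)
print(grm_probs(theta=0.3, a=1.5, b=[-1.0, 0.0, 1.2]))
```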
Abstract:
Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motions. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to place constraints on cosmological parameters, such as the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: the electron, the muon, and the tau neutrino. Their mass differences can be measured in oscillation experiments, while information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the Universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we want to describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered, and the volume observed. To do this we make use of the BASICC Simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density, and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure in order to estimate the neutrino mass fraction, using the software package CosmoMC, suitably modified for this purpose. In this way we are able to derive a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observation in future surveys such as Euclid.
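As a toy illustration of the MCMC step (the thesis itself used a modified CosmoMC), the sketch below fits the distortion parameter $\beta$ from mock measurements with Gaussian errors via Metropolis sampling. The mock data, the fiducial $\Omega_m$, and the linear-theory relation $\beta \approx \Omega_m^{0.545}/b$ are illustrative assumptions.

```python
# Metropolis-Hastings fit of the RSD distortion parameter beta (toy example).
import numpy as np

rng = np.random.default_rng(2)
bias = 1.3
beta_true = 0.27**0.545 / bias                       # fiducial Omega_m = 0.27 (assumed)
obs = beta_true + rng.normal(0, 0.02, size=10)       # 10 mock beta measurements

def log_like(beta, sigma=0.02):
    """Gaussian log likelihood of the mock measurements given beta."""
    return -0.5 * np.sum((obs - beta) ** 2) / sigma**2

beta, chain = 0.5, []
for _ in range(20000):
    prop = beta + rng.normal(0, 0.02)                # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_like(prop) - log_like(beta):
        beta = prop
    chain.append(beta)
chain = np.array(chain[5000:])                       # discard burn-in
print(f"beta = {chain.mean():.3f} +/- {chain.std():.3f}")
```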
Abstract:
Monte Carlo (MC) simulation techniques are becoming very common in the Medical Physics community. MC can be used to model Single Photon Emission Computed Tomography (SPECT) and for dosimetry calculations. 188Re is a promising candidate for radiotherapeutic applications, and understanding the mechanisms of the radioresponse of tumor cells in vitro is of crucial importance as a first step before in vivo studies. The dosimetry of 188Re, used to target different lines of cancer cells, has been evaluated with the MC code GEANT4. The simulations estimate the average energy deposition per event in the biological samples. The development of prototypes for medical imaging, based on LaBr3:Ce scintillation crystals coupled with a position-sensitive photomultiplier, has been studied using GEANT4 simulations. By testing in simulation surface treatments different from the one applied to the crystal used in our experimental measurements, we found that the Energy Resolution (ER) and the Spatial Resolution (SR) could, in principle, be improved by machining the lateral surfaces of the crystal differently. We then studied a system able to acquire both echographic and scintigraphic images, letting the medical operator obtain complete anatomic and functional information for tumor diagnosis. The scintigraphic part of the detector is simulated with GEANT4, and first attempts to reconstruct tomographic images have been made using a standard back-projection algorithm. The proposed camera is based on slant collimators and LaBr3:Ce crystals. Within the Field of View (FOV) of the camera, it is possible to distinguish point sources located in air at a distance of about 2 cm from each other. In particular conditions of uptake, tumor depth, and dimension, the preliminary results show that the Signal-to-Noise Ratio (SNR) values obtained are higher than the standard detection limit.
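A minimal sketch of the standard (unfiltered) back-projection reconstruction mentioned above is shown below. It uses a toy phantom with two point sources; the detector geometry and collimator response are not modelled, and the pixel scale is an illustrative assumption.

```python
# Unfiltered back-projection on a toy phantom (illustrative, not the detector simulation).
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Sinogram: sum the image along columns at each rotation angle."""
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sinogram, angles_deg):
    """Smear each projection back across the image plane and accumulate."""
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, a in zip(sinogram, angles_deg):
        smear = np.tile(proj, (n, 1))                      # constant along each ray
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

# Two point sources ~2 cm apart (assuming 1 px = 1 mm in this toy setup)
phantom = np.zeros((64, 64)); phantom[32, 22] = phantom[32, 42] = 1.0
angles = np.arange(0, 180, 3)
recon = back_project(forward_project(phantom, angles), angles)
```

Without the ramp filter of filtered back-projection, the reconstruction is blurred, which is why the abstract's ~2 cm source separation is a meaningful resolution figure.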
Abstract:
Despite the scientific achievements of recent decades in astrophysics and cosmology, the majority of the energy content of the Universe is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with large target mass and low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory to estimate the background. To this end, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found fully acceptable for the experiment's purposes: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked with the Likelihood Ratio method, which confirmed them to within less than a factor of two, an agreement that is completely acceptable considering the intrinsic differences between the two statistical methods. The thesis thus shows that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
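For context, here is a minimal sketch of Yellin's Maximum Gap method (S. Yellin, Phys. Rev. D 66, 032005), the limit-setting technique named above. Event positions are expressed as cumulative signal fractions in [0, 1]; the upper limit on the expected number of signal events mu is the smallest mu excluded by the observed maximum gap at the chosen confidence level. The scan step and the toy events are illustrative assumptions.

```python
# Maximum Gap upper limit (Yellin 2002), illustrative implementation.
import numpy as np
from math import exp, factorial, floor

def max_gap(cum_fractions):
    """Largest gap between adjacent events (boundaries included), as a signal fraction."""
    pts = np.sort(np.concatenate(([0.0], cum_fractions, [1.0])))
    return np.max(np.diff(pts))

def c0(x, mu):
    """P(max gap < x | total signal expectation mu)."""
    total = 0.0
    for k in range(floor(mu / x) + 1):
        term = (k * x - mu) ** k * exp(-k * x) / factorial(k)
        total += term * (1.0 + k / (mu - k * x)) if mu > k * x else term
    return total

def upper_limit(cum_fractions, cl=0.90):
    """Scan mu upward until the observed maximum gap excludes it at the given CL."""
    gap = max_gap(np.asarray(cum_fractions))
    mu = 0.1
    while c0(gap * mu, mu) < cl:
        mu += 0.01
    return mu

# Toy example: two events at cumulative signal fractions 0.3 and 0.8
print("90% CL upper limit on mu:", upper_limit([0.3, 0.8]))
```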