974 results for Monte Carlo test
Abstract:
The aim of the thesis is to propose a Bayesian estimation through Markov chain Monte Carlo of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures, considering that the first is widely used and represents a classical approach in multidimensional item response analysis, while the second is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, due to the model complexity and the high number of parameters to be estimated. For a sufficiently large sample size the parameters of the multiunidimensional and additive graded response models are well recovered. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
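As an illustration of the data-generating side of such a simulation study, the sketch below simulates graded responses under a two-trait multiunidimensional structure with correlated latent traits, using a Samejima-style cumulative-logistic category model. The parameter ranges, the two-trait layout, and all names are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_graded_responses(n_persons=1000, n_items=10, n_cats=4, rho=0.5):
    """Simulate graded responses under a two-trait multiunidimensional
    structure: each item loads on exactly one of two correlated traits,
    with a cumulative-logistic (Samejima-style) category model."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    theta = rng.multivariate_normal([0.0, 0.0], cov, size=n_persons)
    dim = np.repeat([0, 1], n_items // 2)     # first half -> trait 1, second -> trait 2
    a = rng.uniform(0.8, 2.0, size=n_items)   # item discriminations
    b = np.sort(rng.normal(0.0, 1.0, size=(n_items, n_cats - 1)), axis=1)  # ordered thresholds
    data = np.zeros((n_persons, n_items), dtype=int)
    for j in range(n_items):
        # P(Y >= k) for each threshold; one uniform per person picks the category
        p_ge = 1.0 / (1.0 + np.exp(-a[j] * (theta[:, dim[j]][:, None] - b[j])))
        u = rng.uniform(size=(n_persons, 1))
        data[:, j] = (u < p_ge).sum(axis=1)
    return data, theta

responses, theta = simulate_graded_responses()
print(responses.shape)  # (1000, 10), categories coded 0..3
```

The abstract's sensitivity findings (sample size, test length, number of categories) would then be studied by re-running such a generator over a grid of conditions and fitting the model to each replicate.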
Abstract:
The availability of a high-intensity antiproton beam with momentum up to 15 GeV/c at the future FAIR facility will open a unique opportunity to investigate wide areas of nuclear physics with the $\overline{P}$ANDA (antiProton ANnihilations at DArmstadt) detector. Part of these investigations concern the Electromagnetic Form Factors of the proton in the time-like region and the study of the Transition Distribution Amplitudes, for which feasibility studies have been performed in this Thesis.

Moreover, simulations to study the efficiency and the energy resolution of the backward endcap of the electromagnetic calorimeter of $\overline{P}$ANDA are presented. This detector is crucial especially for the reconstruction of processes like $\bar{p}p \rightarrow e^+ e^- \pi^0$, investigated in this work. Different arrangements of dead material were studied. The results show that both the efficiency and the energy resolution of the backward endcap of the electromagnetic calorimeter fulfill the requirements for the detection of backward particles, and that this detector is necessary for the reconstruction of the channels of interest.

The study of the annihilation channel $\bar{p}p \rightarrow e^+ e^-$ will improve the knowledge of the Electromagnetic Form Factors in the time-like region, and will help to understand their connection with the Electromagnetic Form Factors in the space-like region. In this Thesis the feasibility of a measurement of the $\bar{p}p \rightarrow e^+ e^-$ cross section with $\overline{P}$ANDA is studied using Monte Carlo simulations. The major background channel $\bar{p}p \rightarrow \pi^+ \pi^-$ is taken into account. The results show a $10^9$ background suppression factor, which assures a sufficiently clean signal with less than 0.1% background contamination. The signal can be measured with an efficiency greater than 30% up to $s = 14$ (GeV/c)$^2$. The Electromagnetic Form Factors are extracted from the reconstructed signal and corrected angular distribution.
Above this $s$ limit, the low cross section will not allow the direct extraction of the Electromagnetic Form Factors. However, the total cross section can still be measured, and an extraction of the Electromagnetic Form Factors is possible under certain assumptions on the ratio between the electric and magnetic contributions.

The Transition Distribution Amplitudes are new non-perturbative objects describing the transition between a baryon and a meson. They are accessible in hard exclusive processes like $\bar{p}p \rightarrow e^+ e^- \pi^0$. The study of this process with $\overline{P}$ANDA will test the Transition Distribution Amplitude approach. This work includes a feasibility study for measuring this channel with $\overline{P}$ANDA. The main background reaction here is $\bar{p}p \rightarrow \pi^+ \pi^- \pi^0$. A background suppression factor of $10^8$ has been achieved while keeping a signal efficiency above 20%.

Part of this work has been published in the European Physical Journal A 44, 373-384 (2010).
Abstract:
In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all the units of the panel and provides, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited when the number of individuals in the panel is small relative to the number of time series observations. I develop the algorithm to implement the test and I use Monte Carlo simulation to analyze the properties of the test. The small sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test through a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreement (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
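Monte Carlo analysis of a test's properties follows a standard recipe: simulate data under the null hypothesis many times, apply the test to each replicate, and record the rejection frequency. A minimal sketch of estimating empirical size, using a simple two-sided z-test with known variance as a stand-in (not the SUR bounds test of the chapter):

```python
import numpy as np

rng = np.random.default_rng(42)

def empirical_size(n_obs=50, n_reps=20000, crit=1.96):
    """Estimate the empirical size of a two-sided z-test (known unit
    variance) by Monte Carlo: simulate under the null H0: mean = 0
    and count how often the test rejects at the 5% level."""
    data = rng.normal(0.0, 1.0, size=(n_reps, n_obs))  # draws under H0
    z = data.mean(axis=1) * np.sqrt(n_obs)             # z-statistic per replicate
    return float(np.mean(np.abs(z) > crit))            # rejection frequency

print(round(empirical_size(), 3))  # close to the nominal 0.05
```

Size and power studies for the panel test in the chapter would replace the data-generating step with simulated panels (with and without cointegration) and the z-test with the SUR-based statistic.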
Abstract:
The continued development in recent years of several variants of X-ray absorption spectroscopy (XAS) with synchrotron radiation has made it possible to determine the local structure of samples of every kind, from pure elements to the most modern materials, probing and deepening our knowledge of the mechanisms that give the latter innovative and sometimes revolutionary properties. The advantage of this technique is that it provides information on the structure of the sample chiefly at the local level, making it relatively easy to analyse systems without long-range order, such as molecular films. The thesis first outlines the phenomenology of XAS and the theoretical interpretation of the origin of the fine structure. It then describes the innovative measurement techniques that allow the changes in the local structure induced by illumination with visible light to be studied, including pump-probe experiments. One chapter of the thesis is entirely devoted to the description of the samples studied, for which some data acquired under static conditions were analysed. This analysis also exploited multiple-scattering paths, with particular attention to the treatment of the Debye-Waller factor. The main part of the thesis describes the design and testing of an experimental apparatus for the acquisition of differential spectra to be used at beamline BM08 of the European Synchrotron Radiation Facility in Grenoble. The modifications made to the beamline acquisition software and the design of an optical excitation system to be mounted in the experimental chamber are presented. During the study of the optics, a Monte Carlo simulator capable of predicting the behaviour of the lens system was created in LabVIEW.
Abstract:
This dissertation aims to deepen the understanding of exciton transport in organic semiconductors, such as those used in light-emitting diodes and solar cells. Using computer simulations, the transport of excitons in amorphous and crystalline organic materials was described, starting at the microscopic level, where quantum-mechanical processes take place, up to the macroscopic level, at which physically measurable quantities such as the diffusion coefficient become extractable. The modelling is based on incoherent electronic energy transfer. In this framework, exciton transport is treated as a hopping process, which was simulated with kinetic Monte Carlo methods. The required quantum-mechanical transfer rates between the molecules were computed from the molecular structure of the solid phases. The transfer rates can be separated into an electronic coupling element and the Franck-Condon-weighted density of states. The focus of this work was, on the one hand, to evaluate the methods available for computing the transfer rates and, on the other, to simulate the hopping transport and to provide an atomistic interpretation of the macroscopic transport properties of the excitons.

Of the three organic systems investigated, aluminium tris(8-hydroxyquinoline) served for a comprehensive validation of the procedure. It was shown that strongly simplified models such as Marcus theory often reproduce the transfer rates, and hence the transport behaviour of the excitons, qualitatively correctly. The usually much larger diffusion constants of singlet compared to triplet excitons originate in the longer range of the singlet coupling elements, which produces a more strongly branched network. The time-dependent diffusion coefficient shows subdiffusive behaviour at short observation times.
For singlet excitons this behaviour usually crosses over to a normal diffusive regime within the exciton lifetime, whereas triplet excitons reach the normal regime considerably more slowly. The more strongly anomalous behaviour of the triplet excitons is attributed to an uneven distribution of the transfer rates. When comparing with experimentally determined diffusion constants, the anomalous behaviour of the excitons must be taken into account. Overall, simulated and experimental diffusion constants agreed well for the test system. The modelling procedure should therefore be suitable for characterizing exciton transport in new organic semiconductor materials.
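The hopping picture described above can be sketched as a minimal kinetic Monte Carlo loop: Marcus-type rates on a site network, exponential waiting times, and rate-weighted hops. All parameter values and the toy random network below are illustrative placeholders, not the dissertation's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

def marcus_rate(J, dE, lam=0.2, kT=0.025):
    """Marcus-type hopping rate (prefactors absorbed into the units):
    J = electronic coupling, dE = site-energy difference,
    lam = reorganization energy, kT = thermal energy (nominally eV)."""
    return J**2 / np.sqrt(lam * kT) * np.exp(-(dE + lam)**2 / (4 * lam * kT))

def kmc_hop(positions, energies, couplings, n_steps=500):
    """Kinetic Monte Carlo for one exciton hopping on a site network:
    draw an exponential waiting time, then hop with rate-weighted probability."""
    n_sites = len(positions)
    site, t, traj = 0, 0.0, []
    for _ in range(n_steps):
        rates = np.array([marcus_rate(couplings[site, j], energies[j] - energies[site])
                          if j != site else 0.0 for j in range(n_sites)])
        total = rates.sum()
        t += rng.exponential(1.0 / total)             # waiting time at this site
        site = rng.choice(n_sites, p=rates / total)   # choose the next site
        traj.append((t, positions[site]))
    return traj

# toy network: 20 random sites, distance-decaying couplings, Gaussian energy disorder
n = 20
pos = rng.uniform(0.0, 5.0, size=(n, 3))
en = rng.normal(0.0, 0.05, size=n)
J = np.exp(-np.linalg.norm(pos[:, None] - pos[None, :], axis=-1))
traj = kmc_hop(pos, en, J)
times, points = zip(*traj)
msd = float(np.mean((np.array(points) - pos[0]) ** 2))  # crude displacement measure
print(len(traj), msd > 0.0)
```

A diffusion coefficient would follow from averaging the mean-squared displacement over many such trajectories and fitting its time dependence, which is also where the subdiffusive short-time regime noted in the abstract would show up.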
Abstract:
At the Mainz Microtron (MAMI), Lambda hypernuclei can be produced in (e,e'K^+) reactions. Detecting the produced kaon in the KAOS spectrometer tags reactions in which a hyperon was created. The spectroscopy of charged pions from weak two-body decays of light hypernuclei makes it possible to determine the binding energy of the hyperon in the nucleus with high precision. Besides the direct production of hypernuclei, production through the fragmentation of a highly excited continuum state is also possible, so that different hypernuclei can be studied in a single experiment. High-resolution magnetic spectrometers are available for the spectroscopy of the decay pions. To compute the ground-state mass of the hypernuclei from the pion momentum, the hyperfragment must be stopped in the target before it decays. Based on the known cross section of elementary kaon photoproduction, the expected event rate was calculated. A Monte Carlo simulation was developed that includes the fragmentation process and the stopping of the hyperfragments in the target, using a statistical break-up model to describe the fragmentation. This approach yields a prediction of the expected decay-pion count rate for hydrogen-4-Lambda hypernuclei. In a pilot experiment in 2011, the detection of hadrons with the KAOS spectrometer at a scattering angle of 0° was demonstrated for the first time at MAMI, with pions detected in coincidence. It turned out that, owing to the high positron background rates in KAOS, an unambiguous identification of hypernuclei was not possible in this configuration. Based on these findings, the KAOS spectrometer was modified to act as a dedicated kaon tagger.
For this purpose, a lead absorber was mounted in the spectrometer, in which positrons are stopped through shower formation. The effect of such an absorber was investigated in a beam test. A Geant4-based simulation was developed with which the arrangement of absorber and detectors was optimized and which provided predictions of the impact on data quality. In addition, the simulation was used to generate individual backtracking matrices for kaons, pions and protons that include the interaction of the particles with the lead wall and thus allow these effects to be corrected. With the improved setup, a production beam time was carried out in 2012, in which kaons at 0° scattering angle were successfully detected in coincidence with pions from weak decays. In the momentum spectrum of the decay pions, an excess with a significance corresponding to a p-value of 2.5 x 10^-4 was observed. Based on their momentum, these events can be attributed to decays of hydrogen-4-Lambda hypernuclei, and the number of detected pions is consistent with the calculated yield.
Abstract:
The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high-energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics and another searching for phenomena beyond the Standard Model.

The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb-1. The differential cross-section of pp -> Z/gamma* + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final-state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO.
Good agreement within the uncertainties between measured cross-sections and Standard Model predictions is observed.

Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena, several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a centre-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb-1, is used. The events are separated into different categories using the object multiplicity. The data-driven background method, already used for the cross-section measurement, was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
Abstract:
The thesis addresses knock in the internal combustion engine, with the aim of identifying a model capable of reproducing the phenomenon accurately, with a view to its use for predictive purposes. Models based on a variety of methodologies are presented: alongside methods based on directly or indirectly measurable quantities of the spark-ignition engine, the thesis presents a method based on neural networks, a control methodology based on the True Digital Control approach, and two methods that use purely statistical procedures (the least squares method and the Monte Carlo method) to derive some of the quantities needed for the knock calculation. Then, after a brief digression on 3D simulations, zero-dimensional physical models are introduced. One of these, based on an index (denoted Kn) that gives a quantitative assessment of the phenomenon, is applied to a set of experimental data from bench tests of a Ducati 1200 engine. The simulation results are compared with the experimental evidence, highlighting their good agreement and hence the potential of such methods, which are computationally inexpensive and quick to apply.
Abstract:
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of sqrt(s) = 7 TeV corresponding to an integrated luminosity of 38 inverse pb. Jets are reconstructed with the anti-kt algorithm with distance parameters R=0.4 or R=0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pt > 20 GeV and pseudorapidities eta < 4.5. The JES systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams. The JES uncertainty is less than 2.5% in the central calorimeter region (eta < 0.8) for jets with 60 < pt < 800 GeV, and is maximally 14% for pt < 30 GeV in the most forward region 3.2 < eta < 4.5.
Abstract:
A dynamic deterministic simulation model was developed to assess the impact of different putative control strategies on the seroprevalence of Neospora caninum in female Swiss dairy cattle. The model structure comprised compartments of "susceptible" and "infected" animals (SI-model) and the cattle population was divided into 12 age classes. A reference model (Model 1) was developed to simulate the current (status quo) situation (present seroprevalence in Switzerland 12%), taking into account available demographic and seroprevalence data of Switzerland. Model 1 was modified to represent four putative control strategies: testing and culling of seropositive animals (Model 2), discontinued breeding with offspring from seropositive cows (Model 3), chemotherapeutic treatment of calves from seropositive cows (Model 4), and vaccination of susceptible and infected animals (Model 5). Models 2-4 considered different sub-scenarios with regard to the frequency of diagnostic testing. Multivariable Monte Carlo sensitivity analysis was used to assess the impact of uncertainty in input parameters. A policy of annual testing and culling of all seropositive cattle in the population reduced the seroprevalence effectively and rapidly from 12% to <1% in the first year of simulation. The control strategies with discontinued breeding with offspring from all seropositive cows, chemotherapy of calves and vaccination of all cattle reduced the prevalence more slowly than culling but were still very effective (reduction of prevalence below 2% within 11, 23 and 3 years of simulation, respectively). However, sensitivity analyses revealed that the effectiveness of these strategies depended strongly on the quality of the input parameters used, such as the horizontal and vertical transmission factors, the sensitivity of the diagnostic test and the efficacy of medication and vaccination. Finally, all models confirmed that it was not possible to completely eradicate N. 
caninum as long as the horizontal transmission process was not interrupted.
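A deterministic SI-type prevalence recursion of the kind described can be sketched in a few lines. The transmission, replacement, and culling parameters below are illustrative placeholders (not the paper's fitted Swiss values), and the 12 age classes of the actual model are collapsed into a single compartment pair:

```python
def simulate_si(years=30, prev0=0.12, beta_h=0.02, p_vert=0.9,
                cull_rate=0.0, repl_rate=0.25):
    """Deterministic SI model of N. caninum seroprevalence in annual steps.
    beta_h: horizontal transmission probability per susceptible per year;
    p_vert: probability that a calf of an infected dam is infected;
    cull_rate: fraction of infected animals culled and replaced each year;
    repl_rate: annual herd turnover fraction.
    All parameter values are illustrative, not the paper's estimates."""
    prev = prev0
    history = [prev]
    for _ in range(years):
        prev += beta_h * (1 - prev) * prev               # horizontal infection
        # turnover: replacements of infected dams inherit infection vertically
        prev = (1 - repl_rate) * prev + repl_rate * p_vert * prev
        prev *= (1 - cull_rate)                          # test-and-cull removal
        history.append(prev)
    return history

baseline = simulate_si()                  # status quo: prevalence stays near 12%
culling = simulate_si(cull_rate=0.8)      # annual test-and-cull collapses prevalence
print(round(baseline[-1], 3), round(culling[-1], 6))
```

As the abstract notes, prevalence in such a model cannot reach zero while the horizontal term `beta_h` remains positive and no control acts on it, which is why eradication fails even under aggressive culling unless horizontal transmission is interrupted.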
Abstract:
Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimation of k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate the minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L2. We estimate these distances using cross-validation. A reliable estimate of k is obtained by voting over B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validation estimate of distance approach, along with the AIC, BIC/MDL and ICOMP approaches, to data from an osteoporosis drug trial in order to find groups that differentially respond to treatment.
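The cross-validated KL distance idea can be illustrated with held-out log-likelihood, which estimates the KL distance up to an additive constant: fit mixtures with different k on a training split by EM and score each on a test split. A minimal univariate sketch (a single split rather than the paper's vote over B splits, and not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

def em_normal_mixture(x, k, n_iter=100):
    """Fit a k-component univariate normal mixture by a basic EM loop."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out initial means
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sd

def heldout_loglik(x_test, w, mu, sd):
    """Mean held-out log-likelihood: a KL-distance estimate up to a constant."""
    dens = w * np.exp(-0.5 * ((x_test[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return float(np.log(dens.sum(axis=1)).mean())

# two well-separated components; choose k by held-out score
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
rng.shuffle(x)
train, test = x[:700], x[700:]
scores = {k: heldout_loglik(test, *em_normal_mixture(train, k)) for k in (1, 2, 3)}
best_k = max(scores, key=scores.get)
print(best_k)
```

The paper's procedure repeats this over B random splits and takes a majority vote of the resulting k estimates, which stabilizes the single-split choice made here.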
Abstract:
This paper introduces a novel approach to making inference about the regression parameters in the accelerated failure time (AFT) model for current status and interval censored data. The estimator is constructed by inverting a Wald type test for testing a null proportional hazards model. A numerically efficient Markov chain Monte Carlo (MCMC) based resampling method is proposed to simultaneously obtain the point estimator and a consistent estimator of its variance-covariance matrix. We illustrate our approach with interval censored data sets from two clinical studies. Extensive numerical studies are conducted to evaluate the finite sample performance of the new estimators.
Abstract:
The main goal of the AEgIS experiment at CERN is to test the weak equivalence principle for antimatter. AEgIS will measure the free-fall of an antihydrogen beam traversing a moiré deflectometer. The goal is to determine the gravitational acceleration with an initial relative accuracy of 1% by using an emulsion detector combined with a silicon μ-strip detector to measure the time of flight. Nuclear emulsions can measure the annihilation vertex of antihydrogen atoms with a precision of ~ 1–2 μm r.m.s. We present here results for emulsion detectors operated in vacuum using low energy antiprotons from the CERN antiproton decelerator. We compare with Monte Carlo simulations, and discuss the impact on the AEgIS project.
Abstract:
The sensitivity of the gas flow field to changes in different initial conditions has been studied for the case of a highly simplified cometary nucleus model. The nucleus model simulated a homogeneously outgassing sphere with a more active ring around an axis of symmetry. The varied initial conditions were the number density of the homogeneous region, the surface temperature, and the composition of the flow (varying amounts of H2O and CO2) from the active ring. The sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) method. Direct Simulation Monte Carlo (DSMC) was used to model the flow, thereby allowing for strong deviations from local thermal equilibrium. The PCE approach can be used to produce a sensitivity analysis with only four runs per modified input parameter and allows one to study and quantify non-linear responses of measurable parameters to linear changes in the input over a wide range. Hence the PCE allows one to obtain a functional relationship between the flow field properties at every point in the inner coma and the input conditions. It is shown, for example, that the velocity and the temperature of the background gas are not simply linear functions of the initial number density at the source. As expected, the main influence on each resulting flow field parameter is the corresponding initial parameter (i.e. the initial number density determines the background number density, the temperature of the surface determines the flow field temperature, etc.). However, the velocity of the flow field is also influenced by the surface temperature, while the number density is not sensitive to the surface temperature at all in our model set-up. Another example is the change in the composition of the flow over the active area. Such changes can be seen in the velocity but, again, not in the number density.
Although this study uses only a simple test case, we suggest that the approach, when applied to a real case in 3D, should assist in identifying the sensitivity of gas parameters measured in situ by, for example, the Rosetta spacecraft to the surface boundary conditions and vice versa.
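Non-intrusive PCE in one dimension illustrates the "four runs per parameter" economy: for a uniform input on [-1, 1], a degree-3 Legendre expansion is determined by four Gauss-Legendre model evaluations. A toy sketch, where the quadratic "model" is a placeholder for the DSMC flow solver:

```python
import numpy as np

def pce_coeffs_1d(model, degree=3):
    """Non-intrusive 1-D polynomial chaos expansion for a uniform input on
    [-1, 1]: project the model response onto Legendre polynomials using
    Gauss-Legendre quadrature with degree+1 nodes (4 model runs for degree 3)."""
    nodes, weights = np.polynomial.legendre.leggauss(degree + 1)
    runs = np.array([model(x) for x in nodes])   # the four model evaluations
    coeffs = []
    for n in range(degree + 1):
        Pn = np.polynomial.legendre.Legendre.basis(n)(nodes)
        norm = 2.0 / (2 * n + 1)                 # integral of Pn^2 over [-1, 1]
        coeffs.append(float((weights * runs * Pn).sum() / norm))
    return np.array(coeffs)

# stand-in "flow model": a mildly non-linear response to one input parameter
model = lambda x: 1.0 + 0.5 * x + 0.3 * x**2
c = pce_coeffs_1d(model)
print(np.round(c, 3))  # exact Legendre coefficients: 1.1, 0.5, 0.2, 0.0
```

The non-zero higher-order coefficients directly quantify the non-linear part of the response; in the multi-parameter case, squared coefficients (weighted by the basis norms) give Sobol-style variance contributions of each input.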