954 results for maximized monte Carlo test
Abstract:
In the first chapter, I develop a panel no-cointegration test that extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to account for unobserved common factors that contemporaneously affect all units of the panel while providing unit-specific test statistics. Moreover, the approach is particularly well suited when the number of individuals in the panel is small relative to the number of time-series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small-sample properties of the test are remarkable compared with those of its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreements (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled for by means of testing procedures (bootstrap and heteroskedasticity correction) that are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is rejected only in periods of financial-market turbulence. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application to the term structure of interest rates in the US.
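The bootstrap likelihood ratio test underlying bootrank follows the generic parametric-bootstrap recipe: estimate the model under the null, simulate artificial samples from the estimated null model, and compare the observed statistic against the simulated distribution. A minimal sketch of that recipe in Python; the helper functions lr_stat, fit_null and simulate_null are hypothetical placeholders, not part of bootrank or any existing package:

```python
import numpy as np

def bootstrap_lr_pvalue(y, lr_stat, fit_null, simulate_null, B=999, seed=0):
    """Bootstrap p-value of a likelihood-ratio-type statistic.

    lr_stat(y)               -> observed statistic on a sample
    fit_null(y)              -> parameters estimated under the null
    simulate_null(par, rng)  -> one artificial sample generated under the null
    """
    rng = np.random.default_rng(seed)
    stat_obs = lr_stat(y)
    par = fit_null(y)
    boot = np.array([lr_stat(simulate_null(par, rng)) for _ in range(B)])
    # share of bootstrap statistics at least as extreme as the observed one
    return (1 + np.sum(boot >= stat_obs)) / (B + 1)
```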
Abstract:
The continuous development in recent years of various forms of X-ray absorption spectroscopy (XAS) with synchrotron radiation has made it possible to determine the local structure of samples of every kind, from pure elements to the most modern materials, probing and deepening our understanding of the mechanisms that give the latter innovative and, at times, revolutionary properties. The advantage of this technique is that it provides information on the sample structure above all at the local level, making it relatively easy to analyse systems without long-range order, such as molecular films. The thesis first presents the phenomenology of XAS and the theoretical interpretation of the origin of the fine structure. It then describes the innovative measurement techniques that allow changes of the local structure induced by illumination with visible light to be studied, including pump-probe experiments. One chapter of the thesis is entirely devoted to describing the samples studied, for which some data acquired under static conditions were analysed. This analysis also exploited multiple-scattering paths, with particular attention to the treatment of the Debye-Waller factor. The main part of the thesis describes the design and testing of an experimental apparatus for acquiring differential spectra, to be used at beamline BM08 of the European Synchrotron Radiation Facility in Grenoble. The focus is on the modifications made to the beamline acquisition software and on the design of an optical excitation system to be mounted in the experimental chamber. During the study of the optics, a Monte Carlo simulator capable of predicting the behaviour of the lens system was developed in LabView.
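The lens-system simulator is described only at a high level, so here is a toy Monte Carlo ray trace in Python illustrating the general idea (the thesis' simulator is in LabView; all geometry values below, focal length, distances and source size, are made-up stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays = 100_000
f, d_lens, d_sample = 50.0, 100.0, 150.0   # mm: focal length, source-lens, source-sample

y0 = rng.uniform(-1.0, 1.0, n_rays)        # launch height over an extended source
theta = rng.uniform(-0.05, 0.05, n_rays)   # launch angle (rad)

y_lens = y0 + d_lens * theta               # free propagation to the lens plane
theta2 = theta - y_lens / f                # thin-lens deflection
y_sample = y_lens + (d_sample - d_lens) * theta2   # propagate on to the sample

print(f"spot RMS at sample plane: {y_sample.std():.3f} mm")
```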
Abstract:
This dissertation aims to deepen the understanding of exciton transport in organic semiconductors such as those used in light-emitting diodes and solar cells. Computer simulations were used to describe the transport of excitons in amorphous and crystalline organic materials, from the microscopic level, where quantum-mechanical processes take place, up to the macroscopic level, at which physically measurable quantities such as the diffusion coefficient can be extracted. The model is based on incoherent electronic energy transfer. Within this framework, exciton transport is treated as a hopping process, which was simulated with kinetic Monte Carlo methods. The required quantum-mechanical transfer rates between the molecules were computed from the molecular structure of the solid phases. The transfer rates factorise into an electronic coupling element and the Franck-Condon-weighted density of states. The focus of this work was, on the one hand, to evaluate the methods available for computing the transfer rates and, on the other hand, to simulate the hopping transport and provide an atomistic interpretation of the macroscopic transport properties of the excitons.

Of the three organic systems investigated, aluminium tris(8-hydroxyquinoline) served for the comprehensive validation of the approach. It was shown that strongly simplified models such as Marcus theory often reproduce the transfer rates, and hence the transport behaviour of the excitons, qualitatively correctly. The usually much larger diffusion constants of singlet compared with triplet excitons originate in the longer range of the singlet coupling elements, which produces a more strongly branched network. The time-dependent diffusion coefficient shows subdiffusive behaviour at short observation times. For singlet excitons this behaviour usually crosses over to a normal diffusion regime within the exciton lifetime, whereas triplet excitons reach the normal regime considerably more slowly. The more strongly anomalous behaviour of the triplet excitons is attributed to an uneven distribution of the transfer rates. When comparing with experimentally determined diffusion constants, the anomalous behaviour of the excitons must be taken into account. Overall, simulated and experimental diffusion constants agreed well for the test system. The modelling procedure should therefore be suitable for characterising exciton transport in new organic semiconductor materials.
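A minimal sketch of the simulation pipeline described above, kinetic Monte Carlo over Marcus-type hopping rates, might look as follows in Python. The rate inputs (coupling J, site-energy difference dE, reorganisation energy lam) are the quantities the thesis computes quantum-mechanically; here they are free parameters and prefactor constants are absorbed into the units:

```python
import numpy as np

KBT = 0.025  # thermal energy at room temperature, eV

def marcus_rate(J, dE, lam):
    """Marcus rate up to a constant prefactor:
    k ~ J^2 / sqrt(4*pi*lam*kBT) * exp(-(dE + lam)^2 / (4*lam*kBT))"""
    return J**2 / np.sqrt(4 * np.pi * lam * KBT) \
        * np.exp(-(dE + lam) ** 2 / (4 * lam * KBT))

def kmc_trajectory(pos, rates, n_steps=10_000, seed=0):
    """Residence-time (BKL) kinetic Monte Carlo walk over sites.
    pos: (n_sites, 3) coordinates; rates: (n_sites, n_sites) hopping rates
    (diagonal must be zero). Returns times and squared displacements;
    averaging many trajectories gives MSD(t), and D(t) = MSD(t) / (6 t)
    in three dimensions -- a decreasing D(t) signals subdiffusion."""
    rng = np.random.default_rng(seed)
    site, t = 0, 0.0
    times, sqdisp = [0.0], [0.0]
    for _ in range(n_steps):
        k_out = rates[site]
        k_tot = k_out.sum()
        t += rng.exponential(1.0 / k_tot)               # waiting time on site
        site = rng.choice(len(k_out), p=k_out / k_tot)  # pick destination
        times.append(t)
        sqdisp.append(float(np.sum((pos[site] - pos[0]) ** 2)))
    return np.array(times), np.array(sqdisp)
```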
Abstract:
At the Mainz Microtron, Lambda hypernuclei can be produced in (e,e'K^+) reactions. Detecting the produced kaon in the KAOS spectrometer tags reactions in which a hyperon was created. Spectroscopy of charged pions originating from weak two-body decays of light hypernuclei allows the binding energy of the hyperon in the nucleus to be determined with high precision. Besides direct production of hypernuclei, production via fragmentation of a highly excited continuum state is also possible, so that different hypernuclei can be studied in a single experiment. High-resolution magnetic spectrometers are available for the spectroscopy of the decay pions. To compute the ground-state mass of the hypernuclei from the pion momentum, the hyperfragment must be stopped in the target before it decays. Based on the known cross-section of elementary kaon photoproduction, the expected event rate was calculated. A Monte Carlo simulation was developed that covers the fragmentation process and the stopping of the hyperfragments in the target, using a statistical break-up model to describe the fragmentation. This approach yields a prediction of the expected decay-pion count rate for hydrogen-4-Lambda hypernuclei. In a pilot experiment in 2011, the detection of hadrons with the KAOS spectrometer at a scattering angle of 0° was demonstrated for the first time at MAMI, with pions detected in coincidence. It turned out that, owing to the high positron background rates in KAOS, an unambiguous identification of hypernuclei was not possible in this configuration. Based on these findings, the KAOS spectrometer was modified to act as a dedicated kaon tagger: a lead absorber was mounted in the spectrometer, in which positrons are stopped through shower formation. The effect of such an absorber was investigated in a beam test. A Geant4-based simulation was developed with which the layout of absorber and detectors was optimised and which provided predictions of the impact on data quality. In addition, the simulation was used to generate individual backtracking matrices for kaons, pions and protons that include the interaction of the particles with the lead wall and thus allow its effects to be corrected. With the improved set-up, a production beam time was carried out in 2012, in which kaons at a 0° scattering angle were successfully detected in coincidence with pions from weak decays. An excess with a significance corresponding to a p-value of 2.5 x 10^-4 was found in the momentum spectrum of the decay pions. Based on their momentum, these events can be attributed to decays of hydrogen-4-Lambda hypernuclei, and the number of detected pions is consistent with the calculated yield.
Abstract:
The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high-energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics and another searching for phenomena beyond the Standard Model. The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb-1. The differential cross-section of pp -> Z/gamma + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final-state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. Good agreement within the uncertainties between measured cross-sections and Standard Model predictions is observed. Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena, several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb-1, is used. The events are separated into different categories using the object multiplicity. The data-driven background method already used for the cross-section measurement was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
Abstract:
The thesis addresses knock in the internal combustion engine, with the aim of identifying a model able to reproduce the phenomenon accurately, with a view to predictive use. To this end, models based on a variety of methodologies are presented: alongside methods based on directly or indirectly measurable quantities of the spark-ignition engine, the thesis presents a method based on neural networks, a control methodology based on the True Digital Control approach, and two methods that use purely statistical procedures (least squares and Monte Carlo) to derive some of the quantities needed to compute knock. After a brief digression on 3D simulations, zero-dimensional physical models are then introduced. One of these, based on an index (denoted Kn) that quantifies the phenomenon, is applied to a set of experimental data from bench tests of a Ducati 1200 engine. The results of the analysis are compared with the experimental evidence, highlighting the good agreement of the simulations with the data and hence the potential of such methods, which are computationally cheap and quick to apply.
Abstract:
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of sqrt(s) = 7 TeV corresponding to an integrated luminosity of 38 inverse pb. Jets are reconstructed with the anti-kt algorithm with distance parameters R=0.4 or R=0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pt > 20 GeV and pseudorapidities |eta| < 4.5. The JES systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams. The JES uncertainty is less than 2.5% in the central calorimeter region (|eta| < 0.8) for jets with 60 < pt < 800 GeV, and is maximally 14% for pt < 30 GeV in the most forward region 3.2 < |eta| < 4.5.
Abstract:
A dynamic deterministic simulation model was developed to assess the impact of different putative control strategies on the seroprevalence of Neospora caninum in female Swiss dairy cattle. The model structure comprised compartments of "susceptible" and "infected" animals (SI model), and the cattle population was divided into 12 age classes. A reference model (Model 1) was developed to simulate the current (status quo) situation (present seroprevalence in Switzerland: 12%), taking into account available demographic and seroprevalence data for Switzerland. Model 1 was modified to represent four putative control strategies: testing and culling of seropositive animals (Model 2), discontinued breeding with offspring from seropositive cows (Model 3), chemotherapeutic treatment of calves from seropositive cows (Model 4), and vaccination of susceptible and infected animals (Model 5). Models 2-4 considered different sub-scenarios with regard to the frequency of diagnostic testing. Multivariable Monte Carlo sensitivity analysis was used to assess the impact of uncertainty in input parameters. A policy of annual testing and culling of all seropositive cattle in the population reduced the seroprevalence effectively and rapidly, from 12% to <1% in the first year of simulation. The control strategies of discontinued breeding with offspring from seropositive cows, chemotherapy of calves and vaccination of all cattle reduced the prevalence more slowly than culling but were still very effective (reduction of prevalence below 2% within 11, 23 and 3 years of simulation, respectively). However, sensitivity analyses revealed that the effectiveness of these strategies depended strongly on the quality of the input parameters used, such as the horizontal and vertical transmission factors, the sensitivity of the diagnostic test, and the efficacy of medication and vaccination. Finally, all models confirmed that it is not possible to completely eradicate N. caninum as long as the horizontal transmission process is not interrupted.
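To make the compartment structure concrete, here is a deliberately stripped-down, discrete-time SI sketch in Python, one age class instead of twelve, with illustrative transmission, turnover and culling parameters that are not taken from the paper:

```python
def simulate_si(years=25, prev0=0.12, beta_h=0.02, p_vert=0.95,
                turnover=0.25, cull_frac=0.0, test_se=1.0):
    """Yearly infection prevalence in the female herd (all values are
    herd fractions). cull_frac is the share of test-positive cows removed
    per year; test_se is the diagnostic sensitivity."""
    I = prev0
    out = [I]
    for _ in range(years):
        I = I + beta_h * (1 - I) * I        # horizontal transmission
        I = I - cull_frac * test_se * I     # test-and-cull of seropositives
        # replacement: calves of infected dams are infected with prob. p_vert
        I = (1 - turnover) * I + turnover * p_vert * I
        out.append(min(max(I, 0.0), 1.0))
    return out

# e.g. annual test-and-cull drives prevalence down quickly:
# simulate_si(cull_frac=0.9)
```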
Abstract:
Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are the Kullback-Leibler (KL) and L2 distances. We estimate these distances using cross-validation. A reliable estimate of k is obtained by voting over B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With a focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validated distance approach, along with AIC, BIC/MDL, and ICOMP, to data from an osteoporosis drug trial in order to find groups that differentially respond to treatment.
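Since the held-out log-likelihood estimates the KL distance to the true density up to an additive constant, maximising it over k is equivalent to minimising the estimated KL distance, and the procedure can be sketched with standard tools. A Python version using scikit-learn (the fold count, k_max and voting scheme below are illustrative choices, not the paper's exact settings):

```python
import numpy as np
from collections import Counter
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def choose_k_cv(x, k_max=6, B=10, seed=0):
    """Vote over B repeated cross-validations for the number of components k
    of a univariate normal mixture."""
    x = x.reshape(-1, 1)
    votes = []
    for b in range(B):
        kf = KFold(n_splits=5, shuffle=True, random_state=seed + b)
        scores = np.zeros(k_max)
        for train, test in kf.split(x):
            for k in range(1, k_max + 1):
                gm = GaussianMixture(n_components=k, random_state=0).fit(x[train])
                scores[k - 1] += gm.score(x[test])  # held-out mean log-likelihood
        votes.append(np.argmax(scores) + 1)         # best k on this repetition
    return Counter(votes).most_common(1)[0][0]      # majority vote for k
```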
Abstract:
This paper introduces a novel approach to making inference about the regression parameters in the accelerated failure time (AFT) model for current status and interval-censored data. The estimator is constructed by inverting a Wald-type test for testing a null proportional hazards model. A numerically efficient Markov chain Monte Carlo (MCMC) based resampling method is proposed to simultaneously obtain the point estimator and a consistent estimator of its variance-covariance matrix. We illustrate our approach with interval-censored data sets from two clinical studies. Extensive numerical studies are conducted to evaluate the finite-sample performance of the new estimators.
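One common way to realise this kind of test-inversion-plus-MCMC idea, offered here as an interpretation rather than the authors' exact construction, is to treat exp(-W(beta)/2), with W a Wald-type statistic evaluated at the null value beta, as an unnormalised density, sample it with random-walk Metropolis, and use the sample mean and sample covariance of the draws as the point estimate and its variance estimate. A Python sketch (W_stat is a user-supplied hypothetical function):

```python
import numpy as np

def mcmc_estimate(W_stat, beta0, n_draws=5000, step=0.1, seed=0):
    """Point estimate and variance estimate from Metropolis draws
    targeting exp(-W(beta)/2). beta0: starting parameter vector."""
    rng = np.random.default_rng(seed)
    beta = np.atleast_1d(np.asarray(beta0, dtype=float))
    logf = -0.5 * W_stat(beta)
    draws = []
    for _ in range(n_draws):
        prop = beta + step * rng.standard_normal(beta.shape)
        logf_prop = -0.5 * W_stat(prop)
        if np.log(rng.uniform()) < logf_prop - logf:  # Metropolis accept step
            beta, logf = prop, logf_prop
        draws.append(beta.copy())
    draws = np.array(draws[n_draws // 2:])            # drop burn-in half
    return draws.mean(axis=0), np.cov(draws.T)        # estimate, covariance
```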
Abstract:
The main goal of the AEgIS experiment at CERN is to test the weak equivalence principle for antimatter. AEgIS will measure the free-fall of an antihydrogen beam traversing a moiré deflectometer. The goal is to determine the gravitational acceleration with an initial relative accuracy of 1% by using an emulsion detector combined with a silicon μ-strip detector to measure the time of flight. Nuclear emulsions can measure the annihilation vertex of antihydrogen atoms with a precision of ~ 1–2 μm r.m.s. We present here results for emulsion detectors operated in vacuum using low energy antiprotons from the CERN antiproton decelerator. We compare with Monte Carlo simulations, and discuss the impact on the AEgIS project.
Abstract:
The sensitivity of the gas flow field to changes in different initial conditions has been studied for the case of a highly simplified cometary nucleus model. The nucleus model simulated a homogeneously outgassing sphere with a more active ring around an axis of symmetry. The varied initial conditions were the number density of the homogeneous region, the surface temperature, and the composition of the flow (varying amounts of H2O and CO2) from the active ring. The sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) method. Direct Simulation Monte Carlo (DSMC) was used for the flow, thereby allowing strong deviations from local thermal equilibrium. The PCE approach can produce a sensitivity analysis with only four runs per modified input parameter and allows one to study and quantify non-linear responses of measurable parameters to linear changes in the input over a wide range. Hence the PCE yields a functional relationship between the flow field properties at every point in the inner coma and the input conditions. It is shown, for example, that the velocity and the temperature of the background gas are not simply linear functions of the initial number density at the source. As expected, the main influence on each resulting flow field parameter is the corresponding initial parameter (i.e. the initial number density determines the background number density, the temperature of the surface determines the flow field temperature, etc.). However, the velocity of the flow field is also influenced by the surface temperature, while the number density is not sensitive to the surface temperature at all in our model set-up. Another example is the change in the composition of the flow over the active area: such changes can be seen in the velocity but, again, not in the number density. Although this study uses only a simple test case, we suggest that the approach, when applied to a real case in 3D, should assist in identifying the sensitivity of gas parameters measured in situ by, for example, the Rosetta spacecraft to the surface boundary conditions and vice versa.
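The "four runs per parameter" figure corresponds to fitting a cubic (degree-3) chaos expansion in each input. A one-dimensional Python sketch of that fit, with `model` standing in for the expensive DSMC run; a uniformly distributed input is assumed, for which Legendre polynomials are the matching chaos basis:

```python
import numpy as np
from numpy.polynomial import legendre

def pce_1d(model, lo, hi, degree=3):
    """Fit output = sum_i c_i * P_i(xi), with xi in [-1, 1] mapped to the
    physical input range [lo, hi]. Returns coefficients and a surrogate."""
    # Gauss-Legendre nodes: degree+1 model runs suffice for an exact fit
    xi, _ = legendre.leggauss(degree + 1)
    x = lo + (xi + 1) * (hi - lo) / 2          # map nodes to the input range
    y = np.array([model(v) for v in x])        # the expensive (DSMC) runs
    coeffs = legendre.legfit(xi, y, degree)    # PCE coefficients c_i
    surrogate = lambda v: legendre.legval(2 * (v - lo) / (hi - lo) - 1, coeffs)
    return coeffs, surrogate

# coeffs[1] measures the linear response; coeffs[2:] quantify the nonlinear
# sensitivity of the output (e.g. gas velocity) to the input (e.g. density).
```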
Abstract:
Introduction: Schizophrenia patients frequently suffer from complex motor abnormalities including fine and gross motor disturbances, abnormal involuntary movements, neurological soft signs and parkinsonism. These symptoms occur early in the course of the disease, continue in chronic patients and may deteriorate with antipsychotic medication. Furthermore, gesture performance is impaired in patients, including the pantomime of tool use. Whether schizophrenia patients show difficulties in actual tool use has not yet been investigated. Human tool use is complex and relies on a network of distinct and distant brain areas. We therefore aimed to test whether schizophrenia patients have difficulties in tool use and to assess associations with structural brain imaging using voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS). Methods: In total, 44 patients with schizophrenia (DSM-5 criteria; 59% men, mean age 38) underwent structural MR imaging and performed the Tool-Use test. The test examines the use of a scoop and a hammer in three conditions: pantomime (without the tool), demonstration (with the tool) and actual use (with a recipient object). T1-weighted images were processed using SPM8 and DTI data using FSL TBSS routines. To assess structural alterations associated with impaired tool use, we first compared gray matter (GM) volume in VBM and white matter (WM) integrity in TBSS data of patients with and without difficulties in actual tool use. Next we explored correlations of Tool-Use scores with VBM and TBSS data. Group comparisons were family-wise error corrected for multiple tests. Correlations were uncorrected (p < 0.001) with a minimum cluster threshold of 17 voxels (equivalent to a map-wise false positive rate of alpha < 0.0001 using a Monte Carlo procedure). Results: Tool use was impaired in schizophrenia (43.2% pantomime, 11.6% demonstration, 11.6% use). Impairment was related to reduced GM volume and WM integrity. Whole-brain analyses detected an effect in the SMA in the group analysis. Correlations of tool use scores and brain structure revealed alterations in brain areas of the dorso-dorsal pathway (superior occipital gyrus, superior parietal lobule, and dorsal premotor area) and the ventro-dorsal pathway (middle occipital gyrus, inferior parietal lobule) of the action network, as well as the insula and the left hippocampus. Furthermore, significant correlations within connecting fiber tracts, particularly alterations within the bilateral superior and anterior corona radiata as well as the corpus callosum, were associated with Tool-Use performance. Conclusions: Tool use performance was impaired in schizophrenia, and this impairment was associated with reduced GM volume in the action network. Our results are in line with reports of impaired tool use in patients with brain lesions, particularly of the dorso-dorsal and ventro-dorsal streams of the action network. In addition, an effect of tool use on WM integrity was shown within fiber tracts connecting regions important for planning and executing tool use. Furthermore, the hippocampus is part of a brain system responsible for spatial memory and navigation. The results suggest that structural brain alterations in the common praxis network contribute to impaired tool use in schizophrenia.
Abstract:
I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample chi-squared approximation tests based on Pearson's X2, the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440–464). The complex survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221–230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov–Smirnov test for discrete data.
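For readers outside Stata, the Monte Carlo variant of the exact test is easy to replicate: simulate the null multinomial distribution, recompute the test statistic for each replicate, and report the exceedance fraction. A Python sketch using Pearson's X2 as the statistic (the die-fairness example at the end is made up):

```python
import numpy as np

def mc_exact_gof(counts, p0, reps=10_000, seed=0):
    """Monte Carlo exact goodness-of-fit p-value for observed category
    counts under null cell probabilities p0, using Pearson's X2."""
    counts = np.asarray(counts)
    p0 = np.asarray(p0, dtype=float)
    n = counts.sum()
    expected = n * p0
    x2_obs = np.sum((counts - expected) ** 2 / expected)
    rng = np.random.default_rng(seed)
    sims = rng.multinomial(n, p0, size=reps)             # samples under H0
    x2_sim = np.sum((sims - expected) ** 2 / expected, axis=1)
    return (1 + np.sum(x2_sim >= x2_obs)) / (reps + 1)   # MC p-value

# example: test whether a die is fair from observed face counts
# p = mc_exact_gof([18, 24, 17, 21, 15, 25], np.ones(6) / 6)
```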