974 results for "test de Monte Carlo"


Relevance: 90.00%

Abstract:

This document introduces the planned new search for the neutron Electric Dipole Moment at the Spallation Neutron Source at Oak Ridge National Laboratory. A spin precession measurement is to be carried out using ultracold neutrons diluted in a superfluid helium bath at T = 0.5 K, where spin-polarized 3He atoms act as a detector of the neutron spin polarization. This manuscript describes some of the key aspects of the planned experiment, together with the contributions from Caltech to the development of the project.

Techniques used in the design of magnet coils for Nuclear Magnetic Resonance were adapted to the geometry of the experiment. An initial design approach is described that uses a pair of coils tuned to shield outer conductive elements from resistive heat loads while inducing an oscillating field in the measurement volume. A small prototype was constructed to test the model of the field at room temperature.

A large-scale test of the high-voltage system was carried out in a collaborative effort at Los Alamos National Laboratory. The application and amplification of high voltage to polished steel electrodes immersed in a superfluid helium bath was studied, as well as the electrical breakdown properties of the electrodes at low temperatures. A suite of Monte Carlo simulation software tools to model the interaction of neutrons, 3He atoms, and their spins with the experimental magnetic and electric fields was developed and implemented to study the expected systematic effects of the measurement, with particular focus on the false Electric Dipole Moment induced by a geometric phase akin to Berry's phase.

An analysis framework was developed and implemented using an unbinned likelihood to fit the time-modulated signal expected from the measurement data. A collaborative Monte Carlo data set was used to test the analysis methods.
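A minimal sketch of an unbinned likelihood fit to a time-modulated rate. The pdf (1 + a·cos(2πt))/T, the amplitude, and the event counts below are illustrative assumptions, not the experiment's actual signal model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Hypothetical time-modulated rate: pdf(t) = (1 + a*cos(2*pi*t)) / T on [0, T],
# with T an integer number of periods so the normalisation is exactly T.
T, a_true = 10.0, 0.3

# Accept-reject sampling of event times from the modulated pdf.
t = rng.uniform(0, T, 200000)
keep = rng.uniform(0, 1 + abs(a_true), t.size) < 1 + a_true * np.cos(2 * np.pi * t)
events = t[keep][:20000]

def nll(a):
    """Unbinned negative log-likelihood of the modulation amplitude a."""
    return -np.sum(np.log((1 + a * np.cos(2 * np.pi * events)) / T))

fit = minimize_scalar(nll, bounds=(-0.9, 0.9), method="bounded")
a_hat = fit.x   # should recover a_true within statistical uncertainty
```

Because every event time enters the likelihood individually, no binning choice biases the amplitude estimate.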

Relevance: 90.00%

Abstract:

Doctorate in Economics

Relevance: 90.00%

Abstract:

The main objective of this study is to apply recently developed methods from statistical physics to time series analysis, in particular to electrical induction profiles from oil well data, in order to study the petrophysical similarity of those wells in a spatial distribution. To this end we used the DFA (Detrended Fluctuation Analysis) method, to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied clustering analysis using the non-hierarchical K-means method. Usually based on the Euclidean distance, K-means divides the N elements of a data matrix into k groups so that the similarity between elements belonging to different groups is as small as possible. To test whether a dataset partitioned by K-means, or a randomly generated dataset, forms spatial patterns, we created the parameter Ω (index of neighborhood): high values of Ω indicate more aggregated data, while low values indicate scattered data or data without spatial correlation. We conclude that the DFA values of the 54 wells are clustered and can be used to characterize spatial fields. A contour level technique confirmed the results obtained with K-means, showing that DFA is effective for spatial analysis.
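The DFA-plus-clustering pipeline can be sketched as follows. The DFA implementation is a standard first-order version, and the synthetic series stand in for well-log profiles; the Ω neighborhood index and the real well data are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def dfa_exponent(x, scales=(16, 32, 64, 128)):
    """First-order DFA: slope of log F(n) versus log n."""
    y = np.cumsum(x - x.mean())              # integrated profile
    F = []
    for n in scales:
        segs = len(y) // n
        z = y[:segs * n].reshape(segs, n)
        t = np.arange(n)
        # linear detrend in each window, then RMS fluctuation
        ms = []
        for seg in z:
            c = np.polyfit(t, seg, 1)
            ms.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
# two synthetic "well log" populations: white noise (alpha ~ 0.5)
# and its running sum (alpha ~ 1.5)
white = [dfa_exponent(rng.standard_normal(2048)) for _ in range(10)]
brown = [dfa_exponent(np.cumsum(rng.standard_normal(2048))) for _ in range(10)]

alphas = np.array(white + brown).reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(alphas)
```

K-means on the one-dimensional DFA exponents cleanly separates the two populations, mirroring the grouping step applied to the 54 wells.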

Relevance: 90.00%

Abstract:

The response of zooplankton assemblages to variations in the water quality of four man-made lakes, caused by eutrophication and siltation, was investigated by means of canonical correspondence analysis (CCA). Monte Carlo simulations using the CCA eigenvalues as test statistics revealed that changes in zooplankton species composition along the environmental gradients of trophic state and abiogenic turbidity were highly significant. The species Brachionus calyciflorus, Thermocyclops sp. and Argyrodiaptomus sp. were good indicators of eutrophic conditions, while Brachionus dolabratus, Keratella tropica and Hexarthra sp. were good indicators of high turbidity due to suspended sediments. The rotifer genus Brachionus was the most species-rich taxon, comprising five species associated with different environmental conditions. We therefore tested whether this genus alone could be a better biological indicator of these environmental gradients than the entire zooplankton assemblage or any other random set of five species. The ordination results show that the five Brachionus species alone did not explain the observed pattern of environmental variation better than most random sets of five species. Therefore, this genus could not be selected as a target taxon for more intensive environmental monitoring, as had previously been suggested by Attayde and Bozelli (1998). Overall, our results show that changes in the water quality of man-made lakes in a tropical semi-arid region have significant effects on the structure of zooplankton assemblages that can potentially affect the functioning of these ecosystems.
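The Monte Carlo significance-test logic can be sketched with a simpler statistic: here the squared correlation between a synthetic abundance and an environmental gradient stands in for a CCA eigenvalue, and the null distribution is built by permuting the gradient.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical data: abundance of an indicator species rises with trophic state
trophic = rng.uniform(0, 1, 40)
abundance = 2.0 * trophic + rng.normal(0, 0.5, 40)

def stat(env, abund):
    # r^2, a stand-in for a constrained-ordination eigenvalue
    return np.corrcoef(env, abund)[0, 1] ** 2

observed = stat(trophic, abundance)
# Monte Carlo null: permute the environmental gradient, recompute the statistic
null = np.array([stat(rng.permutation(trophic), abundance) for _ in range(999)])
p_value = (np.sum(null >= observed) + 1) / (999 + 1)
```

Permuting the environmental variable breaks any species-environment association while preserving both marginal distributions, which is exactly the null hypothesis tested.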

Relevance: 90.00%

Abstract:

Detecting change points in epidemic models has been studied by many scholars. Yao (1993) summarized five test statistics existing in the literature, among which the likelihood ratio statistic showed standout power. However, all of the existing test statistics rest on the assumption that the population variance is known, which is unrealistic in practice. To avoid assuming a known population variance, a new test statistic for detecting epidemic change points is studied in this thesis. The new statistic is parameter-free and more powerful than the existing ones. Different sample sizes and lengths of epidemic duration are used for the power comparison, and Monte Carlo simulation is used both to find the critical values of the new test statistic and to perform the comparison. Based on the Monte Carlo simulation results, it can be concluded that the sample size and the length of the epidemic duration affect the power of the tests, and that the new test statistic has higher power than the existing test statistics in all cases.
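How Monte Carlo simulation yields critical values for a studentised (variance-free) test can be sketched as follows; the scan statistic below is a generic CUSUM-type stand-in, not the exact statistic studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

def epidemic_stat(x):
    """Studentised scan statistic for an epidemic (boxed) mean shift: the
    maximum over candidate segments of |segment sum of centred data|/sqrt(length),
    divided by the sample standard deviation so no known variance is assumed."""
    m = len(x)
    s = x.std(ddof=1)
    c = np.concatenate([[0.0], np.cumsum(x - x.mean())])
    best = 0.0
    for i in range(m):
        for j in range(i + 10, m + 1):   # require a minimum epidemic duration
            best = max(best, abs(c[j] - c[i]) / np.sqrt(j - i))
    return best / s

# Monte Carlo critical value at the 5% level under the null (no epidemic)
null = np.array([epidemic_stat(rng.standard_normal(n)) for _ in range(300)])
crit = np.quantile(null, 0.95)

# power check: insert an epidemic segment with an elevated mean
x = rng.standard_normal(n)
x[40:70] += 2.0
reject = epidemic_stat(x) > crit
```

Because the statistic is studentised, the simulated null distribution does not depend on the unknown variance, which is the point of a parameter-free construction.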

Relevance: 90.00%

Abstract:

Understanding how virus strains offer protection against closely related emerging strains is vital for creating effective vaccines. For many viruses, including Foot-and-Mouth Disease Virus (FMDV) and the Influenza virus, where multiple serotypes often co-circulate, in vitro testing of large numbers of vaccines can be infeasible. The development of an in silico predictor of cross-protection between strains is therefore important to help optimise vaccine choice. Vaccines offer cross-protection against closely related strains, but not against those that are antigenically distinct. To predict cross-protection we must understand the antigenic variability within a virus serotype and within distinct lineages of a virus, and identify the antigenic residues and evolutionary changes that cause the variability. In this thesis we present a family of sparse hierarchical Bayesian models for detecting relevant antigenic sites in virus evolution (SABRE), as well as an extended version of the method, the extended SABRE (eSABRE) method, which better takes into account the data collection process. The SABRE methods are sparse Bayesian hierarchical models that use spike and slab priors to identify sites in the viral protein which are important for the neutralisation of the virus. We demonstrate how the SABRE methods can be used to identify antigenic residues within different serotypes, and show how the SABRE method outperforms established methods, namely mixed-effects models based on forward variable selection or L1 regularisation, on both synthetic and viral datasets. In addition we test a number of different versions of the SABRE method, comparing conjugate and semi-conjugate prior specifications as well as an alternative to the spike and slab prior: the binary mask model. We also propose novel proposal mechanisms for the Markov chain Monte Carlo (MCMC) simulations, which improve mixing and convergence over the established component-wise Gibbs sampler.
The SABRE method is then applied to datasets from FMDV and the Influenza virus in order to identify a number of known antigenic residues and to provide hypotheses about other potentially antigenic residues. We also demonstrate how the SABRE methods can be used to create accurate predictions of the important evolutionary changes of the FMDV serotypes. We then provide an extended version of the SABRE method, the eSABRE method, based on a latent variable model. The eSABRE method takes the structure of the FMDV and Influenza datasets further into account through the latent variable model and improves the modelling of the error. We show how the eSABRE method outperforms the SABRE methods in simulation studies and propose a new information criterion for selecting the random-effects factors that should be included in the eSABRE method: the block integrated Widely Applicable Information Criterion (biWAIC). We demonstrate how biWAIC performs comparably to two other methods for selecting random-effects factors, and combine it with the eSABRE method to apply it to two large Influenza datasets. Inference in these large datasets is computationally infeasible with the SABRE methods, but as a result of the improved structure of the likelihood, the eSABRE method offers a computational improvement that allows it to be used on these datasets. The results of the eSABRE method show that it can be used in a fully automatic manner to identify a large number of antigenic residues on a variety of the antigenic sites of two Influenza serotypes, as well as predicting a number of nearby sites that may also be antigenic and are worthy of further experimental investigation.
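The spike-and-slab idea behind the SABRE methods can be illustrated in miniature on a single putative antigenic residue: integrate the slab coefficient out analytically and compare marginal likelihoods. This is a minimal conjugate sketch with made-up data, not the SABRE model itself.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
n, sigma2, v = 50, 1.0, 10.0

# x: hypothetical covariate coding an amino-acid change at one residue;
# y: neutralisation-type response, with and without a real effect
x = rng.standard_normal(n)
y_signal = 1.0 * x + rng.normal(0, 1.0, n)   # residue with a real effect (b = 1)
y_null = rng.normal(0, 1.0, n)               # residue with no effect (b = 0)

def inclusion_prob(y, x, prior_inc=0.5):
    """Posterior probability that the slab (b != 0) generated y, with the
    coefficient b integrated out under its N(0, v) slab prior."""
    m_spike = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=sigma2 * np.eye(n))
    m_slab = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                        cov=sigma2 * np.eye(n) + v * np.outer(x, x))
    # posterior odds computed in log space for numerical stability
    log_odds = np.log(prior_inc) - np.log(1 - prior_inc) + m_slab - m_spike
    return 1.0 / (1.0 + np.exp(-log_odds))

p_signal = inclusion_prob(y_signal, x)
p_null = inclusion_prob(y_null, x)
```

The marginal-likelihood comparison penalises the slab for its extra flexibility, so an irrelevant residue is pushed towards the spike; in the full SABRE models this selection runs jointly over all residues via MCMC.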

Relevance: 90.00%

Abstract:

This paper analyses the determinants of the intermediation margin of the Colombian financial system between 1989 and 2003. Using a dynamic estimation of the effects generated by specific variables of activity, taxation and market structure, it tracks the financial intermediation margin over a period that includes episodes of both liberalization and crisis.

Relevance: 90.00%

Abstract:

The scientific success of the LHC experiments at CERN depends critically on the availability of computing resources which efficiently store, process, and analyse the data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become prominent in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will certainly be crucial in the HL-LHC phase. This thesis contributes to a CMS R&D project on a Machine Learning "as a Service" solution for HEP (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been extended with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.

Relevance: 90.00%

Abstract:

Hadrontherapy is a medical treatment based on the use of charged-particle beams accelerated towards deep-seated tumours in clinical patients. The reason it is increasingly used is its favourable depth-dose profile, following the Bragg peak distribution, in which the dose release is sharply focused near the end of the beam path. However, nuclear interactions between the beam and the constituents of the human body occur, generating nuclear fragments which modify the dose profile. To overcome the lack of experimental data on nuclear fragmentation reactions in the energy range of interest for hadrontherapy, the FOOT (FragmentatiOn Of Target) experiment has been conceived, with the main aim of measuring differential nuclear fragmentation cross sections with an uncertainty below 5%. The same results are of great interest in the radioprotection field, which studies similar processes: long-term human missions beyond Earth's orbit are planned for the coming years, among them NASA's foreseen travel to Mars, and it is fundamental to protect astronauts' health and electronics from radiation exposure. In this thesis, a first analysis of the data taken at GSI with a beam of 16O at 400 MeV/u impinging on a graphite (C) target is presented, showing the first preliminary results for the elemental cross section and the angular differential cross section. A Monte Carlo dataset was first studied to test the performance of the tracking reconstruction algorithm and to check the reliability of the full analysis chain, from hit reconstruction to cross-section measurement. Good agreement was found between generated and reconstructed fragments, thus validating the adopted procedure. A preliminary experimental cross section was measured and compared with the MC results, showing good consistency for all the fragments.
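The elemental cross-section arithmetic can be sketched as σ = N_frag / (N_beam · n_target · ε); all numbers below are illustrative placeholders, not FOOT results.

```python
# Minimal cross-section arithmetic with illustrative (non-FOOT) numbers:
# sigma = N_frag / (N_beam * n_target * eps), n_target = rho * d * N_A / A.
N_A = 6.022e23                  # Avogadro's number [1/mol]
rho, d, A = 1.83, 0.5, 12.01    # graphite density [g/cm^3], thickness [cm], molar mass [g/mol]
n_target = rho * d * N_A / A    # target nuclei per cm^2

N_beam = 1.0e9                  # hypothetical number of 16O ions on target
eps = 0.8                       # hypothetical detection/reconstruction efficiency
N_frag = 2.0e6                  # hypothetical reconstructed fragment yield

sigma_cm2 = N_frag / (N_beam * n_target * eps)
sigma_mb = sigma_cm2 * 1e27     # 1 barn = 1e-24 cm^2, so 1 mb = 1e-27 cm^2
```

With these placeholder inputs the result is of order tens of millibarns; in the real analysis every factor (yield, beam counting, efficiency) carries its own systematic uncertainty.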

Relevance: 80.00%

Abstract:

This work assessed the environmental impacts of the production and use of 1 MJ of hydrous ethanol (E100) in Brazil in prospective scenarios (2020-2030), considering the deployment of technologies currently under development and better agricultural practices. The life cycle assessment technique was employed, using the CML method for the life cycle impact assessment and the Monte Carlo method for the uncertainty analysis. Abiotic depletion, global warming, human toxicity, ecotoxicity, photochemical oxidation, acidification, and eutrophication were the environmental impact categories analyzed. Results indicate that the proposed improvements (especially no-till farming, scenarios s2 and s4) would lead to environmental benefits in prospective scenarios compared to current ethanol production (scenario s0). Combined first- and second-generation ethanol production (scenarios s3 and s4) would require less agricultural land but would not perform better than the projected first-generation ethanol, although the uncertainties are relatively high. The best use of 1 ha of sugar cane was also assessed, considering the displacement of conventional products by ethanol and electricity. No-till practices combined with the production of first-generation ethanol and electricity (scenario s2) would lead to the largest mitigation effects for global warming and abiotic depletion. For the remaining categories, emissions would not be mitigated by the utilization of the sugar cane products. However, this conclusion is sensitive to the displaced electricity sources.
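The Monte Carlo uncertainty analysis amounts to propagating the uncertainty of inventory entries through the impact total; the lognormal parameters and contribution values below are illustrative assumptions, not the study's inventory.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000

# hypothetical GWP contributions per MJ of ethanol [g CO2-eq], each with
# lognormal uncertainty on the inventory entry (illustrative numbers)
farming = rng.lognormal(mean=np.log(20.0), sigma=0.2, size=n)
processing = rng.lognormal(mean=np.log(10.0), sigma=0.1, size=n)
transport = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)

# total impact per draw, then a 95% uncertainty interval over the draws
gwp = farming + processing + transport
lo, hi = np.percentile(gwp, [2.5, 97.5])
```

Comparing the resulting intervals between scenarios (rather than point values) is what makes the "uncertainties are relatively high" caveat quantitative.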

Relevance: 80.00%

Abstract:

Often in biomedical research we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the closed interval [0, 1]. Regression on a variety of parametric densities with support in (0, 1), such as beta regression, can assess important covariate effects, but these densities are inappropriate in the presence of zeros and/or ones. To address this, we introduce a class of general proportion densities and augment it with point masses at zero and one, while controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
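The augmented density can be sketched as a three-part mixture: point masses at 0 and 1 plus a beta density on (0, 1). The mixture weights and beta parameters below are illustrative; the clustering and covariate structure of the full model are omitted.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(6)

# zero-one augmented beta: point masses p0 at 0 and p1 at 1,
# with a beta(a, b) density on the open interval (0, 1) for the rest
p0, p1, a, b = 0.15, 0.10, 2.0, 5.0

def sample_zoab(size):
    u = rng.uniform(size=size)
    y = beta.rvs(a, b, size=size, random_state=rng)
    y[u < p0] = 0.0
    y[(u >= p0) & (u < p0 + p1)] = 1.0
    return y

def loglik(y):
    """Log-likelihood of the augmented mixture for an observation vector."""
    z0, z1 = y == 0.0, y == 1.0
    mid = ~z0 & ~z1
    return (z0.sum() * np.log(p0) + z1.sum() * np.log(p1)
            + mid.sum() * np.log(1 - p0 - p1) + beta.logpdf(y[mid], a, b).sum())

y = sample_zoab(5000)
```

Splitting the likelihood by the three cases is what lets the boundary observations contribute finite terms, where a plain beta density would assign them zero probability.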

Relevance: 80.00%

Abstract:

A combination of the variational principle, the expectation value and the Quantum Monte Carlo method is used to solve the Schrödinger equation for some simple systems. The results are accurate, and the simplicity of this version of the Variational Quantum Monte Carlo method makes it a powerful tool for teaching alternative procedures and fundamental concepts in quantum chemistry courses. Some numerical procedures are described in order to control accuracy and computational efficiency. The method was applied to ground-state energies, and a first attempt to obtain excited states is described.
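In the spirit of the article, here is a minimal Variational Quantum Monte Carlo example for the hydrogen-atom ground state (atomic units), with trial wavefunction ψ = exp(−αr); α = 1 recovers the exact energy E = −0.5, and the variational principle guarantees any other α gives a higher energy.

```python
import numpy as np

rng = np.random.default_rng(7)

def vmc_energy(alpha, n_steps=20000, step=0.5):
    """Variational Monte Carlo for hydrogen with trial psi = exp(-alpha*r).
    Metropolis sampling of |psi|^2; the local energy is
    E_L = -alpha^2/2 + (alpha - 1)/r  (atomic units)."""
    r = np.array([1.0, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = r + rng.uniform(-step, step, 3)
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*|r|)
        if rng.uniform() < np.exp(-2 * alpha * (np.linalg.norm(trial) - np.linalg.norm(r))):
            r = trial
        if i > 1000:                      # discard equilibration steps
            d = np.linalg.norm(r)
            energies.append(-0.5 * alpha ** 2 + (alpha - 1.0) / d)
    return np.mean(energies)

e_exact = vmc_energy(1.0)   # alpha = 1: local energy is constant, E = -0.5
e_off = vmc_energy(0.8)     # any other alpha yields a higher (variational) energy
```

A pedagogically useful check: at the exact α the local energy has zero variance, which students can verify directly from the sampled energies.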

Relevance: 80.00%

Abstract:

In this article we present a Bayesian analysis of the stochastic volatility (SV) model and of a generalized form of it, whose objective is to estimate the volatility of financial time series. Considering some special cases of the SV models, we use Markov chain Monte Carlo algorithms and the WinBUGS software to obtain posterior summaries for the different forms of SV models. We introduce some Bayesian discrimination techniques for choosing the best model with which to estimate volatilities and forecast financial series. An empirical example applying the methodology to the IBOVESPA financial series is presented.

Relevance: 80.00%

Abstract:

Diagnostic methods have been an important tool in regression analysis for detecting anomalies such as departures from the error assumptions and the presence of outliers and influential observations in fitted models. Assuming censored data, we considered a classical analysis and a Bayesian analysis, assuming non-informative priors for the parameters of a model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Several influence measures, such as local influence, total local influence of an individual, local influence on predictions, and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that removing the most influential observations changes the decision about which model best fits the data.
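The Metropolis-Hastings step can be sketched on a deliberately simple conjugate problem (normal mean with a normal prior), where the posterior is known in closed form and can be used to check the sampler; this is not the cure-fraction model of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# data: y_i ~ N(theta, 1); prior: theta ~ N(0, 10^2)
y = rng.normal(2.0, 1.0, 50)

def log_post(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * theta ** 2 / 100.0

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.5)       # random-walk Metropolis-Hastings proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                        # accept; otherwise keep current state
    chain.append(theta)
post = np.array(chain[5000:])               # drop burn-in

# conjugate check: exact posterior mean = sum(y) / (n + 1/100)
exact = y.sum() / (len(y) + 0.01)
```

In the paper's setting the same accept/reject step is applied within a larger MCMC scheme whose output feeds both the posterior summaries and the q-divergence case-deletion diagnostics.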

Relevance: 80.00%

Abstract:

The interplay between biocolloidal characteristics (especially size and charge), pH, salt concentration and thermal energy results in a unique collection of mesoscopic forces of importance to molecular organization and function in biological systems. By means of Monte Carlo simulations and semi-quantitative analysis in terms of perturbation theory, we describe a general electrostatic mechanism that gives attraction at low electrolyte concentrations. This charge regulation mechanism, due to titrating amino acid residues, is discussed in a purely electrostatic framework. The complexation data reported here for the interaction between a polyelectrolyte chain and the proteins albumin, goat and bovine alpha-lactalbumin, beta-lactoglobulin, insulin, kappa-casein, lysozyme and pectin methylesterase illustrate the importance of the charge regulation mechanism. Special attention is given to pH ≈ pI, where ion-dipole and charge regulation interactions can overcome the repulsive ion-ion interaction. By means of protein mutations, we confirm the importance of the charge regulation mechanism and quantify when the complexation is dominated by charge regulation and when by the ion-dipole term.
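The titration (charge regulation) Monte Carlo move can be sketched for a single ideal site with no electrostatic coupling, where the flip acceptance uses ΔE = ±ln(10)(pH − pKa) in units of kT and the equilibrium must reproduce the Henderson-Hasselbalch fraction.

```python
import numpy as np

rng = np.random.default_rng(9)

# one ideal titrating site: protonating costs ln(10)*(pH - pKa) in kT
pKa, pH = 4.0, 4.5
ln10 = np.log(10.0)

protonated, count, n_steps = True, 0, 200000
for _ in range(n_steps):
    # energy change for flipping the current protonation state
    dE = ln10 * (pH - pKa) * (1 if not protonated else -1)
    if rng.uniform() < np.exp(-dE):       # Metropolis acceptance
        protonated = not protonated
    count += protonated

frac_mc = count / n_steps
frac_hh = 1.0 / (1.0 + 10.0 ** (pH - pKa))   # Henderson-Hasselbalch
```

In the full simulations each residue's ΔE also includes the electrostatic energy change from all other charges, which is precisely what couples the protonation states and produces the charge regulation attraction.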