963 results for Monte-Carlo Simulation Method
Abstract:
Goodness-of-fit tests have been studied by many researchers. Among them, an alternative statistical test for uniformity was proposed by Chen and Ye (2009). The test was used by Xiong (2010) to test normality for the case that both the location parameter and the scale parameter of the normal distribution are known. The purpose of the present thesis is to extend the result to the case that the parameters are unknown. A table of critical values of the test statistic is obtained using Monte Carlo simulation. The performance of the proposed test is compared with the Shapiro-Wilk test and the Kolmogorov-Smirnov test. Monte Carlo simulation results show that the proposed test performs better than the Kolmogorov-Smirnov test in many cases. The Shapiro-Wilk test is still the most powerful test, although in some cases the test proposed in the present research performs better.
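As a rough illustration of how such a critical-value table can be built by Monte Carlo, the sketch below simulates samples from a normal distribution, estimates the unknown location and scale from each sample, evaluates a uniformity-type statistic on the probability integral transform, and takes an upper quantile as the critical value. The statistic shown is a generic stand-in, not the Chen and Ye (2009) statistic.

```python
# Hedged sketch: Monte Carlo critical values for a normality test with unknown
# location and scale. The statistic T is a generic uniformity-based stand-in.
import numpy as np
from scipy.stats import norm

def test_statistic(x):
    """Uniformity-type statistic on the probability integral transform (PIT)."""
    mu, sigma = x.mean(), x.std(ddof=1)           # parameters estimated from the sample
    u = np.sort(norm.cdf(x, loc=mu, scale=sigma))
    i = np.arange(1, len(x) + 1)
    return np.sum((u - i / (len(x) + 1)) ** 2)    # squared distance from the uniform grid

def critical_value(n, alpha=0.05, n_sim=50_000, seed=0):
    """Upper (1 - alpha) quantile of the statistic under the normal null."""
    rng = np.random.default_rng(seed)
    stats = [test_statistic(rng.standard_normal(n)) for _ in range(n_sim)]
    return np.quantile(stats, 1 - alpha)

print(critical_value(n=30))   # e.g. the alpha = 0.05 critical value for samples of size 30
```

Repeating the last call over a grid of sample sizes would produce the kind of critical-value table described in the abstract.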
Abstract:
Recent research indicates that characteristics of El Niño and the Southern Oscillation (ENSO) have changed over the past several decades. Here, I examined different flavors of El Niño in the observational record and the recent changes in the character of El Niño events. The fundamental physical processes that drive ENSO were described, and the Eastern Pacific (EP) and Central Pacific (CP) types, or flavors, of El Niño were defined. Using metrics from the peer-reviewed literature, I examined several historical data sets to interpret El Niño behavior from 1950 to 2010. A Monte Carlo simulation was then applied to output from coupled model simulations to test the statistical significance of recent observations of EP and CP El Niño. The results suggested that EP and CP El Niño have occurred in a similar fashion over the past 60 years, consistent with natural variability, with no significant increase in CP El Niño behavior.
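A minimal sketch of this kind of significance test follows: the number of CP events seen in a 60-year record is compared against the distribution of CP counts across many 60-year windows drawn from coupled-model output. The event rate, window length, and observed count below are placeholders, not values from the study.

```python
# Hedged sketch of a Monte Carlo significance test for CP El Nino frequency.
import numpy as np

rng = np.random.default_rng(42)
# Placeholder: annual CP-event indicators from long control simulations,
# arranged as 60-year windows (one window per row).
model_windows = rng.random((10_000, 60)) < 0.12   # assumed 12% CP-event rate per year

null_counts = model_windows.sum(axis=1)           # CP events per 60-year window
observed_count = 10                               # hypothetical observed number, 1950-2010

p_value = np.mean(null_counts >= observed_count)  # one-sided exceedance probability
print(f"P(count >= {observed_count} | model variability) = {p_value:.3f}")
```

A large p-value here would support the abstract's conclusion that the observed CP behavior is consistent with natural variability.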
Abstract:
Prior research has established that idiosyncratic volatility of the securities prices exhibits a positive trend. This trend and other factors have made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: a) increase the efficiency of the portfolio optimization process, b) implement large-scale optimizations, and c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach in the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC - GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected return of the securities are integrated into the technique through Monte Carlo simulations. Instead of representing the expected returns as deterministic values, they are assigned simulated values based on their historical measures. The time-series of the securities are fitted into a probability distribution that matches the time-series characteristics using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P500 securities as the base, 2000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. Choosing the Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the comparison for benchmarking, the new greedy technique clearly outperforms others using a sample of the S&P500 and the Russell 1000 securities. The resulting improvements in performance are consistent among five securities selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
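The sketch below conveys, under simplified assumptions, the flavor of combining Monte Carlo simulated returns with a greedy weight-allocation rule scored by Value-at-Risk. The return model, the static covariance matrix, and the trade-off objective are invented for illustration; they are not the IDCC-GARCH machinery or the dissertation's algorithm.

```python
# Illustrative greedy allocation under Monte Carlo-simulated returns, scored by VaR.
import numpy as np

rng = np.random.default_rng(7)
n_assets, n_scenarios = 5, 20_000
mu = rng.normal(0.0005, 0.0002, n_assets)             # assumed daily mean returns
cov = np.diag(rng.uniform(0.01, 0.02, n_assets) ** 2) # assumed (static) covariance
scenarios = rng.multivariate_normal(mu, cov, n_scenarios)  # simulated return scenarios

def var_99(weights):
    """99% Value-at-Risk of the simulated portfolio return (reported as a positive loss)."""
    return -np.quantile(scenarios @ weights, 0.01)

# Greedy allocation: repeatedly give a small weight increment to the asset whose
# increment yields the best mean-return / VaR trade-off.
weights, step = np.zeros(n_assets), 0.01
while weights.sum() < 1.0 - 1e-9:
    best, best_score = None, -np.inf
    for j in range(n_assets):
        trial = weights.copy()
        trial[j] += step
        score = (scenarios @ trial).mean() - 0.1 * var_99(trial)  # assumed objective
        if score > best_score:
            best, best_score = j, score
    weights[best] += step

print(np.round(weights, 2), round(var_99(weights), 4))
```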
Abstract:
Community metabolism was investigated using a Lagrangian flow respirometry technique on 2 reef flats at Moorea (French Polynesia) during austral winter and Yonge Reef (Great Barrier Reef) during austral summer. The data were used to estimate related air-sea CO2 disequilibrium. A sine function did not satisfactorily model the diel light curves and overestimated the metabolic parameters. The ranges of community gross primary production and respiration (Pg and R; 9 to 15 g C m-2 d-1) were within the range previously reported for reef flats, and community net calcification (G; 19 to 25 g CaCO3 m-2 d-1) was higher than the 'standard' range. The molar ratio of organic to inorganic carbon uptake was 6:1 for both sites. The reef flat at Moorea displayed a higher rate of organic production and a lower rate of calcification compared to previous measurements carried out during austral summer. The approximate uncertainty of the daily metabolic parameters was estimated using a procedure based on a Monte Carlo simulation. The standard errors of Pg,R and Pg/R expressed as a percentage of the mean are lower than 3% but are comparatively larger for E, the excess production (6 to 78%). The daily air-sea CO2 flux (FCO2) was positive throughout the field experiments, indicating that the reef flats at Moorea and Yonge Reef released CO2 to the atmosphere at the time of measurement. FCO2 decreased as a function of increasing daily irradiance.
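The sketch below shows the general shape of such a Monte Carlo uncertainty procedure: the measured fluxes are repeatedly perturbed within an assumed measurement error, the metabolic parameters are recomputed for each draw, and the spread of the draws gives the standard errors. The flux model and error magnitudes are placeholders, not the study's data or formulas.

```python
# Hedged sketch of Monte Carlo uncertainty propagation for daily metabolic parameters.
import numpy as np

rng = np.random.default_rng(1)

def metabolic_params(o2_flux, light):
    """Placeholder derivation of gross production Pg and respiration R from fluxes."""
    respiration = -o2_flux[light <= 0].mean()                  # mean night-time flux
    gross_prod = o2_flux[light > 0].sum() + respiration * (light > 0).sum()
    return gross_prod, respiration

# Placeholder hourly data for one diel cycle.
light = np.clip(np.sin(np.linspace(0, 2 * np.pi, 24)), 0, None)
o2_flux = 1.5 * light - 0.4 + rng.normal(0, 0.05, 24)

draws = []
for _ in range(5_000):
    perturbed = o2_flux + rng.normal(0, 0.05, o2_flux.size)    # assumed measurement error
    draws.append(metabolic_params(perturbed, light))
pg, r = np.array(draws).T
print(f"Pg = {pg.mean():.2f} +/- {pg.std():.2f},  R = {r.mean():.2f} +/- {r.std():.2f}")
```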
Abstract:
The control of radioactive backgrounds will be key in the search for neutrinoless double beta decay at the SNO+ experiment. Several aspects of the SNO+ backgrounds have been studied. The SNO+ tellurium purification process may require ultra-low-background ethanol as a reagent. A low background assay technique for ethanol was developed and used to identify a source of ethanol with measured 238U and 232Th concentrations below 2.8 × 10^-13 g/g and 10^-14 g/g respectively. It was also determined that at least 99.997% of the ethanol can be removed from the purified tellurium using forced air flow in order to reduce 14C contamination. In addition, a quality-control technique using an oxygen sensor was studied to monitor 222Rn contamination due to air leaking into the SNO+ scintillator during transport. The expected sensitivity of the technique is 0.1 mBq/L or better depending on the oxygen sensor used. Finally, the dependence of SNO+ neutrinoless double beta decay sensitivity on internal background levels was studied using Monte Carlo simulation. The half-life limit for neutrinoless double beta decay of 130Te after 3 years of operation was found to be 4.8 × 10^25 years under default conditions.
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Particle filtering has proven to be an effective localization method for wheeled autonomous vehicles. For a given map, a sensor model, and observations, occasions arise where the vehicle could equally likely be in many locations of the map. Because particle filtering algorithms may generate low confidence pose estimates under these conditions, more robust localization strategies are required to produce reliable pose estimates. This becomes more critical if the state estimate is an integral part of system control. We investigate the use of particle filter estimation techniques on a hovercraft vehicle. The marginally stable dynamics of a hovercraft require reliable state estimates for proper stability and control. We use the Monte Carlo localization method, which implements a particle filter in a recursive state estimate algorithm. An H-infinity controller, designed to accommodate the latency inherent in our state estimation, provides stability and controllability to the hovercraft. In order to eliminate the low confidence estimates produced in certain environments, a multirobot system is designed to introduce mobile environment features. By tracking and controlling the secondary robot, we can position the mobile feature throughout the environment to ensure a high confidence estimate, thus maintaining stability in the system. A laser rangefinder is the sensor the hovercraft uses to track the secondary robot, observe the environment, and facilitate successful localization and stability in motion.
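A minimal Monte Carlo localization (particle filter) sketch in the spirit of the approach described above is shown here. The 1-D map, motion model, and range-sensor noise are invented for illustration; the thesis' hovercraft dynamics and H-infinity control loop are far more involved.

```python
# Minimal 1-D Monte Carlo localization: predict, weight, resample.
import numpy as np

rng = np.random.default_rng(0)
LANDMARK = 10.0                          # assumed known feature position on a 1-D map
N = 1_000                                # number of particles

particles = rng.uniform(0.0, 20.0, N)    # initial belief: uniform over the map
true_pose = 3.0

for step in range(20):
    # Motion update: commanded displacement plus process noise.
    u = 0.5
    true_pose += u
    particles += u + rng.normal(0.0, 0.1, N)

    # Measurement update: laser range to the landmark, Gaussian sensor model.
    z = abs(LANDMARK - true_pose) + rng.normal(0.0, 0.2)
    expected = np.abs(LANDMARK - particles)
    weights = np.exp(-0.5 * ((z - expected) / 0.2) ** 2)
    weights /= weights.sum()

    # Resample particles in proportion to their weights.
    particles = rng.choice(particles, size=N, p=weights)

print(f"true pose: {true_pose:.2f}, estimate: {particles.mean():.2f}")
```

Note that with a single range-only measurement the belief is bimodal (mirror positions on either side of the landmark), which is exactly the kind of low-confidence ambiguity that motivates introducing a controllable mobile feature via the secondary robot.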
Abstract:
The financial crisis of 2007-2008 led to extraordinary government intervention in firms and markets. The scope and depth of government action rivaled that of the Great Depression. Many traded markets experienced dramatic declines in liquidity leading to the existence of conditions normally assumed to be promptly removed via the actions of profit seeking arbitrageurs. These extreme events motivate the three essays in this work. The first essay seeks and fails to find evidence of investor behavior consistent with the broad 'Too Big To Fail' policies enacted during the crisis by government agents. Only in limited circumstances, where government guarantees such as deposit insurance or U.S. Treasury lending lines already existed, did investors impart a premium to the debt security prices of firms under stress. The second essay introduces the Inflation Indexed Swap Basis (IIS Basis) in examining the large differences between cash and derivative markets based upon future U.S. inflation as measured by the Consumer Price Index (CPI). It reports the consistent positive value of this measure as well as the very large positive values it reached in the fourth quarter of 2008 after Lehman Brothers went bankrupt. It concludes that the IIS Basis continues to exist due to limitations in market liquidity and hedging alternatives. The third essay explores the methodology of performing debt based event studies utilizing credit default swaps (CDS). It provides practical implementation advice to researchers to address limited source data and/or small target firm sample size.
Abstract:
The occurrence frequency of failure events serves as a critical index of the safety status of dam-reservoir systems. Although overtopping is the most common failure mode and has significant consequences, this type of event, in most cases, has a small probability. Estimating such rare-event risks for dam-reservoir systems with crude Monte Carlo (CMC) simulation requires a prohibitively large number of trials and significant computational resources to reach satisfactory estimates; otherwise, the estimates are not accurate enough. In order to reduce the computational expense and improve the efficiency of risk estimation, an importance sampling (IS) based simulation approach is proposed in this dissertation to address the overtopping risks of dam-reservoir systems. Deliverables of this study mainly include the following five aspects: 1) the reservoir inflow hydrograph model; 2) the dam-reservoir system operation model; 3) the CMC simulation framework; 4) the IS-based Monte Carlo (ISMC) simulation framework; and 5) a comparison of overtopping risk estimates from both CMC and ISMC simulation. In a broader sense, this study meets the following three expectations: 1) to address the natural stochastic characteristics of the dam-reservoir system, such as the reservoir inflow rate; 2) to build the fundamental CMC and ISMC simulation frameworks of the dam-reservoir system in order to estimate the overtopping risks; and 3) to compare the simulation results and the computational performance in order to demonstrate the advantages of ISMC simulation. The estimates of overtopping probability could be used to guide future dam safety investigations and studies, and to supplement conventional analyses in decision making on dam-reservoir system improvements. At the same time, the proposed ISMC simulation methodology is reasonably robust and is shown to improve overtopping risk estimation. The more accurate estimates, smaller variance, and reduced CPU time expand the applicability of the Monte Carlo (MC) technique to evaluating rare-event risks for infrastructure.
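A toy example of why importance sampling helps with rare-event estimation is sketched below: a small exceedance probability is estimated with crude Monte Carlo and with an importance-sampling proposal centred on the rare region. The Gaussian "peak level" and the threshold stand in for the dam-reservoir model; they are not from the dissertation.

```python
# Toy comparison: crude Monte Carlo (CMC) vs importance sampling (IS)
# for a small exceedance probability P(Z > 4) under a standard normal.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, threshold = 100_000, 4.0
exact = norm.sf(threshold)                       # about 3.17e-5

# Crude Monte Carlo: almost no samples exceed the threshold.
z = rng.standard_normal(n)
p_cmc = np.mean(z > threshold)

# Importance sampling: draw from a proposal centred on the rare region,
# then reweight by the likelihood ratio f(x)/g(x).
x = rng.normal(loc=threshold, scale=1.0, size=n)
w = norm.pdf(x) / norm.pdf(x, loc=threshold, scale=1.0)
p_is = np.mean((x > threshold) * w)

print(f"exact {exact:.2e} | CMC {p_cmc:.2e} | IS {p_is:.2e}")
```

With the same number of trials, the IS estimate has a far smaller variance, which is the efficiency gain the dissertation exploits for overtopping probabilities.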
Abstract:
Rapid social, economic, cultural, and environmental changes have brought about significant shifts in lifestyles and have contributed to the growth and generalization of the consumption of food and meals outside the home. Portugal follows this trend of increasing food consumption away from home; meals out, which a few years ago were an occasional event, are today a regular practice of Portuguese families, not only during the working week but also at weekends. Visits to shopping centres, which have become a habit in our country, include a stop at the food courts, spaces notable for their food diversity and where fast-food meals predominate. It is nevertheless essential to make an adequate and balanced choice of the foods to be consumed. The present work sought to assess the habits and perceptions of fast-food consumers based on a specific menu whose main item is bread. Subsequently, and according to consumption preferences, a nutritional evaluation of the choices was carried out. A total of 150 individuals who visited a fast-food restaurant located in the food court of a shopping centre in Viseu participated in this study. A self-administered questionnaire, developed by us and divided into 4 parts, was applied: sociodemographic characterization; consumption habits of the respondents; products chosen by the respondents; and degree of satisfaction with the chosen products. Statistical analyses were carried out using the Statistical Package for the Social Sciences - SPSS® for Windows, version 22. Chi-square tests with Monte Carlo simulation were performed, considering a significance level of 0.05. Based on the most frequent choices made by the respondents, the nutritional evaluation of the menus was carried out using the DIAL 1.19 program (version 1); when information was not available there, the online Portuguese food composition table (INSA, 2010) was used. The values obtained for the Total Caloric Value (TCV), macronutrients, fibre, cholesterol, and sodium were compared with the Recommended Daily Allowances (RDA). The sample consisted of 68.7% women and 31.3% men, with a mean age of 29.9 ± 3 years, and was mostly employed (64.7%). Most respondents (54.7%) had completed higher education. A large part of the sample did not consider themselves regular fast-food consumers and also reported frequently eating a balanced diet; only 5% visited the premises more than once a week. Among the available products, preference went to sandwiches and French fries, with lunch being the time of greatest consumption. The nutritional evaluation of the respondents' preferred choices showed that the TCV of menus including water as the drink falls within the caloric limits recommended for lunch, except for the menus with the hot chicken sandwich on oregano bread and the cold fresh-cheese sandwich, which stand out for a value below the recommended minimum. On the contrary, including a soft drink in the menu increases the TCV by 18%, regardless of the sandwich considered. A detailed analysis shows that these menus are unbalanced: 33.3% of them present protein values above the RDA, while carbohydrate and lipid values are mostly within the limits, with only 13.3% of the menus outside those values. Regarding fibre and sodium intake, 86.7% of the menus fall outside the recommendations, with excessive sodium values and fibre values 33% below the recommended minimum. As this is a case study covering a single restaurant in one food court, offering bread-based menus (sandwiches), the results are interpreted cautiously and without generalization. We can nevertheless conclude, given the results obtained, that the salt content of the menus needs to be reduced. In addition, so that consumers can compare food options and make informed decisions, we consider it essential that the nutritional information of the proposed menus be made available.
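For readers unfamiliar with Monte Carlo p-values for chi-square tests, the sketch below shows the general idea with synthetic data; it is not the survey data and does not reproduce SPSS's exact algorithm. The observed chi-square statistic is compared with statistics computed from many random permutations that break any association between the two variables.

```python
# Hedged illustration of a chi-square test with a Monte Carlo p-value.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)
sex = rng.choice(["F", "M"], size=150, p=[0.69, 0.31])          # synthetic variables
habitual = rng.choice(["yes", "no"], size=150, p=[0.3, 0.7])

def chi2_stat(a, b):
    """Pearson chi-square statistic of the a-by-b contingency table."""
    table = np.array([[np.logical_and(a == x, b == y).sum()
                       for y in np.unique(b)] for x in np.unique(a)])
    return chi2_contingency(table, correction=False)[0]

observed = chi2_stat(sex, habitual)
# Monte Carlo: permute one variable to break any association while keeping both margins.
null = np.array([chi2_stat(sex, rng.permutation(habitual)) for _ in range(5_000)])
p_mc = (1 + np.sum(null >= observed)) / (1 + len(null))
print(f"chi2 = {observed:.2f}, Monte Carlo p = {p_mc:.3f}")
```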
Abstract:
Dissertation (Master's)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Estatística, 2015.
Abstract:
The present work proposes a hypothesis test to detect a shift in the variance of a series of independent normal observations using a statistic based on the p-values of the F distribution. Since the probability distribution function of this statistic is intractable, critical values were estimated numerically through extensive simulation. A regression approach was used to simplify quantile evaluation and extrapolation. The power of the test was evaluated using Monte Carlo simulation, and the results were compared with the Chen (1997) test to demonstrate its efficiency. Time series analysts may find the test useful for homoscedasticity studies where at most one change might be involved.
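The simulation design can be sketched as follows, using a stand-in statistic (the smallest two-sample F-test p-value over candidate change points) rather than the exact statistic proposed here: the null distribution is simulated to obtain the critical value, and the power is then estimated under an assumed variance shift.

```python
# Hedged sketch: Monte Carlo critical values and power for a variance-shift test.
import numpy as np
from scipy.stats import f

def min_f_pvalue(x, trim=5):
    """Smallest two-sided F-test p-value over candidate variance change points."""
    n, best = len(x), 1.0
    for k in range(trim, n - trim):
        v1 = np.var(x[:k], ddof=1)
        v2 = np.var(x[k:], ddof=1)
        ratio = v2 / v1
        p = 2 * min(f.cdf(ratio, n - k - 1, k - 1), f.sf(ratio, n - k - 1, k - 1))
        best = min(best, p)
    return best

rng = np.random.default_rng(0)
n, n_sim = 100, 2_000

# Critical value: lower quantile of the statistic under the no-change null.
null = np.array([min_f_pvalue(rng.standard_normal(n)) for _ in range(n_sim)])
crit = np.quantile(null, 0.05)                       # reject when the statistic falls below this

# Power: proportion of rejections when the standard deviation doubles halfway through.
power = np.mean([min_f_pvalue(np.r_[rng.standard_normal(n // 2),
                                    2 * rng.standard_normal(n // 2)]) < crit
                 for _ in range(n_sim)])
print(f"critical value = {crit:.4f}, power at sigma-ratio 2 = {power:.2f}")
```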
Abstract:
This document introduces the planned new search for the neutron Electric Dipole Moment at the Spallation Neutron Source at the Oak Ridge National Laboratory. A spin precession measurement is to be carried out using ultracold neutrons diluted in a superfluid Helium bath at T = 0.5 K, where spin-polarized 3He atoms act as a detector of the neutron spin polarization. This manuscript describes some of the key aspects of the planned experiment, along with the contributions from Caltech to the development of the project.
Techniques used in the design of magnet coils for Nuclear Magnetic Resonance were adapted to the geometry of the experiment. Described is an initial design approach using a pair of coils tuned to shield outer conductive elements from resistive heat loads, while inducing an oscillating field in the measurement volume. A small prototype was constructed to test the model of the field at room temperature.
A large scale test of the high voltage system was carried out in a collaborative effort at the Los Alamos National Laboratory. The application and amplification of high voltage to polished steel electrodes immersed in a superfluid Helium bath was studied, as well as the electrical breakdown properties of the electrodes at low temperatures. A suite of Monte Carlo simulation software tools to model the interaction of neutrons, 3He atoms, and their spins with the experimental magnetic and electric fields was developed and implemented to further the study of expected systematic effects of the measurement, with particular focus on the false Electric Dipole Moment induced by a Geometric Phase akin to Berry’s phase.
An analysis framework was developed and implemented using unbinned likelihood to fit the time modulated signal expected from the measurement data. A collaborative Monte Carlo data set was used to test the analysis methods.
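A minimal sketch of an unbinned likelihood fit to a time-modulated signal is shown below; the cosine modulation model, its parameters, and the event generation are illustrative assumptions, not the experiment's signal model or analysis code.

```python
# Hedged sketch: unbinned maximum-likelihood fit of a modulated event-time density
# p(t) proportional to 1 + A*cos(w*t + phi) over an integer number of periods.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
w = 2 * np.pi / 10.0                 # assumed modulation frequency (period = 10 s)
T = 100.0                            # observation window = 10 full periods
A_true, phi_true = 0.3, 0.8

# Generate event times by accept-reject sampling from the modulated density.
t = rng.uniform(0, T, 200_000)
keep = rng.uniform(0, 1 + abs(A_true), t.size) < 1 + A_true * np.cos(w * t + phi_true)
times = t[keep][:20_000]

def nll(params):
    """Negative log-likelihood of the event times."""
    A, phi = params
    if abs(A) >= 1:                  # keep the density non-negative
        return np.inf
    # Over an integer number of periods the density normalizes to T.
    return -np.sum(np.log((1 + A * np.cos(w * times + phi)) / T))

fit = minimize(nll, x0=[0.1, 0.0], method="Nelder-Mead")
print("fitted A, phi:", np.round(fit.x, 3), " true:", (A_true, phi_true))
```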
Abstract:
Purpose - The purpose of this paper is to analyze what transaction costs are acceptable for customers in different investments. In this study, two life insurance contracts, a mutual fund, and a risk-free investment are considered as alternative investment forms. The first two products under scrutiny are a life insurance investment with a point-to-point capital guarantee and a participating contract with an annual interest rate guarantee and participation in the insurer's surplus. The policyholder assesses the various investment opportunities using different utility measures. For selected types of risk profiles, the utility position and the investor's preference for the various investments are assessed. Based on this analysis, the authors study which cost levels can make all of the products equally rewarding for the investor. Design/methodology/approach - The paper uses risk-neutral valuation, calibration with empirical data, utility and performance measurement, dynamics based on geometric Brownian motion, and numerical examples computed via Monte Carlo simulation. Findings - In the first step, the financial performance of the various saving opportunities under different assumptions of the investor's utility measurement is studied. In the second step, the authors calculate the level of transaction costs that are allowed in the various products to make all of the investment opportunities equally rewarding from the investor's point of view. A comparison of these results with transaction costs that are common in the market shows that insurance companies must be careful with respect to the level of transaction costs that they pass on to their customers to provide attractive payoff distributions. Originality/value - To the best of the authors' knowledge, their research question - i.e. which transaction costs for life insurance products would be acceptable from the customer's point of view - has not been studied in the above described context so far.
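As a hedged illustration of the valuation setup described (geometric Brownian motion, Monte Carlo simulation, utility measurement), the sketch below compares the expected CRRA utility of a plain fund investment and a point-to-point guaranteed product for a few front-end transaction-cost rates. All parameter values are assumptions, not the paper's calibration, and the simple cost and guarantee mechanics are stand-ins for the contracts studied.

```python
# Illustrative Monte Carlo comparison of expected utility with and without a
# point-to-point guarantee, for several front-end transaction-cost rates.
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma, T = 100.0, 0.06, 0.20, 10        # assumed market parameters
guarantee, gamma = 100.0, 3.0                   # assumed capital guarantee and risk aversion
n_paths = 200_000

# Terminal value of the underlying under GBM (one step suffices for terminal payoffs).
z = rng.standard_normal(n_paths)
ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

def crra_utility(wealth, gamma=gamma):
    return wealth ** (1 - gamma) / (1 - gamma)

def expected_utility(cost_rate, guaranteed=True):
    invested = S0 * (1 - cost_rate)             # front-end transaction cost
    payoff = invested / S0 * ST
    if guaranteed:
        payoff = np.maximum(payoff, guarantee * (1 - cost_rate))
    return crra_utility(payoff).mean()

for c in (0.00, 0.02, 0.05):
    print(f"cost {c:.0%}: fund EU = {expected_utility(c, False):.6f}, "
          f"guaranteed EU = {expected_utility(c, True):.6f}")
```

Scanning the cost rate until the two expected utilities coincide mirrors, in spirit, the paper's question of which transaction-cost level leaves the products equally rewarding for the investor.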