973 results for Monte Carlo study


Relevância: 90.00%

Resumo:

A high incidence of waterborne diseases is observed worldwide, and quantitative microbial risk assessment (QMRA) is a useful tool for estimating the risk of infection and addressing contamination problems before an outbreak occurs. The objective of this paper was to assess the probability of Giardia infection from consuming water from shallow wells in a peri-urban area. Giardia has been described as an important waterborne pathogen and has been reported in several water sources, including groundwater. Sixteen water samples were collected and examined according to US EPA Method 1623 (2005). A Monte Carlo method was used to estimate the potential risk as described by the exponential dose-response model. Giardia cysts occurred in 62.5% of the samples (0.1-36.1 cysts/L). A median risk of 10^-1 was estimated for the population, with adult ingestion being the highest risk driver. This study illustrates the vulnerability of shallow-well water supply systems in peri-urban areas.
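The Monte Carlo evaluation of the exponential dose-response model can be sketched as below. The cyst concentration range is taken from the abstract; the intake distribution and the Giardia dose-response parameter r ≈ 0.0199 (a commonly cited value) are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed inputs: cyst concentrations spanning the reported 0.1-36.1 cysts/L
# range, a daily water-intake distribution, and the exponential dose-response
# parameter r for Giardia (r ~ 0.0199 is a commonly cited value).
r = 0.0199
concentration = rng.uniform(0.1, 36.1, size=100_000)              # cysts per litre
intake_litres = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # litres per day

dose = concentration * intake_litres
p_infection = 1.0 - np.exp(-r * dose)   # exponential dose-response model

median_risk = np.median(p_infection)
print(f"median daily risk of infection: {median_risk:.3f}")
```

The median over the simulated population plays the role of the reported median risk; in a real QMRA the input distributions would be fitted to the observed samples.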

Relevância: 90.00%

Resumo:

The objective of this article is to investigate, through Monte Carlo simulations, the influence of the parameters of ARIMA-GARCH models on the predictions of feed-forward artificial neural networks (ANN) trained with the Levenberg-Marquardt algorithm. The paper studies the relationship between ANN performance and the ARIMA-GARCH model parameters: depending on the stationarity and other parameters of the time series, the ANN structure should be selected differently. Neural networks have been widely used to predict time series, and their capacity for dealing with non-linearities is an outstanding advantage. However, the parameter values of generalized autoregressive conditional heteroscedasticity models influence ANN prediction performance, and combining the GARCH parameter values with the ARIMA autoregressive terms also leads to variation in ANN performance. Varying the parameters of the ARIMA-GARCH models and the ANN topologies, we used the Theil inequality coefficient to measure the prediction quality of the feed-forward ANN.
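The Theil inequality coefficient used as the performance measure has a simple closed form (0 for a perfect forecast, 1 in the worst case); a minimal sketch:

```python
import numpy as np

def theil_u(actual, predicted):
    """Theil inequality coefficient: 0 = perfect forecast, 1 = worst case."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(predicted ** 2)))
```

For example, `theil_u(y, y)` is exactly 0, and a forecast with the opposite sign of every observation attains the maximum value 1.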

Relevância: 90.00%

Resumo:

Building on the definition of linear cointegration first given in Granger (1981), this paper presents a definition of nonlinear cointegration and introduces a nonlinear cointegrated economic system. Our main focus is a residual-based test of the null of no nonlinear cointegration against nonlinear cointegration, designed to detect stochastic trends in nonlinear autoregressive models. We construct a cointegrating regression with smooth transition components taken from the smooth transition autoregression model. Properties of the estimation procedure for the cointegrating regression are analyzed and discussed, including the description of the transition variable. The estimated residuals are modeled as an autoregression of order one, from which the test statistic is obtained. Critical values and the asymptotic distribution of the test statistic for different cointegrating regressions and sample sizes are derived by Monte Carlo simulation. The proposed methods and models are illustrated with an empirical example, comparing the results with the linear cointegration application in Hamilton (1994). We conclude that nonlinear cointegration exists in our system.
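The residual-based idea can be illustrated in its simpler linear (Engle-Granger style) form, which the paper extends to smooth transition components: regress one series on the other, then fit an AR(1) to the residuals; a root well below one suggests cointegration (actual critical values come from Monte Carlo simulation, as in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a cointegrated pair: x is a random walk, y = 2x + stationary noise.
n = 2_000
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)

# Step 1: cointegrating regression y ~ x (OLS).
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: AR(1) fit to the residuals; rho near 1 means a unit root remains
# (no cointegration), rho well below 1 suggests cointegration.
rho = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])
print(f"AR(1) root of residuals: {rho:.3f}")
```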

Relevância: 90.00%

Resumo:

The regimen of environmental flows (EF) must be included as a term of environmental demand in the management of water resources. Even though there are numerous methods for computing EF, the criteria applied at different steps of the calculation process are quite subjective, whereas the results are fixed values that must be met by water planners. This study presents a user-friendly tool for assessing the probability that a certain EF scenario complies with the natural regimen in a semiarid area in southern Spain. At the study site, 250 replications of a 25-yr period of different hydrological variables (rainfall, minimum and maximum flows, ...) were obtained by combining the Monte Carlo technique with local hydrological relationships. Several assumptions are made, such as the independence of annual rainfall from year to year and the variability of occurrence of the meteorological agents, with precipitation as the main source of uncertainty. Inputs to the tool are easily selected from a first menu and comprise measured rainfall data, EF values, and the hydrological relationships for at least a 20-yr period. The outputs are the probabilities of compliance of the different components of the EF for the study period. From this, local optimization can be applied to establish EF components with a certain level of compliance in the study period. Different options for graphic output and analysis of results are included, in terms of graphs and tables in several formats. This methodology turned out to be a useful tool for implementing an uncertainty analysis within the scope of environmental flows in water management, and it allowed the simulation of the impacts of several water resource development scenarios at the study site.
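The core Monte Carlo computation can be sketched as below. Everything numerical here is an illustrative assumption (the rainfall distribution, the rainfall-flow relationship, and the EF threshold); only the 250-replication by 25-year design is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sketch: generate replications of annual minimum flows from an
# assumed local rainfall-flow relationship, then estimate the probability
# that an environmental-flow (EF) component is met.
n_replications, n_years = 250, 25
ef_threshold = 0.8                      # assumed EF component (m^3/s)

# Assumed local relationship: annual rainfall (mm) -> annual minimum flow.
rainfall = rng.gamma(shape=4.0, scale=120.0, size=(n_replications, n_years))
min_flow = 0.002 * rainfall + rng.normal(0.0, 0.1, size=rainfall.shape)

# A year complies if its minimum flow meets the EF threshold; the output is
# the probability of compliance over the study period.
compliance = min_flow >= ef_threshold
p_compliance = compliance.mean()
print(f"probability of EF compliance: {p_compliance:.2f}")
```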

Relevância: 90.00%

Resumo:

This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine-learning-based uncertainty prediction approach is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of a deterministic hydrological model output. The uncertainty models are trained using antecedent precipitation and streamflows as inputs, and the trained models are then employed to predict the model output uncertainty specific to new input data. We used three machine learning models, namely artificial neural networks, model trees, and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
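A committee that merges the three models' quantile predictions can be sketched as below. The weighting scheme (weights inversely proportional to each model's recent absolute error) and all numbers are assumptions for illustration, not the paper's actual merging rule.

```python
import numpy as np

def committee_merge(predictions, recent_errors):
    """Merge model outputs with weights inversely proportional to each
    model's recent error. predictions: (n_models, n_steps) array-like;
    recent_errors: (n_models,) array-like."""
    predictions = np.asarray(predictions, dtype=float)
    weights = 1.0 / np.asarray(recent_errors, dtype=float)
    weights /= weights.sum()
    return weights @ predictions

# Hypothetical 95%-quantile predictions from the three uncertainty models.
ann = np.array([1.10, 1.25, 0.90])     # artificial neural network
m5 = np.array([1.00, 1.30, 0.95])      # model tree
lwr = np.array([1.20, 1.20, 1.00])     # locally weighted regression

merged = committee_merge([ann, m5, lwr], recent_errors=[0.10, 0.20, 0.40])
print(merged)
```

With equal recent errors the committee reduces to a plain average; unequal errors tilt the merged quantile toward the better-performing model.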

Relevância: 90.00%

Resumo:

The presented work deals with the calibration of a 2D numerical model for the simulation of long-term bed load transport. A settling basin along an alpine stream was used as a case study. The focus is to parameterise the multi-fractional transport model such that a dynamically balanced behaviour regarding erosion and deposition is reached. The 2D hydrodynamic model uses a multi-fraction, multi-layer approach to simulate morphological changes and bed load transport, with mass balancing performed between three layers: a top mixing layer, an intermediate subsurface layer, and a bottom layer. This approach entails computational limitations for calibration: because of the high computational demands, the calibration strategy is crucial not only for the result but also for the time required, and brute-force methods such as Monte Carlo sampling may require too many model runs. All calibration strategies tested here used multiple model runs, each utilising the parameterization and/or results of the previous run. One concept was to reset the bed to its initial elevations after each run, allowing the resorting process to converge to stable conditions. As an alternative, or in combination, the roughness was adapted based on the nodal grading curves resulting from the previous run. Since these adaptations are a spatial process, the whole model domain is subdivided into sections that are homogeneous regarding hydraulics and morphological behaviour, and for faster optimization the parameters are adapted section-wise. Additionally, a systematic variation was performed, considering the results of previous runs and the interaction between sections. The approach can be considered similar to evolutionary calibration approaches, but uses analytical links instead of random parameter changes.

Relevância: 90.00%

Resumo:

In this work we analyze seasonal long-memory processes, denoted SARFIMA(0,D,0)s, where s is the seasonal period. The estimation and forecasting studies are based on Monte Carlo simulations for different sample sizes and seasonal periods. To estimate the seasonal differencing parameter D we use the estimators proposed by Geweke and Porter-Hudak (1983), Reisen (1994), and Fox and Taqqu (1986). For the first two estimation procedures we consider six different ways of choosing the number of regressors needed in the regression analysis, in order to better compare their performance. We also present a study of h-step-ahead forecasting with SARFIMA(0,D,0)s processes, in which we analyze the forecast error, the theoretical and sample variances, the bias, the relative bias, and the mean squared error.
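The Geweke & Porter-Hudak estimator mentioned above is a log-periodogram regression; a minimal non-seasonal sketch is below (the seasonal case replaces the Fourier frequencies near zero by frequencies around the seasonal ones, and the choice of the number of regressors m is exactly the tuning issue the study compares).

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke & Porter-Hudak (1983) log-periodogram estimate of the
    memory parameter d (non-seasonal version)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = int(n ** power)                     # number of regressors
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies.
    fft = np.fft.fft(x - x.mean())
    periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2.0 * np.pi * n)
    # Regress the log-periodogram on log(4 sin^2(freq/2)); the slope is -d.
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

rng = np.random.default_rng(1)
white_noise = rng.normal(size=4096)        # true d = 0
print(f"estimated d: {gph_estimate(white_noise):.3f}")
```

For white noise the estimate should be near 0; applied to a random walk (d = 1) it should be near 1.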

Relevância: 90.00%

Resumo:

While it is recognized that output fluctuations are highly persistent over a certain range, less persistence is also found at very long horizons (Cochrane, 1988), indicating the existence of local, or temporary, persistency. In this paper, we study time series with local persistency. A test of stationarity against a locally persistent alternative is proposed. Asymptotic distributions of the test statistic are provided under both the null and the alternative hypothesis of local persistency. A Monte Carlo experiment is conducted to study the power and size of the test. An empirical application reveals that many US real economic variables may exhibit local persistency.
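The size computation in such a Monte Carlo experiment follows a generic pattern: simulate under the null many times, apply the test at a nominal level, and record the empirical rejection rate. The sketch below uses a plain t-test for a zero mean as a stand-in for the paper's stationarity test.

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_size(n_reps=2_000, n_obs=100, crit=1.96):
    """Empirical rejection rate of a nominal-5% two-sided t-test under the
    null of a zero mean; should be close to 0.05."""
    rejections = 0
    for _ in range(n_reps):
        x = rng.normal(size=n_obs)                  # data under the null
        t_stat = x.mean() / (x.std(ddof=1) / np.sqrt(n_obs))
        rejections += abs(t_stat) > crit
    return rejections / n_reps

size = empirical_size()
print(f"empirical size at nominal 5%: {size:.3f}")
```

Power is computed the same way, simulating under the alternative (here, a locally persistent process) instead of the null.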

Relevância: 90.00%

Resumo:

This thesis is composed of three essays on the subjects of macroeconometrics and finance. In each essay, which corresponds to one chapter, the objective is to investigate and analyze advanced econometric techniques applied to relevant macroeconomic questions, such as the capital mobility hypothesis and the sustainability of public debt. A finance topic regarding portfolio risk management is also investigated, through an econometric technique used to evaluate Value-at-Risk models. The first chapter investigates an intertemporal optimization model to analyze the current account. Based on Campbell & Shiller's (1987) approach, a Wald test is conducted to analyze a set of restrictions imposed on a VAR used to forecast the current account. The estimation is based on three different procedures: OLS, SUR, and the two-way error decomposition of Fuller & Battese (1974), due to the presence of global shocks. A note on Granger causality is also provided, which is shown to be a necessary condition to perform the Wald test, with serious implications for the validation of the model. An empirical exercise for the G-7 countries is presented, and the results change substantially across the different estimation techniques. A small Monte Carlo simulation is also presented to investigate the size and power of the Wald test based on the considered estimators. The second chapter presents a study of fiscal sustainability based on a quantile autoregression (QAR) model. A novel methodology to separate periods of nonstationarity from stationary ones is proposed, which allows one to identify trajectories of public debt that are not compatible with fiscal sustainability. Moreover, such trajectories are used to construct a debt ceiling, that is, the largest value of public debt that does not jeopardize long-run fiscal sustainability. An out-of-sample forecast of this ceiling is also constructed and can be used by policy makers interested in keeping the public debt on a sustainable path. An empirical exercise using Brazilian data is conducted to show the applicability of the methodology. In the third chapter, an alternative backtest to evaluate the performance of Value-at-Risk (VaR) models is proposed. The econometric methodology allows one to directly test the overall performance of a VaR model, as well as to identify periods of increased risk exposure, which seems to be a novelty in the literature. Quantile regressions provide an appropriate environment to investigate VaR models, since VaR can naturally be viewed as a conditional quantile of a given return series. An empirical exercise is conducted for the daily S&P 500 series, and a Monte Carlo simulation is also presented, revealing that the proposed test might exhibit more power than other backtests.
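The thesis proposes a quantile-regression backtest; as a simpler, standard point of reference that such tests are compared against, Kupiec's (1995) unconditional coverage likelihood-ratio test can be sketched as follows (the violation counts are illustrative numbers).

```python
import numpy as np

def kupiec_lr(violations, n_obs, var_level=0.05):
    """Kupiec (1995) unconditional coverage likelihood-ratio statistic for a
    VaR backtest; under the null it is asymptotically chi-squared(1)."""
    p = var_level
    x = violations
    phat = x / n_obs
    log_null = (n_obs - x) * np.log(1 - p) + x * np.log(p)
    log_alt = (n_obs - x) * np.log(1 - phat) + x * np.log(phat)
    return -2.0 * (log_null - log_alt)

# 250 trading days at 5% VaR: about 12-13 violations are expected, so 13
# should not reject while 30 should (5% chi-squared(1) critical value: 3.84).
lr_ok = kupiec_lr(violations=13, n_obs=250)
lr_bad = kupiec_lr(violations=30, n_obs=250)
print(f"LR (13 violations): {lr_ok:.2f}, LR (30 violations): {lr_bad:.2f}")
```

Unlike the quantile-regression approach described above, this test only counts violations and cannot locate periods of increased risk exposure.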

Relevância: 90.00%

Resumo:

In this study, we verify the existence of predictability in the Brazilian equity market. Unlike other studies in the same vein, which evaluate the original series of each stock, we evaluate synthetic series created from linear models of stocks. Following Burgess (1999), we use stepwise regression to form the model for each stock. We then use the variance ratio profile together with a Monte Carlo simulation to select models with potential predictability. Unlike Burgess (1999), we carry out White's (2000) Reality Check in order to verify the existence of positive returns in the out-of-sample period. We use the strategies proposed by Sullivan, Timmermann & White (1999) and Hsu & Kuan (2005), amounting to 26,410 simulated strategies. Finally, using the bootstrap methodology with 1,000 simulations, we find strong evidence of predictability in the models, even when transaction costs are included.
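The variance ratio statistic underlying the variance ratio profile compares the variance of q-period returns with q times the variance of 1-period returns, and should be close to 1 at every q for an unpredictable (random walk) series; a minimal sketch:

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay style variance ratio: Var of q-period returns divided by
    q times the Var of 1-period returns; ~1 for a random walk."""
    r = np.asarray(returns, dtype=float)
    q_returns = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-sums
    return q_returns.var(ddof=1) / (q * r.var(ddof=1))

rng = np.random.default_rng(5)
iid_returns = rng.normal(size=10_000)
profile = [variance_ratio(iid_returns, q) for q in (2, 4, 8, 16)]
print(np.round(profile, 3))
```

Systematic departures of the profile from 1 across horizons are what flag a synthetic series as potentially predictable; the Monte Carlo step calibrates how large a departure is meaningful.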

Relevância: 90.00%

Resumo:

The determination of the term structure of interest rates is one of the main topics in the management of financial assets. Given the great importance of financial assets for the conduct of economic policy, it is fundamental to understand how the term structure is determined. The main objective of this study is to estimate the Brazilian term structure of interest rates together with the short-term interest rate. The term structure is modeled with an affine structure, and the estimation includes three latent factors and two macroeconomic variables, using the Bayesian Markov Chain Monte Carlo (MCMC) technique.
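The MCMC machinery behind such Bayesian estimations can be illustrated with a minimal random-walk Metropolis sampler on a toy one-parameter posterior (a stand-in for exposition, not the affine term-structure model itself):

```python
import numpy as np

rng = np.random.default_rng(11)

def log_posterior(theta):
    """Toy log-posterior: standard normal centred at 2.0."""
    return -0.5 * (theta - 2.0) ** 2

def metropolis(n_iter=20_000, step=1.0):
    """Random-walk Metropolis: propose, accept with prob min(1, ratio)."""
    theta = 0.0
    draws = np.empty(n_iter)
    for i in range(n_iter):
        proposal = theta + step * rng.normal()
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        draws[i] = theta
    return draws

draws = metropolis()
print(f"posterior mean after burn-in: {draws[5_000:].mean():.2f}")
```

After discarding a burn-in, the sample mean of the draws approximates the posterior mean; in the actual model the state would be the vector of latent factors and model parameters.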

Relevância: 90.00%

Resumo:

This thesis is composed of three articles on the subjects of macroeconomics and finance. Each article corresponds to a chapter and is written in paper format. In the first article, co-authored with Axel Simonsen, we model and estimate a small open economy for Canada in a two-country general equilibrium (DSGE) framework. We show that it is important to account for the correlation between domestic and foreign shocks and for incomplete pass-through. In the second chapter, co-authored with Hedibert Freitas Lopes, we estimate a regime-switching macro-finance model of the term structure of interest rates to study the joint behavior of macro variables and the yield curve in the US after World War II. We show that our model tracks the US NBER cycles well, that the addition of regime changes is important to explain the expectations theory of the term structure, and that macro variables have increasing importance in recessions for explaining the variability of the yield curve. We also present a novel sequential Monte Carlo algorithm to learn about the parameters and the latent states of the economy. In the third chapter, I present a Gaussian affine term structure model (ATSM) with latent jumps in order to address two questions: (1) what are the implications of incorporating jumps in an ATSM for Asian option pricing, in the particular case of the Brazilian DI Index (IDI) option, and (2) how do jumps and options affect the bond risk-premia dynamics. I show that the jump risk premium is negative in a scenario of decreasing interest rates (my sample period) and is important to explain the level of yields, and that Gaussian models without jumps and with constant-intensity jumps perform well in pricing Asian options.

Relevância: 90.00%

Resumo:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria that have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure that is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.

Relevância: 90.00%

Resumo:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria that have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure that is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and the growth rates of U.S. macroeconomic aggregates, show the usefulness of the model selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.
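Lag-length selection by an information criterion, the simplest ingredient of the joint procedure described above, can be sketched for a univariate AR model (BIC here; the paper's procedure extends this idea to the joint choice of VAR lag length and cointegrating rank, with data-dependent penalties):

```python
import numpy as np

rng = np.random.default_rng(21)

def select_ar_order(x, max_lag=6):
    """Choose the AR order p in 1..max_lag that minimizes the BIC,
    log(sigma2_hat) + (p + 1) * log(T) / T, on a common sample."""
    n = x.size
    best_p, best_bic = 0, np.inf
    for p in range(1, max_lag + 1):
        Y = x[max_lag:]
        X = np.column_stack([np.ones(n - max_lag)] +
                            [x[max_lag - j:n - j] for j in range(1, p + 1)])
        resid = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
        sigma2 = np.mean(resid ** 2)
        bic = np.log(sigma2) + (p + 1) * np.log(n - max_lag) / (n - max_lag)
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

# Simulate an AR(2) process and recover its order.
n = 1_000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
print(f"selected lag length: {select_ar_order(x)}")
```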

Relevância: 90.00%

Resumo:

Convex combinations of long-memory estimates obtained from the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing standard methods with their combined versions, the latter achieve a lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
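The variance-reduction mechanism can be illustrated in its simplest form: for two independent, unbiased estimates of the same quantity, the convex combination with weight w = v2 / (v1 + v2) on the first minimizes the combined variance. The standard deviations below are toy values, not those of the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(13)

d_true = 0.3                                # true memory parameter
sd1, sd2 = 0.10, 0.05                       # assumed sds of the two estimates
w = sd2 ** 2 / (sd1 ** 2 + sd2 ** 2)        # optimal weight on estimate 1

# Simulate many realizations of each estimator and of their combination.
est1 = d_true + rng.normal(0.0, sd1, size=50_000)
est2 = d_true + rng.normal(0.0, sd2, size=50_000)
combined = w * est1 + (1.0 - w) * est2

print(f"sd1={est1.std():.4f}  sd2={est2.std():.4f}  combined={combined.std():.4f}")
```

The combined standard deviation falls below that of either input; with correlated or biased inputs (as in the paper, where the low-rate estimate needs a bias correction first), the optimal weights change but the same principle applies.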