830 results for Bayesian risk prediction models
Abstract:
In recent years, extreme hydrometeorological phenomena have increased in number and intensity, affecting the inhabitants of various regions. One example is the central basins of the Gulf of Mexico (CBGM), 55.2% of which have been affected by floods, especially in the state of Veracruz (1999-2013), with economic, social and environmental losses. Mexico currently lacks sufficient hydrological studies for measuring river volumes, so it is convenient to build a hydrological model (HM) suited to the quality and quantity of the available geographic and climatic information that is reliable and affordable. This research therefore compares a semi-distributed hydrological model (SHM) and a global (lumped) hydrological model (GHM) with respect to runoff volumes and their ability to predict flood areas. Extreme hydrometeorological phenomena in the CBGM were analyzed by modeling with the Hydrologic Modeling System (HEC-HMS), an SHM, and the Modèle Hydrologique Simplifié à l'Extrême (MOHYSE), a GHM, in order to evaluate the results and determine which model is better suited to tropical conditions, so as to propose public policies for integrated basin management and flood prevention. The temporal and spatial framework of the analyzed basins was determined according to hurricanes and floods. The SHM and GHM were developed, calibrated and validated, and their results were compared to identify their sensitivity with respect to the observed behaviour. It was concluded that both models fit the tropical conditions of the CBGM, with MOHYSE approximating the observations more closely. It is worth mentioning that hydrological information in Mexico is scarce and there are no records of MOHYSE being used in the country, so it can be a useful tool for determining runoff volumes. Finally, climate change scenarios were generated with the SHM and the GHM to support risk studies, producing a risk map for urban, agro-hydrological and territorial planning.
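Both HEC-HMS and MOHYSE are conceptual rainfall-runoff models. As a hedged illustration of the simplest such structure, the sketch below routes precipitation through a single linear reservoir (outflow proportional to storage); it is not the actual MOHYSE or HEC-HMS formulation, and the recession constant and inputs are illustrative values.

```python
# Minimal conceptual rainfall-runoff sketch: a single linear reservoir
# with storage S and outflow Q = k * S per time step. Illustrative
# only; real models (HEC-HMS, MOHYSE) have far richer structures.

def linear_reservoir(precip, k, s0=0.0):
    """Route a precipitation series through a linear reservoir.

    precip : list of per-step precipitation depths
    k      : recession constant in (0, 1], fraction released per step
    s0     : initial storage
    Returns the per-step runoff series.
    """
    storage, runoff = s0, []
    for p in precip:
        storage += p              # add this step's precipitation
        out = k * storage         # linear-reservoir release
        storage -= out
        runoff.append(out)
    return runoff
```

A pulse of 10 units of rain with k = 0.5 yields the classic geometric recession 5.0, 2.5, 1.25, ...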
Abstract:
The Delaware River provides half of New York City's drinking water and is a habitat for wild trout, American shad and the federally endangered dwarf wedge mussel. It has suffered four 100-year floods in the last seven years, while a drought during the 1960s stands as a warning of the potential vulnerability of the New York City area to severe water shortages should a similar drought recur. The water releases from three New York City dams on the Delaware River's headwaters affect not only the reliability of the city's water supply, but also the severity of floods and the quality of the aquatic habitat in the upper river. The goal of this work is to inform the Delaware River water release policies (FFMP/OST) so as to further benefit river habitat and fisheries without increasing New York City's drought risk or the flood risk to down-basin residents. The Delaware water release policies are constrained by the dictates of two US Supreme Court Decrees (1931 and 1954) and the need for unanimity among four states (New York, New Jersey, Pennsylvania, and Delaware) and New York City. Coordination of their activities and operation under the existing decrees is provided by the Delaware River Basin Commission (DRBC). Questions such as the probability of the system approaching a drought state under the current FFMP plan, and the severity of the 1960s drought, are addressed using long-record paleo-reconstructions of flows. For this study, we reconstructed total annual flows (water year) for the 3 reservoir inflows using regional tree rings going back to 1754 (a total of 246 years). The reconstructed flows are used with a simple reservoir model to quantify droughts. We observe that the 1960s drought is by far the worst drought in the 246 years of simulations (since 1754).
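The "simple reservoir model" used to quantify droughts is not specified in the abstract; a minimal annual mass-balance sketch of that kind of model is shown below. The capacity, demand and inflow numbers are illustrative assumptions, not the actual FFMP rules or Delaware basin figures.

```python
# Minimal sketch of an annual reservoir mass balance used to flag
# drought years from an inflow series: years in which the fixed
# demand cannot be met are recorded as shortfalls.

def simulate_reservoir(inflows, capacity, demand, initial=None):
    """Route annual inflows through a single reservoir.

    Returns (end-of-year storages, indices of shortfall years).
    The reservoir starts full unless `initial` is given.
    """
    storage = capacity if initial is None else initial
    storages, shortfall_years = [], []
    for year, inflow in enumerate(inflows):
        storage = min(storage + inflow, capacity)  # spill above capacity
        release = min(demand, storage)             # cannot release more than stored
        storage -= release
        if release < demand:
            shortfall_years.append(year)
        storages.append(storage)
    return storages, shortfall_years

# Toy 10-"year" inflow trace with a dry spell in the middle.
storages, shortfalls = simulate_reservoir(
    [5, 6, 5, 1, 1, 1, 1, 6, 5, 6], capacity=10, demand=4)
```

Run on a long reconstructed inflow series, the shortfall years and their clustering give a simple severity measure for multi-year droughts such as that of the 1960s.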
Abstract:
Standard models of moral hazard predict a negative relationship between risk and incentives, but the empirical work has not confirmed this prediction. In this paper, we propose a model with adverse selection followed by moral hazard, where effort and the degree of risk aversion are private information of an agent who can control the mean and the variance of profits. For a given contract, more risk-averse agents supply more effort in risk reduction. If the marginal utility of incentives decreases with risk aversion, more risk-averse agents prefer lower-incentive contracts; thus, in the optimal contract, incentives are positively correlated with endogenous risk. In contrast, if risk aversion is high enough, the possibility of reduction in risk makes the marginal utility of incentives increasing in risk aversion and, in this case, risk and incentives are negatively related.
Abstract:
This paper is concerned with evaluating value-at-risk estimates. It is well known that using only binary variables to do this sacrifices too much information. However, most of the specification tests (also called backtests) available in the literature, such as Christoffersen (1998) and Engle and Manganelli (2004), are based on such variables. In this paper we propose a new backtest that does not rely solely on binary variables. It is shown that the new backtest provides a sufficient condition to assess the performance of a quantile model, whereas the existing ones do not. The proposed methodology allows us to identify periods of increased risk exposure based on a quantile regression model (Koenker & Xiao, 2002). Our theoretical findings are corroborated through a Monte Carlo simulation and an empirical exercise with the daily S&P 500 time series.
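The binary backtests referred to above reduce the data to a sequence of VaR violations ("hits"). A minimal sketch of the standard unconditional-coverage likelihood-ratio test built on that binary sequence (Kupiec-style, i.e. the kind of test the paper argues is insufficient, not the paper's proposed backtest) is:

```python
import math

# Unconditional-coverage LR backtest on a binary hit sequence:
# tests whether the observed VaR violation rate equals the nominal
# level alpha. Under the null the statistic is asymptotically
# chi-square with 1 degree of freedom (5% critical value 3.84).

def kupiec_lr(hits, alpha):
    """hits: list of 0/1 violation indicators; alpha: nominal VaR level.

    Assumes at least one hit and one non-hit (otherwise the
    maximum-likelihood violation rate is degenerate).
    """
    n, x = len(hits), sum(hits)
    pi = x / n                                   # observed violation rate
    log_l0 = x * math.log(alpha) + (n - x) * math.log(1.0 - alpha)
    log_l1 = x * math.log(pi) + (n - x) * math.log(1.0 - pi)
    return -2.0 * (log_l0 - log_l1)
```

With 5 violations in 100 days at alpha = 0.05 the statistic is exactly 0 (perfect coverage); with 10 violations it exceeds the 3.84 critical value, rejecting correct coverage.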
Abstract:
In this article we use factor models to describe a certain class of covariance structure for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions to the models exposed here in the prediction area. This discussion leads to open issues on real-time implementation and natural model comparisons.
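The kind of data-generating process described above can be sketched as a single common factor with stochastic (log-AR(1)) volatility and a slowly drifting loading vector. All parameter values and the function name below are illustrative assumptions, not the article's actual specification.

```python
import math
import random

# Simulate p return series driven by one common factor f_t whose
# log-variance h_t follows an AR(1) (stochastic volatility), with
# factor loadings beta_t evolving as slow random walks
# (the "time-varying loadings" idea). Illustrative sketch only.

def simulate_factor_sv(T, p, rng, phi=0.95, sig_h=0.2, sig_b=0.01, sig_e=0.1):
    """Return a T x p list of simulated returns y_t = beta_t * f_t + e_t."""
    h = 0.0                      # log-variance of the common factor
    betas = [1.0] * p            # time-varying factor loadings
    ys = []
    for _ in range(T):
        h = phi * h + sig_h * rng.gauss(0.0, 1.0)
        f = math.exp(h / 2.0) * rng.gauss(0.0, 1.0)
        betas = [b + sig_b * rng.gauss(0.0, 1.0) for b in betas]
        ys.append([b * f + sig_e * rng.gauss(0.0, 1.0) for b in betas])
    return ys

series = simulate_factor_sv(50, 3, random.Random(0))
```

The induced conditional covariance of the p series at time t is beta_t beta_t' exp(h_t) plus the idiosyncratic variance, which is the covariance structure the factor model is meant to capture.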
Abstract:
Genomewide marker information can improve the reliability of breeding value predictions for young selection candidates in genomic selection. However, the cost of genotyping limits its use to elite animals, and how such selective genotyping affects the predictive ability of genomic selection models is an open question. We performed a simulation study to evaluate the quality of breeding value predictions for selection candidates based on different selective genotyping strategies in a population undergoing selection. The genome consisted of 10 chromosomes of 100 cM each. After 5,000 generations of random mating with a population size of 100 (50 males and 50 females), generation G(0) (reference population) was produced via a full factorial mating between the 50 males and 50 females from generation 5,000. Different levels of selection intensity (animals with the largest yield deviation value) in G(0), or random sampling (no selection), were used to produce the offspring of G(0) (generation G(1)). Five genotyping strategies were used to choose 500 animals in G(0) to be genotyped: 1) Random: randomly selected animals, 2) Top: animals with the largest yield deviation values, 3) Bottom: animals with the lowest yield deviation values, 4) Extreme: animals with the 250 largest and the 250 lowest yield deviation values, and 5) Less Related: the least genetically related animals. The number of individuals in G(0) and G(1) was fixed at 2,500 each, and different levels of heritability were considered (0.10, 0.25, and 0.50). Additionally, all 5 selective genotyping strategies (Random, Top, Bottom, Extreme, and Less Related) were applied to an indicator trait in generation G(0), and the results were evaluated for the target trait in generation G(1), with the genetic correlation between the 2 traits set to 0.50.
The 5 genotyping strategies applied to individuals in G(0) (reference population) were compared in terms of their ability to predict the genetic values of the animals in G(1) (selection candidates). Lower correlations between genomic-based estimates of breeding values (GEBV) and true breeding values (TBV) were obtained when using the Bottom strategy. For the Random, Extreme, and Less Related strategies, the correlation between GEBV and TBV became slightly larger as selection intensity decreased and was largest when no selection occurred. These 3 strategies were better than the Top approach. In addition, the Extreme, Random, and Less Related strategies had smaller predictive mean squared errors (PMSE), followed by the Top and Bottom methods. Overall, the Extreme genotyping strategy led to the best predictive ability of breeding values, indicating that animals with extreme yield deviation values in a reference population are the most informative when training genomic selection models.
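Four of the five selection strategies compared above depend only on the vector of yield deviations and can be sketched directly; "Less Related" additionally needs pedigree or relationship information and is omitted here. The function name and interface are illustrative, not from the study's software.

```python
import random

# Choose which n animals to genotype from their yield deviations,
# under the Top / Bottom / Extreme / Random strategies described in
# the study ("Less Related" requires a relationship matrix, omitted).

def select_for_genotyping(yield_dev, n, strategy, seed=0):
    """Return the indices of the n animals chosen under a strategy."""
    order = sorted(range(len(yield_dev)), key=lambda i: yield_dev[i])
    if strategy == "top":              # largest yield deviations
        return order[-n:]
    if strategy == "bottom":           # smallest yield deviations
        return order[:n]
    if strategy == "extreme":          # half lowest plus half highest
        half = n // 2
        return order[:half] + order[-(n - half):]
    if strategy == "random":
        return random.Random(seed).sample(range(len(yield_dev)), n)
    raise ValueError("unknown strategy: " + strategy)
```

For instance, with deviations [3, 1, 4, 1.5, 9, 2.6] and n = 2, "extreme" picks the animal with the lowest value (index 1) and the one with the highest (index 4).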
Abstract:
We consider prediction techniques based on accelerated failure time models with random effects for correlated survival data. Besides the Bayesian approach through the empirical Bayes estimator, we also discuss the use of a classical predictor, the Empirical Best Linear Unbiased Predictor (EBLUP). To illustrate the use of these predictors, we consider applications to a real data set from the oil industry. More specifically, the data set involves the mean time between failures of petroleum-well equipment in the Bacia Potiguar. The goal of this study is to predict the risk/probability of failure in order to support a preventive maintenance program. The results show that both methods are suitable for predicting future failures, supporting good decisions regarding the allocation and economy of resources for preventive maintenance.
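The core of an EBLUP-type predictor can be sketched for the simplest case, a random-intercept model: each group's predicted effect is its mean deviation from the overall mean, shrunk toward zero according to the variance components. In practice the variance components would be estimated (e.g. by REML), which is what makes the BLUP "empirical"; in this illustrative sketch they are supplied directly, and all names are assumptions.

```python
# EBLUP-style shrinkage predictor for a random-intercept model:
# the predicted effect of group g with n_g observations is
#   u_g = [var_u / (var_u + var_e / n_g)] * (ybar_g - mu),
# so groups with more data are shrunk less toward zero.

def eblup_group_effects(groups, mu, var_u, var_e):
    """groups maps a group id to its list of observed responses."""
    effects = {}
    for g, ys in groups.items():
        n = len(ys)
        ybar = sum(ys) / n
        shrink = var_u / (var_u + var_e / n)   # in [0, 1): more data, less shrinkage
        effects[g] = shrink * (ybar - mu)
    return effects
```

For failure-time data the same shrinkage idea applies to well- or equipment-level random effects on the (log) time scale of the accelerated failure time model.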
Abstract:
In this work we compared the estimates of the parameters of ARCH models using a complete Bayesian method and an empirical Bayesian method, in which we adopted a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of those models in order to map the parameter space into real space. This procedure permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated by considering the Telebras series from the Brazilian financial market. The results show that the two methods are able to adjust ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model for the data and a better fit than the complete Bayesian method.
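The reparameterization idea can be made concrete for an ARCH(1) model, where the conditional variance is sigma_t^2 = omega + alpha * r_{t-1}^2 with omega > 0 and 0 < alpha < 1: a log transform frees omega and a logit transform frees alpha, so normal priors can be placed on the transformed pair. Function names below are illustrative.

```python
import math

# Map the constrained ARCH(1) parameters to the real line and back,
# so that normal priors can be used on the transformed parameters.

def to_real(omega, alpha):
    """(omega > 0, 0 < alpha < 1) -> unconstrained (lam, phi)."""
    return math.log(omega), math.log(alpha / (1.0 - alpha))

def to_constrained(lam, phi):
    """Inverse map back onto the ARCH(1) parameter space."""
    return math.exp(lam), 1.0 / (1.0 + math.exp(-phi))

def arch1_variances(returns, omega, alpha, sigma2_0):
    """Conditional variance recursion sigma_t^2 = omega + alpha * r_{t-1}^2."""
    sig2 = [sigma2_0]
    for r in returns[:-1]:
        sig2.append(omega + alpha * r * r)
    return sig2
```

The two maps are exact inverses, so an MCMC sampler can move freely in (lam, phi) while the likelihood is always evaluated at valid (omega, alpha).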
Abstract:
Linear mixed effects models are frequently used to analyse longitudinal data, due to their flexibility in modelling the covariance structure between and within observations. Further, it is easy to deal with unbalanced data, either with respect to the number of observations per subject or per time period, and with varying time intervals between observations. In most applications of mixed models to biological sciences, a normal distribution is assumed both for the random effects and for the residuals. This, however, makes inferences vulnerable to the presence of outliers. Here, linear mixed models employing thick-tailed distributions for robust inferences in longitudinal data analysis are described. Specific distributions discussed include the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted, and the Gibbs sampler and the Metropolis-Hastings algorithms are used to carry out the posterior analyses. An example with data on orthodontic distance growth in children is discussed to illustrate the methodology. Analyses based on either the Student-t distribution or on the usual Gaussian assumption are contrasted. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process for modelling distributions of the random effects and of residuals in linear mixed models, and the MCMC implementation allows the computations to be performed in a flexible manner.
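The MCMC implementation described above rests on the scale-mixture representation of thick-tailed distributions: a Student-t variate is a normal variate divided by the square root of an independent Gamma(nu/2, rate nu/2) mixing weight, and conditioning on the weights restores normality, which is what makes Gibbs sampling convenient. A hedged one-function sketch (not the paper's sampler):

```python
import random

# Draw from a standard Student-t with nu degrees of freedom using the
# normal scale-mixture (data augmentation) representation:
#   w ~ Gamma(nu/2, rate nu/2)  (mean 1),   t = z / sqrt(w),  z ~ N(0,1).

def student_t_draw(nu, rng):
    """One Student-t(nu) draw via the scale mixture of normals."""
    # random.gammavariate takes (shape, scale); scale 2/nu gives rate nu/2
    w = rng.gammavariate(nu / 2.0, 2.0 / nu)
    return rng.gauss(0.0, 1.0) / w ** 0.5

rng = random.Random(1)
draws = [student_t_draw(5.0, rng) for _ in range(20000)]
```

Small mixing weights w inflate the residual variance for that observation, which is exactly how outliers are downweighted in the robust mixed model.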
Abstract:
Linear mixed effects models have been widely used in analysis of data where responses are clustered around some random effects, so it is not reasonable to assume independence between observations in the same cluster. In most biological applications, it is assumed that the distributions of the random effects and of the residuals are Gaussian. This makes inferences vulnerable to the presence of outliers. Here, linear mixed effects models with normal/independent residual distributions for robust inferences are described. Specific distributions examined include univariate and multivariate versions of the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted and Markov chain Monte Carlo is used to carry out the posterior analysis. The procedures are illustrated using birth weight data on rats in a toxicological experiment. Results from the Gaussian and robust models are contrasted, and it is shown how the implementation can be used for outlier detection. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process in linear mixed models, and they are easily implemented using data augmentation and MCMC techniques.
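One member of the normal/independent family named above, the contaminated normal, is simply a two-component mixture in which a small fraction of observations comes from an inflated-variance component, mimicking outliers. The mixing weight and inflation factor in this sketch are illustrative values, not those of the rat birth-weight analysis.

```python
import random

# Contaminated-normal draw: N(0, 1) with probability 1 - p_outlier,
# N(0, inflate) with probability p_outlier. The marginal variance is
# (1 - p_outlier) * 1 + p_outlier * inflate.

def contaminated_normal_draw(rng, p_outlier=0.1, inflate=9.0):
    """One draw from a contaminated normal centred at zero."""
    sd = inflate ** 0.5 if rng.random() < p_outlier else 1.0
    return rng.gauss(0.0, sd)

rng = random.Random(2)
xs = [contaminated_normal_draw(rng) for _ in range(20000)]
```

With p_outlier = 0.1 and inflate = 9, the marginal variance is 0.9 + 0.9 = 1.8, and the occasional wide-component draws produce the heavy tails that the robust model accommodates.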
Abstract:
Forecasting, for obvious reasons, often becomes the most important goal to be achieved. For spatially extended systems (e.g. the atmospheric system), where local nonlinearities lead to highly unpredictable chaotic evolution, it is highly desirable to have a simple diagnostic tool to identify regions of predictable behaviour. In this paper, we discuss the use of the bred vector (BV) dimension, a recently introduced statistic, to identify the regimes where a finite-time forecast is feasible. Using tools from dynamical systems theory and Bayesian modelling, we show finite-time predictability in two-dimensional coupled map lattices in the regions of low BV dimension. © Indian Academy of Sciences.
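The breeding cycle behind bred vectors can be sketched on a small coupled logistic-map lattice (one-dimensional here for brevity; the paper uses two-dimensional lattices): a perturbed copy of the control run is evolved alongside it, and the difference is rescaled to a fixed small amplitude after each step. The map parameters and amplitudes below are illustrative assumptions.

```python
# Breeding cycle on a coupled logistic-map lattice: evolve control and
# perturbed runs together, record the per-step growth of their
# difference, and rescale the difference back to amplitude `amp`.
# The rescaled difference is the bred vector.

def cml_step(x, a=4.0, eps=0.1):
    """One step of a diffusively coupled logistic-map lattice (periodic)."""
    f = [a * xi * (1.0 - xi) for xi in x]
    n = len(x)
    return [(1.0 - eps) * f[i] + 0.5 * eps * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

def bred_vector(x0, delta0, steps, amp=1e-3):
    """Return (unit-amplitude bred vector, per-step growth factors)."""
    ctrl = list(x0)
    pert = [xi + di for xi, di in zip(x0, delta0)]
    growth = []
    for _ in range(steps):
        ctrl, pert = cml_step(ctrl), cml_step(pert)
        diff = [p - c for p, c in zip(pert, ctrl)]
        norm = max(sum(d * d for d in diff) ** 0.5, 1e-300)
        growth.append(norm / amp)                       # local growth rate
        pert = [c + amp * d / norm for c, d in zip(ctrl, diff)]  # rescale
    return [(p - c) / amp for p, c in zip(pert, ctrl)], growth

bv, growth = bred_vector([0.2, 0.4, 0.6, 0.8], [1e-3, 0.0, 0.0, 0.0], 20)
```

The BV dimension statistic is then computed from how the bred-vector components distribute over the lattice; regions where the growth stays low and the vectors stay low-dimensional are the ones flagged as finitely predictable.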