898 results for Bayesian shared component model
Abstract:
Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May, 2013
Abstract:
This paper introduces a new model of trend (or underlying) inflation. In contrast to many earlier approaches, which allow for trend inflation to evolve according to a random walk, ours is a bounded model which ensures that trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. The bounds placed on trend inflation mean that standard econometric methods for estimating linear Gaussian state space models cannot be used and we develop a posterior simulation algorithm for estimating the bounded trend inflation model. In an empirical exercise with CPI inflation we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model.
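The bounded-trend idea in this abstract can be illustrated with a minimal simulation. The bounds, variances, and persistence value below are hypothetical, and the redraw-until-inside-the-interval step is only a stand-in for the authors' posterior simulation algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

LOW, HIGH = 0.0, 5.0  # hypothetical fixed bounds on trend inflation
T = 200

trend = np.empty(T)
trend[0] = 2.0
for t in range(1, T):
    # Random-walk step, redrawn whenever it would leave the interval, so the
    # trend is constrained to lie in [LOW, HIGH]
    step = trend[t - 1] + rng.normal(0.0, 0.2)
    while not (LOW <= step <= HIGH):
        step = trend[t - 1] + rng.normal(0.0, 0.2)
    trend[t] = step

# Transitory component with (here constant) persistence; the paper additionally
# lets this persistence vary over time
rho = 0.5
transitory = np.zeros(T)
for t in range(1, T):
    transitory[t] = rho * transitory[t - 1] + rng.normal(0.0, 0.5)

inflation = trend + transitory
print(trend.min() >= LOW, trend.max() <= HIGH)
```

The key point is that the bound makes the trend process non-Gaussian, which is why the standard linear Gaussian state-space machinery no longer applies.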
Abstract:
The Conservative Party emerged from the 2010 United Kingdom General Election as the largest single party, but their support was not geographically uniform. In this paper, we estimate a hierarchical Bayesian spatial probit model that tests for the presence of regional voting effects. This model allows for the estimation of individual region-specific effects on the probability of Conservative Party success, incorporating information on the spatial relationships between the regions of the mainland United Kingdom. After controlling for a range of important covariates, we find that these spatial relationships are significant and that our individual region-specific effects estimates provide additional evidence of North-South variations in Conservative Party support.
Abstract:
Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
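The penalized-spline connection mentioned in this abstract can be sketched with simulated data. The exposure series, lag length, true lag profile, and penalty weight below are all hypothetical, and a ridge-style second-difference penalty stands in for the full hierarchical Bayesian prior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily PM exposure and admissions with a true effect
# distributed over lags 0-6 (all values hypothetical)
n_days, max_lag = 500, 6
pm = rng.gamma(2.0, 5.0, n_days)
true_theta = np.array([0.5, 0.4, 0.25, 0.12, 0.05, 0.0, 0.0])

# Design matrix of lagged exposures; drop the first rows that would wrap
X = np.column_stack([np.roll(pm, L) for L in range(max_lag + 1)])[max_lag:]
y = X @ true_theta + rng.normal(0.0, 2.0, X.shape[0])

# Penalized least squares with a second-difference penalty: this encourages a
# smooth lag profile, mirroring the penalized-spline interpretation of the
# hierarchical prior described in the abstract
D = np.diff(np.eye(max_lag + 1), n=2, axis=0)
lam = 10.0
theta_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

# Cumulative effect of a sustained one-unit increase in exposure
print(theta_hat.round(2), round(theta_hat.sum(), 2))
```

The sum of the lag coefficients answers the second methodological question above: it estimates the cumulative risk from a sustained short-term increase in exposure.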
Abstract:
Ecosystems are faced with high rates of species loss, which has consequences for their functions and services. To assess the effects of plant species diversity on the nitrogen (N) cycle, we developed a model for monthly mean nitrate (NO3-N) concentrations in soil solution in 0-30 cm mineral soil depth using plant species and functional group richness and functional composition as drivers and assessing the effects of conversion of arable land to grassland, spatially heterogeneous soil properties, and climate. We used monthly mean NO3-N concentrations from 62 plots of a grassland plant diversity experiment from 2003 to 2006. Plant species richness (1-60) and functional group composition (1-4 functional groups: legumes, grasses, non-leguminous tall herbs, non-leguminous small herbs) were manipulated in a factorial design. Plant community composition, time since conversion from arable land to grassland, soil texture, and climate data (precipitation, soil moisture, air and soil temperature) were used to develop one general Bayesian multiple regression model for the 62 plots to allow an in-depth evaluation using the experimental design. The model simulated NO3-N concentrations with an overall Bayesian coefficient of determination of 0.48. The temporal course of NO3-N concentrations was simulated with varying accuracy across the individual plots, with a maximum plot-specific Nash-Sutcliffe Efficiency of 0.57. The model shows that NO3-N concentrations decrease with species richness, but this relation reverses if more than approximately 25% of legume species are included in the mixture. Presence of legumes increases and presence of grasses decreases NO3-N concentrations compared to mixtures containing only small and tall herbs. Altogether, our model shows that there is a strong influence of plant community composition on NO3-N concentrations.
Abstract:
A simple theoretical framework is presented for bioassay studies using three component in vitro systems. An equilibrium model is used to derive equations useful for predicting changes in biological response after addition of hormone-binding-protein or as a consequence of increased hormone affinity. Sets of possible solutions for receptor occupancy and binding protein occupancy are found for typical values of receptor and binding protein affinity constants. Unique equilibrium solutions are dictated by the initial condition of total hormone concentration. According to the occupancy theory of drug action, increasing the affinity of a hormone for its receptor will result in a proportional increase in biological potency. However, the three component model predicts that the magnitude of increase in biological potency will be a small fraction of the proportional increase in affinity. With typical initial conditions a two-fold increase in hormone affinity for its receptor is predicted to result in only a 33% increase in biological response. Under the same conditions an 11-fold increase in hormone affinity for receptor would be needed to produce a two-fold increase in biological potency. Some currently used bioassay systems may be unrecognized three component systems and gross errors in biopotency estimates will result if the effect of binding protein is not calculated. An algorithm derived from the three component model is used to predict changes in biological response after addition of binding protein to in vitro systems. The algorithm is tested by application to a published data set from an experimental study in an in vitro system (Lim et al., 1990, Endocrinology 127, 1287-1291). Predicted changes show good agreement (within 8%) with experimental observations.
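The sub-proportional potency increase described in this abstract can be reproduced with a small equilibrium calculation: solve the hormone mass balance for the free concentration, then read off receptor occupancy. The concentrations and affinity constants below are hypothetical (in arbitrary consistent units, chosen so the receptor sits near half-occupancy) and are not the paper's values:

```python
def free_hormone(h_tot, r_tot, b_tot, ka_r, ka_b):
    """Solve the mass balance for free hormone h by bisection:
    h_tot = h + r_tot*ka_r*h/(1+ka_r*h) + b_tot*ka_b*h/(1+ka_b*h)."""
    def excess(h):
        bound_r = r_tot * ka_r * h / (1.0 + ka_r * h)  # receptor-bound hormone
        bound_b = b_tot * ka_b * h / (1.0 + ka_b * h)  # binding-protein-bound
        return h + bound_r + bound_b - h_tot

    lo, hi = 0.0, h_tot
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def receptor_occupancy(h_tot, r_tot, b_tot, ka_r, ka_b):
    h = free_hormone(h_tot, r_tot, b_tot, ka_r, ka_b)
    return ka_r * h / (1.0 + ka_r * h)  # fractional occupancy as response proxy

# Hypothetical total hormone, receptor, and binding-protein levels
H_TOT, R_TOT, B_TOT, KA_B = 1.0, 0.01, 10.0, 1.0
base = receptor_occupancy(H_TOT, R_TOT, B_TOT, ka_r=10.0, ka_b=KA_B)
doubled = receptor_occupancy(H_TOT, R_TOT, B_TOT, ka_r=20.0, ka_b=KA_B)

# Doubling receptor affinity increases the response by far less than two-fold,
# because the binding protein buffers free hormone and the receptor saturates
print(round(doubled / base, 2))
```

With these illustrative numbers the response ratio comes out well below 2, in line with the qualitative prediction of the three component model.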
Abstract:
Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Finance from the NOVA – School of Business and Economics
Abstract:
We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more sophisticated approaches such as those using time varying coefficient models. We also provide evidence on which sets of predictors are relevant for forecasting in each period.
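The forgetting-factor recursion at the heart of dynamic model averaging can be sketched with two toy forecasting models. The series, candidate models, and tuning constants below are all hypothetical, and a fixed forecast-error variance replaces the full predictive densities:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated inflation-like series whose dynamics change halfway through
T = 300
y = np.concatenate([
    0.8 + rng.normal(0.0, 0.3, 150),                          # stable regime
    0.8 + 0.02 * np.arange(150) + rng.normal(0.0, 0.3, 150),  # drifting regime
])

alpha = 0.95  # forgetting factor: recent predictive fit dominates the weights
sigma = 0.4   # assumed forecast-error standard deviation

w = np.array([0.5, 0.5])
weights_path = np.empty((T - 1, 2))
for t in range(1, T):
    # Two toy candidate models: "previous value" and "historical mean"
    f = np.array([y[t - 1], y[:t].mean()])
    # Forgetting-factor update of model probabilities (the core DMA recursion)
    lik = np.exp(-0.5 * ((y[t] - f) / sigma) ** 2)
    w = (w ** alpha) * lik
    w = w / w.sum()
    weights_path[t - 1] = w

# The historical-mean model dominates in the stable regime; once the series
# starts drifting, the weight shifts to the previous-value model
print(weights_path[50].round(2), weights_path[-1].round(2))
```

The exponent `alpha` is what lets the entire forecasting model change over time: past predictive performance is discounted geometrically, so the weights track the currently best-fitting model rather than averaging over the whole sample.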
Abstract:
This paper employs an unobserved component model that incorporates a set of economic fundamentals to obtain the Euro-Dollar permanent equilibrium exchange rates (PEER) for the period 1975Q1 to 2008Q4. The results show that for most of the sample period, the Euro-Dollar exchange rate closely followed the values implied by the PEER. The only significant deviations from the PEER occurred in the years immediately before and after the introduction of the single European currency. The forecasting exercise shows that incorporating economic fundamentals provides a better long-run exchange rate forecasting performance than a random walk process.
Abstract:
Natural selection is typically exerted at some specific life stages. If natural selection takes place before a trait can be measured, using conventional models can cause wrong inference about population parameters. When the missing data process relates to the trait of interest, a valid inference requires explicit modeling of the missing process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missing process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls Tyto alba, our model indicates that the missing individuals would display large black spots; and we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool to correctly estimate the magnitude of both natural selection and additive genetic variance.
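The bias that motivates the shared parameter model can be demonstrated with a small simulation: when being observed depends on the genetic effect itself, naive estimates from the observed animals alone understate the phenotypic variance. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 20_000
g = rng.normal(0.0, 1.0, n)            # additive genetic effects, variance 1
trait = g + rng.normal(0.0, 1.0, n)    # phenotype = genetic + environmental

# Missingness depends on the genetic effect (selection acts before the trait
# is expressed): individuals with high g are more likely to be observed
p_obs = 1.0 / (1.0 + np.exp(-1.5 * g))
observed = rng.random(n) < p_obs

# Variance estimated from observed individuals only is biased downward --
# the bias the shared parameter model is designed to remove
print(round(trait.var(), 2), round(trait[observed].var(), 2))
```

The logistic missingness model sharing the genetic effect `g` with the trait model mirrors the structure described in the abstract, although the paper fits the joint model with integrated nested Laplace approximations rather than by simulation.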
Abstract:
The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375-381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727-732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions, and in this way, produce a global vision of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e. Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made on the unknown parameters in this problem.
Abstract:
Purpose: To assess time trends of testicular cancer (TC) mortality in Spain for the period 1985-2019 for age groups 15-74 years old through a Bayesian age-period-cohort (APC) analysis. Methods: A Bayesian age-drift model has been fitted to describe trends. Projections for 2005-2019 have been calculated by means of an autoregressive APC model. Prior precision for these parameters has been selected through evaluation of an adaptive precision parameter, and 95% credible intervals (95% CRI) have been obtained for each model parameter. Results: A decrease of -2.41% (95% CRI: -3.65%; -1.13%) per year has been found for TC mortality rates in age groups 15-74 during 1985-2004, whereas mortality showed a lower annual decrease when the data were restricted to age groups 15-54 (-1.18%; 95% CRI: -2.60%; -0.31%). During 2005-2019, TC mortality is expected to decrease by 2.30% per year for men younger than 35, whereas a leveling off of TC mortality rates is expected for men older than 35. Conclusions: A Bayesian approach should be recommended to describe and project time trends for diseases with a low number of cases. Through this model it has been assessed that management of TC and advances in therapy led to a decreasing trend of TC mortality during the period 1985-2004, whereas a leveling off of these trends can be expected during 2005-2019 among men older than 35.
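The annual percent changes quoted in this abstract follow directly from the drift coefficient of an age-drift model, which adds a common linear period trend to the age effects on the log rate scale. The drift value below is illustrative, back-solved from the reported -2.41% per year rather than taken from the paper:

```python
import math

# Age-drift model: log rate = age effect + drift * period, so a one-year step
# multiplies the rate by exp(drift), giving APC% = 100 * (exp(drift) - 1)
def annual_percent_change(drift):
    return 100.0 * (math.exp(drift) - 1.0)

# Hypothetical drift coefficient consistent with the reported -2.41% per year
print(round(annual_percent_change(-0.0244), 2))  # → -2.41
```

For small drifts the percent change is close to 100 times the drift itself, which is why APC estimates are often read directly off the log-linear trend coefficient.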