118 results for Bayesian shared component model
Abstract:
Threshold error correction models are used to analyse the term structure of interest rates. The paper develops and uses a generalisation of existing models that encompasses both the Band and Equilibrium threshold models of Balke and Fomby (1997, Threshold cointegration, Int Econ Rev 38(3):627–645) and estimates this model using a Bayesian approach. Evidence is found for threshold effects in pairs of longer rates but not in pairs of short rates. The Band threshold model is supported in preference to the Equilibrium model.
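The Band threshold mechanism can be illustrated with a minimal simulation (a hypothetical sketch with made-up parameter values, not the paper's estimated model): in a Band-TAR, the spread between two rates follows a random walk inside the threshold band and mean-reverts only once it leaves the band.

```python
import random

def simulate_band_tar(n=2000, band=1.0, rho=0.5, sigma=0.25, seed=0):
    """Simulate a Band-TAR error-correction term: inside [-band, band]
    the spread is a random walk; outside, it is pulled back toward the
    nearest band edge at rate rho. All parameter values are illustrative."""
    rng = random.Random(seed)
    z, path = 0.0, []
    for _ in range(n):
        if z > band:        # above the band: revert toward the upper edge
            drift = -rho * (z - band)
        elif z < -band:     # below the band: revert toward the lower edge
            drift = -rho * (z + band)
        else:               # inside the band: no equilibrium adjustment
            drift = 0.0
        z = z + drift + rng.gauss(0.0, sigma)
        path.append(z)
    return path

path = simulate_band_tar()
# Adjustment outside the band keeps the spread from wandering far,
# even though it behaves as a pure random walk inside the band.
largest_excursion = max(abs(z) for z in path)
```

The Equilibrium variant differs only in the drift rule: outside the band the process is pulled back toward the centre (zero) rather than toward the band edge.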
Abstract:
We introduce a modified conditional logit model that takes account of uncertainty associated with mis-reporting in revealed preference experiments estimating willingness-to-pay (WTP). Like Hausman et al. [Journal of Econometrics (1988) Vol. 87, pp. 239-269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non-pesticide food. We compare the results of our model with the Hausman model. WTP estimates are produced for different groups of consumers and we find that modified estimates of WTP, that take account of mis-reporting, are substantially revised downwards. We find a significant proportion of respondents mis-reporting in favour of the non-pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that oil prices are the long-run driver of Brazilian sugar prices, that the adjustment of sugar and ethanol prices to oil prices is nonlinear, and that adjustment between ethanol and sugar prices is linear.
Abstract:
The objective of this paper is to revisit the von Liebig hypothesis by reexamining five samples of experimental data and by applying to them recent advances in Bayesian techniques. The samples were published by Hexem and Heady, as described in a later section. Prior to outlining the estimation strategy, we discuss the intuition underlying our approach and, briefly, the literature on which it is based. We present an algorithm for the basic von Liebig formulation and demonstrate its application using simulated data (table 1). We then discuss the modifications to the basic model that facilitate estimation of a von Liebig frontier and demonstrate the extended algorithm using simulated data (table 2). We then explore, empirically, the relationships between limiting water and nitrogen in the Hexem and Heady corn samples and compare the results of the two formulations (table 3). Finally, some conclusions and suggestions for further research are offered.
Abstract:
It is well established that crop production is inherently vulnerable to variations in the weather and climate. More recently the influence of vegetation on the state of the atmosphere has been recognized. The seasonal growth of crops can influence the atmosphere and have local impacts on the weather, which in turn affects the rate of seasonal crop growth and development. Considering the coupled nature of the crop-climate system, and the fact that a significant proportion of land is devoted to the cultivation of crops, important interactions may be missed when studying crops and the climate system in isolation, particularly in the context of land use and climate change. To represent the two-way interactions between seasonal crop growth and atmospheric variability, we integrate a crop model developed specifically to operate at large spatial scales (General Large Area Model for annual crops) into the land surface component of a global climate model (GCM; HadAM3). In the new coupled crop-climate model, the simulated environment (atmosphere and soil states) influences growth and development of the crop, while simultaneously the temporal variations in crop leaf area and height across its growing season alter the characteristics of the land surface that are important determinants of surface fluxes of heat and moisture, as well as other aspects of the land-surface hydrological cycle. The coupled model realistically simulates the seasonal growth of a summer annual crop in response to the GCM's simulated weather and climate. The model also reproduces the observed relationship between seasonal rainfall and crop yield. The integration of a large-scale single crop model into a GCM, as described here, represents a first step towards the development of fully coupled crop and climate models. Future development priorities and challenges related to coupling crop and climate models are discussed.
Abstract:
Pollinators provide essential ecosystem services, and declines in some pollinator communities around the world have been reported. Understanding the fundamental components defining these communities is essential if conservation and restoration are to be successful. We examined the structure of plant-pollinator communities in a dynamic Mediterranean landscape, comprising a mosaic of post-fire regenerating habitats, and which is a recognized global hotspot for bee diversity. Each community was characterized by a highly skewed species abundance distribution, with a few dominant and many rare bee species, and was consistent with a log series model indicating that a few environmental factors govern the community. Floral community composition, the quantity and quality of forage resources present, and the geographic locality organized bee communities at various levels: (1) The overall structure of the bee community (116 species), as revealed through ordination, was dependent upon nectar resource diversity (defined as the variety of nectar volume-concentration combinations available), the ratio of pollen to nectar energy, floral diversity, floral abundance, and post-fire age. (2) Bee diversity, measured as species richness, was closely linked to floral diversity (especially of annuals), nectar resource diversity, and post-fire age of the habitat. (3) The abundance of the most common species was primarily related to post-fire age, grazing intensity, and nesting substrate availability. Ordination models based on age-characteristic post-fire floral community structure explained 39-50% of overall variation observed in bee community structure. Cluster analysis showed that all the communities shared a high degree of similarity in their species composition (27-59%); however, the geographical location of sites also contributed a smaller but significant component to bee community structure. 
We conclude that floral resources act in specific and previously unexplored ways to modulate the diversity of the local geographic species pool, with specific disturbance factors, superimposed upon these patterns, mainly affecting the dominant species.
Abstract:
Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of preference distributions. We make the case for greater use of bounded functional forms and propose the use of the marginal likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment on GM food types, which provides insights into the recent WTO dispute between the EU and the US, Canada, and Argentina, and into whether labelling and trade regimes should be based on the production process or product composition.
Abstract:
Diebold and Lamb (1997) argue that, since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem but in so doing ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased, and demonstrate that Bayesian estimation which fully accounts for the error structure is preferable.
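The moments problem that motivates this line of work is easy to see by simulation (an illustrative sketch, not the paper's estimator): the ratio of two independent standard normal variates is standard Cauchy, so a quantity formed as a ratio of two estimated coefficients can have no finite mean or variance.

```python
import random

def ratio_draws(n, seed=0):
    """Draw n ratios of independent standard normals (a standard
    Cauchy variate), mimicking a long-run elasticity constructed as
    a ratio of two normally distributed coefficient estimates."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) / rng.gauss(0, 1) for _ in range(n)]

draws = ratio_draws(100_000)
# Heavy tails: a non-trivial share of draws is very large in magnitude,
# so the sample mean is dominated by a few outliers and there is no
# population mean for it to converge to.
tail_share = sum(abs(x) > 10 for x in draws) / len(draws)
```

For a standard Cauchy, the probability of exceeding 10 in absolute value is about 6%, far heavier than any normal tail, which is why moment-based summaries of such a ratio are unreliable.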
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on bivariate observations of undesirable events and signs of therapeutic benefit. The methods generalize earlier approaches taking into account only the undesirable outcomes. Logistic regression models are used to model the two responses, which are both assumed to take a binary form. A prior distribution for the unknown model parameters is suggested and an optional safety constraint can be included. Gain functions to be maximized are formulated in terms of accurate estimation of the limits of a therapeutic window or optimal treatment of the next cohort of subjects, although the approach could be applied to achieve any of a wide variety of objectives. The designs introduced are illustrated through simulation and retrospective implementation to a completed dose-escalation study. Copyright © 2006 John Wiley & Sons, Ltd.
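The general flavour of such Bayesian dose-escalation procedures can be sketched with a minimal single-endpoint example (hypothetical dose grid, prior, and target; not the paper's bivariate procedure): maintain a discrete posterior over a logistic toxicity-curve parameter, update it after each observed outcome, and recommend the dose whose posterior toxicity probability is closest to a target level.

```python
import math

doses = [1.0, 2.0, 4.0, 8.0]                 # hypothetical dose levels
grid = [0.2 + 0.05 * i for i in range(33)]   # grid over slope parameter a
prior = [1.0 / len(grid)] * len(grid)        # flat prior (illustrative)

def p_tox(dose, a):
    """One-parameter logistic dose-toxicity curve (illustrative form)."""
    return 1.0 / (1.0 + math.exp(-(a * math.log(dose) - 2.0)))

def update(posterior, dose, toxic):
    """Bayes update of the discrete posterior after one binary outcome."""
    lik = [p_tox(dose, a) if toxic else 1.0 - p_tox(dose, a) for a in grid]
    w = [p * l for p, l in zip(posterior, lik)]
    total = sum(w)
    return [x / total for x in w]

def next_dose(posterior, target=0.2):
    """Recommend the dose whose posterior mean toxicity is nearest target."""
    def post_mean_tox(d):
        return sum(p * p_tox(d, a) for p, a in zip(posterior, grid))
    return min(doses, key=lambda d: abs(post_mean_tox(d) - target))

post = update(prior, 1.0, toxic=False)  # first subject: lowest dose, no event
recommended = next_dose(post)
```

A safety constraint of the kind mentioned in the abstract would simply restrict the candidate doses in `next_dose`, e.g. to those with posterior toxicity probability below a ceiling; the bivariate methods extend this by modelling benefit alongside toxicity.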
Abstract:
Recently, various approaches have been suggested for dose-escalation studies based on observations of both undesirable events and evidence of therapeutic benefit. This article concerns a Bayesian approach to dose-escalation that requires the user to make numerous design decisions relating to the number of doses to make available, the choice of the prior distribution, the imposition of safety constraints and stopping rules, and the criteria by which the design is to be optimized. Results are presented of a substantial simulation study conducted to investigate the influence of some of these factors on the safety and the accuracy of the procedure, with a view toward providing general guidance for investigators conducting such studies. The Bayesian procedures evaluated use logistic regression to model the two responses, which are both assumed to be binary. The simulation study is based on features of a recently completed study of a compound with potential benefit to patients suffering from inflammatory diseases of the lung.
Abstract:
This study presents a new simple approach for combining empirical with raw (i.e., not bias corrected) coupled model ensemble forecasts in order to make more skillful interval forecasts of ENSO. A Bayesian normal model has been used to combine empirical and raw coupled model December SST Niño-3.4 index forecasts started at the end of the preceding July (5-month lead time). The empirical forecasts were obtained by linear regression between December and the preceding July Niño-3.4 index values over the period 1950–2001. Coupled model ensemble forecasts for the period 1987–99 were provided by ECMWF, as part of the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) project. Empirical and raw coupled model ensemble forecasts alone have similar mean absolute error forecast skill scores, compared to climatological forecasts, of around 50% over the period 1987–99. The combined forecast gives an increased skill score of 74% and provides a well-calibrated and reliable estimate of forecast uncertainty.
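The normal-combination idea can be sketched in a few lines (with illustrative numbers, not the DEMETER data): treat the empirical regression forecast as a prior and the coupled-model ensemble mean as a noisy observation; the combined forecast is then the precision-weighted average of the two means, with a variance given by the sum of the precisions.

```python
def combine_normal(prior_mean, prior_var, obs_mean, obs_var):
    """Bayesian normal-normal combination: the posterior mean is the
    precision-weighted average of the two means, and the posterior
    precision is the sum of the two precisions."""
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs_mean)
    return post_mean, post_var

# Hypothetical December Niño-3.4 anomaly forecasts (deg C): the empirical
# regression says +0.8 with variance 0.25; the ensemble mean says +1.4
# with the same variance.
mean, var = combine_normal(0.8, 0.25, 1.4, 0.25)
# With equal variances the result is the simple average of the two
# means, and the combined variance is halved.
```

The reduced combined variance is what narrows the forecast interval, and it is well calibrated only when both input variances honestly reflect each forecast's error, which is where the bias correction and skill assessment in the study come in.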
Abstract:
MOTIVATION: The accurate prediction of the quality of 3D models is a key component of successful protein tertiary structure prediction methods. Currently, clustering- or consensus-based Model Quality Assessment Programs (MQAPs) are the most accurate methods for predicting 3D model quality; however, they are often CPU-intensive, as they carry out multiple structural alignments in order to compare numerous models. In this study, we describe ModFOLDclustQ - a novel MQAP that compares 3D models of proteins without the need for CPU-intensive structural alignments, by utilising the Q measure for model comparisons. The ModFOLDclustQ method is benchmarked against the top established methods in terms of both accuracy and speed. In addition, the ModFOLDclustQ scores are combined with those from our older ModFOLDclust method to form a new method, ModFOLDclust2, that aims to provide increased prediction accuracy with negligible computational overhead. RESULTS: The ModFOLDclustQ method is competitive with leading clustering-based MQAPs for the prediction of global model quality, yet it is up to 150 times faster than the previous version of the ModFOLDclust method at comparing models of small proteins (<60 residues) and over 5 times faster at comparing models of large proteins (>800 residues). Furthermore, a significant improvement in accuracy can be gained over the previous clustering-based MQAPs by combining the scores from ModFOLDclustQ and ModFOLDclust to form the new ModFOLDclust2 method, with little impact on the overall time taken for each prediction. AVAILABILITY: The ModFOLDclustQ and ModFOLDclust2 methods are available to download from: http://www.reading.ac.uk/bioinf/downloads/ CONTACT: l.j.mcguffin@reading.ac.uk.
Abstract:
Bayesian decision procedures have already been proposed for and implemented in Phase I dose-escalation studies in healthy volunteers. The procedures have been based on pharmacokinetic responses reflecting the concentration of the drug in blood plasma and are conducted to learn about the dose-response relationship while avoiding excessive concentrations. However, in many dose-escalation studies, pharmacodynamic endpoints such as heart rate or blood pressure are observed, and it is these that should be used to control dose-escalation. These endpoints introduce additional complexity into the modeling of the problem relative to pharmacokinetic responses. Firstly, there are responses available following placebo administrations. Secondly, the pharmacodynamic responses are related directly to measurable plasma concentrations, which in turn are related to dose. Motivated by experience of data from a real study conducted in a conventional manner, this paper presents and evaluates a Bayesian procedure devised for the simultaneous monitoring of pharmacodynamic and pharmacokinetic responses. Account is also taken of the incidence of adverse events. Following logarithmic transformations, a linear model is used to relate dose to the pharmacokinetic endpoint and a quadratic model to relate the latter to the pharmacodynamic endpoint. A logistic model is used to relate the pharmacokinetic endpoint to the risk of an adverse event.
Abstract:
We describe a Bayesian method for investigating correlated evolution of discrete binary traits on phylogenetic trees. The method fits a continuous-time Markov model to a pair of traits, seeking the best fitting models that describe their joint evolution on a phylogeny. We employ the methodology of reversible-jump (RJ) Markov chain Monte Carlo to search among the large number of possible models, some of which conform to independent evolution of the two traits, others to correlated evolution. The RJ Markov chain visits these models in proportion to their posterior probabilities, thereby directly estimating the support for the hypothesis of correlated evolution. In addition, the RJ Markov chain simultaneously estimates the posterior distributions of the rate parameters of the model of trait evolution. These posterior distributions can be used to test among alternative evolutionary scenarios to explain the observed data. All results are integrated over a sample of phylogenetic trees to account for phylogenetic uncertainty. We implement the method in a program called RJ Discrete and illustrate it by analyzing the question of whether mating system and advertisement of estrus by females have coevolved in the Old World monkeys and great apes.
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on binary measures of undesirable events and continuous measures of therapeutic benefit. The methods generalize earlier approaches where undesirable events and therapeutic benefit are both binary. A logistic regression model is used to model the binary responses, while a linear regression model is used to model the continuous responses. Prior distributions for the unknown model parameters are suggested. A gain function is discussed and an optional safety constraint is included. Copyright © 2006 John Wiley & Sons, Ltd.