Abstract:
Scandals of selective reporting of clinical trial results by pharmaceutical firms have underlined the need for more transparency in clinical trials. We provide a theoretical framework which reproduces incentives for selective reporting and yields three key implications concerning regulation. First, a compulsory clinical trial registry complemented by a voluntary clinical trial results database can implement full transparency (the existence of all trials as well as their results is known). Second, full transparency comes at a price: it has a deterrence effect on the incentives to conduct clinical trials, as it reduces the firms' gains from trials. Third, in principle, a voluntary clinical trial results database without a compulsory registry is a superior regulatory tool; but we provide some qualified support for additional compulsory registries when medical decision-makers cannot correctly anticipate the drug companies' decisions on whether to conduct trials. Keywords: pharmaceutical firms, strategic information transmission, clinical trials, registries, results databases, scientific knowledge. JEL classification: D72, I18, L15.
Abstract:
Among the largest resources for biological sequence data are the expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting such regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains performance in detecting coding sequences.
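To illustrate the kind of machinery involved, here is a minimal sketch of Viterbi decoding over a toy three-state HMM (non-coding, coding, and a sequencing-error state); the state set and the transition and emission probabilities are purely illustrative assumptions, not the parameterization used in the paper.

```python
# Toy HMM decoding sketch (illustrative only, not the authors' model).
import numpy as np

STATES = ["noncoding", "coding", "error"]          # hypothetical state set
SYMBOLS = {"A": 0, "C": 1, "G": 2, "T": 3}

# Illustrative parameters: the "coding" state has a biased composition
# (a stand-in for codon-usage bias); the "error" state absorbs oddities.
log_trans = np.log(np.array([
    [0.90, 0.09, 0.01],    # from noncoding
    [0.05, 0.90, 0.05],    # from coding
    [0.10, 0.80, 0.10],    # from error (tends to return to coding)
]))
log_emit = np.log(np.array([
    [0.25, 0.25, 0.25, 0.25],   # noncoding: uniform over A, C, G, T
    [0.20, 0.30, 0.35, 0.15],   # coding: biased composition
    [0.25, 0.25, 0.25, 0.25],   # error: uninformative
]))
log_start = np.log(np.array([0.6, 0.3, 0.1]))

def viterbi(seq):
    """Return the most probable state path for an observed nucleotide string."""
    obs = [SYMBOLS[c] for c in seq]
    n, k = len(obs), len(STATES)
    dp = np.full((n, k), -np.inf)      # best log-probability ending in each state
    ptr = np.zeros((n, k), dtype=int)  # back-pointers for path reconstruction
    dp[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        for j in range(k):
            scores = dp[t - 1] + log_trans[:, j]
            ptr[t, j] = int(np.argmax(scores))
            dp[t, j] = scores[ptr[t, j]] + log_emit[j, obs[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(ptr[t, path[-1]])
    return [STATES[s] for s in reversed(path)]

print(viterbi("ACGGGTGCCA"))
```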
Abstract:
The objective of this paper is to correct and improve the results obtained by Van der Ploeg (1984a, 1984b) and used in the theoretical literature on feedback stochastic optimal control with constant exogenous risk aversion (see Jacobson, 1973; Karp, 1987; and Whittle, 1981, 1989, 1990, among others) or in the classic setting of risk-neutral decision-makers (see Chow, 1973, 1976a, 1976b, 1977, 1978, 1981, 1993). More realistic and attractive, the new approach is set in the context of time-varying, endogenous risk aversion that is under the control of the decision-maker. It has strong qualitative implications for the agent's optimal policy over the entire planning horizon.
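For orientation, an illustrative statement (in generic notation, not the paper's exact formulation) of the constant-risk-sensitivity linear-quadratic criterion in the Jacobson–Whittle tradition that the paper generalizes:

```latex
J(\theta) \;=\; -\frac{2}{\theta}\,
  \log \mathbb{E}\!\left[\exp\!\left(-\frac{\theta}{2}
  \sum_{t=0}^{T}\bigl(x_t' Q\, x_t + u_t' R\, u_t\bigr)\right)\right],
\qquad
\lim_{\theta \to 0} J(\theta) \;=\;
  \mathbb{E}\!\left[\sum_{t=0}^{T}\bigl(x_t' Q\, x_t + u_t' R\, u_t\bigr)\right],
```

where the parameter θ indexes the attitude toward risk and the risk-neutral expected-quadratic-cost case is recovered as θ → 0; the new approach lets this parameter vary over time and places it under the decision-maker's control.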
Abstract:
Landscape classification tackles issues related to the representation and analysis of continuous and variable ecological data. In this study, a methodology is developed to define topo-climatic landscapes (TCLs) in the north-west of Catalonia (north-east of the Iberian Peninsula). TCLs describe the ecological behaviour of a landscape in terms of topography, physiognomy and climate, which comprise the main drivers of an ecosystem. The selected variables are derived from different sources, such as remote sensing and climatic atlases. The proposed methodology combines unsupervised iterative cluster classification with a supervised fuzzy classification. As a result, 28 TCLs were identified for the study area, which may be differentiated in terms of vegetation physiognomy and vegetation altitudinal range type. Furthermore, a hierarchy among TCLs is established, enabling the merging of clusters and allowing for changes of scale. Through the topo-climatic landscape map, managers may identify patches with similar environmental conditions and, at the same time, assess the uncertainty involved.
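As an illustration of how such a combined workflow might be sketched (synthetic data and a hypothetical variable stack; not the authors' implementation), candidate classes can be derived by iterative k-means clustering, and each cell can then be given fuzzy memberships to the resulting class centroids, with the strongest membership yielding the crisp map and the residual spread a simple uncertainty measure.

```python
# Illustrative sketch: k-means classes + fuzzy memberships to fixed centroids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-cell variables: e.g. elevation, slope, NDVI, temperature, rainfall.
cells = rng.normal(size=(5000, 5))

k = 28                                   # number of candidate landscape classes
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(cells)
centroids = km.cluster_centers_

def fuzzy_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of each sample to fixed class centers."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)  # distances
    d = np.maximum(d, 1e-12)                                         # avoid division by zero
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)                                   # rows sum to 1

u = fuzzy_memberships(cells, centroids)
hard_label = u.argmax(axis=1)            # crisp topo-climatic class per cell
uncertainty = 1.0 - u.max(axis=1)        # high where membership is ambiguous
print(u.shape, hard_label[:10], round(float(uncertainty.mean()), 3))
```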
Abstract:
In this paper, we quantitatively assess the welfare implications of alternative public education spending rules. To this end, we employ a dynamic stochastic general equilibrium model in which human capital externalities and public education expenditures, financed by distorting taxes, enhance the productivity of private education choices. We allow public education spending, as a share of output, to respond to various aggregate indicators in an attempt to minimize the market imperfection due to human capital externalities. We also expose the economy to varying degrees of uncertainty via changes in the variance of total factor productivity shocks. Our results indicate that, in the face of increasing aggregate uncertainty, active policy can significantly outperform passive policy (i.e. maintaining a constant public education to output ratio), but only when the policy instrument is successful in smoothing the growth rate of human capital.
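A stylized example (generic notation, not the paper's exact specification) of the kind of feedback rule considered, in which the public education spending share responds to deviations of an aggregate indicator from its long-run value:

```latex
\frac{g^{e}_{t}}{y_{t}} \;=\; \bar{s} \;+\; \phi\,\bigl(z_{t} - \bar{z}\bigr),
```

where z_t is the chosen aggregate indicator (for instance, the growth rate of human capital), \bar{s} the long-run spending share, and φ = 0 corresponds to the passive constant-share policy.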
Abstract:
These notes try to clarify some discussions on the formulation of individual intertemporal behavior under adaptive learning in representative-agent models. First, we discuss two suggested approaches and related issues in the context of a simple consumption-saving model. Second, we show that the analysis of learning in the New Keynesian monetary policy model based on “Euler equations” provides a consistent and valid approach.
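For concreteness, the “Euler equation” approach refers to specifications such as the standard consumption Euler equation with the rational expectation replaced by the agents' subjective forecast formed under the learning rule; the notation below is illustrative, not the notes' exact formulation:

```latex
u'(c_t) \;=\; \beta\,\hat{\mathbb{E}}_t\bigl[(1 + r_{t+1})\,u'(c_{t+1})\bigr],
```

where \hat{\mathbb{E}}_t denotes the boundedly rational, adaptively updated forecast rather than the rational expectations operator.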
Abstract:
The unconditional expectation of social welfare is often used to assess alternative macroeconomic policy rules in applied quantitative research. It is shown that it is generally possible to derive a linear-quadratic problem that approximates the exact non-linear problem in which the unconditional expectation of the objective is maximised and the steady state is distorted. Thus, the measure of policy performance is a linear combination of second moments of economic variables, which is relatively easy to compute numerically and can be used to rank alternative policy rules. The approach is applied to a simple Calvo-type model under various monetary policy rules.
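A heuristic illustration (generic notation, not the paper's derivation) of why the performance measure reduces to second moments: expanding the objective to second order around the unconditional mean gives

```latex
\mathbb{E}\,U(x_t) \;\approx\; U(\mu) \;+\; \tfrac{1}{2}\,
  \operatorname{tr}\!\bigl(\nabla^{2}U(\mu)\,\operatorname{Var}(x_t)\bigr),
\qquad \mu \equiv \mathbb{E}\,x_t,
```

so ranking alternative policy rules by the unconditional expectation of the objective amounts to ranking weighted sums of unconditional second moments of the economic variables.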
Abstract:
This paper investigates underlying changes in the UK economy over the past thirty-five years using a small open economy DSGE model. Using Bayesian analysis, we find that UK monetary policy, nominal price rigidity and exogenous shocks are all subject to regime shifts. A model incorporating these changes is used to estimate the realised monetary policy and derive the optimal monetary policy for the UK. This allows us to assess the effectiveness of the realised policy in terms of stabilising economic fluctuations and, in turn, to provide an indication of whether there is room for monetary authorities to further improve their policies.
Abstract:
The objective of this paper is to identify empirically the logic behind short-term interest rate setting.
Abstract:
We consider negotiations selecting one-dimensional policies. Individuals have single-peaked preferences, and they are impatient. Decisions arise from a bargaining game with random proposers and (super) majority approval, ranging from simple majority up to unanimity. The existence and uniqueness of a stationary subgame perfect equilibrium are established, and its explicit characterization is provided. We supply an explicit formula to determine the unique alternative that prevails, as impatience vanishes, for each majority. As an application, we examine the efficiency of majority rules. For symmetric distributions of peaks, unanimity is the unanimously preferred majority rule. For asymmetric populations, the rules maximizing social surplus are characterized.
Abstract:
The objective of this study is the empirical identification of the monetary policy rules pursued in individual countries of the EU before and after the launch of the European Monetary Union. In particular, we have estimated an augmented version of the Taylor rule (TR) for 25 EU countries over two periods (1992-1998, 1999-2006). While single-equation estimation methods have been used to identify the policy rules of individual central banks, a dynamic panel setting has been employed for the rule of the European Central Bank. We have found that most central banks did follow some interest rate rule, but its form usually differed from the original TR (in which the domestic interest rate responds only to the domestic inflation rate and the output gap). Crucial features of the policy rules in many countries were interest rate smoothing and a response to the foreign interest rate. Responses to domestic macroeconomic variables were missing from the rules of countries with inflexible exchange rate regimes, whose rules consisted of mimicking foreign interest rates. While we found a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices was generally negligible. The Taylor principle (the response of interest rates to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) was confirmed only in large economies and in economies troubled by unsustainable inflation rates. Finally, the deviations of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks (these deviations often coincided with turbulent periods).
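One common augmented specification with interest rate smoothing and a foreign-rate term is shown below for orientation (illustrative notation; the exact set of regressors in the study differs by country):

```latex
i_t \;=\; \rho\, i_{t-1} \;+\; (1-\rho)\,\bigl[\bar{r} + \pi^{*}
  + \beta\,(\pi_t - \pi^{*}) + \gamma\,\tilde{y}_t\bigr]
  \;+\; \delta\, i^{f}_{t} \;+\; \varepsilon_t,
```

where ρ captures interest rate smoothing, \tilde{y}_t the output gap, i^f_t the foreign interest rate, and the Taylor principle requires β > 1.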
Abstract:
This paper has three objectives. First, it aims at revealing the logic of interest rate setting pursued by the monetary authorities of 12 new EU members. Using an estimation of an augmented Taylor rule, we find that this setting was not always consistent with the official monetary policy. Second, we seek to shed light on the inflation process of these countries. To this end, we carry out an estimation of an open economy Phillips curve (PC). Our main finding is that inflation rates were driven not only by backward persistence but also by a forward-looking component. Finally, we assess the viability of existing monetary arrangements for price stability. The analysis of the conditional inflation variance obtained from a GARCH estimation of the PC is used for this purpose. We conclude that inflation targeting is preferable to an exchange rate peg because it allowed the inflation rate to decrease and anchored its volatility.
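A stylized example (illustrative notation, not the paper's exact specification) of an open economy hybrid Phillips curve with GARCH(1,1) errors, whose conditional variance h_t is the object compared across monetary arrangements:

```latex
\pi_t \;=\; c + \alpha\,\pi_{t-1} + \beta\,\mathbb{E}_t\,\pi_{t+1}
  + \gamma\, x_t + \delta\,\Delta e_t + \varepsilon_t,
\qquad \varepsilon_t \mid \mathcal{I}_{t-1} \sim N(0,\, h_t),
\qquad h_t \;=\; \omega + a\,\varepsilon_{t-1}^{2} + b\,h_{t-1},
```

where x_t is a measure of real activity or marginal cost, \Delta e_t an open-economy term (e.g. the change in the exchange rate or import prices), α captures backward persistence, and β the forward-looking component.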
Abstract:
The ability to model biodiversity patterns is of prime importance in this era of severe environmental crisis. Species assemblages along environmental gradients are subject to the interplay of biotic interactions in addition to abiotic environmental filtering. Accounting for complex biotic interactions across a wide array of species remains challenging. Here, we propose to use food web models that can infer the potential interaction links between species as a constraint in species distribution models. Using a plant-herbivore (butterfly) interaction dataset, we demonstrate that this combined approach is able to improve both species distribution and community forecasts. Most importantly, this combined approach is particularly useful for modelling more generalist species that have multiple potential interaction links, for which gaps in the literature may be recurrent. Our combined approach points to a promising way forward for modelling the spatial variation of entire species interaction networks. Our work has implications for studies of range-shifting species and invasive species biology, where it may be unknown how a given biota might interact with a potential invader or under a future climate.
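A minimal sketch of the general idea (synthetic data and hypothetical variable names; not the authors' models): the herbivore's distribution model receives, besides climate, the predicted suitability of its potential host plant as an additional predictor.

```python
# Illustrative sketch: climate-only SDM vs. an SDM constrained by host-plant suitability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
climate = rng.normal(size=(n, 3))                    # e.g. temperature, rainfall, elevation

# Stand-in "truth": the host plant tracks climate; the herbivore needs both
# suitable climate and the host being present.
host_presence = (climate[:, 0] + 0.5 * climate[:, 1] + rng.normal(0, 0.5, n)) > 0
herb_presence = ((climate[:, 0] > -0.5) & host_presence &
                 (rng.random(n) > 0.2)).astype(int)

# Step 1: plant SDM from climate alone, yielding a host-suitability surface.
plant_sdm = LogisticRegression().fit(climate, host_presence.astype(int))
host_suitability = plant_sdm.predict_proba(climate)[:, 1]

# Step 2: herbivore SDM, climate-only versus climate + potential-host constraint.
climate_only = LogisticRegression().fit(climate, herb_presence)
constrained = LogisticRegression().fit(
    np.column_stack([climate, host_suitability]), herb_presence)

print("climate-only accuracy    :", climate_only.score(climate, herb_presence))
print("host-constrained accuracy:",
      constrained.score(np.column_stack([climate, host_suitability]), herb_presence))
```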