942 results for equilibrium asset pricing models with latent variables


Relevance: 100.00%

Abstract:

The behavior of commodities is critical for developing and developed countries alike. This paper contributes to the empirical evidence on the co-movement and determinants of commodity prices. Using nonstationary panel methods, we document a statistically significant degree of co-movement due to a common factor. Within a Factor-Augmented VAR approach, the real interest rate and uncertainty, as postulated by a simple asset pricing model, are both found to be negatively related to this common factor. This evidence is robust to the inclusion of demand and supply shocks, both of which have a positive impact on the co-movement of commodity prices.
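As a rough illustration of the first step of such an exercise, the sketch below extracts a common factor from a synthetic panel of commodity prices by principal components, the standard factor-extraction step of a FAVAR. All data, dimensions and parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: T periods x N commodity (log) prices driven by one common factor.
T, N = 200, 12
factor = np.cumsum(rng.normal(size=T))           # persistent common factor
loadings = rng.uniform(0.5, 1.5, size=N)
panel = np.outer(factor, loadings) + rng.normal(scale=2.0, size=(T, N))

# Principal-components estimate of the common factor (first PC of the
# standardized panel), the usual first step of a FAVAR.
Z = (panel - panel.mean(0)) / panel.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
f_hat = Z @ Vt[0]                                # estimated common factor (scores)

# The factor is identified only up to sign and scale, so compare via |correlation|.
corr = np.corrcoef(f_hat, factor)[0, 1]
print(f"|correlation| between estimated and true factor: {abs(corr):.3f}")
```

In a second step the estimated factor would be stacked with observables (real interest rate, uncertainty) in a VAR.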

Relevance: 100.00%

Abstract:

This paper introduces a state-space approach to explain the dynamics of rent growth, expected returns and the price-rent ratio in housing markets. According to the present-value model, movements in the price-rent ratio should be matched by movements in expected returns and expected rent growth. The state-space framework assumes that both variables follow autoregressive processes of order one. The model is applied to the US and UK housing markets, yielding series for the latent variables given the behaviour of the price-rent ratio. Resampling techniques and bootstrapped likelihood ratios show that expected returns tend to be highly persistent compared to rent growth. The filtered expected returns are then used in a simple excess-return predictability model, which shows strong statistical evidence of predictability for the UK. Overall, the present-value model is found to have strong statistical predictability in the UK housing market.
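The filtering step can be sketched with a toy version of the model: a latent AR(1) expected return observed through a noisy (demeaned) price-rent ratio, recovered by a Kalman filter. The parameters and simulated data below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state-space model: latent expected return mu_t follows an AR(1), and the
# observed price-rent ratio loads on it with measurement noise.
phi, q, r = 0.95, 0.1**2, 0.3**2     # AR coefficient, state variance, obs variance
T = 300
mu = np.zeros(T)
for t in range(1, T):
    mu[t] = phi * mu[t-1] + rng.normal(scale=np.sqrt(q))
y = mu + rng.normal(scale=np.sqrt(r), size=T)    # observed ratio (demeaned)

# Kalman filter: recover the persistent latent component from y.
m, P = 0.0, 1.0                       # state mean and variance
filtered = np.empty(T)
for t in range(T):
    m_pred, P_pred = phi * m, phi**2 * P + q     # predict
    K = P_pred / (P_pred + r)                    # Kalman gain
    m = m_pred + K * (y[t] - m_pred)             # update
    P = (1 - K) * P_pred
    filtered[t] = m

print(f"corr(filtered, latent): {np.corrcoef(filtered, mu)[0, 1]:.3f}")
```

In the paper's setting the filtered expected return would then enter a predictive regression for excess returns.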

Relevance: 100.00%

Abstract:

Empirical studies on the determinants of industrial location typically use variables measured at the available administrative level (municipalities, counties, etc.). However, this amounts to assuming that the effects these determinants may have on the location process do not extend beyond the geographical limits of the selected site. We address the validity of this assumption by comparing results from standard count data models with those obtained by calculating the geographical scope of the spatially varying explanatory variables using a wide range of distances and alternative spatial autocorrelation measures. Our results reject the usual practice of using administrative records as covariates without making some kind of spatial correction.

Keywords: industrial location, count data models, spatial statistics
JEL classification: C25, C52, R11, R30
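One way to give a covariate a geographical scope beyond its own administrative unit, in the spirit of the paper, is to add distance-decayed values from neighbouring units within a chosen radius. The sketch below uses hypothetical coordinates and an inverse-distance decay; both the decay form and the radii are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: municipality centroids and a locally measured covariate
# (e.g. workforce), both synthetic.
n = 50
coords = rng.uniform(0, 100, size=(n, 2))        # centroid coordinates (km)
x = rng.gamma(2.0, 1.0, size=n)                  # covariate per municipality

def spatial_scope(coords, x, radius):
    """Augment each site's covariate with distance-decayed values of its
    neighbours within `radius`, instead of using the raw administrative value."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.where((d > 0) & (d <= radius), 1.0 / d, 0.0)   # inverse-distance weights
    return x + w @ x                               # own value + weighted neighbours

# Compare the covariate at several geographical scopes, as the paper varies distance.
for radius in (10, 25, 50):
    xs = spatial_scope(coords, x, radius)
    print(f"radius {radius:>2} km: mean covariate {xs.mean():.2f}")
```

The resulting covariates would then replace the raw administrative values in the count data (e.g. Poisson or negative binomial) location model.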

Relevance: 100.00%

Abstract:

This paper examines competition between generic and brand-name drugs in the regulated Spanish pharmaceutical market. A nested logit demand model is specified for the three most consumed therapeutic subgroups in Spain: statins (anticholesterol), selective serotonin reuptake inhibitors (antidepressants) and proton pump inhibitors (antiulcers). The model is estimated with instrumental variables from a panel of monthly prescription data from 1999 to 2005. The dataset distinguishes between three different levels of patients’ copayments within the prescriptions and the results show that the greater the level of insurance that the patient has (and therefore the lower the patient’s copayment), the lower the proportion of generic prescriptions made by physicians. It seems that the low level of copayment has delayed the penetration of generics into the Spanish market. Additionally, the estimation of the demand model suggests that the substitution rules and promotional efforts associated with the reference pricing system have increased generic market share, and that being among the first generic entrants has an additional positive effect.
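The nested logit structure behind such a demand model can be sketched directly from its choice probabilities: a product's share is the probability of choosing its nest (via the inclusive value) times its share within the nest. The products, utilities and nesting below are invented for illustration, and the sketch omits the outside good that a full demand model would include.

```python
import numpy as np

# Nested logit choice probabilities. Hypothetical nests: each molecule forms
# a nest containing a brand-name and a generic version.
def nested_logit_probs(utilities, nests, lam):
    """utilities: dict product -> mean utility; nests: dict nest -> products;
    lam in (0, 1] governs within-nest substitution (lam=1 is plain logit)."""
    probs = {}
    iv = {g: np.log(sum(np.exp(utilities[j] / lam) for j in prods))
          for g, prods in nests.items()}                      # inclusive values
    denom = sum(np.exp(lam * v) for v in iv.values())
    for g, prods in nests.items():
        p_nest = np.exp(lam * iv[g]) / denom                  # probability of the nest
        for j in prods:
            p_within = np.exp(utilities[j] / lam) / np.exp(iv[g])
            probs[j] = p_nest * p_within
    return probs

u = {"brand_statin": 1.0, "generic_statin": 0.6,
     "brand_ppi": 0.8, "generic_ppi": 0.7}
nests = {"statin": ["brand_statin", "generic_statin"],
         "ppi": ["brand_ppi", "generic_ppi"]}
p = nested_logit_probs(u, nests, lam=0.5)
print({k: round(v, 3) for k, v in p.items()})
```

In estimation, mean utilities would depend on price and copayment level, with prices instrumented as in the paper.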

Relevance: 100.00%

Abstract:

Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objectives, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting with overfitting and, consequently, how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
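The underfitting/overfitting trade-off the authors describe can be made concrete with a one-gradient toy example: a unimodal occurrence-environment response fit with curves of increasing flexibility. The degrees, sample sizes and response shape below are illustrative assumptions, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy occurrence-environment relationship: occurrence probability is a
# unimodal function of one gradient (say, temperature). We contrast an
# underfit (linear), a reasonable (quadratic) and an overfit (degree-10)
# response curve.
def draw(n):
    x = rng.uniform(-3, 3, size=n)
    p = 0.9 * np.exp(-x**2)                      # true, unimodal response
    return x, (rng.uniform(size=n) < p).astype(float)

x_tr, y_tr = draw(100)                           # presence/absence training data
x_te, y_te = draw(1000)                          # holdout data

for deg in (1, 2, 10):
    coef = np.polyfit(x_tr, y_tr, deg)           # least-squares response curve
    mse_tr = np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(coef, x_te) - y_te) ** 2)
    print(f"degree {deg:>2}: train MSE {mse_tr:.3f}, holdout MSE {mse_te:.3f}")
```

The flexible model always fits the training data at least as well, but only holdout performance reveals whether the extra complexity is informative or superfluous.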

Relevance: 100.00%

Abstract:

I study large random assignment economies with a continuum of agents and a finite number of object types. I consider the existence of weak priorities discriminating among agents with respect to their rights concerning the final assignment. The respect for priorities ex ante (ex-ante stability) usually precludes ex-ante envy-freeness. Therefore I define a new concept of fairness, called no unjustified lower chances: priorities with respect to one object type cannot justify different achievable chances regarding another object type. This concept, which applies to the assignment mechanism rather than to the assignment itself, implies ex-ante envy-freeness among agents of the same priority type. I propose a variation of Hylland and Zeckhauser's (1979) pseudomarket that meets ex-ante stability, no unjustified lower chances and ex-ante efficiency among agents of the same priority type. Assuming enough richness in preferences and priorities, the converse is also true: any random assignment with these properties could be achieved through an equilibrium in a pseudomarket with priorities. If priorities are acyclical (the ordering of agents is the same for each object type), this pseudomarket achieves ex-ante efficient random assignments.

Relevance: 100.00%

Abstract:

We analyze the realized stock-bond correlation. Gradual transitions between negative and positive stock-bond correlation are accommodated by the smooth transition regression (STR) model. The changes in regime are defined by economic and financial transition variables. Both in-sample and out-of-sample results document that STR models with multiple transition variables outperform STR models with a single transition variable. The most important transition variables are the short rate, the yield spread, and the VIX volatility index.

Keywords: realized correlation; smooth transition regressions; stock-bond correlation; VIX index
JEL Classifications: C22; G11; G12; G17

Relevance: 100.00%

Abstract:

We introduce wage setting via efficiency wages into the neoclassical one-sector growth model to study the growth effects of wage inertia. We compare the dynamic equilibrium of an economy with wage inertia to that of an economy without it. We show that wage inertia affects the long-run employment rate and that the transitional dynamics of the main economic variables differ, because wages are a state variable when wage inertia is introduced. In particular, we show non-monotonic transitions in the economy with wage inertia that do not arise in the economy with flexible wages. We also study the growth effects of permanent technological and fiscal policy shocks in these two economies. During the transition, the growth effects of technological shocks obtained when wages exhibit inertia may be the opposite of those obtained when wages are flexible, and these shocks may have long-run effects if there is wage inertia. We also show that the growth effects of fiscal policies are delayed when there is wage inertia.

Relevance: 100.00%

Abstract:

Background: Since generic drugs have the same therapeutic effect as the original formulation but at generally lower cost, their use should be more heavily promoted. However, a considerable number of barriers to their wider use have been observed in many countries. The present study examines the influence of patients, physicians and certain characteristics of the generics market on generic substitution in Switzerland.
Methods: We used reimbursement claims data submitted to a large health insurer by insured individuals living in one of Switzerland's three linguistic regions during 2003. All dispensed drugs studied here were substitutable. The outcome (use of a generic or not) was modelled by logistic regression, adjusted for patients' characteristics (gender, age, treatment complexity, substitution groups) and for several variables describing reimbursement incentives (deductible, co-payments) and the generics market (prices, packaging, co-branded original, number of available generics, etc.).
Results: The overall generic substitution rate for 173,212 dispensed prescriptions was 31%, though this varied considerably across cantons. Poor health status (older patients, complex treatments) was associated with lower generic use. Higher rates were associated with higher out-of-pocket costs, greater price differences between the original and the generic, and with the number of generics on the market, while reformulation and repackaging were associated with lower rates. The substitution rate was 13% lower among hospital physicians. Adopting the prescribing practices of the canton with the highest substitution rate would increase substitution in other cantons to as much as 26%.
Conclusions: Patient health status explained part of the reluctance to substitute a generic for an original formulation. Economic incentives were effective, but with a moderate overall effect. The large interregional differences indicate that prescribing behaviours and beliefs are probably the main determinants of generic substitution.

Relevance: 100.00%

Abstract:

It is generally accepted that most plant populations are locally adapted. Yet understanding how environmental forces give rise to adaptive genetic variation is a challenge in conservation genetics and crucial to the preservation of species under rapidly changing climatic conditions. Environmental variation, phylogeographic history, and population demographic processes all contribute to spatially structured genetic variation; however, few current models attempt to separate these confounding effects. To illustrate the benefits of using a spatially explicit model for identifying potentially adaptive loci, we compared outlier locus detection methods with a recently developed landscape genetic approach. We analyzed 157 loci from samples of the alpine herb Gentiana nivalis collected across the European Alps. Principal coordinates of neighbour matrices (PCNM), eigenvectors that quantify the multi-scale spatial variation present in a data set, were incorporated into a landscape genetic approach relating AFLP frequencies with 23 environmental variables. Four major findings emerged. 1) Fifteen loci were significantly correlated with at least one predictor variable (adjusted R² > 0.5). 2) Models including PCNM variables identified eight more potentially adaptive loci than models run without spatial variables. 3) When compared to outlier detection methods, the landscape genetic approach detected four of the same loci plus 11 additional loci. 4) Temperature, precipitation, and solar radiation were the three major environmental factors driving potentially adaptive genetic variation in G. nivalis. The techniques presented in this paper offer an efficient method for identifying potentially adaptive genetic variation and the associated environmental forces of selection, providing an important step forward for the conservation of non-model species under global change.
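PCNM eigenvectors can be computed as the principal coordinates of a truncated geographic distance matrix (Borcard & Legendre 2002). The sketch below applies this to synthetic sampling locations; the truncation heuristic shown is a common choice, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# PCNM spatial eigenvectors: principal coordinates of a truncated geographic
# distance matrix. Sampling locations here are synthetic.
coords = rng.uniform(0, 100, size=(30, 2))
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)

t = np.percentile(d[d > 0], 10)                  # truncation threshold (a common heuristic)
d_trunc = np.where(d <= t, d, 4 * t)             # distances beyond t set to 4*t
np.fill_diagonal(d_trunc, 0.0)

n = len(coords)
J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
B = -0.5 * J @ (d_trunc ** 2) @ J                # Gower double-centering (PCoA)
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

pcnm = eigvec[:, eigval > 1e-8]                  # keep positive-eigenvalue axes
print(f"{pcnm.shape[1]} PCNM spatial variables extracted")
```

These columns would then enter the landscape genetic model alongside the environmental predictors, so that spatial structure and environment can be separated.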

Relevance: 100.00%

Abstract:

General introduction
The Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) epidemic, despite recent encouraging announcements by the World Health Organization (WHO), is still today one of the world's major health care challenges. The present work lies in the field of health care management; in particular, we aim to evaluate behavioural and non-behavioural interventions against HIV/AIDS in developing countries, in both human and economic terms, through a deterministic simulation model. We focus on assessing the effectiveness of antiretroviral therapies (ART) in heterosexual populations living in less developed countries where the epidemic has generalized (formerly defined by the WHO as type II countries). The model is calibrated using Botswana as a case study; however, it can be adapted to other countries with similar transmission dynamics.
The first part of this thesis reviews the main mathematical concepts describing the transmission of infectious agents in general, with a focus on human immunodeficiency virus (HIV) transmission. We also review deterministic models assessing HIV interventions, focusing on models aimed at African countries. This review helps us to recognize the need for a generic model and allows us to define a typical structure for such a generic deterministic model.
The second part describes the main feedback loops underlying the dynamics of HIV transmission. These loops represent the foundation of our model. This part also provides a detailed description of the model, including the various infected and non-infected population groups, the types of sexual relationships, the infection matrices, and important factors impacting HIV transmission such as condom use, other sexually transmitted diseases (STDs) and male circumcision. We also included in the model a dynamic life expectancy calculator which, to our knowledge, is a unique feature allowing more realistic cost-efficiency calculations. Various intervention scenarios are evaluated using the model, each of them combining ART with other interventions, namely circumcision, campaigns aimed at behavioural change (Abstain, Be faithful or use Condoms, also known as ABC campaigns), and treatment of other STDs. A cost-efficiency analysis (CEA) is performed for each scenario; it consists of measuring the cost per disability-adjusted life year (DALY) averted. This part also describes the model calibration and validation, including a sensitivity analysis.
The third part reports the results and discusses the model's limitations. In particular, we argue that the combinations of ART with ABC campaigns and of ART with treatment of other STDs are the most cost-efficient interventions through 2020. The main model limitations include modeling the complexity of sexual relationships, the omission of international migration, and ignoring variability in infectiousness according to the AIDS stage.
The fourth part reviews the major contributions of the thesis and discusses the model's generalizability and flexibility. Finally, we conclude that by selecting an adequate mix of interventions, policy makers can significantly reduce adult prevalence in Botswana in the coming twenty years, provided the country and its donors can bear the cost involved.
Part I: Context and literature review
In this section, after a brief introduction to the general literature, we focus in section two on the key mathematical concepts describing the transmission of infectious agents, with a focus on HIV transmission. Section three provides a description of HIV policy models, with a focus on deterministic models. This leads us in section four to envision the need for a generic deterministic HIV policy model and to briefly describe the structure of such a generic model applicable to countries with a generalized HIV/AIDS epidemic, also defined as pattern II countries by the WHO.
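A minimal deterministic sketch of such a transmission model is given below: susceptible, infected and on-ART compartments integrated with Euler steps, with ART reducing both infectiousness and mortality. All parameter values are illustrative placeholders, not the thesis's Botswana calibration, and the real model is far richer (mixing groups, STD cofactors, circumcision, DALY accounting).

```python
# Minimal deterministic HIV transmission sketch with an ART compartment.
def simulate(years=20, dt=0.01, beta=0.5, art_eff=0.9, mu=0.02, alpha=0.1, tau=0.0):
    """beta: transmission rate; art_eff: reduction in infectiousness on ART;
    mu: background turnover; alpha: AIDS mortality; tau: ART uptake rate."""
    S, I, A = 0.8, 0.2, 0.0                      # susceptible, infected, on ART
    cum = 0.0                                    # cumulative new infections
    for _ in range(int(years / dt)):
        N = S + I + A
        foi = beta * (I + (1 - art_eff) * A) / N  # force of infection
        new_inf = foi * S
        dS = mu * N - new_inf - mu * S            # births replace turnover
        dI = new_inf - (mu + alpha + tau) * I
        dA = tau * I - (mu + 0.2 * alpha) * A     # ART lowers mortality
        S, I, A = S + dt * dS, I + dt * dI, A + dt * dA
        cum += dt * new_inf
    return (I + A) / (S + I + A), cum             # adult prevalence, cumulative incidence

no_art, cum_no = simulate(tau=0.0)
with_art, cum_art = simulate(tau=0.3)
print(f"after 20y: prevalence {no_art:.3f} without ART, {with_art:.3f} with ART; "
      f"cumulative infections {cum_no:.3f} vs {cum_art:.3f}")
```

Note that prevalence can respond ambiguously to ART (treated individuals live longer), which is why cost-efficiency analyses track infections and DALYs averted rather than prevalence alone.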

Relevance: 100.00%

Abstract:

Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. Of the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of budget use on both company innovation and performance.
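The observed-variable analogue of the interaction model is an ordinary moderated regression with a product term; the paper's SEM additionally corrects the latent predictors for measurement error, which this sketch deliberately omits. The variables and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Moderated regression: y depends on x, z and their product x*z, so the
# effect of x on y varies with the level of the moderator z.
n = 400
x = rng.normal(size=n)                       # e.g. interactive style of budget use
z = rng.normal(size=n)                       # e.g. innovation
y = 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated [const, x, z, x*z]:", np.round(beta, 2))
```

With measurement error in x and z, these OLS estimates would be attenuated, which is precisely what the constrained-SEM approach corrects.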

Relevance: 100.00%

Abstract:

PURPOSE: To evaluate potential risk factors for the development of multiple sclerosis in Brazilian patients. METHOD: A case-control study was carried out in 81 patients enrolled at the Department of Neurology of the Hospital da Lagoa in Rio de Janeiro, and 81 paired controls. A standardized questionnaire on demographic, social and cultural variables and on medical and family history was used. Statistical analysis was performed using descriptive statistics and conditional logistic regression models with the SPSS for Windows software program. RESULTS: Having standard vaccinations (vaccinations specified by the Brazilian government) (OR=16.2; 95% CI=2.3-115.2), smoking (OR=7.6; 95% CI=2.1-28.2), being single (OR=4.7; 95% CI=1.4-15.6) and eating animal brain (OR=3.4; 95% CI=1.2-9.8) increased the risk of developing MS. CONCLUSIONS: The results of this study may contribute towards better awareness of the epidemiological characteristics of Brazilian patients with multiple sclerosis.
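For reference, an unadjusted odds ratio with its Wald 95% confidence interval comes from a 2x2 calculation like the one below. The counts are made up, not the study's data, and the paper's estimates come from conditional logistic regression on matched pairs, which this sketch does not reproduce.

```python
import math

# Odds ratio and Wald 95% CI from a 2x2 exposure/case table.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a: exposed cases, b: exposed controls, c: unexposed cases, d: unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 20, 41, 61)      # hypothetical counts
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An interval whose lower bound exceeds 1, as in the abstract's estimates, indicates a statistically significant risk factor at the 5% level.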

Relevance: 100.00%

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying fixed or dynamic sets of rules to determine trading orders. It now accounts for up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a field where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data, sampled at minute frequency, from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signals they generate pass the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; more technically, the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is fairly simple by comparison, yet it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations.

A model for forecasting any economic or financial magnitude can be properly defined with scientific rigor yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the above strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving. This is why we emphasize the calibration of the strategies' parameters to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of the quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics in this project were implemented in MATLAB from scratch as part of this thesis; no other mathematical or statistical software was used.
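A minimal version of the mean-reversion leg can be sketched as an Ornstein-Uhlenbeck spread traded with a threshold rule. The thesis's own code was MATLAB; the Python below is an illustrative translation of the idea, with made-up parameters and no transaction costs.

```python
import numpy as np

rng = np.random.default_rng(6)

# Ornstein-Uhlenbeck spread for a pairs-trading strategy: the spread
# mean-reverts to mu at speed theta; trade when it strays more than
# `entry` stationary standard deviations away from the mean.
theta, mu, sigma, dt = 2.0, 0.0, 0.3, 1 / 252
T = 2520                                          # ten years of daily observations
s = np.empty(T)
s[0] = mu
for t in range(1, T):                             # Euler-Maruyama discretization
    s[t] = s[t-1] + theta * (mu - s[t-1]) * dt + sigma * np.sqrt(dt) * rng.normal()

stat_sd = sigma / np.sqrt(2 * theta)              # stationary std dev of the OU process
entry = 1.5
position = np.where(s > mu + entry * stat_sd, -1, # short the spread when too high
           np.where(s < mu - entry * stat_sd, 1, 0))
pnl = (position[:-1] * np.diff(s)).cumsum()       # mark-to-market P&L, no costs
print(f"final P&L: {pnl[-1]:.3f}, days in the market: {(position != 0).mean():.0%}")
```

A realistic backtest would add transaction costs, lagged execution of the signal, and rolling re-estimation of (theta, mu, sigma), which is exactly the calibration issue the thesis emphasizes.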

Relevance: 100.00%

Abstract:

For glioblastoma (GBM), survival classification has primarily relied on clinical criteria, exemplified by the Radiation Therapy Oncology Group (RTOG) recursive partitioning analysis (RPA). We sought to improve tumor classification by combining tumor biomarkers with the clinical RPA data. To accomplish this, we first developed 4 molecular biomarkers derived from gene expression profiling, a glioma CpG island methylator phenotype, a novel MGMT promoter methylation assay, and IDH1 mutations. A molecular predictor (MP) model was created with these 4 biomarkers on a training set of 220 retrospectively collected archival GBM tumors. This MP was further combined with RPA classification to develop a molecular-clinical predictor (MCP). The median survivals for the combined, 4-class MCP were 65 months, 31 months, 13 months, and 9 months, which was significantly improved when compared with the RPA alone. The MCP was then applied to 725 samples from the RTOG-0525 cohort, showing median survivals for each risk group of NR (not reached), 26 months, 16 months, and 11 months. The MCP significantly improved on the RPA at outcome prediction in the RTOG 0525 cohort, with a 33% increase in explained variation with respect to survival, validating the result obtained in the training set. To illustrate the benefit of the MCP for patient stratification, we examined progression-free survival (PFS) for patients receiving standard-dose temozolomide (SD-TMZ) vs. dose-dense TMZ (DD-TMZ) in RPA and MCP risk groups. A significant difference between DD-TMZ and SD-TMZ was observed in the poorest-surviving MCP risk group, with a median PFS of 6 months vs. 3 months (p = 0.048, log-rank test). This difference was not seen using the RPA classification alone. In summary, we have developed a combined molecular-clinical predictor that appears to improve outcome prediction when compared with clinical variables alone. This MCP may serve to better identify patients requiring intensive treatments beyond the standard of care.
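The median survivals compared above come from Kaplan-Meier curves; a minimal estimator is sketched below on synthetic follow-up times (not the RTOG data), returning the first time the survival curve drops to 0.5, or None when the median is not reached (NR).

```python
import numpy as np

# Kaplan-Meier estimator of median survival: the first time the survival
# curve drops to 0.5 or below, or None if the median is not reached ("NR").
def km_median(times, events):
    """times: follow-up in months; events: 1 = death observed, 0 = censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, surv = len(times), 1.0
    for t, e in zip(times, events):
        if e:                                     # an observed death shrinks the curve
            surv *= 1 - 1 / at_risk
        at_risk -= 1                              # censored subjects just leave the risk set
        if surv <= 0.5:
            return float(t)
    return None

rng = np.random.default_rng(7)
censor = rng.uniform(0, 120, size=100)            # administrative censoring (months)
high_risk = rng.exponential(12, size=100)         # poor-prognosis group, synthetic
low_risk = rng.exponential(60, size=100)          # good-prognosis group, synthetic

hi_m = km_median(np.minimum(high_risk, censor), (high_risk <= censor).astype(int))
lo_m = km_median(np.minimum(low_risk, censor), (low_risk <= censor).astype(int))
print("high-risk median (months):", hi_m, "| low-risk median:", lo_m)
```

Comparing such curves between treatment arms is what the log-rank test cited in the abstract formalizes.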