955 results for Expectation-conditional Maximization (ECM)


Relevance: 20.00%

Abstract:

We analyze how unemployment, job finding and job separation rates react to neutral and investment-specific technology shocks. Neutral shocks increase unemployment and explain a substantial portion of its volatility; investment-specific shocks expand employment and hours worked and contribute to hours-worked volatility. Movements in the job separation rates are responsible for the impact response of unemployment, while job finding rates drive movements along its adjustment path. The evidence warns against using models with exogenous separation rates and challenges the conventional way of modelling technology shocks in search and sticky-price models.

Relevance: 20.00%

Abstract:

Nonlinear regression problems can often be reduced to linearity by transforming the response variable (e.g., using the Box-Cox family of transformations). The classic estimates of the parameter defining the transformation as well as of the regression coefficients are based on the maximum likelihood criterion, assuming homoscedastic normal errors for the transformed response. These estimates are nonrobust in the presence of outliers and can be inconsistent when the errors are nonnormal or heteroscedastic. This article proposes new robust estimates that are consistent and asymptotically normal for any unimodal and homoscedastic error distribution. For this purpose, a robust version of conditional expectation is introduced for which the prediction mean squared error is replaced with an M scale. This concept is then used to develop a nonparametric criterion to estimate the transformation parameter as well as the regression coefficients. A finite sample estimate of this criterion based on a robust version of smearing is also proposed. Monte Carlo experiments show that the new estimates compare favorably with respect to the available competitors.
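As a rough illustration of the idea behind this abstract, the Box-Cox transformation and a robust, scale-based criterion for choosing its parameter can be sketched as follows. This is a simplified stand-in, using a least-squares fit and a normalized MAD in place of the article's M scale and robust smearing; all function names are hypothetical:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform of a positive response."""
    if abs(lam) < 1e-12:
        return np.log(y)
    return (y**lam - 1.0) / lam

def m_scale(residuals):
    """Simple robust scale: normalized MAD (stand-in for an M scale)."""
    return 1.4826 * np.median(np.abs(residuals - np.median(residuals)))

def robust_lambda(y, X, grid=np.linspace(-1, 2, 61)):
    """Pick the transformation parameter minimizing a robust scale of the
    residuals from a fit on the transformed response (illustrative only)."""
    best_lam, best_s = None, np.inf
    for lam in grid:
        z = box_cox(y, lam)
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
        s = m_scale(z - X @ beta)
        # geometric-mean (Jacobian) adjustment so scales are comparable
        # across values of lambda, as in the profile-likelihood criterion
        s /= np.exp((lam - 1.0) * np.mean(np.log(y)))
        if s < best_s:
            best_lam, best_s = lam, s
    return best_lam
```

A robust scale of residuals replaces the mean squared error so that a few outliers cannot drive the choice of the transformation parameter.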

Relevance: 20.00%

Abstract:

Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
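The multinomial-logit purchase probabilities over an offered set of fare classes, and the E-step imputation of unobserved no-purchase arrivals, might be sketched like this. This is an illustrative simplification of the ideas in the abstract, not the paper's estimator; the utilities and function names are hypothetical:

```python
import numpy as np

def mnl_probs(v_open, v0=0.0):
    """Multinomial-logit purchase probabilities over the open fare classes,
    with an outside (no-purchase) option of utility v0."""
    w = np.exp(np.append(v_open, v0))
    p = w / w.sum()
    return p[:-1], p[-1]  # per-class probabilities, no-purchase probability

def e_step_arrivals(n_purchases, p_buy):
    """E-step sketch: expected total arrivals when only purchases are
    observed, given the current estimate p_buy = 1 - P(no purchase)."""
    return n_purchases / p_buy
```

Closing a fare class removes its term from the logit denominator, so the purchase probabilities of the remaining classes (and of no-purchase) rise — the buy-up/buy-down effect the model captures.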

Relevance: 20.00%

Abstract:

Report on a special investigation of the Engineering Communications and Marketing Department (ECM) of Iowa State University of Science and Technology for the period January 1, 2003 through December 31, 2007

Relevance: 20.00%

Abstract:

This paper discusses the analysis of cases in which the inclusion or exclusion of a particular suspect, as a possible contributor to a DNA mixture, depends on the value of a variable (the number of contributors) that cannot be determined with certainty. It offers alternative ways to deal with such cases, including sensitivity analysis and object-oriented Bayesian networks, that separate uncertainty about the inclusion of the suspect from uncertainty about other variables. The paper presents a case study in which the value of DNA evidence varies radically depending on the number of contributors to a DNA mixture: if there are two contributors, the suspect is excluded; if there are three or more, the suspect is included; but the number of contributors cannot be determined with certainty. It shows how an object-oriented Bayesian network can accommodate and integrate varying perspectives on the unknown variable and how it can reduce the potential for bias by directing attention to relevant considerations and distinguishing different sources of uncertainty. It also discusses the challenge of presenting such evidence to lay audiences.
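The sensitivity of the evidence to the unknown number of contributors can be illustrated by marginalizing the likelihoods under each hypothesis over a prior on that number. This is a toy sketch of the underlying arithmetic, not the paper's object-oriented Bayesian network; all numbers below are hypothetical:

```python
import numpy as np

def marginal_lr(p_n, lik_prosecution, lik_defence):
    """Likelihood ratio marginalized over the unknown number of
    contributors n: p_n is a prior over n (assumed the same under both
    hypotheses), lik_* are the likelihoods of the mixture profile under
    each hypothesis for each value of n."""
    p_n = np.asarray(p_n, dtype=float)
    num = np.dot(p_n, lik_prosecution)  # P(E | suspect contributed)
    den = np.dot(p_n, lik_defence)      # P(E | suspect did not)
    return num / den
```

Mirroring the case study: if the suspect is excluded under two contributors (likelihood zero) but included under three, the overall ratio depends entirely on the prior weight placed on each count, which is exactly why the paper separates this source of uncertainty.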

Relevance: 20.00%

Abstract:

Background Brain-Derived Neurotrophic Factor (BDNF) is the main candidate for neuroprotective therapy in Huntington's disease (HD), but its conditional administration is one of its most challenging problems. Results Here we used transgenic mice that over-express BDNF under the control of the Glial Fibrillary Acidic Protein (GFAP) promoter (pGFAP-BDNF mice) to test whether up-regulation and release of BDNF, dependent on astrogliosis, could be protective in HD. Thus, we cross-mated pGFAP-BDNF mice with R6/2 mice to generate a double-mutant mouse carrying mutant huntingtin protein and a conditional over-expression of BDNF, only under pathological conditions. In these R6/2:pGFAP-BDNF animals, the decrease in striatal BDNF levels induced by mutant huntingtin was prevented in comparison to R6/2 animals at 12 weeks of age. The recovery of neurotrophin levels in R6/2:pGFAP-BDNF mice correlated with an improvement in several motor coordination tasks and with a significant delay in anxiety and clasping alterations. We therefore next examined a possible improvement in cortico-striatal connectivity in R6/2:pGFAP-BDNF mice. Interestingly, we found that the over-expression of BDNF prevented the decrease of cortico-striatal presynaptic (VGLUT1) and postsynaptic (PSD-95) markers in the R6/2:pGFAP-BDNF striatum. Electrophysiological studies also showed that basal synaptic transmission and synaptic fatigue both improved in R6/2:pGFAP-BDNF mice. Conclusions These results indicate that the conditional administration of BDNF under the GFAP promoter could become a therapeutic strategy for HD due to its positive effects on synaptic plasticity.

Relevance: 20.00%

Abstract:

The terminology and the concept of the Inborn Error of Metabolism (in the original Catalan, Error Congènit del Metabolisme, ECM) were established by A. Garrod at the beginning of the century. Today we know that these disorders are caused by errors, or mutations, in genes. Owing to the nature of our genetic code, whereby the instructions in the DNA are translated into a gene product, the proteins, which are in charge of executing those instructions, mutations in the DNA are translated into anomalous proteins, with the corresponding...

Relevance: 20.00%

Abstract:

In this article we study buy-and-hold strategies for final-wealth optimization problems in a multi-period setting. Since final wealth is a sum of dependent random variables, each corresponding to an amount of capital invested in a particular asset at a given date, we first consider approximations that reduce the multivariate randomness to the univariate case. These approximations are then used to determine the buy-and-hold strategies that optimize, for a given probability level, the VaR and the CLTE of the distribution function of final wealth. This article complements the work of Dhaene et al. (2005), where constant-rebalancing strategies were considered.
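The two risk measures used in this abstract, the VaR (a low quantile of final wealth) and the CLTE (the conditional left tail expectation, i.e. the average of outcomes at or below that quantile), can be estimated from a sample of simulated final-wealth outcomes roughly as follows; the function name is hypothetical and this is a minimal empirical sketch, not the article's analytic approximations:

```python
import numpy as np

def var_clte(wealth, p):
    """Empirical p-level VaR of final wealth and the Conditional Left
    Tail Expectation (mean of the worst p-fraction of outcomes)."""
    w = np.sort(np.asarray(wealth, dtype=float))
    k = max(int(np.ceil(p * len(w))), 1)  # size of the left tail
    var = w[k - 1]          # p-quantile of final wealth
    clte = w[:k].mean()     # average over the left tail
    return var, clte
```

For a fixed probability level, a strategy with a higher VaR or CLTE of final wealth is preferred, which is the criterion the buy-and-hold optimization targets.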

Relevance: 20.00%

Abstract:

It is very well known that the first successful valuation of a stock option was achieved by solving a deterministic partial differential equation (PDE) of the parabolic type with some complementary conditions specific to the option. In this approach, the randomness in the option value process is eliminated through a no-arbitrage argument. An alternative approach is to construct a replicating portfolio for the option. From this viewpoint, the payoff function of the option is a random process which, under a new probability measure, turns out to be of a special type: a martingale. Accordingly, the value of the replicating portfolio (equivalently, of the option) is calculated as an expectation, with respect to this new measure, of the discounted value of the payoff function. Since the expectation is, by definition, an integral, its calculation can be made simpler by resorting to powerful methods already available in the theory of analytic functions. In this paper we use precisely two of those techniques to find the well-known value of a European call option.
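The pricing expectation described in this abstract can also be evaluated numerically. A minimal Monte Carlo sketch of the risk-neutral valuation of a European call under lognormal dynamics (not the analytic-function techniques the paper employs) might look like:

```python
import numpy as np

def mc_european_call(S0, K, r, sigma, T, n=200_000, seed=0):
    """Price a European call as the discounted risk-neutral expectation
    E[exp(-rT) * max(S_T - K, 0)], with S_T lognormal under the pricing
    measure (the measure under which discounted prices are martingales)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    # terminal price under the risk-neutral measure
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    # discounted expected payoff
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
```

The sample average converges to the same value that the PDE and analytic-integral routes produce; the martingale property of the discounted price is what justifies taking the expectation under the new measure.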

Relevance: 20.00%

Abstract:

The ability to express tightly controlled amounts of endogenous and recombinant proteins in plant cells is an essential tool for research and biotechnology. Here, the inducibility of the soybean heat-shock Gmhsp17.3B promoter was addressed in the moss Physcomitrella patens, using beta-glucuronidase (GUS) and an F-actin marker (GFP-talin) as reporter proteins. In stably transformed moss lines, Gmhsp17.3B-driven GUS expression was extremely low at 25 degrees C. In contrast, a short non-damaging heat treatment at 38 degrees C rapidly induced reporter expression over three orders of magnitude, enabling GUS accumulation and the labelling of the F-actin cytoskeleton in all cell types and tissues. Induction levels were tightly proportional to the temperature and duration of the heat treatment, allowing fine-tuning of protein expression. Repeated heating/cooling cycles led to massive GUS accumulation, up to 2.3% of the total soluble protein. The anti-inflammatory drug acetyl salicylic acid (ASA) and the membrane fluidiser benzyl alcohol (BA) also induced GUS expression at 25 degrees C, allowing the production of recombinant proteins without heat treatment. The Gmhsp17.3B promoter thus provides a reliable, versatile conditional promoter for the controlled expression of recombinant proteins in the moss P. patens.

Relevance: 20.00%

Abstract:

Executive Summary

The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on the ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns of portfolio strategies optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio-returns distribution that second-order stochastically dominates those obtained with virtually all performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market: the bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
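The second-order stochastic dominance check via the absolute Lorenz curve described in this summary can be sketched empirically as follows; this is a simplified illustration with hypothetical function names, not the thesis's procedure:

```python
import numpy as np

def absolute_lorenz(returns, quantiles):
    """Sequence of expected shortfalls (absolute Lorenz curve values):
    for each level q, the mean of the worst q-fraction of returns."""
    r = np.sort(np.asarray(returns, dtype=float))
    curve = []
    for q in quantiles:
        k = max(int(np.ceil(q * len(r))), 1)
        curve.append(r[:k].mean())
    return np.array(curve)

def sosd(a, b, quantiles=np.linspace(0.05, 1.0, 20)):
    """Empirical check: a second-order stochastically dominates b if a's
    absolute Lorenz curve lies (weakly) above b's at every quantile."""
    return bool(np.all(absolute_lorenz(a, quantiles)
                       >= absolute_lorenz(b, quantiles)))
```

Pointwise comparison of the two curves is exactly the test applied to the aggregated-measure portfolio against each single-measure portfolio.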

Relevance: 20.00%

Abstract:

Variable queen mating frequencies provide a unique opportunity to study the resolution of worker-queen conflict over sex ratio in social Hymenoptera, because the conflict is maximal in colonies headed by a singly mated queen and is weak or nonexistent in colonies headed by a multiply mated queen. In the wood ant Formica exsecta, workers in colonies with a singly mated queen, but not those in colonies with a multiply mated queen, altered the sex ratio of queen-laid eggs by eliminating males to preferentially raise queens. By this conditional response to queen mating frequency, workers enhance their inclusive fitness.

Relevance: 20.00%

Abstract:

Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distributions and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
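A loose sketch of the three-step logic for uncensored data follows. It is an illustrative stand-in, not the article's estimator: it uses IRLS toward a least-absolute-deviations fit instead of an S-estimate, a fixed MAD-based rejection rule instead of the adaptive cut-off, and ordinary least squares instead of weighted maximum likelihood:

```python
import numpy as np

def three_step_sketch(X, y, cutoff=2.5, iters=20):
    """Three-step robust regression sketch:
    1) initial robust fit (IRLS approximating least absolute deviations);
    2) reject observations whose scaled residual exceeds a cutoff;
    3) refit by least squares on the retained observations."""
    # step 1: IRLS for an approximate LAD fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), 1e-6)  # LAD weights, guarded
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    # step 2: rejection with a MAD-based robust scale
    r = y - X @ beta
    s = 1.4826 * np.median(np.abs(r))
    keep = np.abs(r) <= cutoff * s
    # step 3: refit on the retained observations
    beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta, keep
```

Because the final fit uses only observations compatible with the robust initial fit, gross outliers cannot pull the estimate, while clean data are fitted at (near) full efficiency.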