42 results for Expectation-Conditional Maximization (ECM)
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This domain has seen very successful developments. Nevertheless, several empirical studies suggest that the performance of such models is not always adequate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as kurtosis and symmetry, are studied, along with two estimators (Method of Moments and Maximum Likelihood). Two statistical tests are presented: the first tests for homoskedasticity, and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study illustrates some of the theoretical results, and an empirical study is undertaken for the DM-US exchange rate.
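As an illustrative sketch of the conditional-variance recursion that this class of models shares (a plain ARCH(1) process, not the QMACH specification proposed in the abstract; the parameter values below are arbitrary):

```python
import random

def simulate_arch1(n, omega=0.2, alpha=0.5, seed=42):
    """Simulate n observations of an ARCH(1) process:
    y_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * y_{t-1}^2,
    with z_t standard normal."""
    rng = random.Random(seed)
    y, ys = 0.0, []
    for _ in range(n):
        var = omega + alpha * y * y   # conditional variance given the past
        y = (var ** 0.5) * rng.gauss(0.0, 1.0)
        ys.append(y)
    return ys
```

For alpha < 1 the process is covariance-stationary with unconditional variance omega / (1 - alpha), so a long simulated sample should show a sample variance near 0.4 for these parameters, together with the volatility clustering characteristic of the class.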
Abstract:
This paper provides evidence on the sources of co-movement in monthly US and UK stock price movements by investigating the role of macroeconomic and financial variables in a bivariate system with time-varying conditional correlations. Cross-country commonality in response is uncovered, with changes in the US Federal Funds rate, UK bond yields and oil prices having similar negative effects in both markets. Other variables also play a role, especially for the UK market. These effects do not, however, explain the marked increase in cross-market correlations observed from around 2000, which we attribute to time variation in the correlations of shocks to these markets. A regime-switching smooth transition model captures this time variation well and shows that the correlations increase dramatically around 1999-2000. JEL classifications: C32, C51, G15. Keywords: international stock returns, DCC-GARCH model, smooth transition conditional correlation GARCH model, model evaluation.
Abstract:
The PhD thesis "Sound Texture Modeling" deals with the statistical modelling of textural sounds such as water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modelling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of a more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows a database of units to be explored sonically by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows low-level correlations in perceptual audio features to be captured, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds.
Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning due to the simplicity of the graphs involved, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
Abstract:
The biplot has proved to be a powerful descriptive and analytical tool in many areas of application of statistics. For compositional data the necessary theoretical adaptation has been provided, with illustrative applications, by Aitchison (1990) and Aitchison and Greenacre (2002). These papers were restricted to the interpretation of simple compositional data sets. In many situations the problem has to be described in some form of conditional modelling. For example, in a clinical trial where interest is in how patients' steroid metabolite compositions may change as a result of different treatment regimes, interest is in relating the compositions after treatment to the compositions before treatment and the nature of the treatments applied. To study this through a biplot technique requires the development of some form of conditional compositional biplot. This is the purpose of this paper. We choose as a motivating application an analysis of the 1992 US Presidential Election, where interest may be in how the three-part composition, the percentage division among the three candidates - Bush, Clinton and Perot - of the presidential vote in each state, depends on the ethnic composition and on the urban-rural composition of the state. The methodology of conditional compositional biplots is first developed and a detailed interpretation of the 1992 US Presidential Election provided. We use a second application involving the conditional variability of tektite mineral compositions with respect to major oxide compositions to demonstrate some hazards of simplistic interpretation of biplots. Finally we conjecture on further possible applications of conditional compositional biplots.
Abstract:
In this paper we propose a parsimonious regime-switching approach to model the correlations between assets: the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006), but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the parameters of the TCC involves a simple grid search procedure. In addition, it is easy to guarantee a positive definite correlation matrix because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test and allow for different parts of the correlation matrix to be governed by different transition variables. For this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano-type test. We conclude that threshold correlation modelling gives rise to a significant reduction in portfolio variance.
Abstract:
We analyze how unemployment, job finding and job separation rates react to neutral and investment-specific technology shocks. Neutral shocks increase unemployment and explain a substantial portion of unemployment volatility; investment-specific shocks expand employment and hours worked and mostly contribute to hours worked volatility. Movements in the job separation rates are responsible for the impact response of unemployment, while job finding rates account for movements along its adjustment path. Our evidence qualifies the conclusions of Hall (2005) and Shimer (2007) and warns against using search models with exogenous separation rates to analyze the effects of technology shocks.
Abstract:
We analyze how unemployment, job finding and job separation rates react to neutral and investment-specific technology shocks. Neutral shocks increase unemployment and explain a substantial portion of its volatility; investment-specific shocks expand employment and hours worked and contribute to hours worked volatility. Movements in the job separation rates are responsible for the impact response of unemployment, while job finding rates account for movements along its adjustment path. The evidence warns against using models with exogenous separation rates and challenges the conventional way of modelling technology shocks in search and sticky price models.
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
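To illustrate the demand side of such a model, here is a minimal sketch of multinomial logit purchase probabilities over an offered set of fare classes, with an outside no-purchase option normalized to utility zero. The class names and utility values are hypothetical, not taken from the paper:

```python
from math import exp

def choice_probs(utilities, open_set):
    """Multinomial logit purchase probabilities over the open fare
    classes; the no-purchase alternative has utility 0, so its
    exponentiated utility contributes 1.0 to the denominator."""
    expu = {j: exp(utilities[j]) for j in open_set}
    denom = 1.0 + sum(expu.values())          # 1.0 = exp(0) for no-purchase
    probs = {j: v / denom for j, v in expu.items()}
    probs["no_purchase"] = 1.0 / denom
    return probs
```

This makes the buy-up effect described in the abstract concrete: closing a low fare class shrinks the denominator, which raises the purchase probability of the remaining classes (and of no-purchase), exactly the dependence of purchase probabilities on the open set that the control problem exploits.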
Abstract:
Background: Brain-Derived Neurotrophic Factor (BDNF) is the main candidate for neuroprotective therapy for Huntington's disease (HD), but its conditional administration is one of its most challenging problems. Results: Here we used transgenic mice that over-express BDNF under the control of the Glial Fibrillary Acidic Protein (GFAP) promoter (pGFAP-BDNF mice) to test whether up-regulation and release of BDNF, dependent on astrogliosis, could be protective in HD. Thus, we cross-mated pGFAP-BDNF mice with R6/2 mice to generate a double-mutant mouse with mutant huntingtin protein and a conditional over-expression of BDNF, only under pathological conditions. In these R6/2:pGFAP-BDNF animals, the decrease in striatal BDNF levels induced by mutant huntingtin was prevented in comparison to R6/2 animals at 12 weeks of age. The recovery of neurotrophin levels in R6/2:pGFAP-BDNF mice correlated with an improvement in several motor coordination tasks and with a significant delay in anxiety and clasping alterations. We therefore next examined a possible improvement in cortico-striatal connectivity in R6/2:pGFAP-BDNF mice. Interestingly, we found that the over-expression of BDNF prevented the decrease of cortico-striatal presynaptic (VGLUT1) and postsynaptic (PSD-95) markers in the R6/2:pGFAP-BDNF striatum. Electrophysiological studies also showed that basal synaptic transmission and synaptic fatigue both improved in R6/2:pGFAP-BDNF mice. Conclusions: These results indicate that the conditional administration of BDNF under the GFAP promoter could become a therapeutic strategy for HD due to its positive effects on synaptic plasticity.
Abstract:
The terminology and the concept of an Inborn Error of Metabolism (ECM) were established by A. Garrod at the beginning of the century. Today we know that they are caused by errors, or mutations, in genes. Owing to the nature of our genetic code, whereby the instructions in the DNA are translated into a gene product, proteins, which are in charge of executing them, mutations in the DNA are translated into abnormal proteins, with the corresponding...
Abstract:
In this article we study buy-and-hold strategies for final-wealth optimization problems in a multi-period setting. Since final wealth is a sum of dependent random variables, each of which corresponds to an amount of capital invested in a particular asset at a given date, we first consider approximations that reduce the multivariate randomness to the univariate case. These approximations are then used to determine the buy-and-hold strategies that optimize, for a given probability level, the VaR and the CLTE of the distribution function of final wealth. This article complements the work of Dhaene et al. (2005), where constant rebalancing strategies were considered.
Abstract:
It is very well known that the first successful valuation of a stock option was done by solving a deterministic partial differential equation (PDE) of the parabolic type with some complementary conditions specific to the option. In this approach, the randomness in the option value process is eliminated through a no-arbitrage argument. An alternative approach is to construct a replicating portfolio for the option. From this viewpoint the payoff function of the option is a random process which, under a new probability measure, turns out to be of a special type: a martingale. Accordingly, the value of the replicating portfolio (equivalently, of the option) is calculated as an expectation, with respect to this new measure, of the discounted value of the payoff function. Since the expectation is, by definition, an integral, its calculation can be made simpler by resorting to powerful methods already available in the theory of analytic functions. In this paper we use precisely two of those techniques to find the well-known value of a European call option.
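The closed-form result that this risk-neutral expectation leads to, the standard Black-Scholes price of a European call, can be sketched as follows (this is the well-known textbook formula, not the analytic-function techniques developed in the paper):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call: the expectation, under
    the risk-neutral measure, of the discounted payoff max(S_T - K, 0),
    for spot S, strike K, maturity T, rate r and volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, an at-the-money call with S = K = 100, T = 1, r = 5% and sigma = 20% is worth about 10.45, the benchmark value any evaluation method for this expectation must reproduce.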