869 results for "Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood"
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
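To make the object of study concrete, here is a minimal Python sketch of the naive path-space score estimate (via Fisher's identity) driving an online gradient-ascent step, for an assumed linear-Gaussian toy model. This is not the O(N²) algorithm of Poyiadjis et al. [2011], whose point is precisely to stabilise the variance of this kind of estimate; the model, learning rate, and particle count are illustrative assumptions.

```python
import numpy as np

# Toy model (assumed): x_t = phi * x_{t-1} + v_t, y_t = x_t + w_t,
# with v_t, w_t ~ N(0, 1). Each particle carries alpha_i, the running
# derivative d/dphi of its path's log transition density (Fisher's identity).
def online_mle_phi(y, phi0=0.5, N=1000, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    phi = phi0
    x = rng.normal(size=N)                  # particles
    alpha = np.zeros(N)                     # path-space score accumulators
    prev_score = 0.0
    for yt in y:
        x_new = phi * x + rng.normal(size=N)
        alpha = alpha + (x_new - phi * x) * x   # d/dphi log N(x_new; phi*x, 1)
        logw = -0.5 * (yt - x_new) ** 2         # Gaussian observation density
        w = np.exp(logw - logw.max()); w /= w.sum()
        score = float(w @ alpha)            # particle estimate of the score
        phi += lr * (score - prev_score)    # ascend the incremental log-likelihood
        prev_score = score
        idx = rng.choice(N, size=N, p=w)    # multinomial resampling
        x, alpha = x_new[idx], alpha[idx]
    return phi
```

The variance of this path-space estimate grows over time, which is exactly the degeneracy the paper's theory addresses.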
Abstract:
The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their actions or other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers the competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.
The first chapter is a game theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Incorporating this belief into the equilibrium calculation provides for a richer class of reputation building possibilities than when perfect implementation is assumed.
In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model, and of other models, against data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternate models considered are essentially modifications of the standard sequential equilibrium. While some models perform well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.
The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important starting point when examining these environments because it produces the ex post socially efficient allocation of goods. This makes it a natural baseline against which other models of price dynamics can be compared.
Abstract:
Consumption of addictive substances poses a challenge to economic models of rational, forward-looking agents. This dissertation presents a theoretical and empirical examination of consumption of addictive goods.
The theoretical model draws on evidence from psychology and neurobiology to improve on the standard assumptions used in intertemporal consumption studies. I model agents who may misperceive the severity of the future consequences of consuming addictive substances, and I allow an agent's environment to shape her preferences in a systematic way, as suggested by numerous studies that have found craving to be induced by environmental cues associated with past substance use. The behavior of agents in this behavioral model of addiction can mimic the pattern of quitting and relapsing that is prevalent among addictive substance users.
Chapter 3 presents an empirical analysis of the Becker and Murphy (1988) model of rational addiction using data on grocery store sales of cigarettes. This essay empirically tests the model's predictions concerning consumption responses to future and past price changes as well as the prediction that the response to an anticipated price change differs from the response to an unanticipated price change. In addition, I consider the consumption effects of three institutional changes that occur during the time period 1996 through 1999.
Abstract:
A generalized Bayesian population dynamics model was developed for the analysis of historical mark-recapture studies. The Bayesian approach builds upon existing maximum likelihood methods and is useful when substantial uncertainties exist in the data or little information is available about auxiliary parameters such as tag loss and reporting rates. Movement rates, which are suitable for use as input in subsequent stock assessment analyses, are obtained through Markov chain Monte Carlo (MCMC) simulation. The mark-recapture model was applied to English sole (Parophrys vetulus) off the west coast of the United States and Canada, and migration rates were estimated to be 2% per month to the north and 4% per month to the south. These posterior parameter distributions and the Bayesian framework for comparing hypotheses can guide fishery scientists in structuring the spatial and temporal complexity of future analyses of this kind. This approach could easily be generalized for application to other species and to more data-rich fishery analyses.
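As a toy illustration of the posterior simulation involved, the following random-walk Metropolis-Hastings sampler targets a single monthly movement rate under a binomial recapture likelihood with a flat prior. The counts are invented, and the sketch omits everything that makes the real model interesting (tag loss, reporting rates, multiple areas).

```python
import numpy as np

def log_post(m, moved=12, recaptured=300):
    """Log posterior for movement rate m: binomial likelihood, flat prior."""
    if not 0.0 < m < 1.0:
        return -np.inf
    return moved * np.log(m) + (recaptured - moved) * np.log(1.0 - m)

def mh_sample(n_iter=20000, step=0.02, seed=1):
    rng = np.random.default_rng(seed)
    m, lp, samples = 0.05, log_post(0.05), []
    for _ in range(n_iter):
        prop = m + step * rng.normal()            # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            m, lp = prop, lp_prop
        samples.append(m)
    return np.array(samples)                      # posterior draws for m
```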
Abstract:
Recently there has been interest in structured discriminative models for speech recognition. In these models sentence posteriors are directly modelled, given a set of features extracted from the observation sequence and the hypothesised word sequence. In previous work these discriminative models have been combined with features derived from generative models for noise-robust recognition of continuous digits. This paper extends this work to medium-to-large vocabulary tasks. The form of the score-space extracted using the generative models, and the parameter tying of the discriminative model, are both discussed. Update formulae for both conditional maximum likelihood and minimum Bayes' risk training are described. Experimental results are presented on small and medium-to-large vocabulary noise-corrupted speech recognition tasks: AURORA 2 and 4.
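For intuition, a minimal sketch of the conditional maximum likelihood gradient for a log-linear discriminative model over an N-best list, P(w | O) ∝ exp(a · φ(O, w)): the gradient is the reference hypothesis's score-space features minus their expectation under the model. The flat feature layout is an illustrative assumption, not the paper's parameter tying.

```python
import numpy as np

def cml_gradient(feats, ref_idx, a):
    """feats: (N, d) score-space features for N hypotheses; ref_idx: reference."""
    logits = feats @ a
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # posterior over the N-best list
    return feats[ref_idx] - p @ feats  # ascend to raise P(reference | O)
```

A minimum Bayes' risk update would replace the reference indicator with hypothesis-level risks, but it reduces to expectations of the same features.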
Abstract:
This paper develops an algorithm for finding sparse signals from limited observations of a linear system. We assume an adaptive Gaussian model for sparse signals. This model results in a least squares problem with an iteratively reweighted L2 penalty that approximates the L0-norm. We propose a fast algorithm to solve the problem within a continuation framework. In our examples, we show that the correct sparsity map and sparsity level are gradually learnt during the iterations even when the number of observations is reduced, or when observation noise is present. In addition, with the help of sophisticated interscale signal models, the algorithm is able to recover signals to better accuracy, and with a reduced number of observations, than typical L1-norm and reweighted L1-norm methods.
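A minimal sketch of the reweighted-L2 idea under stated assumptions: weights w_i = 1/(x_i² + ε) make the penalty Σ w_i x_i² mimic the L0-norm, and a simple continuation schedule shrinks ε across iterations. The paper's fast solver and interscale models are not reproduced here.

```python
import numpy as np

def irl2(A, y, lam=1e-2, n_iter=30, eps0=1.0):
    """Sparse recovery by iteratively reweighted L2 with continuation."""
    x = A.T @ y                        # crude initialisation
    eps = eps0
    for _ in range(n_iter):
        w = 1.0 / (x**2 + eps)         # reweighting approximates the L0-norm
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        eps = max(eps * 0.7, 1e-8)     # continuation: gradually tighten
    return x
```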
Abstract:
The prediction of time-changing variances is an important task in the modeling of financial data. Standard econometric models are often limited as they assume rigid functional relationships for the evolution of the variance. Moreover, functional parameters are usually learned by maximum likelihood, which can lead to overfitting. To address these problems we introduce GP-Vol, a novel non-parametric model for time-changing variances based on Gaussian Processes. This new model can capture highly flexible functional relationships for the variances. Furthermore, we introduce a new online algorithm for fast inference in GP-Vol. This method is much faster than current offline inference procedures and it avoids overfitting problems by following a fully Bayesian approach. Experiments with financial data show that GP-Vol performs significantly better than current standard alternatives.
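As a heavily simplified, offline stand-in for the idea (GP-Vol itself treats the variance as latent and infers it online), the sketch below uses squared returns as a crude variance proxy and fits a GP to the one-step transition of the log-variance; scikit-learn is assumed available, and every modelling choice here is illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_vol_transition(returns, eps=1e-8):
    """Fit log v_t = f(log v_{t-1}, r_{t-1}) with a GP, v_t proxied by r_t^2."""
    v = np.log(returns**2 + eps)                 # crude log-variance proxy
    X = np.column_stack([v[:-1], returns[:-1]])  # previous state and return
    y = v[1:]
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gp.fit(X, y)
    return gp           # gp.predict(X_new) gives flexible variance dynamics
```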
Abstract:
This article describes the process and principles of using video image streams in a traffic surveillance system to track and classify moving objects. For object detection, a double-difference detection algorithm is proposed; object classification applies temporal continuity constraints together with maximum likelihood estimation; and object tracking performs correlation matching between the detected moving-object image and the current template. Experimental results show that the approach can detect and classify objects well, remove interference from background information, and continue to track objects when they are partially occluded, change appearance, or stop moving.
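A minimal sketch of the double-difference rule: a pixel is flagged as moving only if it differs from both the previous and the next frame, which suppresses the ghosting left by single-frame differencing. The threshold is an illustrative assumption.

```python
import numpy as np

def double_difference(prev, curr, nxt, thresh=25):
    """Moving-object mask for the current frame from three grayscale frames."""
    d1 = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    d2 = np.abs(nxt.astype(np.int16) - curr.astype(np.int16)) > thresh
    return d1 & d2     # moving only where both differences agree
```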
Abstract:
Given $n$ noisy observations $g_i$ of the same quantity $f$, it is common practice to estimate $f$ by minimizing the function $\sum_{i=1}^{n}(g_i - f)^2$. From a statistical point of view this corresponds to computing the maximum likelihood estimate under the assumption of Gaussian noise. However, it is well known that this choice leads to results that are very sensitive to the presence of outliers in the data. For this reason it has been proposed to minimize functions of the form $\sum_{i=1}^{n} V(g_i - f)$, where $V$ is a function that increases less rapidly than the square. Several choices for $V$ have been proposed and successfully used to obtain "robust" estimates. In this paper we show that, for a class of functions $V$, using these robust estimators corresponds to assuming that the data are corrupted by Gaussian noise whose variance fluctuates according to some given probability distribution, which uniquely determines the shape of $V$.
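For concreteness, one classical choice of V is the Huber function, whose location estimate can be computed by iteratively reweighted least squares: observations far from the current estimate are down-weighted, unlike in the plain mean (the Gaussian maximum likelihood estimate). This sketch is illustrative and not tied to the paper's specific class of V.

```python
import numpy as np

def huber_location(g, delta=1.0, n_iter=50):
    """M-estimate of location under the Huber loss via IRLS."""
    f = np.median(g)                   # robust starting point
    for _ in range(n_iter):
        r = g - f
        w = np.where(np.abs(r) <= delta,
                     1.0, delta / np.maximum(np.abs(r), 1e-12))
        f = np.sum(w * g) / np.sum(w)  # weighted-mean update
    return f
```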
Abstract:
In gesture and sign language video sequences, hand motion tends to be rapid, and hands frequently appear in front of each other or in front of the face. Thus, hand location is often ambiguous, and naive color-based hand tracking is insufficient. To improve tracking accuracy, some methods employ a prediction-update framework, but such methods require careful initialization of model parameters, and tend to drift and lose track in extended sequences. In this paper, a temporal filtering framework for hand tracking is proposed that can initialize and reset itself without human intervention. In each frame, simple features like color and motion residue are exploited to identify multiple candidate hand locations. The temporal filter then uses the Viterbi algorithm to select among the candidates from frame to frame. The resulting tracking system can automatically identify video trajectories of unambiguous hand motion, and detect frames where tracking becomes ambiguous because of occlusions or overlaps. Experiments on video sequences of several hundred frames in duration demonstrate the system's ability to track hands robustly, to detect and handle tracking ambiguities, and to extract the trajectories of unambiguous hand motion.
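A minimal sketch of the selection step under stated assumptions: given per-frame candidate locations and log-domain evidence scores, the Viterbi algorithm picks the path that trades per-frame evidence against smooth motion. The quadratic motion penalty stands in for the paper's colour and motion-residue features.

```python
import numpy as np

def viterbi_track(candidates, scores, motion_penalty=0.05):
    """candidates: list of (k_t, 2) arrays; scores: list of (k_t,) log-scores."""
    delta, back = [scores[0]], []
    for t in range(1, len(candidates)):
        # squared distance between every candidate pair in adjacent frames
        d = np.linalg.norm(candidates[t][:, None, :] -
                           candidates[t - 1][None, :, :], axis=2) ** 2
        trans = delta[-1][None, :] - motion_penalty * d
        back.append(np.argmax(trans, axis=1))
        delta.append(scores[t] + np.max(trans, axis=1))
    path = [int(np.argmax(delta[-1]))]   # backtrack the best path
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]                    # chosen candidate index per frame
```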
Abstract:
A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variation in illumination during tracking. An explicit second order Markov model is used to predict evolution of the skin color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and based on predictions of the Markov model. The evolution of the skin color distribution at each frame is parameterized by translation, scaling and rotation in color space. Consequent changes in geometric parameterization of the distribution are propagated by warping and re-sampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using Maximum Likelihood Estimation, and also evolve over time. Quantitative evaluation of the method was conducted on labeled ground-truth video sequences taken from popular movies.
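A minimal sketch of the second-order prediction idea: collect the histogram's (translation, scaling, rotation) parameters into a vector z_t and fit z_t ≈ A·[z_{t-1}, z_{t-2}] by least squares, the maximum likelihood fit under Gaussian noise. The state layout is an illustrative assumption.

```python
import numpy as np

def fit_ar2(params):
    """params: (T, d) history of distribution parameters; returns (2d, d) A."""
    X = np.hstack([params[1:-1], params[:-2]])   # rows [z_{t-1}, z_{t-2}]
    Y = params[2:]                               # targets z_t
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)    # ML fit under Gaussian noise
    return A

def predict_next(params, A):
    """One-step prediction of the distribution parameters."""
    z = np.hstack([params[-1], params[-2]])
    return z @ A
```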
Abstract:
This paper provides a mutual information performance analysis of multiple-symbol differential MPSK (M-ary phase-shift keying) over time-correlated, time-varying flat-fading communication channels. A state-space approach is used to model the time correlation of the time-varying channel phase. This approach captures the dynamics of time-correlated, time-varying channels and enables exploitation of the forward-backward algorithm for mutual information performance analysis. It is shown that differential decoding implicitly uses a sequence of innovations of the channel process time correlation, and that this sequence is essentially uncorrelated. This enables the use of multiple-symbol differential detection, as a form of block-by-block maximum likelihood sequence detection, for capacity-achieving mutual information performance. It is shown that multiple-symbol differential ML detection of BPSK and QPSK practically achieves the channel information capacity with observation times on the order of only a few symbol intervals.
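The forward-backward machinery the analysis exploits can be sketched generically by quantising the channel phase to K states: trans is the K-by-K transition matrix of the phase process and like[t, k] the likelihood of the received symbol at time t given phase state k. All inputs here are illustrative.

```python
import numpy as np

def forward_backward(trans, like, init):
    """Posterior phase-state probabilities for an HMM (normalised passes)."""
    T, K = like.shape
    alpha, beta = np.zeros((T, K)), np.zeros((T, K))
    alpha[0] = init * like[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass
        alpha[t] = like[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = trans @ (like[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```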
Abstract:
We demonstrate genuine three-mode nonlocality based on phase-space formalism. A Svetlichny-type Bell inequality is formulated in terms of the s-parametrized quasiprobability function. We test such a tool using exemplary forms of three-mode entangled states, identifying the ideal measurement settings required for each state. We thus verify the presence of genuine three-mode nonlocality that cannot be reproduced by local or nonlocal hidden variable models between any two out of three modes. In our results, GHZ- and W-type nonlocality can be fully discriminated. We also study the behavior of genuine tripartite nonlocality under the effects of detection inefficiency and dissipation induced by local thermal environments. Our formalism can be useful to test the sharing of genuine multipartite quantum correlations among the elements of some interesting physical settings, including arrays of trapped ions and intracavity ultracold atoms. DOI: 10.1103/PhysRevA.87.022123
Abstract:
In a companion paper, Seitenzahl et al. have presented a set of three-dimensional delayed detonation models for thermonuclear explosions of near-Chandrasekhar-mass white dwarfs (WDs). Here, we present multidimensional radiative transfer simulations that provide synthetic light curves and spectra for those models. The model sequence explores both changes in the strength of the deflagration phase (which is controlled by the ignition configuration in our models) and the WD central density. In agreement with previous studies, we find that the strength of the deflagration significantly affects the explosion and the observables. Variations in the central density also have an influence on both brightness and colour, but overall it is a secondary parameter in our set of models. In many respects, the models yield a good match to the observed properties of normal Type Ia supernovae (SNe Ia): peak brightness, rise/decline time-scales and synthetic spectra are all in reasonable agreement. There are, however, several differences. In particular, the models are systematically too red around maximum light, manifest spectral line velocities that are a little too high and yield I-band light curves that do not match observations. Although some of these discrepancies may simply relate to approximations made in the modelling, some pose real challenges to the models. If viewed as a complete sequence, our models do not reproduce the observed light-curve width-luminosity relation (WLR) of SNe Ia: all our models show rather similar B-band decline rates, irrespective of peak brightness. This suggests that simple variations in the strength of the deflagration phase in Chandrasekhar-mass deflagration-to-detonation models do not readily explain the observed diversity of normal SNe Ia. This may imply that some other parameter within the Chandrasekhar-mass paradigm is key to the WLR, or that a substantial fraction of normal SNe Ia arise from an alternative explosion scenario.
Abstract:
The design and VLSI implementation of two key components of the class-IV partial-response maximum-likelihood (PR-IV) channel, the adaptive filter and the Viterbi decoder, are described. These blocks are implemented using parameterised VHDL modules from a library of common digital signal processing (DSP) and arithmetic functions. Design studies, based on 0.6 micron 3.3 V standard cell processes, indicate that worst-case sampling rates of 49 mega-samples per second are achievable for this system, with proportionally higher sampling rates for full-custom designs and smaller-dimension processes. Significant increases in the sampling rate, from 49 MHz to approximately 180 MHz, can be achieved by operating four filter modules in parallel, and this implementation has 50% lower power consumption than a pipelined filter operating at the same speed.
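A behavioural (software) sketch of the detection the decoder performs, not of the VHDL implementation: the PR-IV target 1 − D² decomposes into two interleaved dicode (1 − D) channels, each decoded with a two-state Viterbi over input bits in {0, 1}.

```python
def dicode_viterbi(r):
    """Two-state Viterbi for the dicode (1 - D) channel; initial bit assumed 0."""
    INF = float("inf")
    pm, paths = [0.0, INF], [[], []]      # path metric and survivor per state
    for rk in r:
        new_pm, new_paths = [INF, INF], [None, None]
        for s in (0, 1):                  # previous bit (trellis state)
            for b in (0, 1):              # current bit
                m = pm[s] + (rk - (b - s)) ** 2   # Euclidean branch metric
                if m < new_pm[b]:
                    new_pm[b], new_paths[b] = m, paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[0] if pm[0] <= pm[1] else paths[1]

def pr4_viterbi(r):
    """PR-IV (1 - D^2) detection as two interleaved dicode channels."""
    even, odd = dicode_viterbi(r[0::2]), dicode_viterbi(r[1::2])
    out = [0] * len(r)
    out[0::2], out[1::2] = even, odd
    return out
```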