31 results for hidden semi markov models
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Hidden Markov models (HMMs) are widely used models for sequential data. As with other probabilistic graphical models, they require the specification of precise probability values, which can be too restrictive for some domains, especially when data are scarce or costly to acquire. We present a generalized version of HMMs, whose quantification can be done by sets of, instead of single, probability distributions. Our models have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. Efficient inference algorithms are developed to address standard HMM usage such as the computation of likelihoods and most probable explanations. Experiments with real data show that the use of imprecise probabilities leads to more reliable inferences without compromising efficiency.
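The likelihood computation mentioned above is, in the precise case, the standard HMM forward recursion. As a point of reference only, here is a minimal Python sketch of that precise forward algorithm; the parameter values and function name are illustrative, and the paper's imprecise variant, which replaces the single matrices below with sets of distributions, is not reproduced here.

```python
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """Standard forward algorithm: P(obs) under a precise HMM.

    pi  : (K,) initial state distribution
    A   : (K, K) transition matrix, A[i, j] = P(next state j | state i)
    B   : (K, M) emission matrix,  B[i, o] = P(observation o | state i)
    obs : sequence of observation indices
    """
    alpha = pi * B[:, obs[0]]            # joint of first observation and state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate one step and absorb next observation
    return alpha.sum()

# toy two-state example with made-up parameters
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
print(hmm_likelihood(pi, A, B, [0, 1, 1, 0]))
```

The imprecise generalization described in the abstract would propagate lower and upper bounds on these quantities rather than a single alpha vector.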
Abstract:
Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be too difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of, instead of single, probability distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as performing filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising the computational efficiency.
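For the most-probable-explanation task mentioned in both abstracts, the precise-HMM counterpart is the Viterbi algorithm. The sketch below is that standard precise version only, with hypothetical parameters; the iHMM algorithms developed in the paper compute bounds over sets of distributions and are not shown.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most probable state sequence (MPE) for a precise HMM, in log space."""
    K, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])   # log-prob of best path ending in each state
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[i, j]: best path into i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    # backtrack from the best final state
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy two-state example with made-up parameters
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
print(viterbi(pi, A, B, [0, 1, 1, 0]))
```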
Abstract:
Functional and non-functional concerns require different programming effort, different techniques and different methodologies when attempting to program efficient parallel/distributed applications. In this work we present a "programmer oriented" methodology based on formal tools that permits reasoning about parallel/distributed program development and refinement. The proposed methodology is semi-formal in that it does not require the use of highly formal tools and techniques, while providing palatable and effective support to programmers developing parallel/distributed applications, in particular when handling non-functional concerns.
Abstract:
In this paper, we present a new approach to visual speech recognition which improves contextual modelling by combining Inter-Frame Dependent and Hidden Markov Models. This approach captures contextual information in visual speech that may be lost using a Hidden Markov Model alone. We apply contextual modelling to a large speaker-independent isolated-digit recognition task, and compare our approach to two commonly adopted feature-based techniques for incorporating speech dynamics. Results are presented from baseline feature-based systems and the combined modelling technique. We illustrate that both of these techniques achieve similar levels of performance when used independently. However, significant improvements in performance can be achieved through a combination of the two. In particular, we report an improvement in excess of 17% in relative Word Error Rate compared to our best baseline system.
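The 17% figure above is a relative reduction in Word Error Rate. As a reminder of how such a relative figure relates to absolute WERs, here is a trivial sketch; the numbers are invented for illustration and are not the paper's results.

```python
def relative_wer_reduction(wer_baseline, wer_new):
    """Relative WER improvement of a new system over a baseline."""
    return (wer_baseline - wer_new) / wer_baseline

# hypothetical numbers: dropping from 12.0% to 9.9% WER is a 17.5% relative reduction
print(relative_wer_reduction(0.120, 0.099))
```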
Abstract:
A scalable large-vocabulary, speaker-independent speech recognition system is being developed using Hidden Markov Models (HMMs) for acoustic modeling and a Weighted Finite State Transducer (WFST) to compile sentence, word, and phoneme models. The system comprises a software backend search and an FPGA-based Gaussian calculation, which are covered here. In this paper, we present an efficient pipelined design implemented both as an embedded peripheral and as a scalable, parallel hardware accelerator. Both architectures have been implemented on an Alpha Data XRC-5T1 reconfigurable computer housing a Virtex-5 SX95T FPGA. The core has been tested and is capable of calculating a full set of Gaussian results from 3825 acoustic models in 9.03 ms, which, coupled with a backend search over a 5000-word vocabulary, has provided an accuracy of over 80%. Parallel implementations have been designed with up to 32 cores and have been successfully implemented with a clock frequency of 133 MHz.
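The Gaussian calculation performed by the FPGA core is, in typical HMM acoustic modelling, a per-frame log-likelihood under Gaussian densities. The sketch below shows that computation in software for orientation only; the diagonal-covariance form, the 39-dimensional feature vector and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def diag_gaussian_loglik(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

# score one hypothetical 39-dimensional feature frame against a bank of 3825 models
rng = np.random.default_rng(0)
x = rng.normal(size=39)
means = rng.normal(size=(3825, 39))
variances = np.ones((3825, 39))
scores = np.array([diag_gaussian_loglik(x, m, v) for m, v in zip(means, variances)])
print(scores.argmax())   # index of the best-scoring acoustic model for this frame
```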
Abstract:
This paper considers the separation and recognition of overlapped speech sentences assuming single-channel observation. A system based on a combination of several different techniques is proposed. The system uses a missing-feature approach for improving crosstalk/noise robustness, a Wiener filter for speech enhancement, hidden Markov models for speech reconstruction, and speaker-dependent/-independent modeling for speaker and speech recognition. We develop the system on the Speech Separation Challenge database, involving a task of separating and recognizing two mixed sentences without assuming advance knowledge of the identity of the speakers or of the signal-to-noise ratio. The paper is an extended version of a previous conference paper submitted for the challenge.
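Of the components listed, the Wiener filter admits a compact illustration: the classical gain SNR/(1+SNR) applied per frequency bin. The sketch below shows only that generic gain computation; the noise estimate, spectrum size and gain flooring are illustrative assumptions and do not reflect the actual system.

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=1e-3):
    """Classical Wiener gain per frequency bin: SNR / (1 + SNR)."""
    snr = np.maximum(noisy_power - noise_power, 0.0) / np.maximum(noise_power, 1e-12)
    return np.maximum(snr / (1.0 + snr), floor)

# apply to one hypothetical short-time spectrum
noisy_spectrum = np.fft.rfft(np.random.default_rng(1).normal(size=512))
noise_power = np.full(noisy_spectrum.shape, 0.5)     # assumed noise power estimate
gain = wiener_gain(np.abs(noisy_spectrum) ** 2, noise_power)
enhanced_spectrum = gain * noisy_spectrum            # enhanced frame, ready for inverse FFT
```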
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
Abstract:
Observations of ε Eri (K2 V) have been made with the Space Telescope Imaging Spectrograph on the Hubble Space Telescope. The spectra obtained show a number of emission lines which can be used to determine, or place limits on, the electron density and pressure. Values of the electron pressure are required in order to make quantitative models of the transition region and inner corona from absolute line fluxes, and to constrain semi-empirical models of the chromosphere. Using line flux ratios in Si II and O IV, a mean electron pressure of P_e = N_e T_e = 4.8 × 10 cm^-3 K is derived. This value is compatible with the lower and upper limits to P_e found from flux ratios in C III, O V and Fe XII. Some inconsistencies, which may be due to small uncertainties in the atomic data used, are discussed.
Abstract:
Seldom have studies taken account of changes in lifestyle habits in the elderly, or investigated their impact on disease-free life expectancy (LE) and LE with cardiovascular disease (CVD). Using data on subjects aged 50+ years from three European cohorts (RCPH, ESTHER and Tromsø), we used multi-state Markov models to calculate the independent and joint effects of smoking, physical activity, obesity and alcohol consumption on LE with and without CVD. Men and women aged 50 years who have a favourable lifestyle (overweight but not obese, light/moderate drinker, non-smoker and participates in vigorous physical activity) lived between 7.4 (in Tromsø men) and 15.7 (in ESTHER women) years longer than those with an unfavourable lifestyle (overweight but not obese, light/moderate drinker, smoker and does not participate in physical activity). The greater part of the extra life years was in terms of "disease-free" years, though a healthy lifestyle was also associated with extra years lived after a CVD event. There are sizeable benefits to LE without CVD and also for survival after CVD onset when people favour a lifestyle characterized by salutary behaviours. Remaining a non-smoker yielded the greatest extra years in overall LE, when compared to the effects of routinely taking physical activity, being overweight but not obese, and drinking in moderation. The majority of the overall LE benefit is in disease-free years. Therefore, it is important for policy makers and the public to know that prevention through maintaining a favourable lifestyle is "never too late".
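As an illustration of the multi-state Markov machinery used in this study, the sketch below reads expected years spent CVD-free and with CVD off a discrete-time three-state chain (CVD-free, CVD, dead). The annual transition probabilities, the 60-year horizon and the state structure are purely hypothetical and are not the cohort estimates from the study.

```python
import numpy as np

# hypothetical annual transition probabilities between states:
# 0 = CVD-free, 1 = with CVD, 2 = dead (absorbing)
P = np.array([[0.95, 0.03, 0.02],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

def expected_years(P, start=0, horizon=60):
    """Expected years of occupancy in each state over a fixed horizon from the start state."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    years = np.zeros(P.shape[0])
    for _ in range(horizon):
        years += dist          # one year of occupancy at the current state distribution
        dist = dist @ P        # advance the chain by one year
    return years

disease_free, with_cvd, _ = expected_years(P)
print(f"Disease-free LE: {disease_free:.1f} years, LE with CVD: {with_cvd:.1f} years")
```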