906 results for Synchronous hidden Markov models
Abstract:
Yeast co-expressing rat APOBEC-1 and a fragment of human apolipoprotein B (apoB) mRNA assembled functional editosomes and deaminated C6666 to U in a mooring sequence-dependent fashion. The occurrence of APOBEC-1-complementing proteins suggested a naturally occurring mRNA editing mechanism in yeast. Previously, a hidden Markov model identified seven yeast genes encoding proteins possessing putative zinc-dependent deaminase motifs. Here, only CDD1, a cytidine deaminase, is shown to have the capacity to carry out C→U editing on a reporter mRNA. This is only the second report of a cytidine deaminase that can use mRNA as a substrate. CDD1-dependent editing was growth phase regulated and demonstrated mooring sequence-dependent editing activity. Candidate yeast mRNA substrates were identified based on their homology with the mooring sequence-containing tripartite motif at the editing site of apoB mRNA and their ability to be edited by ectopically expressed APOBEC-1. Naturally occurring yeast mRNAs edited to a significant extent by CDD1 were, however, not detected. We propose that CDD1 be designated an orphan C→U editase until its native RNA substrate, if any, can be identified and that it be added to the CDAR (cytidine deaminase acting on RNA) family of editing enzymes.
Abstract:
Parallel recordings of spike trains of several single cortical neurons in behaving monkeys were analyzed as a hidden Markov process. The parallel spike trains were considered as a multivariate Poisson process whose vector firing rates change with time. As a consequence of this approach, the complete recording can be segmented into a sequence of a few statistically discriminated hidden states, whose dynamics are modeled as a first-order Markov chain. The biological validity and benefits of this approach were examined in several independent ways: (i) the statistical consistency of the segmentation and its correspondence to the behavior of the animals; (ii) direct measurement of the collective flips of activity, obtained by the model; and (iii) the relation between the segmentation and the pair-wise short-term cross-correlations between the recorded spike trains. Comparison with surrogate data was also carried out for each of the above examinations to assure their significance. Our results indicated the existence of well-separated states of activity, within which the firing rates were approximately stationary. With our present data we could reliably discriminate six to eight such states. The transitions between states were fast and were associated with concomitant changes of firing rates of several neurons. Different behavioral modes and stimuli were consistently reflected by different states of neural activity. Moreover, the pair-wise correlations between neurons varied considerably between the different states, supporting the hypothesis that these distinct states were brought about by the cooperative action of many neurons.
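As a hedged illustration of the modelling setup described above (a sketch, not the authors' code), the following fits a Poisson-emission HMM to binned spike counts and segments the recording with Viterbi decoding. It assumes hmmlearn >= 0.2.7, which provides PoissonHMM; the spike-count matrix is a synthetic placeholder.

```python
# Hedged sketch: fit a Poisson-emission HMM to binned spike counts and
# segment the recording with Viterbi decoding. Assumes hmmlearn >= 0.2.7
# (which provides PoissonHMM); `counts` is synthetic placeholder data.
import numpy as np
from hmmlearn.hmm import PoissonHMM

rng = np.random.default_rng(0)
counts = rng.poisson(lam=3.0, size=(2000, 8))   # placeholder: time bins x neurons

model = PoissonHMM(n_components=7, n_iter=100, random_state=0)
model.fit(counts)

states = model.predict(counts)   # Viterbi segmentation into hidden states
print(model.lambdas_)            # per-state vector firing rates (states x neurons)
print(model.transmat_)           # first-order Markov transition matrix
```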
Abstract:
This article uses data from the social survey Allbus 1998 to introduce a method of forecasting elections in a context of electoral volatility. The approach models the processes of change in electoral behaviour, exploring their patterns in order to capture the volatility expressed by voters. The forecast is based on the matrix of transition probabilities, following the logic of Markov chains. Raising the matrix to higher powers, and using the mover-stayer model, are discussed as alternative forecasts. As an example of high volatility, the model uses data from the German general election of 1998. The unification of the two German states in 1990 incorporated around 15 million new voters from East Germany who had limited familiarity with, and no direct experience of, the political culture of West Germany. Under these circumstances, voters were expected to show high volatility.
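A minimal sketch of the forecasting logic described above: propagating a vote-share vector through a transition matrix, raising the matrix to higher powers, and applying a mover-stayer variant. The three-party matrix and shares are invented for illustration, not Allbus 1998 estimates.

```python
# Illustrative Markov-chain election forecast; all numbers are invented.
import numpy as np

P = np.array([[0.80, 0.15, 0.05],     # row-stochastic transition probabilities
              [0.10, 0.85, 0.05],
              [0.20, 0.20, 0.60]])
shares = np.array([0.40, 0.35, 0.25])  # current vote shares

forecast_1 = shares @ P                              # one-step forecast
forecast_2 = shares @ np.linalg.matrix_power(P, 2)   # powering the matrix

s = 0.5                     # mover-stayer: a fraction s of voters never moves
P_ms = s * np.eye(3) + (1 - s) * P                   # movers follow P
forecast_ms = shares @ P_ms
print(forecast_1, forecast_2, forecast_ms)
```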
Abstract:
In this study, we propose a novel method to predict the solvent accessible surface areas of transmembrane residues. For both transmembrane alpha-helix and beta-barrel residues, the correlation coefficients between the predicted and observed accessible surface areas are around 0.65. On the basis of predicted accessible surface areas, residues exposed to the lipid environment or buried inside a protein can be identified by using certain cutoff thresholds. We have extensively examined our approach based on different definitions of accessible surface areas and a variety of sets of control parameters. Given that experimentally determining the structures of membrane proteins is very difficult and membrane proteins are actually abundant in nature, our approach is useful for theoretically modeling membrane protein tertiary structures, particularly for modeling the assembly of transmembrane domains. This approach can be used to annotate the membrane proteins in proteomes to provide extra structural and functional information.
Abstract:
The standard GTM (generative topographic mapping) algorithm assumes that the data on which it is trained consists of independent, identically distributed (iid) vectors. For time series, however, the iid assumption is a poor approximation. In this paper we show how the GTM algorithm can be extended to model time series by incorporating it as the emission density in a hidden Markov model. Since GTM has discrete hidden states we are able to find a tractable EM algorithm, based on the forward-backward algorithm, to train the model. We illustrate the performance of GTM through time using flight recorder data from a helicopter.
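For readers unfamiliar with the EM machinery referenced above, here is a minimal scaled forward-backward sketch of the kind the E-step relies on; the emission matrix B is assumed to be supplied by the GTM density, and all names are illustrative.

```python
# Minimal scaled forward-backward sketch; B[t, k] = p(x_t | state k) would
# come from the GTM emission density in the model described above.
import numpy as np

def forward_backward(pi, A, B):
    """pi: (K,) initial state probs; A: (K, K) transition matrix;
    B: (T, K) per-step emission likelihoods. Returns posteriors and log-lik."""
    T, K = B.shape
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    c = np.zeros(T)                               # per-step scaling factors
    alpha[0] = pi * B[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):                         # scaled forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):                # scaled backward pass
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                          # state responsibilities
    return gamma, np.log(c).sum()                 # and the data log-likelihood
```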
Abstract:
In this report we discuss the problem of combining spatially distributed predictions from neural networks. An example of this problem is the prediction of a wind vector field from remote-sensing data by combining bottom-up predictions (wind vector predictions on a pixel-by-pixel basis) with prior knowledge about wind-field configurations. This task can be achieved using the scaled-likelihood method, which has been used by Morgan and Bourlard (1995) and Smyth (1994) in the context of hidden Markov modelling.
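The scaled-likelihood trick itself is compact enough to sketch: a network trained to output state posteriors is converted to scaled likelihoods by dividing out the class priors, which follows from Bayes' rule since p(x|k)/p(x) = p(k|x)/p(k). The arrays below are toy values.

```python
# Toy sketch of the scaled-likelihood method: posteriors p(state | x) become
# p(x | state) / p(x) after dividing out the class priors (Bayes' rule).
import numpy as np

def scaled_likelihoods(posteriors, priors):
    """posteriors: (N, K) network outputs p(state | x); priors: (K,) p(state).
    Returns scaled likelihoods usable as HMM emission scores."""
    return posteriors / priors

posteriors = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.6, 0.3]])
priors = np.array([0.5, 0.3, 0.2])
print(scaled_likelihoods(posteriors, priors))
```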
Abstract:
WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection-oriented and can provide very high data rates, large service coverage, and flexible quality of service (QoS). Because of the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete partitioning scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM is that it can efficiently model a large-scale system in which the number of stations or connections is generally very high, whereas traditional simulation and analytical approaches (e.g., Markov models) do not perform well because of their high computational complexity. We model the bandwidth management scheme as a queueing network model (QNM) that consists of interacting multiclass queues for different service classes. Closed-form expressions for the state and blocking probability distributions are derived for these schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.
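The paper's maximum-entropy multiclass model is beyond a short sketch, but for flavour, a classic closed-form blocking result of the same family is the Erlang-B recursion for an M/M/c/c loss system. This is a standard textbook result, not the paper's model.

```python
# For flavour only: the Erlang-B recursion, a standard closed-form blocking
# probability for an M/M/c/c loss system; it is adjacent to, but far simpler
# than, the maximum-entropy multiclass model described above.
def erlang_b(offered_erlangs: float, servers: int) -> float:
    """Blocking probability via the stable recurrence
    B(0, a) = 1;  B(c, a) = a B(c-1, a) / (c + a B(c-1, a))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_erlangs * b / (c + offered_erlangs * b)
    return b

print(erlang_b(10.0, 12))  # 10 Erlangs offered to 12 channels
```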
Abstract:
This paper aims to reduce the difference between sketches and photos by synthesizing sketches from photos, and vice versa, and then performing sketch-sketch/photo-photo recognition with subspace-learning-based methods. Pseudo-sketch/pseudo-photo patches are synthesized with an embedded hidden Markov model. Because most local-strategy-based methods assemble these patches by averaging their overlapping areas, which blurs the resulting pseudo-sketch/pseudo-photo, we instead integrate the patches with image quilting. Experiments demonstrate that the proposed method produces pseudo-sketches/pseudo-photos of high quality and achieves promising recognition results.
Abstract:
This paper details the development and evaluation of AstonTAC, an energy broker that successfully participated in the 2012 Power Trading Agent Competition (Power TAC). AstonTAC buys electrical energy from the wholesale market and sells it in the retail market. The main focus of the paper is the broker's bidding strategy in the wholesale market. In particular, it employs a Markov Decision Process (MDP) to purchase energy at low prices in a day-ahead power wholesale market while keeping energy supply and demand balanced. Moreover, we explain how the agent uses a Non-Homogeneous Hidden Markov Model (NHHMM) to forecast energy demand and price. An evaluation and analysis of the 2012 Power TAC finals show that AstonTAC is the only agent that buys energy at low prices in the wholesale market while keeping energy imbalance low.
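As a hedged sketch of the MDP machinery mentioned above: a toy value iteration over random placeholder states, actions, rewards, and transitions, not AstonTAC's actual day-ahead bidding formulation.

```python
# Toy value-iteration sketch; the MDP components are random placeholders,
# not AstonTAC's actual state/action/reward formulation.
import numpy as np

n_states, n_actions, gamma = 4, 3, 0.95
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(-1.0, 1.0, size=(n_states, n_actions))            # reward r(s, a)

V = np.zeros(n_states)
for _ in range(500):                  # value iteration to (near) convergence
    Q = R + gamma * (P @ V)           # action values Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)             # greedy bidding action for each state
print(policy, V)
```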
Abstract:
This paper presents a novel theory for performing multi-agent activity recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable. Complex human activities are composed of sequences of underlying primitive activities. We do not assume that the exact temporal ordering of primitives is necessary, so a complex activity can be represented as an unordered bag (a toy illustration of this representation follows below). Our three-tier architecture comprises low-level video tracking, event analysis and high-level inference. High-level inference is performed using a new, cascading extension of the Rao–Blackwellised Particle Filter. Simulated annealing is used to identify pairs of agents involved in multi-agent activity. We validate our framework using the benchmarked PETS 2006 video surveillance dataset and our own sequences, and achieve a mean recognition F-score of 0.82. Our approach achieves a mean improvement of 17% over a hidden Markov model baseline.
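A toy sketch of the "unordered bag of primitives" representation mentioned above; the activity names, distributions, and multinomial scoring are invented for illustration and are not the paper's inference machinery.

```python
# Toy sketch: score an observed bag of primitive activities against each
# complex-activity model with a multinomial log-likelihood, ignoring order.
from collections import Counter
import math

models = {
    "fight":    {"approach": 0.3, "push": 0.5, "run": 0.2},
    "handover": {"approach": 0.5, "exchange": 0.4, "leave": 0.1},
}
observed = ["approach", "push", "push", "run"]  # order deliberately ignored

def log_score(bag, dist, eps=1e-6):
    return sum(n * math.log(dist.get(p, eps)) for p, n in Counter(bag).items())

best = max(models, key=lambda m: log_score(observed, models[m]))
print(best)  # -> "fight"
```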
Abstract:
The Mitochondrial Carrier Family (MCF) is a signature group of integral membrane proteins that transport metabolites across the mitochondrial inner membrane in eukaryotes. MCF proteins are characterized by six transmembrane segments that assemble to form a highly selective channel for metabolite transport. We discovered a novel MCF member, termed Legionella nucleotide carrier protein (LncP), encoded in the genome of Legionella pneumophila, the causative agent of Legionnaires' disease. LncP was secreted via the bacterial Dot/Icm type IV secretion system into macrophages and assembled in the mitochondrial inner membrane. In a yeast cellular system, LncP induced a dominant-negative phenotype that was rescued by deleting an endogenous ATP carrier. Substrate transport studies on purified LncP reconstituted in liposomes revealed that it catalyzes unidirectional transport and exchange of ATP across membranes, thereby supporting a role for LncP as an ATP transporter. A hidden Markov model revealed further MCF proteins in the intracellular pathogens Legionella longbeachae and Neorickettsia sennetsu, thereby challenging the notion that MCF proteins exist exclusively in eukaryotic organisms.
Abstract:
Entangled quantum states can be given a separable decomposition if we relax the restriction that the local operators be quantum states. Motivated by the construction of classical simulations and local hidden variable models, we construct 'smallest' local sets of operators that achieve this. In other words, given an arbitrary bipartite quantum state we construct convex sets of local operators that allow for a separable decomposition, but that cannot be made smaller while continuing to do so. We then consider two further variants of the problem where the local state spaces are required to contain the local quantum states, and obtain solutions for a variety of cases including a region of pure states around the maximally entangled state. The methods involve calculating certain forms of cross norm. Two of the variants of the problem have a strong relationship to theorems on ensemble decompositions of positive operators, and our results thereby give those theorems an added interpretation. The results generalise those obtained in our previous work on this topic [New J. Phys. 17, 093047 (2015)].
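For reference, the separable decomposition in question has the standard form below; this is a restatement of the abstract's setup, not notation taken from the paper itself.

```latex
% A bipartite state \rho admits a separable decomposition over local operator
% sets \mathcal{A} and \mathcal{B} if
\rho = \sum_i p_i \, A_i \otimes B_i ,
\qquad A_i \in \mathcal{A},\; B_i \in \mathcal{B},\; p_i \ge 0,\; \sum_i p_i = 1 .
% For an entangled \rho this fails when \mathcal{A} and \mathcal{B} contain
% only quantum states, so the local sets must be enlarged; the paper asks how
% small such enlarged sets can be made.
```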
Abstract:
We present and evaluate a novel supervised recurrent neural network architecture, the SARASOM, based on the associative self-organizing map. The performance of the SARASOM is evaluated and compared with the Elman network, as well as with a hidden Markov model (HMM), in a number of prediction tasks using sequences of letters, including some experiments with a reduced lexicon of 15 words. The results were very encouraging, with the SARASOM learning more reliably and achieving higher accuracy than both the Elman network and the HMM.
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering, solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures, see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood, which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective and in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model, whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random-effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
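The abstract compares MAP-DP's simplicity to DP-means, so as a point of reference, here is a minimal DP-means sketch. This is the simpler, degenerate relative, not MAP-DP itself, which retains the Dirichlet-process prior terms; the authors' MAP-DP code is at the URL above.

```python
# Minimal DP-means sketch, shown only as the simpler hard-clustering relative
# that the abstract compares MAP-DP against; not the MAP-DP algorithm itself.
import numpy as np

def dp_means(X, lam, n_iter=50):
    """X: (N, D) data; lam: squared-distance penalty for opening a cluster."""
    centers = [X.mean(axis=0)]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):              # assignment pass
            d2 = np.array([np.sum((x - c) ** 2) for c in centers])
            k = int(d2.argmin())
            if d2[k] > lam:                    # too far from every center:
                centers.append(x.copy())       # open a new cluster at x
                k = len(centers) - 1
            assign[i] = k
        centers = [X[assign == k].mean(axis=0) if np.any(assign == k)
                   else centers[k] for k in range(len(centers))]  # update pass
    return np.array(centers), assign

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, labels = dp_means(X, lam=4.0)
print(len(centers), np.bincount(labels))
```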
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Dissertation, 2012