905 results for HIDDEN-MARKOV MODEL


Relevance:

100.00%

Publisher:

Abstract:

We present results on an extension of our approach to automatic sports video annotation. The sports video is augmented with accelerometer data from wristbands worn by umpires during the game. We solve the problem of automatic segmentation and robust gesture classification using a hierarchical hidden Markov model in conjunction with a filler model. The hierarchical model allows us to consider gestures at different levels of abstraction, and the filler model allows us to handle extraneous umpire movements. Results are presented for labeling video of a game of cricket.
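The filler-model idea can be illustrated with a small decoding sketch (not the authors' implementation): gesture states and a broad background "filler" state compete within a single Viterbi pass, so extraneous movements are absorbed by the filler instead of being forced into a gesture label. All emission parameters, transition weights and the toy accelerometer stream below are illustrative assumptions.

```python
import numpy as np

def log_gauss(x, mean, var):
    """Per-frame log-density of a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def viterbi(log_emit, log_trans, log_start):
    """Standard Viterbi decoding; log_emit has shape (T, n_states)."""
    T, S = log_emit.shape
    delta, back = log_start + log_emit[0], np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # scores[i, j]: from state i to state j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Toy 1-D accelerometer stream: two gestures separated by extraneous movement.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(3.0, 0.3, 30),
                         rng.normal(0.0, 0.3, 40),
                         rng.normal(-3.0, 0.3, 30)])[:, None]

# States: 0 = gesture A, 1 = gesture B, 2 = filler (broad background model).
means, variances = np.array([[3.0], [-3.0], [0.0]]), np.array([[0.2], [0.2], [4.0]])
log_emit = np.stack([log_gauss(stream, m, v) for m, v in zip(means, variances)], axis=1)
log_trans = np.log(np.array([[0.95, 0.00, 0.05],
                             [0.00, 0.95, 0.05],
                             [0.05, 0.05, 0.90]]) + 1e-12)
labels = viterbi(log_emit, log_trans, np.log(np.full(3, 1 / 3)))
print(labels)  # contiguous runs of 0 / 2 / 1: segmentation and gesture labels in one pass
```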

Relevance:

100.00%

Publisher:

Abstract:

Recognising the daily activity patterns of people from low-level sensory data is an important problem. Traditional approaches typically rely on generative models such as the hidden Markov model and on training with fully labelled data. While activity data can be readily acquired from pervasive sensors, e.g. in smart environments, providing the manual labels needed for fully supervised learning is often expensive. In this paper, we propose a new approach based on partially-supervised training of discriminative sequence models such as the conditional random field (CRF) and the maximum entropy Markov model (MEMM). We show that the approach reduces labelling effort while retaining the flexibility and accuracy of the discriminative framework. Our experimental results in the video surveillance domain show that these models can perform better than their generative counterpart (i.e. the partially hidden Markov model), even when a substantial proportion of the labels is unavailable.
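The core quantity behind partially-supervised training of such sequence models can be sketched in a few lines: the conditional likelihood of a partially labelled sequence is the ratio of a constrained forward pass (positions with known labels are clamped, unlabelled positions are summed over) to the unconstrained one. The potentials below are arbitrary stand-ins, not learned CRF/MEMM parameters.

```python
import numpy as np

def constrained_log_partition(log_emit, log_trans, partial_labels):
    """Forward pass over the label lattice restricted to a partial labelling."""
    T, S = log_emit.shape
    def masked(t):
        if partial_labels[t] is None:               # unlabelled position: all labels allowed
            return log_emit[t]
        m = np.full(S, -np.inf)                     # labelled position: clamp to the given label
        m[partial_labels[t]] = log_emit[t, partial_labels[t]]
        return m
    alpha = masked(0)
    for t in range(1, T):
        alpha = np.logaddexp.reduce(alpha[:, None] + log_trans, axis=0) + masked(t)
    return np.logaddexp.reduce(alpha)

rng = np.random.default_rng(1)
T, S = 6, 3
log_emit, log_trans = rng.normal(size=(T, S)), rng.normal(size=(S, S))
partial = [0, None, None, 2, None, 1]               # only three positions are labelled

# log p(observed labels | x) = constrained partition - full partition
num = constrained_log_partition(log_emit, log_trans, partial)
den = constrained_log_partition(log_emit, log_trans, [None] * T)
print("partial-label log-likelihood:", num - den)
```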

Relevance:

100.00%

Publisher:

Abstract:

In this work, we compare two generative models, the Gaussian Mixture Model (GMM) and the Hidden Markov Model (HMM), with a Support Vector Machine (SVM) classifier for the recognition of six human daily activities (i.e., standing, walking, running, jumping, falling, sitting down) from the signals of a single waist-worn tri-axial accelerometer, using 4-fold cross-validation and testing on a total of thirteen subjects, achieving average recognition accuracies of 96.43% and 98.21% in the first experiment and 95.51% and 98.72% in the second, respectively. The results demonstrate that both HMM and GMM are not only able to learn but are also capable of generalisation, with the former outperforming the latter in the recognition of daily activities from a single waist-worn tri-axial accelerometer. In addition, these two generative models enable the assessment of human activities based on acceleration signals of varying length.
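A minimal sketch of the per-class generative classifiers being compared, assuming windowed tri-axial accelerometer data; the window lengths, model sizes and the synthetic demo data are placeholders rather than the paper's exact protocol (hmmlearn and scikit-learn are assumed to be available).

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM        # pip install hmmlearn
from sklearn.mixture import GaussianMixture

ACTIVITIES = ["standing", "walking", "running", "jumping", "falling", "sitting-down"]

def train_models(train_windows):
    """train_windows: dict activity -> list of (T, 3) acceleration windows."""
    hmms, gmms = {}, {}
    for act, wins in train_windows.items():
        X, lengths = np.vstack(wins), [len(w) for w in wins]
        hmms[act] = GaussianHMM(n_components=4, covariance_type="diag",
                                n_iter=25, random_state=0).fit(X, lengths)
        gmms[act] = GaussianMixture(n_components=4, random_state=0).fit(X)
    return hmms, gmms

def classify(window, models):
    """Assign the activity whose model gives the window the highest log-likelihood."""
    return max(models, key=lambda act: models[act].score(window))

# Tiny synthetic demo (random walks standing in for real recordings):
rng = np.random.default_rng(0)
data = {act: [np.cumsum(rng.normal(size=(50, 3)), axis=0) for _ in range(5)]
        for act in ACTIVITIES}
hmms, gmms = train_models(data)
test_window = np.cumsum(rng.normal(size=(50, 3)), axis=0)
print(classify(test_window, hmms), classify(test_window, gmms))
```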

Relevance:

100.00%

Publisher:

Abstract:

We used a geolocation method based on tidal amplitude and water depth to assess the horizontal movements of 14 cod Gadus morhua equipped with time-depth recorders (TDR) in the North Sea and English Channel. Tracks ranged from 40 to 468 d and showed horizontal movements of up to 455 km and periods of continuous localised residence of up to 360 d. Cod spent time both in midwater (43% of total time) and near the seabed (57% of total time). A variety of common vertical movement patterns were seen within periods of both residence and directed horizontal movement. Hence particular patterns of vertical movement could not unequivocally define periods of migration or localised residence. After long horizontal movements, cod tended to adopt resident behaviour for several months and then return to broadly the same location where they were tagged, indicating a geospatial instinct. The results suggest that residence and homing behaviour are important features of Atlantic cod behaviour.
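As a rough illustration of the tidal geolocation principle (not the authors' algorithm): while the fish sits on the seabed, its depth record traces the local tide, so the observed tidal range, time of high water and water depth for a day can be matched against predicted values on a spatial grid. The grid fields, weights and observation values below are hypothetical.

```python
import numpy as np

def locate(obs, grid_range, grid_hw_time, grid_depth,
           w_range=1.0, w_time=1.0, w_depth=0.5):
    """Return the grid index minimising a weighted misfit to one day's observations.

    obs: dict with 'range' (m), 'hw_time' (hours) and 'depth' (m) from the TDR record.
    grid_*: 2-D arrays of predicted tidal range, high-water time and water depth.
    """
    misfit = (w_range * (grid_range - obs["range"]) ** 2
              + w_time * (grid_hw_time - obs["hw_time"]) ** 2
              + w_depth * (grid_depth - obs["depth"]) ** 2)
    return np.unravel_index(np.argmin(misfit), misfit.shape)

# Hypothetical 50 x 60 grid of tidal predictions:
rng = np.random.default_rng(2)
grid_range = rng.uniform(1.0, 6.0, (50, 60))
grid_hw_time = rng.uniform(0.0, 12.4, (50, 60))
grid_depth = rng.uniform(10.0, 80.0, (50, 60))

day_obs = {"range": 3.2, "hw_time": 5.1, "depth": 35.0}   # extracted from the TDR trace
print(locate(day_obs, grid_range, grid_hw_time, grid_depth))
```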

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we address the problems of fully automatic localization and segmentation of 3D vertebral bodies from CT/MR images. We propose a learning-based, unified random forest regression and classification framework to tackle these two problems. More specifically, in the first stage, the localization of 3D vertebral bodies is solved with random forest regression, where we aggregate the votes from a set of randomly sampled image patches to get a probability map of the center of a target vertebral body in a given image. The resultant probability map is then further regularized by a Hidden Markov Model (HMM) to eliminate potential ambiguity caused by the neighboring vertebral bodies. The output from the first stage allows us to define a region of interest (ROI) for the segmentation step, where we use random forest classification to estimate the likelihood of a voxel in the ROI being foreground or background. The estimated likelihood is combined with the prior probability, which is learned from a set of training data, to get the posterior probability of the voxel. The segmentation of the target vertebral body is then obtained by binary thresholding of the estimated probability. We evaluated the present approach on two openly available datasets: 1) 3D T2-weighted spine MR images from 23 patients and 2) 3D spine CT images from 10 patients. Taking manual segmentation as the ground truth (each MR image contains at least 7 vertebral bodies from T11 to L5 and each CT image contains 5 vertebral bodies from L1 to L5), we evaluated the present approach with leave-one-out experiments. Specifically, for the T2-weighted MR images, we achieved a mean localization error of 1.6 mm, a mean segmentation Dice metric of 88.7% and a mean surface distance of 1.5 mm. For the CT images, we achieved a mean localization error of 1.9 mm, a mean segmentation Dice metric of 91.0% and a mean surface distance of 0.9 mm.
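A minimal sketch of the segmentation stage under stated assumptions: a random-forest classifier produces a per-voxel foreground likelihood inside the ROI, this is combined with a learned prior probability, and the resulting posterior is thresholded. The feature extraction, prior map and data are placeholders, and the combination rule is one plausible reading of the described step rather than the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_roi(roi_features, prior_map, clf, threshold=0.5):
    """roi_features: (n_voxels, n_features); prior_map: (n_voxels,) learned prior."""
    likelihood = clf.predict_proba(roi_features)[:, 1]        # per-voxel foreground likelihood
    posterior = likelihood * prior_map
    posterior = posterior / (posterior + (1.0 - likelihood) * (1.0 - prior_map))
    return posterior > threshold

# Training on voxels sampled from ground-truth masks (hypothetical arrays):
rng = np.random.default_rng(3)
X_train, y_train = rng.normal(size=(5000, 20)), rng.integers(0, 2, 5000)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

roi_feats = rng.normal(size=(800, 20))
prior = np.clip(rng.uniform(size=800), 1e-3, 1 - 1e-3)
mask = segment_roi(roi_feats, prior, clf)
print(mask.sum(), "of", mask.size, "ROI voxels labelled as vertebral body")
```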

Relevance:

100.00%

Publisher:

Abstract:

In data science, anomaly detection is the process of identifying items, events or observations that do not conform to the expected patterns in a dataset. As widely acknowledged in the computer vision and security management communities, discovering suspicious events is the key issue in abnormality detection for video surveillance. The important steps in identifying such events include stream data segmentation and hidden pattern discovery. The crucial challenge is that the number of coherent segments in the surveillance stream and the number of traffic patterns are unknown and hard to specify. Therefore, in this paper we revisit the abnormality detection problem through the lens of Bayesian nonparametrics (BNP) and develop a novel usage of BNP methods for this problem. In particular, we employ the Infinite Hidden Markov Model and Bayesian Nonparametric Factor Analysis for stream data segmentation and pattern discovery. In addition, we introduce an interactive system allowing users to inspect and browse suspicious events.
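The Bayesian nonparametric intuition (the number of traffic patterns is not fixed in advance) can be sketched with a stand-in model: here scikit-learn's Dirichlet-process BayesianGaussianMixture takes the place of the Infinite Hidden Markov Model and BNP factor analysis used in the paper, pruning surplus components automatically; frames with low likelihood under the retained patterns are flagged as candidate anomalies. The per-frame descriptors and threshold are illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(4)
# Hypothetical per-frame traffic descriptors (e.g. motion counts in image cells):
normal_frames = np.vstack([rng.poisson(5, (500, 8)), rng.poisson(20, (500, 8))]).astype(float)
test_frames = np.vstack([rng.poisson(5, (10, 8)), rng.poisson(60, (5, 8))]).astype(float)

model = BayesianGaussianMixture(
    n_components=20,                                     # generous truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(normal_frames)

used = (model.weights_ > 1e-2).sum()                     # components actually retained
scores = model.score_samples(test_frames)                # per-frame log-likelihood
threshold = np.percentile(model.score_samples(normal_frames), 1)
print(f"{used} patterns retained; anomalous frames: {np.where(scores < threshold)[0]}")
```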

Relevance:

100.00%

Publisher:

Abstract:

Brain-Computer Interfaces (BCI) play an important role in human-machine communication. Recent communication systems rely on brain signals: users deliberately modulate their brain activity, rather than relying on motor movements, to generate signals that command and control communication devices, robots or computers. In this paper, the aim is to estimate the performance of a BCI system that detects prosthetic motor imagery tasks using only a single channel of electroencephalography (EEG). The participant is asked to imagine moving his arm up or down, and our system detects the movement based on the participant's brain signal. Features are extracted from the brain signal using Mel-Frequency Cepstral Coefficients, and based on these features a Hidden Markov Model is used to decide whether the participant imagined moving up or down. The major advantage of our method is that only one channel is needed to make the decision. Moreover, the method is online, meaning it can output a decision as soon as the signal is presented to the system. One hundred signals were used for testing; on average, 89% of the up/down prosthetic motor imagery tasks were detected correctly. Owing to its speed and accuracy, the method can be used in many applications, such as moving artificial prosthetic limbs and wheelchairs.
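A minimal sketch of the described pipeline, assuming each trial is a 1-D single-channel EEG array: MFCC features are extracted from the signal and one Gaussian HMM per imagined movement is trained, with a new trial assigned to the model giving the higher log-likelihood. The sampling rate, window sizes and model sizes are assumptions, not the paper's settings.

```python
import numpy as np
import librosa                      # pip install librosa
from hmmlearn.hmm import GaussianHMM

SR = 256                            # assumed EEG sampling rate (Hz)

def mfcc_features(trial):
    """trial: 1-D EEG array -> (n_frames, n_mfcc) feature matrix."""
    mfcc = librosa.feature.mfcc(y=trial.astype(float), sr=SR,
                                n_mfcc=13, n_fft=256, hop_length=128)
    return mfcc.T

def train(trials):
    """trials: list of 1-D arrays for one class -> fitted HMM."""
    feats = [mfcc_features(t) for t in trials]
    X, lengths = np.vstack(feats), [len(f) for f in feats]
    return GaussianHMM(n_components=3, covariance_type="diag",
                       n_iter=20, random_state=0).fit(X, lengths)

def predict(trial, hmm_up, hmm_down):
    f = mfcc_features(trial)
    return "up" if hmm_up.score(f) > hmm_down.score(f) else "down"

# Usage (with recorded trials): hmm_up, hmm_down = train(up_trials), train(down_trials)
# decision = predict(new_trial, hmm_up, hmm_down)   # online: one trial at a time
```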

Relevance:

100.00%

Publisher:

Abstract:

Service provisioning is a challenging research area for the design and implementation of autonomic service-oriented software systems. It includes automated QoS management for such systems and their applications. Monitoring, diagnosis and repair are three key features of QoS management. This work presents a self-healing Web-service-based framework that manages QoS degradation at runtime. Our approach is based on proxies: proxies act on meta-level communications and extend the HTTP envelope of the exchanged messages with QoS-related parameter values. QoS data are filtered over time and analysed using statistical functions and a Hidden Markov Model. Detected QoS degradations are handled by the proxies. We evaluated our framework using an orchestrated electronic shop application (FoodShop).
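A minimal monitoring sketch under stated assumptions: the proxy logs per-call response times, a simple smoothing filter is applied, and a two-state Gaussian HMM (normal vs. degraded) is fitted and decoded over the series; entering the high-latency state triggers a repair action. The latency data, filter and repair step are illustrative, not the framework's actual QoS parameters.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def moving_average(x, k=5):
    return np.convolve(x, np.ones(k) / k, mode="valid")

# Hypothetical response-time log (ms) collected by the proxy, with a degradation:
rng = np.random.default_rng(5)
latency = np.concatenate([rng.normal(120, 10, 200), rng.normal(480, 60, 50)])
filtered = moving_average(latency)[:, None]

hmm = GaussianHMM(n_components=2, covariance_type="diag",
                  n_iter=30, random_state=0).fit(filtered)
states = hmm.predict(filtered)
degraded = int(np.argmax(hmm.means_[:, 0]))          # the state with the higher mean latency

if states[-1] == degraded:
    print("QoS degradation detected -> trigger repair (e.g. reroute to a backup service)")
```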

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Text-to-Speech (TTS) is now a mature technology used in many applications. Some modules of a TTS system are language-dependent and, while many resources are available for English, the resources for some languages are still limited. This work describes the development of a complete TTS system for Brazilian Portuguese (PB), and also presents the resources already available. The system uses the MARY platform, and the speech synthesis process is based on hidden Markov models (HMM). Contributions of this work include the implementation of syllabification, stressed-syllable determination and grapheme-to-phoneme (G2P) conversion. The work also describes the steps taken to organise the developed resources and to create a PB voice within MARY. These resources are available and facilitate research on text normalisation and HMM-based synthesis for PB.
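As a usage illustration (not part of the work itself), such a voice can be queried through the MARY platform's HTTP interface once the server is running; the endpoint and parameter names below follow common MARY 5.x conventions but should be treated as assumptions, and the Brazilian Portuguese voice name is a placeholder that depends on the installed voice.

```python
import requests                      # pip install requests

MARY_URL = "http://localhost:59125/process"   # assumed default MARY server address

def synthesize(text, voice, locale="pt-BR", out_path="out.wav"):
    params = {
        "INPUT_TEXT": text,
        "INPUT_TYPE": "TEXT",
        "OUTPUT_TYPE": "AUDIO",
        "AUDIO": "WAVE_FILE",
        "LOCALE": locale,
        "VOICE": voice,              # e.g. the installed HMM-based PB voice (placeholder)
    }
    resp = requests.get(MARY_URL, params=params, timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# synthesize("Sistema texto-fala para o português brasileiro.", voice="nome-da-voz-pb")
```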

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Here, we describe a female patient with autism spectrum disorder and dysmorphic features who harbors a complex genetic alteration involving a de novo balanced translocation t(2;X)(q11;q24), a 5q11 segmental trisomy and a maternally inherited isodisomy on chromosome 5. All the possibly damaging genetic effects of these alterations are discussed. In light of recent findings on the genetic causes of ASD, the hypothesis that all these alterations might be acting in concert and contributing to the phenotype is also considered. (C) 2012 Wiley Periodicals, Inc.

Relevance:

100.00%

Publisher:

Abstract:

The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is performed at each level of the biological information-processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time consuming when applied at this large scale. Thus, in silico methods must be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is independence from the biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo predicts the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo outperformed all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work, a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method, called GPIPE, greatly improves the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE predicts up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis of the composition of the regions surrounding the ω-site allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis proposed in the literature that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
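The spirit of the Balanced SVM modification (preventing over-represented localizations in the training set from dominating the decision) can be sketched with scikit-learn's class weighting as a stand-in; the features, data and class skew below are synthetic placeholders, and this is not the BaCelLo implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Hypothetical sequence-derived feature vectors with a heavily skewed class mix
# (e.g. many examples of one localization, few of another):
X = np.vstack([rng.normal(0.0, 1, (900, 40)), rng.normal(0.2, 1, (100, 40))])
y = np.array([0] * 900 + [1] * 100)

plain = SVC(kernel="rbf").fit(X, y)
balanced = SVC(kernel="rbf", class_weight="balanced").fit(X, y)   # per-class weights ~ 1/frequency

for name, clf in [("plain", plain), ("balanced", balanced)]:
    recall_minor = clf.predict(X[y == 1]).mean()                  # recall on the rare class
    print(f"{name}: recall on the under-represented class = {recall_minor:.2f}")
```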