961 results for VARIABLE NEIGHBORHOOD RANDOM FIELDS


Relevance:

100.00%

Publisher:

Abstract:

In this study, a method for vehicle tracking through video analysis based on Markov chain Monte Carlo (MCMC) particle filtering with Metropolis sampling is proposed. The method handles multiple targets with low computational requirements and is therefore well suited to advanced driver-assistance systems that involve real-time operation. It exploits the removed-perspective domain given by inverse perspective mapping (IPM) to define a fast and efficient likelihood model. Additionally, the method incorporates an interaction model based on Markov random fields (MRFs) that captures dependencies between the motions of targets. The proposed method is tested on highway sequences and compared with state-of-the-art methods for vehicle tracking, namely independent target tracking with Kalman filtering (KF) and joint tracking with particle filtering. The results show fewer tracking failures with the proposed method.
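
As a rough illustration of the sampling scheme described above, here is a minimal sketch of one Metropolis step of an MCMC particle filter over multiple targets, with a pairwise MRF interaction term that discourages overlapping vehicles. It is a toy, not the paper's implementation; the likelihood, the interaction potential and all parameters are illustrative assumptions.

```python
# Toy Metropolis step for a multi-target MCMC particle filter with an MRF
# interaction term. States live on the IPM (bird's-eye) ground plane.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(state, observation):
    """Placeholder appearance likelihood in the removed-perspective domain."""
    return np.exp(-0.5 * np.sum((state - observation) ** 2))

def mrf_interaction(states, sigma=2.0):
    """Pairwise MRF potential discouraging two targets from occupying the same spot."""
    phi = 1.0
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            d = np.linalg.norm(states[i] - states[j])
            phi *= 1.0 - np.exp(-d ** 2 / (2 * sigma ** 2))
    return phi

def metropolis_step(states, observations, proposal_std=0.5):
    """Perturb one randomly chosen target and accept/reject the joint state."""
    k = rng.integers(len(states))
    proposal = [s.copy() for s in states]
    proposal[k] = proposal[k] + rng.normal(0, proposal_std, size=states[k].shape)
    p_old = likelihood(states[k], observations[k]) * mrf_interaction(states)
    p_new = likelihood(proposal[k], observations[k]) * mrf_interaction(proposal)
    if rng.random() < min(1.0, p_new / max(p_old, 1e-12)):
        return proposal
    return states

# Toy example: two vehicles on the ground plane (x, y in metres).
states = [np.array([0.0, 10.0]), np.array([3.0, 12.0])]
observations = [np.array([0.2, 10.5]), np.array([2.8, 12.3])]
for _ in range(100):
    states = metropolis_step(states, observations)
print(states)
```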

Relevance:

100.00%

Publisher:

Abstract:

We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of classes. In Bayesian image analysis, this requires fusing local predictions for the class labels with a prior model of segmentations. Markov random fields (MRFs) have been used to incorporate some of this prior knowledge, but this is not entirely satisfactory, as inference in MRFs is NP-hard. The multiscale quadtree model of Bouman and Shapiro (1994) is an attractive alternative, as it is a tree-structured belief network in which inference can be carried out in linear time (Pearl 1988). It is a hierarchical model in which the bottom-level nodes are pixels and higher levels correspond to downsampled versions of the image. The conditional probability tables (CPTs) in the belief network encode how the levels interact. In this paper we discuss two methods of learning the CPTs from training data: (a) maximum likelihood with the EM algorithm and (b) conditional maximum likelihood (CML). Segmentations obtained using networks trained by CML show a statistically significant improvement in performance on synthetic images. We also demonstrate the methods on a real-world outdoor-scene segmentation task.
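
A minimal illustration of inference in such a tree-structured model: a two-level "quadtree" with one parent node and four leaf pixels, two classes, a shared CPT P(child | parent) and per-leaf class likelihoods. The upward-downward (belief propagation) pass below runs in time linear in the number of nodes; all numbers are made up for the example.

```python
# Exact belief propagation on a tiny two-level quadtree (illustrative values).
import numpy as np

cpt = np.array([[0.9, 0.1],    # P(child | parent = 0)
                [0.2, 0.8]])   # P(child | parent = 1)
prior = np.array([0.5, 0.5])          # P(parent)
leaf_like = np.array([[0.7, 0.3],     # per-leaf class likelihoods
                      [0.6, 0.4],
                      [0.2, 0.8],
                      [0.1, 0.9]])

# Upward pass: each leaf sends m(parent) = sum_c P(c | parent) * like(c).
msgs = leaf_like @ cpt.T                    # shape (4, 2)
parent_post = prior * msgs.prod(axis=0)
parent_post /= parent_post.sum()

# Downward pass: each leaf posterior marginalises the parent,
# dividing its own upward message out of the parent belief.
for i, like in enumerate(leaf_like):
    cavity = parent_post / msgs[i]
    leaf_post = like * (cavity @ cpt)
    print(i, leaf_post / leaf_post.sum())
```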

Relevance:

100.00%

Publisher:

Abstract:

Physically based distributed models of catchment hydrology are likely to be made available as engineering tools in the near future. Although these models are based on theoretically acceptable equations of continuity, there are still limitations in the present modelling strategy. Of interest to this thesis are the current modelling assumptions made concerning the effects of soil spatial variability, including formations producing distinct zones of preferential flow. The thesis contains a review of current physically based modelling strategies and a field-based assessment of soil spatial variability. In order to investigate the effects of soil nonuniformity, a fully three-dimensional model of variably saturated flow in porous media is developed. The model is based on a Galerkin finite element approximation to the Richards equation. Access to a vector processor permits numerical solutions on grids containing several thousand node points. The model is applied to a single hillslope segment under various degrees of soil spatial variability. Such variability is introduced by generating random fields of saturated hydraulic conductivity using the turning-bands method. Similar experiments are performed under conditions of preferred soil moisture movement. The results show that the influence of soil variability on subsurface flow may be less significant than suggested in the literature, owing to the integrating effects of three-dimensional flow. Under conditions of widespread infiltration-excess runoff, the results indicate a greater significance of soil nonuniformity. The recognition of zones of preferential flow is also shown to be an important factor in accurate rainfall-runoff modelling. Using the various generated fields of soil variability, experiments are carried out to assess the validity of the commonly used concept of 'effective parameters'. The results of these experiments suggest that such a concept may be valid for modelling subsurface flow; however, the effective parameter is observed to be event-dependent when the dominant mechanism is infiltration-excess runoff.
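
For illustration, the sketch below generates a spatially correlated, lognormally distributed saturated-conductivity field. The thesis uses the turning-bands method; here a simpler FFT-based spectral generator stands in for it, and the grid size, correlation length and ln K statistics are assumed values.

```python
# Spectral (FFT) stand-in for turning-bands generation of a lognormal K field.
import numpy as np

def lognormal_conductivity_field(n=128, dx=1.0, corr_len=10.0,
                                 mean_lnK=-5.0, var_lnK=1.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Gaussian-shaped spectrum: smooth fields correlated over roughly corr_len.
    spectrum = np.exp(-(kx**2 + ky**2) * (np.pi * corr_len) ** 2)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
    field *= np.sqrt(var_lnK) / field.std()      # rescale to the target variance of ln K
    return np.exp(mean_lnK + field)              # K = exp(ln K)

K = lognormal_conductivity_field()
print(K.shape, K.min(), K.max())
```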

Relevance:

100.00%

Publisher:

Abstract:

Natural language understanding (NLU) aims to map sentences to their semantic meaning representations. Statistical approaches to NLU normally require fully annotated training data in which each sentence is paired with its word-level semantic annotations. In this paper, we propose a novel learning framework which trains hidden Markov support vector machines (HM-SVMs) without the use of expensive fully annotated data. In particular, our learning approach takes as input a training set of sentences labeled with abstract semantic annotations encoding the underlying embedded structural relations, and automatically induces derivation rules that map sentences to their semantic meaning representations. The proposed approach has been tested on the DARPA Communicator data and achieved an F-measure of 93.18%, outperforming previously proposed approaches that train the hidden vector state model or conditional random fields from unaligned data, with relative error reductions of 43.3% and 10.6%, respectively.
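
As a quick sanity check on how the reported figures relate, the snippet below converts the relative error reductions back into implied baseline F-measures, under the assumption (not stated in the abstract) that "error" means 1 − F.

```python
# Relative error reduction: rer = (e_base - e_new) / e_base, with e = 1 - F (assumed).
f_new = 0.9318
for rer in (0.433, 0.106):            # reported relative error reductions
    e_base = (1 - f_new) / (1 - rer)  # solve rer = (e_base - e_new) / e_base for e_base
    print(f"relative reduction {rer:.1%} -> implied baseline F ≈ {1 - e_base:.2%}")
```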

Relevance:

100.00%

Publisher:

Abstract:

This thesis explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. Probabilistic graphical structures combine graph theory and probability theory and offer numerous advantages for representing domains involving uncertainty, such as the mental-health domain; this thesis builds on those advantages. The Galatean Risk Screening Tool (GRiST) is a psychological model for mental health risk assessment based on fuzzy sets. The knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. The thesis describes how a chain graph can be developed from the psychological model: the GRiST knowledge structure is decomposed into component parts, which are in turn mapped to equivalent probabilistic graphical structures such as Bayesian belief networks and Markov random fields, producing a composite chain graph that provides a probabilistic classification of risk to complement the expert clinical judgements.
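
As a minimal illustration of the target representation, the sketch below stores a chain graph as a set of directed edges (the Bayesian-network part) plus a set of undirected edges (the MRF part). The node names are hypothetical and are not taken from GRiST.

```python
# A chain graph as a plain data structure: directed + undirected edges.
from dataclasses import dataclass, field

@dataclass
class ChainGraph:
    directed: set = field(default_factory=set)    # (parent, child) pairs
    undirected: set = field(default_factory=set)  # frozenset({a, b}) pairs

    def add_directed(self, parent, child):
        self.directed.add((parent, child))

    def add_undirected(self, a, b):
        self.undirected.add(frozenset((a, b)))

g = ChainGraph()
g.add_directed("overall_risk", "suicide_risk")          # hierarchical (BBN-like) part
g.add_directed("overall_risk", "self_neglect_risk")
g.add_undirected("suicide_risk", "self_neglect_risk")   # interaction (MRF-like) part
print(g)
```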

Relevance:

100.00%

Publisher:

Abstract:

This paper explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. The Galatean Risk Screening Tool [1] is a psychological model for mental health risk assessment based on fuzzy sets. This paper details how the knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. These semantics are formalised by a detailed specification for an XML structure used to represent the expertise. The model's component parts were then mapped to equivalent probabilistic graphical structures such as Bayesian belief networks and Markov random fields to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements. © Springer-Verlag 2010.

Relevance:

100.00%

Publisher:

Abstract:

Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework for training such statistical models without using expensive fully annotated data. In particular, the input of our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic-tag alignment. The proposed framework automatically induces derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA Communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework outperforms two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reductions in F-measure of about 25% and 15%, respectively.
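
For readers unfamiliar with the models involved, below is a generic linear-chain CRF training sketch using the third-party sklearn-crfsuite package (an assumption; this is not the paper's framework). Note the contrast: the toy data here carries word-level labels, whereas the paper's contribution is learning from abstract, unaligned semantic annotations. The flight-domain sentence and tag names are illustrative only.

```python
# Generic linear-chain CRF with sklearn-crfsuite on a toy, fully labeled sentence.
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    return {"lower": w.lower(), "is_digit": w.isdigit(), "position": i}

train_sents = [["flights", "from", "boston", "to", "denver"]]
train_labels = [["O", "O", "FROM.CITY", "O", "TO.CITY"]]   # hypothetical tag set

X = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
y = train_labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```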

Relevance:

100.00%

Publisher:

Abstract:

Data processing services for the Meteosat geostationary satellite are presented. The implemented services correspond to the different levels of remote-sensing data processing, including noise reduction at the preprocessing level, cloud-mask extraction at the low level, and fractal-dimension estimation at the high level. The cloud mask is obtained by Markovian segmentation of infrared data. To overcome the high computational complexity of Markovian segmentation, a parallel algorithm is developed. The fractal dimension of Meteosat data is estimated using fractional Brownian motion models.
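
A minimal sketch of the high-level step: estimating the fractal dimension of an image modelled as fractional Brownian motion, by fitting the Hurst exponent H from the scaling of mean absolute increments and using D = 3 − H for a surface. This is illustrative only, not the service code; the lags and test image are assumptions.

```python
# Fractal dimension of an fBm-like image from the increment scaling E|I(x+d)-I(x)| ~ d^H.
import numpy as np

def fbm_fractal_dimension(img, lags=(1, 2, 4, 8, 16)):
    img = img.astype(float)
    lags = np.array(lags)
    incr = [np.mean(np.abs(img[d:, :] - img[:-d, :])) for d in lags]
    H, _ = np.polyfit(np.log(lags), np.log(incr), 1)   # slope = Hurst exponent
    return 3.0 - H                                     # surface dimension D = 3 - H

rng = np.random.default_rng(0)
# Brownian-sheet-like test image (H ≈ 0.5 along rows, so D ≈ 2.5).
test = np.cumsum(rng.standard_normal((256, 256)), axis=0)
print(fbm_fractal_dimension(test))
```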

Relevance:

100.00%

Publisher:

Abstract:

This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting-task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model, named the Penalty-Free Heterogeneous p-median Problem (PFHPM), is a single-objective version of the HPM that eliminates the original model's main parameter, the penalty factor, which weights the terms of the objective function. Adjusting this parameter controls how the model recovers the hidden category structures present in the data and requires broad knowledge of the problem. Additionally, two complementary formulations of the PFHPM are presented, both mixed-integer linear programming problems, from which lower bounds were obtained. These bounds were used to validate a specialised Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions, solving artificially generated instances from a Monte Carlo simulation as well as real-data instances, even with limited computational resources. The statistical analyses presented in this work suggest that the new model and algorithm can recover the original category structures underlying heterogeneous individuals' perceptions more accurately than the original HPM model and algorithm. Finally, an illustrative application of the PFHPM is presented, along with some insights into new possibilities, such as extending the model to fuzzy environments.
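
A generic Variable Neighborhood Search skeleton for the classic p-median problem is sketched below as a simplified stand-in for the specialised PFHPM algorithm; all parameters and the toy instance are illustrative. Shaking swaps k medians at random, and the local search applies improving single swaps.

```python
# Basic VNS for the classic (homogeneous) p-median problem.
import random

def cost(medians, dist):
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def shake(medians, candidates, k, rng):
    outs = set(rng.sample(sorted(medians), k))
    ins = set(rng.sample(sorted(candidates - medians), k))
    return (medians - outs) | ins

def local_search(medians, candidates, dist):
    improved = True
    while improved:
        improved = False
        for m in list(medians):
            for c in candidates - medians:
                trial = (medians - {m}) | {c}
                if cost(trial, dist) < cost(medians, dist):
                    medians, improved = trial, True
    return medians

def vns(dist, p, k_max=3, iters=50, seed=0):
    rng = random.Random(seed)
    candidates = set(range(len(dist)))
    k_max = min(k_max, p, len(dist) - p)
    best = set(rng.sample(sorted(candidates), p))
    for _ in range(iters):
        k = 1
        while k <= k_max:
            trial = local_search(shake(best, candidates, k, rng), candidates, dist)
            if cost(trial, dist) < cost(best, dist):
                best, k = trial, 1      # improvement: restart at the first neighbourhood
            else:
                k += 1
    return best, cost(best, dist)

# Toy instance: five points on a line, choose p = 2 medians.
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
print(vns(dist, p=2))
```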

Relevance:

100.00%

Publisher:

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied on a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
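
A minimal sketch of the model class mentioned above, hinge-loss Markov random fields: truth values are continuous in [0, 1] and potentials are squared hinges of affine functions, so MAP inference is a convex program. The rule, weights and data below are illustrative and are not taken from the KGI system.

```python
# Tiny hinge-loss MRF: MAP inference as a convex optimisation over [0, 1] values.
import numpy as np
from scipy.optimize import minimize

extracted = np.array([0.9, 0.6, 0.2])   # soft confidences from an extractor
exclusive = [(0, 1)]                     # labels 0 and 1 are mutually exclusive

def energy(label):
    # Rule potential (illustrative): extracted(e, l) -> label(e, l),
    # i.e. max(0, extracted - label)^2 penalises ignoring an extraction.
    e = np.sum(np.maximum(0.0, extracted - label) ** 2)
    # Mutual-exclusion constraint encoded as a weighted hinge potential.
    e += 5.0 * sum(np.maximum(0.0, label[i] + label[j] - 1) ** 2
                   for i, j in exclusive)
    return e

res = minimize(energy, x0=np.full(3, 0.5), bounds=[(0, 1)] * 3)
print(np.round(res.x, 3))
```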

Relevance:

100.00%

Publisher:

Abstract:

In this work, the Kuramoto model is studied on a complete graph, on scale-free networks with a degree distribution P(q) ~ q^(-γ), and in the presence of random fields of constant and of Gaussian-distributed magnitude. For this purpose, the Ott-Antonsen method and an annealed-network approximation were used. On a complete graph in the presence of Gaussian random fields, and on scale-free networks with 2 < γ < 5 in the presence of either type of random field, continuous phase transitions were found. For random fields of constant magnitude on a complete graph and on scale-free networks with γ > 5, continuous (h < √2) and discontinuous (h > √2) phase transitions were found. For a scale-free network with γ = 3, an infinite-order phase transition was observed. The results for the Kuramoto model on a complete graph in the presence of constant-magnitude random fields were compared with simulations, showing good agreement. Regardless of the network topology, the critical coupling constant increases with the magnitude of the field. For the scale-free topology, the critical coupling decreases as γ decreases, and the degree of synchronisation increases with the mean number of connections in the network. The presence of Gaussian random fields on a complete graph and on scale-free networks with γ > 2 does not destroy the continuous phase transition and does not change the critical behaviour of the Kuramoto model.
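
A minimal simulation sketch of the constant-magnitude case on a complete graph is given below, using the mean-field form of the coupling; the exact equations and parameter choices used in the thesis are assumptions here, based on a common form of the random-field Kuramoto model.

```python
# Euler integration of dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i) + h sin(ψ_i − θ_i)
# on a complete graph, with random-field phases ψ_i and constant magnitude h.
import numpy as np

def simulate(N=2000, K=2.0, h=1.0, dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, N)          # oscillator phases
    omega = rng.standard_cauchy(N)                # natural frequencies
    psi = rng.uniform(0, 2 * np.pi, N)            # random-field phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))           # mean-field order parameter
        coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
        theta += dt * (omega + coupling + h * np.sin(psi - theta))
    return np.abs(np.mean(np.exp(1j * theta)))    # degree of synchronisation r

print(simulate(K=3.0, h=0.5))
```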

Relevance:

100.00%

Publisher:

Abstract:

This paper presents our work on decomposing a specific nurse rostering problem by cyclically assigning blocks of shifts, designed with both hard and soft constraints in mind, to groups of nurses. The remaining shifts are then assigned to individual nurses to complete the schedule built from the cyclically generated blocks. The schedules obtained by this decomposition and construction can be further improved by a variable neighborhood search. Significant results are obtained and compared with a genetic algorithm and a variable neighborhood search approach on a problem that was presented to us by our collaborator, ORTEC bv, The Netherlands. We believe the approach has the potential to be extended to a wider range of nurse rostering problems.
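
Purely as an illustration of the cyclic idea, the snippet below rotates a few hypothetical weekly shift blocks among nurse groups; the blocks, groups and shift codes are invented and are not taken from the ORTEC problem.

```python
# Cyclic assignment of pre-designed weekly shift blocks to groups of nurses.
blocks = [["E", "E", "L", "L", "N", "-", "-"],   # E = early, L = late, N = night, - = off
          ["L", "L", "N", "N", "-", "-", "E"],
          ["N", "-", "-", "E", "E", "L", "L"]]
groups = ["group_A", "group_B", "group_C"]

for week in range(3):
    for g, group in enumerate(groups):
        # each group takes the next block in the cycle every week
        print(f"week {week}", group, blocks[(g + week) % len(blocks)])
```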

Relevance:

100.00%

Publisher:

Abstract:

This thesis introduces a variant of the Selective Travelling Salesman Problem, also known in the literature as the Orienteering Problem (OP). In the OP there is a set of potential customers, each associated with a score or profit that the agent collects on visiting it; the objective is to design a route that starts and ends at the depot and maximises the collected score, subject to a maximum limit on the duration of the route. In this work, conflict constraints between customers are considered: if two customers are in conflict, they cannot both be included in the route. In addition, there is a subset of customers that must be visited. Two mathematical models of the problem are proposed, whose main difference is the way subtour elimination is handled. The first model uses sequential constraints inspired by those proposed by Miller et al. (1960), while the second uses multi-commodity flow constraints based on those proposed by Wong (1980) and Claus (1984). Two algorithms are also proposed for solving the problem: the first is a heuristic based on a reactive GRASP (Greedy Randomized Adaptive Search Procedure) scheme whose improvement phase is a general VNS (Variable Neighborhood Search) method, and the second is a decomposition strategy based on column generation. The performance of the proposed algorithms is evaluated through computational experiments on a large set of instances, and the results are compared against the optimal solutions obtained by solving the mathematical models with the Cplex 12.6 solver.
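
For reference, the classical Miller–Tucker–Zemlin (sequential) subtour-elimination constraints are shown below in their standard form for routing models; OP-type formulations such as the first model in this thesis typically adapt them. Here x_{ij} = 1 if the route travels directly from i to j, u_i is the position of node i in the route, node 1 is the depot, and n is the number of nodes.

```latex
\begin{align}
  u_i - u_j + n\,x_{ij} &\le n - 1, && i \ne j,\; i, j \in \{2, \dots, n\},\\
  1 \le u_i &\le n - 1, && i \in \{2, \dots, n\}.
\end{align}
```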

Relevance:

100.00%

Publisher:

Abstract:

Traditional information extraction methods rely mainly on visual-feature-assisted techniques, but by not considering the hierarchical dependencies within the paragraph structure they miss some important information. This paper proposes an integrated approach for extracting academic information from conference Web pages. First, Web pages are segmented into text blocks by a new hybrid page-segmentation algorithm that combines visual features with the DOM structure. These text blocks are then labeled by a Tree-structured Random Fields model, and the block functions are differentiated using various features such as visual features, semantic features and hierarchical dependencies. Finally, an additional post-processing step is introduced to tune the initial annotation results. Our experimental results on real-world data sets demonstrate that the proposed method can effectively and accurately extract the required academic information from conference Web pages. © 2013 Springer-Verlag.
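
A minimal sketch of the DOM-walking half of such a segmentation step is shown below; the paper's hybrid algorithm also uses visual features, which are omitted here, and the tag set and sample HTML are illustrative.

```python
# Splitting a Web page into candidate text blocks with the standard-library
# HTML parser, treating block-level tags as segment boundaries.
from html.parser import HTMLParser

BLOCK_TAGS = {"div", "p", "h1", "h2", "h3", "li", "tr", "table"}

class BlockSegmenter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.blocks, self.pending = [], []

    def handle_starttag(self, tag, attrs):
        if tag in BLOCK_TAGS:
            self.flush_block()

    def handle_data(self, data):
        if data.strip():
            self.pending.append(data.strip())

    def flush_block(self):
        if self.pending:
            self.blocks.append(" ".join(self.pending))
            self.pending = []

seg = BlockSegmenter()
seg.feed("<div><h1>ICWE 2013</h1><p>Paper submission: 1 February 2013</p></div>")
seg.flush_block()
print(seg.blocks)   # ['ICWE 2013', 'Paper submission: 1 February 2013']
```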