940 results for MCMC, Metropolis Hastings, Gibbs, Bayesian, OBMC, slice sampler, Python
Abstract:
This work describes the probabilistic modelling of a Bayesian mechanism that improves the location estimates of an already deployed localization system by fusing its outputs with those of low-cost binary sensors. The mechanism takes advantage of the localization capabilities of the different technologies usually present in smart-environment deployments. The performance of the proposed algorithm over a real sensor deployment is evaluated using both simulated and real experimental data.
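The fusion step this abstract describes can be illustrated with a toy Bayes update. All numbers, the 1-D corridor, and the binary sensor model below are invented for illustration; the paper's actual models are richer.

```python
import numpy as np

# Hypothetical sketch: fuse a noisy location estimate (Gaussian prior over a
# 1-D corridor of 10 cells) with one binary presence sensor covering cells 6..9.
cells = np.arange(10)
prior = np.exp(-0.5 * ((cells - 3.0) / 2.0) ** 2)   # base system says "near cell 3"
prior /= prior.sum()

# Invented sensor model: P(fire | target in zone) = 0.9, P(fire | outside) = 0.1
in_zone = cells >= 6
likelihood = np.where(in_zone, 0.9, 0.1)            # the sensor has fired

posterior = prior * likelihood                       # Bayes' rule, then normalize
posterior /= posterior.sum()
```

A firing sensor shifts probability mass toward its coverage zone while the prior from the deployed system still anchors the estimate.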
Abstract:
This thesis addresses the automatic detection and tracking of vehicles using computer vision techniques with an on-board monocular camera. This problem has attracted great interest from the automotive industry and the research community, as it is the first step towards driver assistance, collision avoidance and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has been found so far, and it therefore remains an open research topic. The main challenges posed by vision-based detection and tracking are the high variability among vehicles, a background that changes dynamically due to camera motion, and the need to operate in real time. In this context, this thesis proposes a unified framework for vehicle detection and tracking that tackles these problems with a statistical approach. The framework consists of three main blocks, i.e., hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between the blocks is fostered in order to achieve the maximum possible degree of adaptation to changes in the environment and to reduce the computational cost. For the first task, hypothesis generation, two complementary methods are proposed, based respectively on the analysis of appearance and of scene geometry. To this end, the use of a transformed domain in which the perspective of the original image is removed proves especially interesting, since this domain allows a fast search within the image and hence an efficient generation of vehicle location hypotheses.
The final candidates are obtained through a collaborative framework between the original and the transformed domains. A supervised learning method is adopted for hypothesis verification. Thus, some of the most popular feature extraction methods are evaluated and new descriptors are proposed based on knowledge of vehicle appearance. To evaluate the classification effectiveness of these descriptors, and given that no public databases suited to the problem exist, a new database has been generated on which extensive tests have been carried out. Finally, a methodology for fusing the different classifiers is presented, and the combinations that yield the best results are discussed. The core of the proposed framework is a Bayesian tracking method based on particle filters. Contributions are made to the three fundamental elements of these filters: the inference algorithm, the dynamic model and the observation model. Specifically, the use of an MCMC-based sampling method is proposed, which avoids the high computational cost of traditional particle filters and therefore makes the joint modelling of multiple vehicles computationally feasible. Moreover, the aforementioned transformed domain allows the definition of a constant-velocity dynamic model, since the smooth motion of vehicles on highways is preserved. Finally, an observation model that integrates different features is proposed. In particular, besides vehicle appearance, the model also takes into account all the information received from the previous processing blocks. The proposed method runs in real time on a general-purpose computer and delivers outstanding results compared with traditional methods.
ABSTRACT This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community as it is the first step for driver assistance and collision avoidance systems and for eventual autonomous driving. Although much effort has been devoted to it in recent years, no satisfactory solution has yet been devised and thus it remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation.
Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is proven to run in near real-time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
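The MCMC sampling this abstract mentions can be sketched with a minimal random-walk Metropolis-Hastings loop. The target density and all parameters below are toy stand-ins, not the thesis's vehicle posterior; the sketch only shows the accept/reject mechanism that replaces importance resampling in MCMC-based particle filters.

```python
import math
import random

def log_target(x):
    # toy unnormalized log-posterior: a standard normal
    return -0.5 * x * x

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)          # symmetric random-walk proposal
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_alpha:       # Metropolis accept/reject
            x = proposal
        samples.append(x)                            # rejected moves repeat x
    return samples

samples = metropolis_hastings(20000)
```

With enough samples, empirical moments approach those of the target (mean 0, variance 1 here), regardless of the unknown normalizing constant.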
Abstract:
This paper proposes the EvoBANE system. EvoBANE automatically generates Bayesian networks for solving special-purpose problems. EvoBANE evolves a population of individuals that codify Bayesian networks until it finds a near-optimal individual that solves a given classification problem. EvoBANE has the flexibility to modify the constraints that condition the solution search space, self-adapting to the specifications of the problem to be solved. The system extends the GGEAS architecture. GGEAS is a general-purpose grammar-guided evolutionary automatic system whose modular structure favors its application to the automatic construction of intelligent systems. EvoBANE has been applied to two classification benchmark datasets belonging to different application domains, and statistically compared with a genetic algorithm performing the same tasks. Results show that the proposed system performed better, as it manages different complexity constraints in order to find the simplest solution that best solves every problem.
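The evolutionary search EvoBANE performs can be caricatured with a tiny genetic algorithm over bitstrings encoding which edges of a network are present. The encoding, the fitness function, and all constants below are invented for illustration and are not EvoBANE's actual grammar-guided representation; the sketch only shows the select/crossover/mutate loop with a sparsity-penalized score.

```python
import random

REQUIRED = {0, 2, 5}          # edge indices the toy fitness rewards (hypothetical)
N_EDGES = 8

def fitness(bits):
    covered = sum(1 for i in REQUIRED if bits[i])
    sparsity_penalty = 0.1 * sum(bits)               # prefer simpler networks
    return covered - sparsity_penalty

def evolve(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_EDGES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_EDGES)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(N_EDGES)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The penalty term mirrors the abstract's point: among structures that solve the problem, the search prefers the simplest one.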
Abstract:
Belief propagation (BP) is a technique for distributed inference in wireless networks and is often used even when the underlying graphical model contains cycles. In this paper, we propose a uniformly reweighted BP scheme that reduces the impact of cycles by weighting messages by a constant "edge appearance probability" ρ ≤ 1. We apply this algorithm to distributed binary hypothesis testing problems (e.g., distributed detection) in wireless networks with Markov random field models. We demonstrate that in the considered setting the proposed method outperforms standard BP, while maintaining similar complexity. We then show that the optimal ρ can be approximated as a simple function of the average node degree, and can hence be computed in a distributed fashion through a consensus algorithm.
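A minimal sketch of reweighted message passing on a 3-node binary cycle, assuming the TRW-style update with a single uniform edge weight ρ (the potentials, schedule, and iteration count below are illustrative, not taken from the paper):

```python
import math

rho = 0.7                                                # uniform edge appearance probability
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]
phi = {v: [0.6, 0.4] for v in nodes}                     # node potentials (illustrative)
psi = lambda a, b: 1.5 if a == b else 0.5                # attractive pairwise potential

# directed messages m[(u, v)][x_v], initialized uniform
m = {(u, v): [1.0, 1.0] for u, v in edges + [(v, u) for u, v in edges]}

def neighbors(u):
    return [v for v in nodes if (u, v) in m]

for _ in range(50):                                      # synchronous updates
    new = {}
    for (u, v) in m:
        msg = []
        for xv in (0, 1):
            s = 0.0
            for xu in (0, 1):
                prod = phi[u][xu] * psi(xu, xv) ** (1.0 / rho)
                for w in neighbors(u):
                    if w != v:
                        prod *= m[(w, u)][xu] ** rho     # incoming messages raised to rho
                prod *= m[(v, u)][xu] ** (rho - 1.0)     # reweighted backward message
                s += prod
            msg.append(s)
        z = sum(msg)
        new[(u, v)] = [x / z for x in msg]
    m = new

# reweighted beliefs
belief = {}
for v in nodes:
    b = [phi[v][x] * math.prod(m[(u, v)][x] ** rho for u in neighbors(v)) for x in (0, 1)]
    z = sum(b)
    belief[v] = [x / z for x in b]
```

Setting ρ = 1 recovers standard BP; ρ < 1 damps the double-counting that cycles cause.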
Abstract:
In this paper we propose a new method for the automatic detection and tracking of road traffic signs using an on-board single camera. This method aims to increase the reliability of the detections so that it can boost the performance of any traffic sign recognition scheme. The proposed approach exploits a combination of different features, such as color, appearance, and tracking information. This information is introduced into a recursive Bayesian decision framework, in which prior probabilities are dynamically adapted to tracking results. This decision scheme obtains a number of candidate regions in the image according to their HS (Hue-Saturation) color values. Finally, a Kalman filter with adaptive noise tuning provides the required temporal and spatial coherence to the estimates. Results have shown that the proposed method achieves high detection rates in challenging scenarios, including illumination changes, rapid motion and significant perspective distortion.
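The Kalman filtering step can be sketched in one dimension. The adaptive noise rule below (inflating the measurement variance R from recent innovations) is a hypothetical stand-in for the paper's tuning, which the abstract does not specify; all constants are invented.

```python
# Illustrative 1-D constant-position Kalman filter with a simple
# innovation-based adaptive measurement-noise rule (hypothetical).
def kalman_track(measurements, q=0.01, r0=1.0):
    x, p, r = measurements[0], 1.0, r0      # state, state variance, meas. variance
    estimates = []
    for z in measurements:
        p += q                               # predict: add process noise
        innovation = z - x
        r = 0.9 * r + 0.1 * innovation ** 2  # adapt R toward recent innovation power
        k = p / (p + r)                      # Kalman gain
        x += k * innovation                  # update state
        p *= 1 - k                           # update variance
        estimates.append(x)
    return estimates

est = kalman_track([1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 1.05])
```

Because R inflates when an innovation is large, the outlier measurement (5.0) is strongly damped instead of dragging the track away.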
Abstract:
Many of the emerging telecom services make use of Outer Edge Networks, in particular Home Area Networks. The configuration and maintenance of such services may not be under the full control of the telecom operator, which still needs to guarantee the service quality experienced by the consumer. Diagnosing service faults in these scenarios becomes especially difficult, since there may not be full visibility between different domains. This paper describes the fault diagnosis solution developed in the MAGNETO project, based on the application of Bayesian inference to deal with the uncertainty. It also takes advantage of a distributed framework to deploy diagnosis components in the different domains and network elements involved, spanning both the telecom operator and the Outer Edge networks. In addition, MAGNETO features self-learning capabilities to automatically improve diagnosis knowledge over time, and a partition mechanism that allows breaking down the overall diagnosis knowledge into smaller subsets. The MAGNETO solution has been prototyped and adapted to a particular outer edge scenario, and has been further validated on a real testbed. Evaluation of the results shows the potential of our approach to deal with fault management of outer edge networks.
Abstract:
In recent years there has been a notable proliferation of works and studies on the properties of self-compacting concrete. Among them, durability is the least treated aspect, and studies focusing on one particular durability problem, chloride penetration, are especially scarce, despite it being a basic issue for all structural elements exposed to a marine environment. This is the main line of the present work, which is accompanied by a further series of tests to corroborate the results obtained. Accordingly, the general objective of this research is to study the influence of nano-silica addition on both microstructural and durability aspects of self-compacting concretes. Since this general objective is very ambitious and requires time and a multitude of tests combining numerous variables, this research work focuses on the following particular objectives within the general line of research: to evaluate the changes produced in the fresh-state properties of the different concretes tested; to evaluate the changes produced in their mechanical properties; to determine the changes in the pore structure of the different concretes tested; and to determine the changes in the hydrated components of the cement matrix. To meet this objective, the behaviour of four types of concrete made with the same cement has been compared: a conventional concrete without addition, a self-compacting concrete without addition, a self-compacting concrete with 2.5% nano-silica addition, and a self-compacting concrete with 5% nano-silica addition.
The stages followed in this work are the following: a literature review on self-compacting concretes and on nano-silica addition; study and selection of the mix designs for the concretes under study (conventional concrete, self-compacting concrete without additions, and self-compacting concretes with nano-silica addition); evaluation of the conventional and self-compacting concretes in the fresh state according to current standards and to the requirements of the Spanish Structural Concrete Code (EHE-08); evaluation of the mechanical properties of the hardened concretes by means of compressive strength tests; microstructural characterization of the concretes by mercury intrusion porosimetry (MIP) and thermal analysis (TG-DTA); evaluation of the behaviour of the concretes in durability-specific tests, namely electrical resistivity and chloride penetration; and a comparative study of the results obtained, establishing relationships between the mix design and the behaviour of each concrete in order to set usage recommendations.
Abstract:
The main purpose of a gene interaction network is to map the relationships among the genes that remain out of sight when a genomic study is tackled. DNA microarrays allow the expression of thousands of genes to be measured at the same time. These data constitute the numeric seed for the induction of the gene networks. In this paper, we propose a new approach to build gene networks by means of Bayesian classifiers, variable selection and bootstrap resampling. The interactions induced by the Bayesian classifiers are based both on the expression levels and on the phenotype information of the supervised variable. Feature selection and bootstrap resampling add reliability and robustness to the overall process, removing false positive findings. The consensus among all the induced models produces a hierarchy of dependences and, thus, of variables. Biologists can define the depth level of the model hierarchy, so the set of interactions and genes involved can vary from a sparse to a dense set. Experimental results show that these networks perform well on classification tasks. The biological validation matches previous biological findings and opens new hypotheses for future studies.
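The bootstrap-consensus idea can be sketched as follows. The relevance score, the thresholds, and the toy data are all invented; the paper induces interactions with Bayesian classifiers, whereas this sketch uses a simple mean-difference score purely to show how resampling filters out unstable findings.

```python
import random
from collections import Counter

def score(values, labels):
    # toy relevance score: |mean(class 1) - mean(class 0)| (stand-in, hypothetical)
    m1 = [v for v, y in zip(values, labels) if y == 1]
    m0 = [v for v, y in zip(values, labels) if y == 0]
    return abs(sum(m1) / len(m1) - sum(m0) / len(m0))

def consensus_edges(data, labels, n_boot=200, top_k=2, min_freq=0.8, seed=0):
    rng = random.Random(seed)
    genes = list(data)
    counts = Counter()
    n = len(labels)
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]       # bootstrap resample
        ys = [labels[i] for i in idx]
        if len(set(ys)) < 2:                             # skip degenerate resamples
            continue
        ranked = sorted(genes, reverse=True,
                        key=lambda g: score([data[g][i] for i in idx], ys))
        counts.update(ranked[:top_k])                    # link phenotype to top genes
    # keep only interactions that recur across most resamples
    return {g for g, c in counts.items() if c / n_boot >= min_freq}

# toy data: geneA tracks the phenotype, geneB and geneC are noise
labels = [0, 0, 0, 0, 1, 1, 1, 1]
data = {"geneA": [0, 1, 0, 1, 5, 6, 5, 6],
        "geneB": [2, 3, 2, 3, 2, 3, 2, 3],
        "geneC": [1, 1, 2, 2, 1, 1, 2, 2]}
edges = consensus_edges(data, labels)
```

Noise genes occasionally rank highly on a single resample, but only the stable association survives the frequency threshold, which is the false-positive-removal effect the abstract describes.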
Abstract:
The objective of this thesis falls within the study of the condition of dam concretes carried out in recent years at the Central Laboratory of CEDEX, which confirms that one of the most important causes of deterioration of hydraulic works in Spain is the alkali-silica reaction. This thesis aims to contribute to a better understanding of the alkali-silica reaction for preventive regulatory purposes, addressing the aspects related to the identification of reactive aggregates in concrete. Knowledge of reactive aggregates in Spain (origin of the reactivity, types of reaction and their behaviour, as well as the tools available for their detection) is essential to prevent the future appearance of this pathology in new structures, either by avoiding the use of reactive aggregates or by taking the necessary preventive measures if their use is unavoidable. The bibliographic study carried out revealed several gaps in the identification and characterization of rapidly reacting aggregates, whose main characteristic is that they are reactive at very low concentrations of different reactive components. To address these gaps, an experimental study was planned, consisting of the analysis of aggregates whose reactivity is known because they were used in works affected by the alkali-silica reaction. A series of standardized tests (petrographic study, accelerated mortar bar test, Gel Pat test and chemical tests) was performed on the coarse aggregate extracted from these structures. The analysis of the experimental results has made it possible to establish the real limitations of the different existing techniques for Spanish reactive aggregates, trying to minimize them for aggregates whose reactivity is due to minority components (rapidly reacting aggregates).
In addition, the use of X-ray diffraction (not standardized) and the creation of a new test (Modified Gel Pat) have been evaluated. Finally, the experimental study has made it possible to establish a test methodology for the study of aggregates that are reactive because of their content of minority components (rapidly reacting aggregates). The objective of this thesis fits into the research programme developed at CEDEX in recent years, focused on the durability of concrete in dams. This research work confirms that one of the main problems related to the deterioration of hydraulic structures is the alkali-silica reaction. This thesis aims to contribute to a better understanding of the alkali-silica reaction, for preventive regulation purposes, considering the aspects related to the identification of reactive aggregates. Knowledge of Spanish reactive aggregates (origin of the reactivity, types of reaction and their behaviour, and the tools available to detect and describe them) is essential to avoid the appearance of this pathology in new structures, either by not using the reactive aggregate or by taking the necessary preventive measures available in the bibliography if its use is inevitable. From the state of the art developed, several gaps have been detected in the detection and description of rapidly reactive aggregates, whose main characteristic is that they are reactive with a low content of some reactive components. An experimental programme has been designed to fill these gaps, consisting of studying the reactivity of aggregates used in Spanish structures affected by the alkali-silica reaction. Several standard tests have been carried out on coarse aggregates removed from the affected structures (petrographic description, accelerated mortar bar test, Gel Pat test and chemical tests).
The analysis of the results obtained for Spanish reactive aggregates reveals the advantages and limitations of each test, trying to minimize the disadvantages in detecting Spanish aggregates that are reactive because of a minority content of rapidly reactive components (rapidly reactive aggregates). Moreover, X-ray diffraction (not standardized) has been tested to detect rapidly reactive aggregates, and a new test has been developed for the same purpose (Optimized Gel Pat Test). Finally, the experimental programme has made it possible to define a methodology for the detection of Spanish rapidly reactive aggregates.
Abstract:
In the presence of a river flood, operators in charge of control must take decisions based on imperfect and incomplete sources of information (e.g., data provided by a limited number of sensors) and partial knowledge about the structure and behavior of the river basin. This is a case of reasoning about a complex dynamic system with uncertainty and real-time constraints, where Bayesian networks can be used to provide effective support. In this paper we describe a solution with spatio-temporal Bayesian networks to be used in the context of emergencies produced by river floods. We first describe a set of types of causal relations for hydrologic processes, with spatial and temporal references, to represent the dynamics of the river basin. Then we describe how this was included in a computer system called SAIDA to provide assistance to operators in charge of control in a river basin. Finally, the paper shows experimental results about the performance of the model.
Abstract:
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the problem of predicting the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson’s Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson’s patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, the Yeast data set, and a real-world Parkinson’s disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.
Abstract:
We present a computing model based on the DNA strand displacement technique which performs Bayesian inference. The model takes single-stranded DNA as input data, representing the presence or absence of a specific molecular signal (the evidence). The program logic encodes the prior probability of a disease and the conditional probability of a signal given the disease using a set of different DNA complexes and their ratios. When the input and program molecules interact, they release a different pair of single-stranded DNA species whose relative proportion represents the application of Bayes' law: the conditional probability of the disease given the signal. The models presented in this paper can empower the application of probabilistic reasoning in genetic diagnosis in vitro.
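The ratio computation described here is ordinary Bayes' law; a sketch with illustrative numbers (not taken from the paper), where the two released strand species stand for the joint probabilities and their relative proportion is the posterior:

```python
# Illustrative probabilities (invented): the prior and the conditionals are
# what the DNA program complexes would encode.
p_disease = 0.01                                     # prior P(D)
p_signal_given_d = 0.95                              # P(signal | D)
p_signal_given_not_d = 0.10                          # P(signal | not D)

# strand species released on observing the signal
strand_pos = p_disease * p_signal_given_d            # proportional to P(D, signal)
strand_neg = (1 - p_disease) * p_signal_given_not_d  # proportional to P(not D, signal)

# relative proportion of the two species = Bayes' law
posterior = strand_pos / (strand_pos + strand_neg)   # P(D | signal)
```

Even with a strong test, the rare-disease prior keeps the posterior modest (about 0.088 here), which is exactly the kind of quantitative correction the in-vitro computation delivers.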
Abstract:
Molecular dynamics simulation of the irradiation of silica with swift heavy ions
Abstract:
In this paper, an innovative approach to performing distributed Bayesian inference using a multi-agent architecture is presented. The final goal is to deal with uncertainty in network diagnosis, but the solution can be applied in other fields. The validation testbed has been a P2P streaming video service. An assessment of the work is presented in order to show its advantages compared with traditional manual processes and other previous systems.