27 results for Markov Chain Monte Carlo

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

In this study, a method for vehicle tracking through video analysis based on Markov chain Monte Carlo (MCMC) particle filtering with Metropolis sampling is proposed. The method handles multiple targets with low computational requirements and is therefore ideally suited for advanced driver assistance systems that involve real-time operation. The method exploits the removed-perspective domain given by inverse perspective mapping (IPM) to define a fast and efficient likelihood model. Additionally, the method encompasses an interaction model using Markov random fields (MRF) that allows the treatment of dependencies between the motions of targets. The proposed method is tested on highway sequences and compared to state-of-the-art methods for vehicle tracking, namely independent target tracking with Kalman filtering (KF) and joint tracking with particle filtering. The results show fewer tracking failures using the proposed method.
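
A rough sketch of this kind of sampling step is given below: a Metropolis chain explores the joint multi-target state, moving one target at a time. The Gaussian log_likelihood, the dimensions and all parameters are placeholder assumptions, not the paper's IPM-based likelihood or its MRF interaction term.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(state, observation):
    # Placeholder Gaussian likelihood; the paper builds its likelihood
    # in the IPM (bird's-eye) domain, which is not reproduced here.
    return -0.5 * np.sum((state - observation) ** 2)

def mcmc_pf_update(particles, observation, n_iters=500, step=0.1):
    """One MCMC-based particle filter update: run a Metropolis chain
    over the joint multi-target state and keep the visited samples."""
    x = particles[rng.integers(len(particles))].copy()  # start from the prior set
    samples = []
    for _ in range(n_iters):
        proposal = x.copy()
        t = rng.integers(x.shape[0])                    # move one target at a time
        proposal[t] += step * rng.standard_normal(x.shape[1])
        log_a = log_likelihood(proposal, observation) - log_likelihood(x, observation)
        if np.log(rng.random()) < log_a:                # Metropolis accept/reject
            x = proposal
        samples.append(x.copy())
    return np.array(samples[n_iters // 2:])             # discard burn-in

# Toy usage: three targets with 2D positions.
obs = np.array([[1.0, 1.0], [4.0, 2.0], [7.0, 5.0]])
prior = obs[None] + 0.5 * rng.standard_normal((100, 3, 2))
post = mcmc_pf_update(prior, obs)
print(post.mean(axis=0))
```

Perturbing a single randomly chosen target per proposal is what keeps the cost per iteration low compared with jointly perturbing all targets at once.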

Relevance: 100.00%

Abstract:

This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
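
The descriptor idea can be illustrated compactly. The sketch below computes one gradient-orientation histogram per concentric rectangular ring and concatenates them; the function name concentric_rect_descriptor, the ring layout and the bin count are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def concentric_rect_descriptor(gray, n_rings=4, n_bins=8):
    """Illustrative descriptor: one gradient-orientation histogram per
    concentric rectangular ring, concatenated into a short vector."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    h, w = gray.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized Chebyshev-like distance assigns each pixel to a ring.
    d = np.maximum(np.abs(yy - cy) / (h / 2.0), np.abs(xx - cx) / (w / 2.0))
    ring = np.minimum((d * n_rings).astype(int), n_rings - 1)
    feats = []
    for r in range(n_rings):
        m = ring == r
        hist, _ = np.histogram(ang[m], bins=n_bins, range=(0, np.pi),
                               weights=mag[m])
        feats.append(hist / (hist.sum() + 1e-9))     # per-ring normalization
    return np.concatenate(feats)                     # n_rings * n_bins values

print(concentric_rect_descriptor(np.random.rand(32, 32)).shape)  # (32,)
```

With 4 rings and 8 bins the feature vector has only 32 components, which is the kind of small feature space the abstract argues is needed for real-time classification.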

Relevance: 100.00%

Abstract:

This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community, as it is the first step towards driver assistance and collision avoidance systems and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been devised, and it therefore remains an open research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment is achieved and the computational cost is reduced. Two complementary strategies are proposed to address the first task, hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. Due to the lack of appropriate public databases, a new database is generated, on which the classification performance of the descriptors is extensively tested. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made to its three key elements: the inference algorithm, the dynamic model, and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. In addition, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is shown to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
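
A minimal sketch of a constant-velocity prediction step in the rectified (bird's-eye) domain, where highway motion is approximately linear; the state layout and the noise scales q_pos and q_vel are illustrative assumptions.

```python
import numpy as np

def cv_predict(state, dt=1.0, q_pos=0.05, q_vel=0.2, rng=np.random.default_rng()):
    """Constant-velocity motion model: state = [x, y, vx, vy] in the
    rectified domain, propagated one step with additive Gaussian noise."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    noise = rng.normal(0, [q_pos, q_pos, q_vel, q_vel])
    return F @ state + noise

s = np.array([10.0, 2.0, 0.0, 1.5])   # positions (m) and velocities (m/s)
for _ in range(3):
    s = cv_predict(s)
    print(np.round(s, 2))
```

The point of the rectified domain is precisely that this simple linear model stays valid, whereas in the original perspective image the same motion would look strongly nonlinear.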

Relevance: 100.00%

Abstract:

Markov chain Monte Carlo methods are widely used in signal processing and communications for statistical inference and stochastic optimization. In this work, we introduce an efficient adaptive Metropolis-Hastings algorithm to draw samples from generic multimodal and multidimensional target distributions. The proposal density is a mixture of Gaussian densities whose parameters (weights, mean vectors and covariance matrices) are all updated using the previously generated samples, applying simple recursive rules. Numerical results for the one- and two-dimensional cases are provided.
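
The recursive-adaptation idea can be sketched with a single Gaussian component standing in for the paper's full mixture. The code below is a Haario-style adaptive Metropolis with a random-walk proposal, which avoids the proposal-density correction an adapted independence mixture would require; the target, names and parameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Bimodal toy target; stands in for the generic multimodal
    # densities considered in the paper.
    a = -0.5 * np.sum((x - 3.0) ** 2)
    b = -0.5 * np.sum((x + 3.0) ** 2)
    return np.logaddexp(a, b)

def adaptive_metropolis(dim=2, n=20000, eps=1e-6):
    """Adaptive Metropolis sketch: the random-walk proposal covariance
    is updated recursively from all past samples."""
    sd = 2.4 ** 2 / dim                      # standard scaling factor
    x = rng.standard_normal(dim)
    mean = x.copy()
    cov = np.eye(dim)
    chain = np.empty((n, dim))
    for t in range(1, n + 1):
        prop = rng.multivariate_normal(x, sd * cov + eps * np.eye(dim))
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain[t - 1] = x
        # Simple recursive updates of the running mean and covariance.
        delta = x - mean
        mean += delta / t
        cov += (np.outer(delta, x - mean) - cov) / t
    return chain

samples = adaptive_metropolis()
print(samples[5000:].mean(axis=0))   # near 0 if both modes are visited
```

The recursive mean/covariance updates cost O(dim^2) per iteration, so adaptation adds almost nothing to the per-sample cost.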

Relevance: 100.00%

Abstract:

Monte Carlo (MC) methods are widely used in signal processing, machine learning and stochastic optimization. A well-known class of MC methods are Markov chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information using another MCMC technique working on the entire population of current states. These parallel "vertical" chains are driven by random-walk proposals, whereas the "horizontal" MCMC uses an independent proposal, which can easily be adapted by making use of all the generated samples. Numerical results show the advantages of the proposed sampling scheme in terms of mean absolute error, as well as robustness w.r.t. initial values and parameter choice.
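
A loose sketch of the vertical/horizontal structure, assuming a 1D bimodal target: parallel random-walk chains run independently, and a periodic "horizontal" Metropolis-Hastings step uses an independence proposal fitted to the current population. The exact interaction kernel of the paper, and the adaptation safeguards it requires, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    # Bimodal 1D toy target.
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

def interacting_mcmc(n_chains=10, n_iters=3000, rw_step=0.5):
    """Parallel 'vertical' random-walk chains plus a periodic
    'horizontal' MH step with a population-fitted independence proposal."""
    xs = rng.standard_normal(n_chains) * 5.0
    out = []
    for it in range(n_iters):
        # Vertical moves: one random-walk Metropolis step per chain.
        props = xs + rw_step * rng.standard_normal(n_chains)
        acc = np.log(rng.random(n_chains)) < log_target(props) - log_target(xs)
        xs = np.where(acc, props, xs)
        if it % 10 == 0:
            # Horizontal move: independence proposal from a Gaussian
            # fitted to the population (adaptation caveats ignored).
            mu, sd = xs.mean(), xs.std() + 1e-3
            prop = rng.normal(mu, sd, n_chains)
            logq = lambda v: -0.5 * ((v - mu) / sd) ** 2
            log_a = (log_target(prop) - log_target(xs)) + (logq(xs) - logq(prop))
            acc = np.log(rng.random(n_chains)) < log_a
            xs = np.where(acc, prop, xs)
        out.append(xs.copy())
    return np.array(out)

samples = interacting_mcmc()
print(samples[500:].mean())   # near 0 when both modes are covered
```

The horizontal step is what lets a chain stuck in one mode jump to a region already discovered by another chain, which is the source of the robustness to initial values reported in the abstract.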

Relevance: 100.00%

Abstract:

Multi-label classification (MLC) is the supervised learning problem where an instance may be associated with multiple labels. Modeling dependencies between labels allows MLC methods to improve their performance at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies. On the one hand, the original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors down the chain. On the other hand, a recent Bayes-optimal method improves the performance, but is computationally intractable in practice. Here we present a novel double-Monte Carlo scheme (M2CC), both for finding a good chain sequence and for performing efficient inference. The M2CC algorithm remains tractable for high-dimensional data sets and obtains the best overall accuracy, as shown on several real data sets with input dimension as high as 1449 and up to 103 labels.
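
A sketch of the chain-sequence half of the idea, with plain random search standing in for the paper's Monte Carlo scheme, using scikit-learn's ClassifierChain; the data set sizes and the number of sampled orders are toy values.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multioutput import ClassifierChain

# Toy multi-label data, far smaller than the data sets in the paper.
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=6, random_state=0)

rng = np.random.default_rng(0)
best_order, best_score = None, -np.inf
for _ in range(20):                       # sample random chain orders
    order = rng.permutation(Y.shape[1])
    chain = ClassifierChain(LogisticRegression(max_iter=1000), order=order)
    score = cross_val_score(chain, X, Y, cv=3).mean()  # subset accuracy
    if score > best_score:
        best_order, best_score = order, score
print(best_order, round(best_score, 3))
```

Scoring each candidate order by cross-validation is what makes the search expensive; the appeal of a Monte Carlo scheme is spending that budget on promising orders rather than enumerating all factorially many of them.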

Relevance: 100.00%

Abstract:

Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance, at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies, one of the most popular and highest-performing methods for multi-label classification (MLC), a particular case of MDC which involves only binary classes (i.e., labels). The original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors along the chain. Here we present novel Monte Carlo schemes, both for finding a good chain sequence and for performing efficient inference. Our algorithms remain tractable for high-dimensional data sets and obtain the best predictive performance across several real data sets.
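
The inference half can be sketched as well: draw complete label vectors by sampling each label in chain order and return the most frequent vector as an estimate of the joint mode. The cond_probs stand-ins below replace trained per-label classifiers and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_chain_labels(x, cond_probs, n_samples=200):
    """Monte Carlo inference down a classifier chain: sample each label
    given the input and the labels drawn so far, repeat, and return the
    most frequent full label vector."""
    counts = {}
    for _ in range(n_samples):
        y = []
        for p in cond_probs:
            y.append(int(rng.random() < p(x, y)))   # sample next label
        key = tuple(y)
        counts[key] = counts.get(key, 0) + 1
    return max(counts, key=counts.get)

# Toy 3-label chain where each label depends on the previous one.
cond_probs = [
    lambda x, y: 0.8,                               # P(y0=1 | x)
    lambda x, y: 0.9 if y[0] else 0.2,              # P(y1=1 | x, y0)
    lambda x, y: 0.7 if y[1] else 0.1,              # P(y2=1 | x, y0, y1)
]
print(sample_chain_labels(None, cond_probs))        # likely (1, 1, 1)
```

Unlike the greedy pass, which commits to one value per label, the sampler explores many full label paths, which is how it avoids propagating a single early mistake down the chain.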

Relevance: 100.00%

Abstract:

We review the main results from extensive Monte Carlo (MC) simulations on athermal polymer packings in the bulk and under confinement. By employing the simplest possible model of excluded volume, macromolecules are represented as freely-jointed chains of hard spheres of uniform size. Simulations are carried out in a wide concentration range: from very dilute up to very high volume fractions, reaching the maximally random jammed (MRJ) state. We study how factors like chain length, volume fraction and flexibility of bond lengths affect the structure, shape and size of polymers, their packing efficiency and their phase behaviour (disorder–order transition). In addition, we observe how these properties are affected by confinement realized by flat, impenetrable walls in one dimension. Finally, by mapping the parent polymer chains to primitive paths through direct geometrical algorithms, we analyse the characteristics of the entanglement network as a function of packing density.
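
A minimal sketch of one athermal Monte Carlo move for a single freely-jointed hard-sphere chain: a crankshaft rotation of an interior bead, accepted if and only if it creates no overlap. Dense bulk packings, confining walls and the paper's full move set are omitted; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
SIGMA = 1.0          # hard-sphere diameter
BOND = 1.0           # fixed bond length (freely jointed)

def overlaps(chain, i, pos):
    """True if bead i placed at `pos` overlaps any non-bonded bead."""
    for j, q in enumerate(chain):
        if abs(i - j) > 1 and np.linalg.norm(pos - q) < SIGMA:
            return True
    return False

def crankshaft_move(chain, max_angle=0.5):
    """Rotate an interior bead about the axis through its two
    neighbours (bond lengths preserved); accept iff no overlap."""
    i = rng.integers(1, len(chain) - 1)
    a = chain[i + 1] - chain[i - 1]
    a = a / np.linalg.norm(a)
    theta = rng.uniform(-max_angle, max_angle)
    v = chain[i] - chain[i - 1]
    # Rodrigues rotation of v about axis a by angle theta.
    v_rot = (v * np.cos(theta) + np.cross(a, v) * np.sin(theta)
             + a * np.dot(a, v) * (1 - np.cos(theta)))
    new_pos = chain[i - 1] + v_rot
    if not overlaps(chain, i, new_pos):
        chain[i] = new_pos
        return True
    return False

chain = np.array([[i * BOND, 0.0, 0.0] for i in range(10)])
accepted = sum(crankshaft_move(chain) for _ in range(1000))
print(f"acceptance rate: {accepted / 1000:.2f}")
```

Because the model is athermal, there is no Boltzmann factor: the only acceptance criterion is the excluded-volume constraint, which is what drives the packing behaviour the review discusses.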

Relevance: 100.00%

Abstract:

Object Kinetic Monte Carlo models allow for the study of the evolution of the damage created by irradiation up to time scales that are comparable to those achieved experimentally. Therefore, the essential Object Kinetic Monte Carlo parameters can be validated through comparison with experiments. However, this validation is not trivial, since a large number of parameters is necessary, including migration energies of point defects and their clusters, binding energies of point defects in clusters, as well as the interaction radii. This is particularly cumbersome when describing an alloy, such as the Fe–Cr system, which is of interest for fusion energy applications. In this work we describe an Object Kinetic Monte Carlo model for Fe–Cr alloys in the dilute limit. The parameters used in the model come either from density functional theory calculations or from empirical interatomic potentials. This model is used to reproduce the isochronal resistivity recovery experiments on electron-irradiated dilute Fe–Cr alloys performed by Abe and Kuramoto. The comparison between the calculated results and the experiments reveals that an important parameter is the capture radius between substitutional Cr and self-interstitial Fe atoms. A parametric study is presented on the effect of the capture radius on the simulated recovery curves.
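
The core of such models is the residence-time (BKL) kinetic Monte Carlo step, sketched below: pick an event with probability proportional to its Arrhenius rate and advance the clock by an exponentially distributed increment. The attempt frequencies and migration energies in the toy event list are illustrative, not the fitted Fe–Cr parameter set.

```python
import numpy as np

rng = np.random.default_rng(5)
KB = 8.617e-5      # Boltzmann constant, eV/K

def kmc_step(events, temperature):
    """One residence-time kinetic Monte Carlo step over a list of
    (attempt frequency, migration energy) events."""
    rates = np.array([nu * np.exp(-ea / (KB * temperature))
                      for nu, ea in events])
    total = rates.sum()
    chosen = rng.choice(len(events), p=rates / total)   # pick an event
    dt = rng.exponential(1.0 / total)                   # advance the clock
    return chosen, dt

# Toy event list: (attempt frequency in 1/s, migration energy in eV).
events = [(1e13, 0.3),    # e.g., a self-interstitial jump
          (1e13, 0.65)]   # e.g., a vacancy jump
t, T = 0.0, 300.0
for _ in range(5):
    idx, dt = kmc_step(events, T)
    t += dt
    print(f"event {idx} at t = {t:.3e} s")
```

Because the time increment scales with the inverse of the total rate, slow thermally activated processes naturally push the simulated clock out to the experimental time scales mentioned in the abstract.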

Relevance: 100.00%

Abstract:

The uncertainty propagation in fuel cycle calculations due to Nuclear Data (ND) is an important issue both for present fuel cycles (e.g., the high-burnup fuel programme) and for new fuel cycle designs (e.g., fast breeder reactors and ADS). Different error propagation techniques can be used: sensitivity analysis, the Response Surface Method, and the Monte Carlo technique. In this paper, the impact of ND uncertainties on the decay heat and radiotoxicity is assessed in two applications: the Fission Pulse Decay Heat calculation (FPDH) and the conceptual design of the European Facility for Industrial Transmutation (EFIT).
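
A toy sketch of the Monte Carlo technique applied to this problem: sample uncertain decay constants around their nominal values and observe the induced spread of a simplified decay-heat sum. The nuclide list, the uncertainties and the absence of decay chains are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy inventory: (mean decay constant 1/s, relative uncertainty,
# decay energy in MeV). Values are illustrative, not evaluated ND.
nuclides = [(1e-2, 0.05, 1.2),
            (5e-4, 0.10, 0.8),
            (1e-5, 0.20, 0.5)]
N0 = 1e18   # initial atoms of each nuclide (toy value)

def decay_heat(lams, energies, t):
    # Sum of lambda * N(t) * E per nuclide (no decay chains).
    return sum(l * N0 * np.exp(-l * t) * e for l, e in zip(lams, energies))

# Monte Carlo propagation: sample decay constants within their
# uncertainties and look at the spread of the decay heat at t = 100 s.
samples = []
for _ in range(5000):
    lams = [rng.normal(l, rel * l) for l, rel, _ in nuclides]
    energies = [e for _, _, e in nuclides]
    samples.append(decay_heat(lams, energies, 100.0))
samples = np.array(samples)
print(f"decay heat: {samples.mean():.3e} +/- {samples.std():.3e} MeV/s")
```

Unlike sensitivity analysis, this brute-force approach makes no linearity assumption, at the price of one full calculation per sample.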

Relevance: 100.00%

Abstract:

In this study, a methodology is applied for obtaining derived frequency laws (of maximum released discharges and maximum levels reached) within a Monte Carlo simulation environment, for inclusion in a dam risk analysis model. Its behaviour is compared against that of frequency laws obtained with traditionally used techniques.
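
A minimal sketch of deriving a frequency law inside a Monte Carlo loop, assuming a Gumbel model for annual peak inflows and a trivial stand-in for the reservoir routing; return periods are read from Weibull plotting positions. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo loop: sample annual peak inflows, pass them through a
# toy stand-in for the reservoir/spillway model, and derive an
# empirical frequency law of peak released discharges.
n_years = 10000
inflows = rng.gumbel(loc=500.0, scale=150.0, size=n_years)   # m^3/s
outflows = 0.8 * inflows                                     # toy routing

peaks = np.sort(outflows)[::-1]                 # descending
ranks = np.arange(1, n_years + 1)
return_period = (n_years + 1) / ranks           # Weibull plotting positions

for T in (10, 100, 1000):
    q = np.interp(T, return_period[::-1], peaks[::-1])
    print(f"T = {T:>4} yr  ->  peak outflow ~ {q:7.1f} m^3/s")
```

The attraction of the simulated law is that it reflects the reservoir's actual routing behaviour rather than a distribution fitted directly to observed outflows.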

Relevance: 100.00%

Abstract:

Monte Carlo simulations have been carried out to study the effect of temperature on the growth kinetics of a circular grain. This work demonstrates the importance of roughening fluctuations for the growth dynamics, since the effect of thermal fluctuations is stronger in d = 2 than in d = 3. As predicted by d = 3 theories of domain kinetics, the circular domain shrinks linearly with time as A(t) = A(0) - αt, where A(0) and A(t) are the initial and instantaneous areas, respectively. However, in contrast to d = 3, the slope α is strongly temperature dependent for T ≥ 0.6 Tc. An analytical theory which takes the thermal fluctuations into account agrees with the T dependence of the Monte Carlo data in this regime, and this model shows that these fluctuations are responsible for the strong temperature dependence of the growth rate for d = 2. Our results are particularly relevant to the problem of domain growth in surface science.
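
A compact illustration of the setup, assuming a standard 2D Ising lattice with Metropolis dynamics: a circular +1 domain embedded in a -1 background shrinks while its area A(t) is recorded each sweep. The lattice size, initial radius and temperature are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def shrink_circular_domain(L=48, radius=15, T=1.5, sweeps=120):
    """Metropolis Monte Carlo on a 2D Ising lattice (J = 1): record the
    area of a shrinking circular +1 domain after each sweep."""
    yy, xx = np.mgrid[0:L, 0:L]
    spins = np.where((yy - L // 2) ** 2 + (xx - L // 2) ** 2 < radius ** 2, 1, -1)
    areas = []
    for _ in range(sweeps):
        for _ in range(L * L):          # one Monte Carlo sweep
            i, j = rng.integers(L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb   # energy cost of flipping
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
        areas.append(int((spins == 1).sum()))
    return areas

areas = shrink_circular_domain()
print(areas[::30])   # roughly linear decrease of A(t)
```

Fitting the recorded areas to A(t) = A(0) - αt at several temperatures is exactly the kind of measurement from which the temperature dependence of α is extracted.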

Relevance: 100.00%

Abstract:

The simulation of interest rate derivatives is a powerful tool to face current market fluctuations. However, the complexity of the financial models and the way they are processed require exorbitant computation times, which is in clear conflict with the need for processing times as short as possible in order to operate in the financial market. To shorten the computation time of financial derivatives, the use of hardware accelerators becomes a must.
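
As an example of the workload involved, the sketch below prices a zero-coupon bond by Monte Carlo under an assumed Vasicek short-rate model (the abstract does not name a specific model): Euler-discretized rate paths with averaged pathwise discount factors. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def vasicek_zcb_price(r0=0.03, kappa=0.5, theta=0.04, sigma=0.01,
                      T=1.0, n_steps=250, n_paths=100_000):
    """Monte Carlo price of a zero-coupon bond under the Vasicek model:
    dr = kappa*(theta - r)*dt + sigma*dW, price = E[exp(-int r dt)]."""
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        r = r + kappa * (theta - r) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        integral += r * dt                 # accumulate int r dt per path
    return np.exp(-integral).mean()

print(f"zero-coupon bond price: {vasicek_zcb_price():.6f}")
```

The paths are mutually independent, so this inner loop is exactly the kind of embarrassingly parallel computation that maps well onto the hardware accelerators the abstract advocates.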

Relevance: 100.00%

Abstract:

A Monte Carlo computer simulation technique, in which a continuum system is modeled employing a discrete lattice, has been applied to the problem of recrystallization. Primary recrystallization is modeled under conditions where the degree of stored energy is varied and nucleation occurs homogeneously (without regard for position in the microstructure). The nucleation rate is chosen as site-saturated. Temporal evolution of the simulated microstructures is analyzed to provide the time dependence of the recrystallized volume fraction and grain sizes. The recrystallized volume fraction shows sigmoidal variation with time. The data are approximately fit by the Johnson-Mehl-Avrami equation with the expected exponents; however, significant deviations are observed for both small and large recrystallized volume fractions. Under constant-rate nucleation conditions, the propensity for irregular grain shapes is decreased and the density of two-sided grains increases.
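
For reference, the Johnson-Mehl-Avrami form that the simulated fractions are fit against, in a minimal sketch; the rate constant k and Avrami exponent n below are illustrative, since the expected exponent depends on the nucleation mode (site-saturated vs. constant-rate) and the growth dimensionality.

```python
import numpy as np

def jma_fraction(t, k=1e-6, n=2.0):
    """Johnson-Mehl-Avrami recrystallized volume fraction:
    X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * np.power(t, n))

# The sigmoidal curve the simulated fractions are compared with;
# the abstract reports deviations at small and large X.
for t in (100, 500, 1000, 2000, 4000):
    print(f"t = {t:>5}  X = {jma_fraction(t):.3f}")
```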

Relevance: 100.00%

Abstract:

The energy and specific energy absorbed in the main cell compartments (nucleus and cytoplasm) in typical radiobiology experiments are usually estimated by calculations, as they are not accessible to direct measurement. In most previous work, the cell geometry is modelled using a combination of simple mathematical volumes. We propose a method based on high-resolution confocal imaging and ion beam analysis (IBA) to import realistic cell nucleus geometries into Monte Carlo simulations and thus take into account the variety of geometries encountered in a typical cell population. Seventy-six cell nuclei were imaged using confocal microscopy and their chemical composition was measured using IBA. A cellular phantom was created from these data using the ImageJ image analysis software and imported into the Geant4 Monte Carlo simulation toolkit. Total energy and specific energy distributions in the 76 cell nuclei were calculated for two types of irradiation protocols: a 3 MeV alpha particle microbeam used for targeted irradiation and a 239Pu alpha source used for large-angle random irradiation. Qualitative images of the energy deposited along the particle tracks were produced and show good agreement with images of DNA double-strand break signalling proteins obtained experimentally. The methodology presented in this paper provides microdosimetric quantities calculated from realistic cellular volumes. It is based on open-source software that is publicly available.
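
The microdosimetric quantity of interest is easy to state in code: the specific energy z is the energy deposited divided by the mass of the target volume. The sketch below assumes a simple spherical nucleus and full energy deposition by one 3 MeV alpha particle, both illustrative simplifications compared with the imaged geometries used in the paper.

```python
import numpy as np

EV_PER_JOULE = 6.241509e18   # 1 J = 6.241509e18 eV

def specific_energy_gray(energy_ev, radius_um, density_g_cm3=1.0):
    """Specific energy z (Gy = J/kg) from energy deposited (eV)
    in a sphere of the given radius (micrometres)."""
    volume_cm3 = (4.0 / 3.0) * np.pi * (radius_um * 1e-4) ** 3
    mass_kg = density_g_cm3 * volume_cm3 * 1e-3
    return (energy_ev / EV_PER_JOULE) / mass_kg

# One 3 MeV alpha particle fully stopping in a 5-micron-radius nucleus
# (an upper-bound simplification) gives a specific energy near 1 Gy.
print(f"{specific_energy_gray(3e6, 5.0):.3f} Gy")
```

Replacing the sphere with the voxelized nuclei obtained from confocal imaging is precisely what turns this textbook formula into the realistic distributions the paper reports.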