972 results for Maximum entropy statistical estimate
Abstract:
This thesis addresses the automatic detection and tracking of vehicles using computer vision techniques with an on-board monocular camera. This problem has attracted great interest from the automotive industry and the scientific community, since it is the first step towards driver assistance, accident prevention and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been found, and it therefore remains an open research topic. The main challenges posed by vision-based detection and tracking are the high variability among vehicles, a background that changes dynamically due to camera motion, and the need to operate in real time. In this context, this thesis proposes a unified framework for vehicle detection and tracking that tackles these problems through a statistical approach. The framework consists of three main blocks, i.e., hypothesis generation, hypothesis verification and vehicle tracking, which are carried out sequentially. Nevertheless, the exchange of information between the different blocks is encouraged in order to achieve the highest possible degree of adaptation to changes in the environment and to reduce the computational cost. To address the first task, hypothesis generation, two complementary methods are proposed, based respectively on the analysis of appearance and of scene geometry. To this end, the use of a transformed domain in which the perspective of the original image is removed is especially interesting, since this domain allows a fast search within the image and therefore an efficient generation of vehicle location hypotheses. The final candidates are obtained by means of a collaborative framework between the original and the transformed domains. A supervised learning method is adopted for hypothesis verification. Some of the most popular feature extraction methods are evaluated, and new descriptors are proposed based on knowledge of vehicle appearance. To evaluate the classification effectiveness of these descriptors, and given that no public databases suited to the described problem exist, a new database has been generated on which extensive tests have been carried out. Finally, a methodology for the fusion of the different classifiers is presented and the combinations offering the best results are discussed. The core of the proposed framework is a Bayesian tracking method based on particle filters. Contributions are made to the three fundamental elements of these filters: the inference algorithm, the dynamic model and the observation model. In particular, the use of an MCMC-based sampling method is proposed that avoids the high computational cost of traditional particle filters and therefore makes the joint modelling of multiple vehicles computationally feasible. Moreover, the aforementioned transformed domain allows the definition of a constant-velocity dynamic model, since the smooth motion of vehicles on highways is preserved. Finally, an observation model that integrates different cues is proposed.
In particular, in addition to vehicle appearance, the model also takes into account all the information received from the previous processing blocks. The proposed method runs in real time on a general-purpose computer and delivers outstanding results compared with traditional methods. ABSTRACT This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community as it is the first step for driver assistance and collision avoidance systems and for eventual autonomous driving. Although much effort has been devoted to it in recent years, no satisfactory solution has yet been devised and it thus remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting the knowledge on vehicle appearance. Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is shown to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
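Purely as an illustration of the kind of machinery the abstract describes (a constant-velocity dynamic model and MCMC sampling over a joint multi-vehicle state), the Python sketch below shows one possible Metropolis-Hastings filtering step. Every function, parameter and noise level here (constant_velocity_step, log_likelihood, etc.) is a hypothetical placeholder, not the thesis's actual observation model or implementation.

```python
import numpy as np

# Constant-velocity dynamics in a rectified (bird's-eye) domain:
# each vehicle's state is [x, y, vx, vy], with time step dt.
def constant_velocity_step(state, dt=0.1, accel_noise=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    noise = rng.normal(scale=accel_noise, size=4) * np.array([dt**2 / 2, dt**2 / 2, dt, dt])
    return F @ state + noise

# Placeholder log-likelihood: a real tracker would combine appearance cues
# and detector output; here it only penalizes distance to the measured positions.
def log_likelihood(joint_state, observation):
    return -0.5 * np.sum((joint_state[:, :2] - observation) ** 2)

def mcmc_tracking_step(particles, observation, n_moves=200, step=0.3, rng=None):
    """One MCMC filtering step over the joint state of all vehicles.

    Instead of resampling a joint particle set (whose size would grow
    exponentially with the number of vehicles), a single Markov chain
    explores the joint state, perturbing one vehicle at a time.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Predict every vehicle with the constant-velocity model.
    current = np.array([constant_velocity_step(s, rng=rng) for s in particles])
    samples = []
    for _ in range(n_moves):
        proposal = current.copy()
        i = rng.integers(len(proposal))                    # pick one vehicle
        proposal[i, :2] += rng.normal(scale=step, size=2)  # perturb its position
        log_alpha = log_likelihood(proposal, observation) - log_likelihood(current, observation)
        if np.log(rng.uniform()) < log_alpha:              # Metropolis-Hastings acceptance
            current = proposal
        samples.append(current.copy())
    return np.mean(samples, axis=0)                        # posterior mean estimate

# Hypothetical usage: two vehicles, noisy position measurements in the rectified domain.
init = np.array([[0.0, 0.0, 1.0, 0.0], [5.0, 2.0, 1.0, 0.0]])
obs = np.array([[0.2, 0.1], [5.1, 2.2]])
print(mcmc_tracking_step(init, obs))
```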
Abstract:
The main objective of this study is to evaluate the influence of drying fissures on the mechanical properties of timber beams. To this end, 40 beams of Scots pine (Pinus sylvestris L.), 4200 mm in length and 150x200 mm in cross-section, were tested according to the EN 408 standard. The fissures were recorded in detail, noting their length and position on each face of the beam and measuring their thickness and depth every 100 mm along the beam. Only 10% of the sample was rejected because of fissures, according to the criteria established by the Spanish visual grading standard UNE 56544. To evaluate the influence of the fissures on the mechanical properties, three global parameters based on fissure area, volume or depth and two local parameters based on the maximum depth and the depth in the failure zone are used. The density of the pieces is also determined. These parameters are compared with the mechanical properties (rupture strength, modulus of elasticity and rupture energy), and little relationship is found between them. The best correlations are found between the parameters related to fissure depth and both the modulus of elasticity and the rupture strength. The aim of this study is the evaluation of the influence of drying fissures on the mechanical properties of timber beams. For that purpose, 40 sawn timber pieces of Scots pine (Pinus sylvestris L.), 150x200 mm in cross-section and 4200 mm in length, were tested according to EN 408, obtaining MOR and MOE. The fissures were registered in detail, measuring their length and position on each face of the beam, and their thickness and depth every 100 mm along the length. Only 10% of the pieces were rejected because of fissures, according to the UNE 56544 Spanish visual grading standard. To evaluate the influence of fissures on the mechanical properties, three global parameters were used: the Fissure Area Ratio (ratio between the area occupied by fissures and the total area of the neutral-axis plane of the beam), the Fissure Volume Ratio (ratio between the volume of fissures and the total volume of the beam) and the Fissure Average Depth, together with two local parameters: the Fissure Maximum Depth in the beam and the Fissure Depth in the broken zone of the beam. The density of the beams was also registered. These parameters were compared with the mechanical properties (tensile strength, modulus of elasticity and rupture energy), and little relationship was found between them. The best relationships were found between the parameters that include the depth of the fissures and both the modulus of elasticity and the tensile strength.
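As a minimal sketch of how such global and local fissure parameters might be computed and correlated with the mechanical properties, the Python fragment below uses synthetic placeholder data; the depth profiles, MOR and MOE values are invented, and the simplified ratio definitions are assumptions rather than the study's exact formulas.

```python
import numpy as np
from scipy import stats

# Hypothetical per-beam measurements: fissure depth sampled every 100 mm
# along a 4200 mm beam with a 200 mm deep section (synthetic values).
beam_length_mm, section_depth_mm = 4200.0, 200.0
n_beams = 40
rng = np.random.default_rng(0)
depth_profiles = rng.uniform(0, 60, size=(n_beams, 42))   # mm, one value per 100 mm
mor = rng.normal(40, 8, n_beams)                           # MPa, placeholder rupture strength
moe = rng.normal(10000, 1500, n_beams)                     # MPa, placeholder modulus of elasticity

# Parameters in the spirit of the abstract: average depth, a simplified
# depth-based area ratio, and the local maximum depth.
fissure_avg_depth = depth_profiles.mean(axis=1)
fissure_area_ratio = depth_profiles.sum(axis=1) * 100.0 / (beam_length_mm * section_depth_mm)
fissure_max_depth = depth_profiles.max(axis=1)

for name, param in [("average depth", fissure_avg_depth),
                    ("area ratio", fissure_area_ratio),
                    ("max depth", fissure_max_depth)]:
    r_mor, p_mor = stats.pearsonr(param, mor)
    r_moe, p_moe = stats.pearsonr(param, moe)
    print(f"{name:>13}: r(MOR)={r_mor:+.2f} (p={p_mor:.2f})  r(MOE)={r_moe:+.2f} (p={p_moe:.2f})")
```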
Abstract:
Following the success achieved in previous research projects using non-destructive methods to estimate the physical and mechanical aging of particle and fibre boards, this paper studies the relationships between aging and physical and mechanical changes, using non-destructive measurements of oriented strand board (OSB). 184 pieces of OSB board from a French source were tested to analyze their actual physical and mechanical properties. The same properties were estimated using acoustic non-destructive methods (ultrasound and stress wave velocity) during a physical laboratory aging test. Propagation wave velocity was recorded both with the sensors aligned, edge to edge, and at an angle of 45 degrees with both sensors on the same face of the board; the latter configuration is used because aligned measurements are not possible on site. The velocity results are always higher in the 45-degree measurements. Given the results of the statistical analysis, it can be concluded that there is a strong relationship between the acoustic measurements and the decline in the physical and mechanical properties of the panels due to aging. The authors propose several models to estimate the physical and mechanical properties of the boards, as well as their degree of aging. The best results are obtained using ultrasound, although the difference with respect to the stress wave method is not very significant. A reliable prediction of the degree of deterioration (aging) of the board is presented.
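Purely to illustrate how such acoustic property-estimation models are typically fitted and compared, the sketch below regresses a mechanical property on ultrasound and stress-wave velocity separately and compares their coefficients of determination. All values are synthetic and the variable names are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 184  # number of OSB pieces, as in the study; the values below are synthetic

v_ultrasound = rng.normal(2800, 200, n)            # m/s, placeholder ultrasound velocity
v_stress_wave = rng.normal(2600, 220, n)           # m/s, placeholder stress-wave velocity
moe = 2.0 * v_ultrasound + rng.normal(0, 300, n)   # placeholder mechanical property

def r_squared(x, y):
    """Fit y = a*x + b by least squares and return the coefficient of determination."""
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    return 1.0 - residuals.var() / y.var()

print("R2 ultrasound :", round(r_squared(v_ultrasound, moe), 3))
print("R2 stress wave:", round(r_squared(v_stress_wave, moe), 3))
```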
Abstract:
This paper studies the relationship between aging, physical changes and the results of non-destructive testing of plywood. 176 pieces of plywood were tested to analyze their actual density and to estimate it using non-destructive methods (screw withdrawal force and ultrasound wave velocity) during a laboratory aging test. From the results of the statistical analysis it can be concluded that there is a strong relationship between the non-destructive measurements carried out and the decline in the physical properties of the panels due to aging. The authors propose several models to estimate board density. The best results are obtained with ultrasound. A reliable prediction of the degree of deterioration (aging) of the board is presented.
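As an illustrative sketch only (not the paper's actual model), a two-predictor linear fit of board density from screw withdrawal force and ultrasound velocity could be set up as follows, with synthetic data standing in for the measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 176  # number of plywood pieces in the study; the values below are synthetic

screw_force = rng.normal(900, 120, n)        # N, placeholder screw withdrawal force
v_ultrasound = rng.normal(3000, 250, n)      # m/s, placeholder ultrasound velocity
density = 0.3 * screw_force + 0.1 * v_ultrasound + rng.normal(0, 20, n)  # kg/m3, synthetic

# Design matrix with an intercept column: density ~ b0 + b1*force + b2*velocity
X = np.column_stack([np.ones(n), screw_force, v_ultrasound])
coef, *_ = np.linalg.lstsq(X, density, rcond=None)

pred = X @ coef
r2 = 1.0 - ((density - pred) ** 2).sum() / ((density - density.mean()) ** 2).sum()
print("coefficients:", np.round(coef, 3), " R2:", round(r2, 3))
```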
Abstract:
Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. In addition, regularization is usually used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general and, although the introduction of a penalty is probably the most popular type, it is just one out of multiple forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. The supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner. Finally, we present a heuristic for inducing structures of Gaussian Bayesian networks using L1-regularization as a filter.

Pragmatism is the main motivation for regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that an answer can be given when the problem setting is unstable. As examples, we can mention the fitting of parametric or non-parametric models when there are more parameters than cases in the data set, or the estimation of large covariance matrices. Regularization is also used to improve the bias-variance tradeoff of an estimate. The definition of regularization is therefore very general and, although the introduction of a penalty function is probably the most popular method, it is only one among several possibilities. This thesis works on applications of regularization to obtain sparse representations, where only a subset of the inputs is used. In particular, L1-regularization plays a key role in the search for such sparsity. Most of the contributions presented in the thesis revolve around L1-regularization, although other forms of regularization are also explored (which likewise pursue a sparse model). In addition to presenting a review of L1-regularization and its applications in statistics and machine learning, methodology has been developed for regression, supervised classification and structure learning in graphical models. Within regression, the work has focused mainly on local regression methods, proposing kernel design techniques suited to high-dimensional settings and sparse regression functions. An application of regularized regression techniques to model the response of real neurons is also presented. The advances in supervised classification deal, on the one hand, with the use of regularization to obtain a naive Bayes classifier and, on the other hand, with the development of an algorithm that uses group regularization efficiently and has been applied to the design of brain-computer interfaces. Finally, a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter is presented.
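For concreteness, the penalized form of regularization that the abstract singles out (L1, lasso-type) is commonly written as the following generic estimator; this is the standard textbook formulation rather than a result specific to this thesis:

$$\hat{\beta} \;=\; \arg\min_{\beta}\; -\,\ell(\beta \mid \mathbf{X}, \mathbf{y}) \;+\; \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert, \qquad \lambda \ge 0,$$

where $\ell$ is the log-likelihood and $\lambda$ controls the amount of shrinkage; for sufficiently large $\lambda$ some coefficients are driven exactly to zero, yielding the sparse representations the thesis pursues.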
Abstract:
This paper presents the Expectation Maximization (EM) algorithm applied to the operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state space models. As is well known, the MLE enjoys some optimal properties from a statistical point of view, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two different strategies for choosing the initial values of the EM algorithm when used for operational modal analysis: starting from the parameters estimated by the Stochastic Subspace Identification (SSI) method, and starting from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. The modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm starting from the solution given by SSI is very useful for identifying the vibration modes of a structure, discarding the spurious modes that appear in high-order models and discovering other hidden modes. Similar results are obtained using random starting values, although this strategy allows the solutions from several starting points to be analysed, which overcomes the dependence on the initial values used.
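For reference, the discrete-time stochastic state-space model estimated by both EM and SSI in this setting is conventionally written as follows (generic formulation, not the paper's specific notation):

$$x_{k+1} = A\,x_k + w_k, \qquad y_k = C\,x_k + v_k, \qquad w_k \sim \mathcal{N}(0, Q), \quad v_k \sim \mathcal{N}(0, R),$$

where $y_k$ are the measured responses and $x_k$ the hidden state; the EM algorithm maximizes the likelihood of $\{y_k\}$ over $(A, C, Q, R)$ by alternating a Kalman-smoothing E-step with closed-form M-step updates, and the modal parameters are then extracted from the identified $A$ and $C$.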
Abstract:
This paper presents a time-domain stochastic system identification method based on maximum likelihood estimation (MLE) with the expectation maximization (EM) algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. The benchmark structure is a four-story, two-bay by two-bay steel-frame scale model built in the Earthquake Engineering Research Laboratory at the University of British Columbia, Canada. This paper focuses on Phase I of the analytical benchmark studies. A MATLAB-based finite element analysis code obtained from the IASC-ASCE SHM Task Group web site is used to calculate the dynamic response of the prototype structure, and 100 simulations have been run with it in order to evaluate the proposed identification method. Several techniques exist for system identification; in this work, the stochastic subspace identification (SSI) method has been used for comparison. SSI is a well-known method that computes accurate estimates of the modal parameters. The principles of the SSI method are introduced in the paper, and the proposed MLE with the EM algorithm is then explained in detail. The advantages of the proposed structural identification method can be summarized as follows: (i) the method is based on maximum likelihood, which implies minimum-variance estimates; (ii) EM is a computationally simpler estimation procedure than other optimization algorithms; and (iii) it estimates more parameters than SSI, and these estimates are accurate. On the other hand, the main disadvantages of the method are: (i) the EM algorithm is an iterative procedure that takes time to converge; and (ii) the method needs starting values for the parameters. The modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using both the SSI method and the proposed MLE + EM method. The numerical results show that the proposed method identifies eigenfrequencies, damping ratios and mode shapes reasonably well, even in the presence of 10% measurement noise, and that these modal parameters are more accurate than those estimated by SSI.
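Once a discrete-time state matrix has been identified (whether by SSI or by MLE with EM), the eigenfrequencies, damping ratios and mode shapes mentioned in the abstract are conventionally recovered from its eigenvalues. The sketch below shows that standard post-processing step on an arbitrary synthetic two-state model, not the benchmark's identified matrices.

```python
import numpy as np
from scipy.linalg import expm

def modal_parameters(A, C, dt):
    """Natural frequencies (Hz), damping ratios and mode shapes from a
    discrete-time state-space model x_{k+1} = A x_k, y_k = C x_k.

    Standard post-processing used after SSI or MLE/EM identification:
    discrete eigenvalues are mapped to continuous time via lambda_c = ln(lambda_d)/dt.
    """
    eigvals, eigvecs = np.linalg.eig(A)
    lam_c = np.log(eigvals) / dt                 # continuous-time poles
    freqs = np.abs(lam_c) / (2.0 * np.pi)        # natural frequencies in Hz
    damping = -np.real(lam_c) / np.abs(lam_c)    # damping ratios
    shapes = C @ eigvecs                         # observed mode shapes
    return freqs, damping, shapes

# Example with a synthetic single-mode (two-state) model: 5 Hz, 2% damping, dt = 0.01 s.
dt = 0.01
wn, zeta = 2 * np.pi * 5.0, 0.02
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
A = expm(Ac * dt)                                # exact discretization of the model
C = np.array([[1.0, 0.0]])
freqs, damping, shapes = modal_parameters(A, C, dt)
print("frequencies (Hz):", np.round(freqs, 3))
print("damping ratios  :", np.round(damping, 4))
```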