992 results for image fusion


Relevance:

30.00%

Publisher:

Abstract:

C.M. Onyango, J.A. Marchant and R. Zwiggelaar, 'Modelling uncertainty in agricultural image analysis', Computers and Electronics in Agriculture 17 (3), 295-305 (1997)


Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain.
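The hierarchy-inference idea can be sketched with a toy routine (our own illustrative code, not the ARTMAP network itself): treat class A as a sub-class of B whenever every example labelled A also carries label B.

```python
def infer_hierarchy(labelled_examples):
    """Infer subset relations among classes from one-to-many labels:
    A is treated as a sub-class of B if every example labelled A
    is also labelled B."""
    classes = set().union(*labelled_examples)
    parents = {}
    for a in classes:
        examples_a = [s for s in labelled_examples if a in s]
        for b in classes:
            if a != b and all(b in s for s in examples_a):
                parents.setdefault(a, set()).add(b)
    return parents

# Evidence in the style of the abstract: "car" always co-occurs with
# "vehicle" and "man-made", so both are inferred as its ancestors.
data = [{"car", "vehicle", "man-made"},
        {"truck", "vehicle", "man-made"},
        {"building", "man-made"}]
tree = infer_hierarchy(data)
# tree["car"] -> {"vehicle", "man-made"}
```

No supervised labeling of the relationships themselves is needed; the structure emerges from co-occurrence alone, which is the point the abstract makes.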


Object tracking is an active research area nowadays due to its importance in human-computer interfaces, teleconferencing, and video surveillance. However, reliable tracking of objects in the presence of occlusions, pose and illumination changes is still a challenging topic. In this paper, we introduce a novel tracking approach that fuses two cues, namely colour and spatio-temporal motion energy, within a particle filter based framework. We compute a measure of coherent motion over two image frames, which reveals the spatio-temporal dynamics of the target. At the same time, the importance of both the colour and motion energy cues is determined in a reliability evaluation stage. This determination helps maintain the performance of the tracking system under abrupt appearance changes. Experimental results demonstrate that the proposed method outperforms other state-of-the-art techniques on the test datasets used.
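The cue-weighting step can be illustrated with a minimal sketch (the function names and the exponential weighting model are our assumptions, not the paper's exact scheme): each particle's weight is the product of its colour and motion-energy likelihoods raised to reliability exponents, so an unreliable cue contributes less.

```python
def fused_weight(colour_lik, motion_lik, r_colour, r_motion):
    """Combine cue likelihoods; reliabilities act as exponents so an
    unreliable cue (r -> 0) contributes little to the fused weight."""
    return (colour_lik ** r_colour) * (motion_lik ** r_motion)

def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Three particles: the colour cue agrees with the target, motion is noisy,
# so colour is given the higher reliability here.
colour = [0.9, 0.5, 0.1]
motion = [0.4, 0.6, 0.5]
weights = normalise([fused_weight(c, m, 0.8, 0.2)
                     for c, m in zip(colour, motion)])
```

With the reliabilities skewed towards colour, the particle that matches the colour model best keeps the largest normalised weight.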


This Master's thesis presents a new unsupervised approach for detecting and segmenting urban areas in hyperspectral images. The proposed method requires three steps. First, to reduce the computational cost of our algorithm, a colour image of the spectral content is estimated. To this end, a non-linear dimensionality-reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is carried out to produce a colour rendering of each hyperspectral image. Next, to discriminate urban from non-urban regions, the second step consists of extracting a few discriminant (and complementary) features from this colour hyperspectral image. To this end, we extracted a series of discriminant parameters describing the characteristics of an urban area, which is mainly composed of man-made objects with simple, regular geometric shapes. We used textural features based on grey levels, gradient magnitude, or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and the local detection of line segments. To further reduce the computational complexity of our approach and avoid the "curse of dimensionality" that arises when clustering high-dimensional data, we decided, in the last step, to classify each textural or structural feature individually with a simple K-means procedure and then to combine these coarse, cheaply obtained segmentations with an efficient segmentation-map fusion model.
The experiments reported here show that this strategy is visually effective and compares favourably with other methods for detecting and segmenting urban areas from hyperspectral images.
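The final fusion step can be illustrated in a deliberately simplified form by a pixel-wise majority vote over the coarse per-feature segmentations (the thesis uses a more elaborate segmentation-map fusion model; this only shows the combine-cheap-segmentations idea):

```python
from collections import Counter

def fuse_segmentations(maps):
    """Pixel-wise majority vote over several label maps of equal length.
    Each map is a flat list of cluster labels, e.g. from K-means run
    independently on one textural or structural feature."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*maps)]

# Three coarse segmentations of the same 4 pixels (0 = non-urban, 1 = urban):
seg_texture = [0, 0, 1, 1]
seg_gradient = [0, 1, 1, 1]
seg_lines = [0, 0, 1, 0]
fused = fuse_segmentations([seg_texture, seg_gradient, seg_lines])
# fused -> [0, 0, 1, 1]
```

Each individual segmentation is noisy, but the vote recovers a cleaner map at very low cost, which is the rationale for clustering each feature separately.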


This Master's thesis addresses motion detection in a sequence of images acquired with a fixed camera. The difficulty in this problem is that recurrent or insignificant motions in the scene, such as the swaying of a branch, the shadow of an object, or ripples on a water surface, must be ignored and classified as belonging to the static regions of the scene. Most motion-detection methods in use today rely on the low-level principle of modelling and then subtracting the background. These methods are simple and fast, but limited when the background is complex or noisy (snow, rain, shadows, etc.). This research proposes a technique for improving these algorithms whose main idea is to exploit and mimic two essential characteristics of the human visual system. To obtain a sharp view of an object (whether fixed or moving) and then analyse and identify it, the eye does not scan the scene continuously but proceeds by a series of "sweeps", or saccades, around the characteristic points of the object in question. During each fixation, while the eye remains relatively still, the image is projected onto the retina and then interpreted in log-polar coordinates centred on the point fixated by the eye. The low-level motion-detection processing must therefore operate on this transformed image, which is centred on one particular viewpoint of the scene. The next step (the trans-saccadic integration of the Human Visual System (HVS)) then combines the motion detections obtained for the different centres of this transform, fusing the visual interpretations obtained from the different viewpoints.
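The retinal log-polar mapping at the heart of this model can be sketched per pixel as follows (a minimal illustration; the actual pipeline remaps full images around each fixation point):

```python
import math

def to_log_polar(x, y, cx, cy):
    """Map Cartesian (x, y) to log-polar (rho, theta) about the fixation
    point (cx, cy): rho = log of the distance to the centre, theta = angle.
    Resolution is thus highest near the fixated point, as in the retina."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    return math.log(r) if r > 0 else float("-inf"), theta

# A pixel 3 units to the right of the fixation point (10, 10):
rho, theta = to_log_polar(13, 10, 10, 10)
```

Because rho grows logarithmically with eccentricity, peripheral clutter (a swaying branch far from the fixated object) is compressed, while the fixated region keeps full detail.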


This paper presents a two-stage image restoration framework intended for further application of a novel rectangular poor-pixels detector which, with its miniature size, light weight, and low power consumption, is of great value in micro vision systems. To meet the demand for fast processing, only a few measured images, shifted to the subpixel level, are needed in the fusion operation, fewer than are required in traditional approaches. A preliminary restored image is linearly interpolated by maximum likelihood estimation with a least squares method. After noise removal via Canny-operator-based level set evolution, the final high-quality restored image is obtained. Experimental results demonstrate the effectiveness of the proposed framework, a sensible step towards subsequent image understanding and object identification.
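For intuition only: under i.i.d. Gaussian noise, the maximum likelihood estimate of a pixel value from several registered measurements coincides with the least-squares solution, i.e. their mean. The paper's interpolation over subpixel-shifted images is more involved; this sketch only shows the ML/least-squares equivalence it builds on (names are ours).

```python
def ml_pixel_estimate(measurements):
    """Least-squares / maximum likelihood fusion of repeated noisy
    measurements of the same scene point: the arithmetic mean minimises
    the sum of squared residuals and is the ML estimate under i.i.d.
    Gaussian noise."""
    return sum(measurements) / len(measurements)

# Three subpixel-shifted, registered measurements of one scene point:
estimate = ml_pixel_estimate([102.0, 98.0, 100.0])
# estimate -> 100.0
```

The noise variance of the fused value shrinks with the number of measurements, which is why only a few shifted frames already help.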


A new man-made target tracking algorithm integrating features from FLIR (Forward Looking InfraRed) image sequences, based on a particle filter, is presented. Firstly, a multiscale fractal feature is used to enhance targets in FLIR images. Secondly, the gray space feature is defined by the Bhattacharyya distance between the intensity histograms of the reference target and a sample target from the MFF (Multi-scale Fractal Feature) image. Thirdly, the motion feature is obtained by differencing two MFF images. Fourthly, a fusion coefficient is obtained automatically by an online feature selection method for feature integration based on fuzzy logic. Finally, a particle filtering framework is developed to carry out the target tracking. Experimental results show that the proposed algorithm can accurately track weak or small man-made targets in FLIR images with complicated backgrounds. The algorithm is effective, robust, and suitable for real-time tracking.
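The gray-space feature rests on the Bhattacharyya distance between normalised intensity histograms, which can be sketched as follows (an illustrative implementation of the standard definition, not code from the paper):

```python
import math

def bhattacharyya_distance(p, q):
    """d = sqrt(1 - sum_i sqrt(p_i * q_i)) for two normalised
    histograms p and q; 0 for identical histograms, larger values
    for less overlap."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc))

ref = [0.5, 0.3, 0.2]                                 # reference target histogram
same = bhattacharyya_distance(ref, ref)               # identical -> ~0
far = bhattacharyya_distance(ref, [0.0, 0.0, 1.0])    # little overlap -> large
```

A candidate region whose histogram is close to the reference (small distance) receives a high likelihood in the particle filter.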


In this paper we present a novel approach for multispectral image contextual classification that combines iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach combining two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, which regularizes the solution in the presence of noisy data. Hence, the classification problem is stated in a Maximum a Posteriori (MAP) framework. To approximate the MAP solution, we apply several combinatorial optimization methods with multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time compared to Simulated Annealing, which is often unfeasible in many real image processing applications. Markov Random Field model parameters are estimated by a Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology.
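One sub-optimal combinatorial optimiser typically used in such MAP schemes is ICM, sketched here on a toy 1-D problem (our illustrative energies; the paper works on 2-D lattices with MPL-estimated parameters): each site greedily takes the label minimising its unary (likelihood) cost plus a Potts penalty beta for every disagreeing neighbour.

```python
def icm(unary, beta, labels, sweeps=5):
    """Iterated Conditional Modes on a 1-D chain.
    unary[i][k]: cost of assigning label k at site i (from the likelihood);
    beta: Potts smoothness penalty per disagreeing neighbour."""
    n, k = len(unary), len(unary[0])
    for _ in range(sweeps):
        for i in range(n):
            def energy(c):
                e = unary[i][c]
                for j in (i - 1, i + 1):          # chain neighbours
                    if 0 <= j < n and labels[j] != c:
                        e += beta
                return e
            labels[i] = min(range(k), key=energy)  # greedy local update
    return labels

# A noisy middle site (slight preference for label 1) is smoothed over
# by the Potts prior, because its neighbours strongly prefer label 0:
unary = [[0.0, 2.0], [1.2, 1.0], [0.0, 2.0]]
result = icm(unary, beta=0.5, labels=[0, 1, 0])
# result -> [0, 0, 0]
```

ICM converges quickly but only to a local minimum, which is exactly why the paper combines several such optimisers with multiple initializations instead of relying on a single run.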


In this paper we improve guidance system performance via sensor fusion techniques. Vision-based guidance systems can be improved via radar tracking or by employing video tracking from unmanned flying vehicles. We also introduce an image-texture-gradient-based image segmentation technique to identify the target in a typical surface-to-air application, together with the proposed Robust Extended Kalman Filter based state estimation technique, for the implementation of the Proportional Navigation guidance controller.


We describe how object estimation by a stationary or a non-stationary camera can be improved using recently developed robust estimation ideas. The robustness of vision-based systems can be improved significantly by employing a Robust Extended Kalman Filter (REKF). The system performance is also enhanced by increasing the spatial diversity in measurements through additional cameras for video capture. We describe a normal-flow based image segmentation technique to identify the object for the application of our proposed state estimation technique. Our simulations demonstrate that dynamic system modelling coupled with the application of a REKF significantly improves the estimation system performance, especially when large uncertainties are present.


In this paper, we use recently developed robust estimation ideas to improve object tracking by a stationary or nonstationary camera. Large uncertainties are always present in vision-based systems, particularly, in relation to the estimation of the initial state as well as the measurement of object motion. The robustness of these systems can be significantly improved by employing a robust extended Kalman filter (REKF). The system performance can also be enhanced by increasing the spatial diversity in measurements via employing additional cameras for video capture. We compare the performances of various image segmentation techniques in moving-object localization and show that normal-flow-based segmentation yields comparable results to, but requires significantly less time than, optical-flow-based segmentation. We also demonstrate with simulations that dynamic system modeling coupled with the application of an REKF significantly improves the estimation system performance, particularly, when subjected to large uncertainties.
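For intuition, a plain scalar Kalman filter measurement update is sketched below; the REKF used in these papers additionally bounds the effect of model uncertainty, which this minimal sketch does not show.

```python
def kalman_update(x, p, z, r):
    """One scalar measurement update: prior estimate x with variance p,
    measurement z with noise variance r. Returns the posterior pair."""
    k = p / (p + r)          # Kalman gain: trust the less uncertain source
    x_new = x + k * (z - x)  # blend prediction and measurement
    p_new = (1.0 - k) * p    # posterior variance is reduced
    return x_new, p_new

x, p = 0.0, 4.0              # very uncertain prior
x, p = kalman_update(x, p, z=2.0, r=1.0)
# gain k = 0.8, so x moves most of the way to the measurement: x = 1.6
```

With a large prior variance the gain is close to 1 and the filter follows the measurement; the robust variant is designed to keep this behaviour stable when the model itself (not just the noise) is uncertain.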


Gait and face are two important biometrics for human identification, and their complementary properties suggest fusing them. The relationship between gait and face in the fusion is affected by the subject-to-camera distance: gait is a suitable biometric trait for human recognition at a distance, while face recognition is more reliable when the subject is close to the camera. This paper proposes an adaptive fusion method called distance-driven fusion to combine gait and face for human identification in video. Rather than using predefined fixed fusion rules, distance-driven fusion dynamically adjusts its rule according to the subject-to-camera distance in real time. Experimental results show that distance-driven fusion performs better than not only each single biometric but also conventional static fusion rules including MEAN, PRODUCT, MIN, and MAX.
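The distance-driven idea can be sketched with a toy linear ramp on the subject-to-camera distance (the thresholds and the linear form here are our assumptions for illustration, not the paper's actual rule):

```python
def distance_driven_score(face_score, gait_score, distance,
                          near=2.0, far=10.0):
    """Blend face and gait match scores by distance: close subjects
    trust face more, distant subjects trust gait more. `near`/`far`
    are hypothetical thresholds in metres."""
    t = min(1.0, max(0.0, (distance - near) / (far - near)))
    w_face = 1.0 - t
    return w_face * face_score + (1.0 - w_face) * gait_score

close_score = distance_driven_score(0.9, 0.4, distance=2.0)   # face dominates
far_score = distance_driven_score(0.9, 0.4, distance=10.0)    # gait dominates
```

A static rule such as MEAN would return 0.65 at every distance; the distance-driven rule instead follows whichever biometric is reliable at the current range.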


This paper proposes a novel human recognition method for video that combines face and gait traits using a dynamic multi-modal biometrics fusion scheme. The Fisherface approach is adopted to extract face features, while for gait features, Locality Preserving Projection (LPP) is used to achieve a low-dimensional manifold embedding of the temporal silhouette data derived from the image sequences. Face and gait features are fused dynamically at the feature level based on a distance-driven fusion method. Encouraging experimental results are achieved on video sequences containing 20 people, showing that the dynamically fused features provide more discriminating power than any individual biometric as well as integrated features built on common static fusion schemes.


This thesis focuses on novel technologies for facial image analysis, which involves three topics: face recognition under uncontrolled conditions, automatic facial age estimation, and context-aware fusion of face and gait. They are either key issues bridging laboratorial research and real applications, or innovative problems that have barely been studied before.