892 results for SIFT, Computer Vision, Python, Object Recognition, Feature Detection, Descriptor Computation


Relevance: 100.00%

Publisher:

Abstract:

Visual object tracking has recently been one of the most popular research topics in computer vision. Hand tracking in particular has attracted significant attention, since it would enable many useful practical applications. However, hand tracking is still a very challenging problem that cannot be considered solved; the fundamental reason is that almost every aspect of hand appearance can change. This thesis focused on 2D hand tracking in high-speed camera videos. During the project, a toolbox containing nine different tracking methods was assembled. In the experiments, these methods were tested and compared against each other on both high-speed videos recorded during the project and publicly available normal-speed videos. The results revealed that tracking accuracy varied considerably depending on the video and the method. No single method was clearly the best on all videos, but three methods, CT, HT, and TLD, performed better than the others overall. Moreover, the results provide insights into the suitability of each method for different types and situations of hand tracking.
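The abstract does not state the evaluation metric used, but a common way to compare such trackers is per-frame intersection-over-union against ground-truth bounding boxes. A minimal sketch, assuming boxes in (x, y, width, height) form:

```python
# Hypothetical sketch: comparing trackers by mean IoU against ground-truth boxes.
# Box format assumed here: (x, y, width, height).

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def mean_iou(predicted_boxes, ground_truth_boxes):
    """Average overlap over a video; higher means better tracking accuracy."""
    scores = [iou(p, g) for p, g in zip(predicted_boxes, ground_truth_boxes)]
    return sum(scores) / len(scores)
```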

Relevance: 100.00%

Publisher:

Abstract:

The recent emergence of low-cost RGB-D sensors has brought new opportunities for robotics by providing affordable devices that capture synchronized color and depth images. In this thesis, recent work on pose estimation using RGB-D sensors is reviewed, and a pose recognition system for rigid objects using RGB-D data is implemented. The implementation uses half-edge primitives extracted from the RGB-D images for pose estimation. The system is based on the probabilistic object representation framework of Detry et al., which uses Nonparametric Belief Propagation for pose inference. Experiments are performed on household objects to evaluate the performance and robustness of the system.
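As a hedged illustration of working with RGB-D data (not the specific primitive extraction of Detry et al.), depth images are commonly back-projected into 3D point clouds with the pinhole camera model; the intrinsics below are placeholder values:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an N x 3 point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    height, width = depth.shape
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep valid depth readings only

# Example with hypothetical intrinsics for a 640 x 480 sensor:
# cloud = depth_to_point_cloud(depth_image, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```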

Relevance: 100.00%

Publisher:

Abstract:

This thesis studies automatic traffic sign inventory and condition analysis using machine vision and pattern recognition methods. Automatic traffic sign inventory and condition analysis can make road maintenance more efficient, improve maintenance processes, and enable intelligent driving systems. Automatic traffic sign detection and classification has previously been studied from the viewpoint of self-driving vehicles, driver assistance systems, and the use of signs in mapping services. Machine vision based inventory of traffic signs consists of detection, classification, localization, and condition analysis of traffic signs. The performance of the developed machine vision system is evaluated on three datasets, two of which were collected for this thesis. Based on the experiments, almost all traffic signs can be detected, classified, and localized, and their condition analysed. In the future, the inventory system's performance has to be verified in challenging conditions and the system has to be pilot tested.
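The thesis's actual detector is not described in this abstract; as an illustrative sketch only, a common first stage of sign detection is color thresholding, for example isolating red-bordered sign candidates in HSV space with OpenCV:

```python
import cv2

def detect_red_sign_candidates(bgr_image, min_area=200):
    """Return bounding boxes of red regions that may contain traffic signs.
    Thresholds are illustrative and would need tuning to real footage."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two hue ranges are combined.
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

Candidate regions would then be passed to a classifier and, for inventory purposes, localized and assessed for condition.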

Relevance: 100.00%

Publisher:

Abstract:

Genetic Programming (GP) is a widely used methodology for solving various computational problems. GP's problem-solving ability is usually hindered by its long execution times. In this thesis, GP is applied to real-time computer vision; in particular, object classification and tracking using a parallel GP system is discussed. First, a study of suitable GP languages for object classification is presented. Two main GP approaches for visual pattern classification, block-classifiers and pixel-classifiers, were studied, and the results showed that pixel-classifiers generally performed better. Based on these results, a suitable language was selected for the real-time implementation. Synthetic video data was used in the experiments, with the goal of evolving a unique classifier for each texture pattern present in the video. The experiments revealed that the system was capable of correctly tracking the textures in the video, and its performance met real-time requirements.
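The GP language and the evolved programs themselves are not given in this abstract; the sketch below only illustrates how a pixel-classifier can be applied and scored, using a hand-written stand-in expression instead of an evolved one:

```python
import numpy as np

def example_pixel_classifier(image):
    """A hand-written stand-in for an evolved program: compares each pixel
    with the local mean of its 3x3 neighbourhood and thresholds the result."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    local_mean = sum(
        padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    score = image - local_mean          # simple high-pass response
    return score > 0                    # per-pixel "texture present" decision

def fitness(classifier, image, ground_truth_mask):
    """Fraction of pixels classified correctly; used to rank individuals."""
    return float(np.mean(classifier(image) == ground_truth_mask))
```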

Relevance: 100.00%

Publisher:

Abstract:

Industrialized countries such as Canada must cope with the aging of their population. In particular, the majority of elderly people, living at home and often alone, face risky situations such as falls. In this context, video surveillance is an innovative solution that can allow them to live normally in a secure environment. The idea is to place a network of cameras in the person's apartment to automatically detect a fall. In case of a problem, a message could be sent, depending on the urgency, to emergency services or to the family via a secure internet connection. For a low-cost system, we limited the number of cameras to one per room, which led us to explore monocular fall detection methods. We first approached the problem from a 2D (image) point of view, focusing on the large changes in the person's silhouette during a fall. Data from an elderly person's normal activities were modeled with a Gaussian mixture, allowing us to detect any abnormal event. Our method was validated on a video library of simulated falls and realistic normal activities. However, 3D information such as the person's location relative to their environment can be very useful for a behavior analysis system. Although a multi-camera system is preferable for obtaining 3D information, we showed that with a single calibrated camera it is possible to locate a person in their environment through their head. Concretely, the person's head, modeled as an ellipsoid, is tracked in the image sequence with a particle filter. The accuracy of the 3D head localization was evaluated on a library of video sequences containing ground-truth 3D locations obtained with a motion capture system. An example application using the 3D head trajectory is proposed for fall detection. In conclusion, a video surveillance system for fall detection with a single camera per room is entirely feasible. To minimize the risk of false alarms, a hybrid method combining 2D and 3D information could be considered.
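A minimal sketch of the 2D anomaly-detection idea (normal-activity features modeled by a Gaussian mixture, low-likelihood frames flagged), using scikit-learn and hypothetical silhouette features that may differ from those used in the thesis:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-frame silhouette features, e.g. bounding-box aspect ratio,
# orientation, and centroid height; stand-in data for normal activities.
normal_features = np.random.rand(500, 3)

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(normal_features)

# Frames whose log-likelihood under the normal-activity model falls below a
# threshold are flagged as abnormal (a possible fall).
threshold = np.percentile(gmm.score_samples(normal_features), 1)

def is_abnormal(frame_features):
    return gmm.score_samples(frame_features.reshape(1, -1))[0] < threshold
```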

Relevance: 100.00%

Publisher:

Abstract:

This thesis was carried out under a cotutelle (joint supervision) agreement with the Institut National Polytechnique de Grenoble (France). The research was conducted in the 3D vision laboratory (DIRO, UdM) and the PERCEPTION-INRIA laboratory (Grenoble).

Relevance: 100.00%

Publisher:

Abstract:

In this dissertation, we present several techniques for learning semantic spaces for several domains, for example words and images, but also at the intersection of different domains. A representation space is called semantic if entities judged similar by a human have their similarity preserved in that space. The first publication presents a pipeline of learning methods, including several unsupervised learning techniques, which allowed us to win the "Unsupervised and Transfer Learning Challenge" in 2011. The second article presents a way of extracting information from a structured context (177 object detectors at different positions and scales). We show that using the structure of the data combined with unsupervised learning reduces the dimensionality by 97% while improving scene recognition performance by +5% to +11% depending on the dataset. In the third work, we study the structure learned by the deep neural networks used in the two previous publications. Several hypotheses are presented and tested experimentally, showing that the learned space has better mixing properties (making it easier to explore different classes during the sampling process). In the fourth publication, we address a syntactic and semantic parsing problem with recurrent neural networks trained on word context windows. In our fifth work, we propose a way to perform "augmented" image retrieval by learning a joint semantic space in which a query for an image containing an object also returns images of the parts of that object; for example, a query returning images of "car" would also return images of "windshield", "trunk", and "wheels" in addition to the initial images.
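As an illustration of the retrieval side of the fifth work only, ranking by cosine similarity in a shared embedding space might look as follows; the learned embeddings themselves are assumed to be given:

```python
import numpy as np

def cosine_retrieve(query_vector, database_vectors, top_k=5):
    """Rank database items by cosine similarity to the query in an
    (assumed pre-computed) joint semantic space."""
    q = query_vector / np.linalg.norm(query_vector)
    db = database_vectors / np.linalg.norm(database_vectors, axis=1, keepdims=True)
    similarities = db @ q
    return np.argsort(-similarities)[:top_k]

# With embeddings of whole objects and of their parts placed in the same space,
# a query embedding for "car" would also rank "windshield" or "wheel" images highly.
```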

Relevance: 100.00%

Publisher:

Abstract:

Content Based Image Retrieval is one of the prominent areas in Computer Vision and Image Processing. Recognition of handwritten characters has been a popular area of research for many years and still remains an open problem. The proposed system uses visual image queries for retrieving similar images from a database of Malayalam handwritten characters. Local Binary Pattern (LBP) descriptors of the query images are extracted, and those features are compared with the features of the images in the database to retrieve the desired characters. The system achieves excellent retrieval performance with the local binary pattern descriptor.
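A minimal sketch of this kind of LBP-based retrieval using scikit-image; the neighbourhood parameters, uniform patterns, and chi-square distance below are illustrative assumptions rather than the described system's exact choices:

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1  # neighbourhood size and radius (illustrative choices)

def lbp_histogram(gray_image):
    """Uniform LBP histogram used as the image descriptor."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    n_bins = P + 2  # uniform patterns produce P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def retrieve(query_hist, database_hists, top_k=10):
    """Return indices of the most similar database images (chi-square distance)."""
    eps = 1e-10
    dists = [np.sum((query_hist - h) ** 2 / (query_hist + h + eps))
             for h in database_hists]
    return np.argsort(dists)[:top_k]
```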

Relevance: 100.00%

Publisher:

Abstract:

Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting it could mimic the appearance of a matte ping-pong ball. Yet humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
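As a hedged sketch of the kind of wavelet-domain statistics involved (the thesis's exact feature set is not given in this abstract), subband variance and kurtosis can be computed with PyWavelets and fed to any off-the-shelf classifier:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_statistics(gray_image, wavelet="db4", levels=3):
    """Variance and kurtosis of each detail subband of a 2D wavelet
    decomposition; an illustrative stand-in for the thesis's statistics."""
    coeffs = pywt.wavedec2(gray_image, wavelet, level=levels)
    features = []
    for detail_level in coeffs[1:]:            # skip the approximation band
        for band in detail_level:              # horizontal, vertical, diagonal
            band = band.ravel()
            features.append(band.var())
            features.append(kurtosis(band))
    return np.array(features)

# These feature vectors could then be fed to a classifier trained on surfaces
# with known reflectance properties.
```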

Relevance: 100.00%

Publisher:

Abstract:

We describe a method for modeling object classes (such as faces) using 2D example images and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. Thus a model consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications including the computation of correspondence between novel images of a certain known class, object recognition, image synthesis and image compression.
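A simplified sketch of matching a linear prototype model by gradient descent on the reconstruction error; it treats prototypes as texture vectors only, omits the shape warping through pixelwise correspondences, and uses an illustrative step size and iteration count:

```python
import numpy as np

def match_linear_model(prototypes, novel_image, steps=500, lr=1e-3):
    """Fit coefficients c so that a linear combination of prototype vectors
    approximates the novel image, by gradient descent on the squared error.
    prototypes has shape (num_prototypes, num_pixels); lr may need tuning."""
    x = novel_image.ravel()
    c = np.zeros(prototypes.shape[0])
    for _ in range(steps):
        residual = prototypes.T @ c - x          # model minus novel image
        gradient = prototypes @ residual         # d/dc of 0.5 * ||residual||^2
        c -= lr * gradient
    return c
```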

Relevance: 100.00%

Publisher:

Abstract:

We present an example-based learning approach for locating vertical frontal views of human faces in complex scenes. The technique models the distribution of human face patterns by means of a few view-based "face" and "non-face" prototype clusters. At each image location, the local pattern is matched against the distribution-based model, and a trained classifier determines, based on the local difference measurements, whether or not a human face exists at the current image location. We provide an analysis that helps identify the critical components of our system.
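A hedged sketch of the general idea: distances from an image window to the "face" and "non-face" prototype centroids serve as the features of a trained classifier. Plain Euclidean distances are an illustrative simplification of the system's actual difference measurements:

```python
import numpy as np

def distance_features(window, face_centroids, nonface_centroids):
    """Distances from an image window to each prototype cluster centroid;
    these distance vectors are the inputs of the face / non-face classifier.
    Euclidean distance is an illustrative simplification."""
    w = window.ravel()
    centroids = list(face_centroids) + list(nonface_centroids)
    return np.array([np.linalg.norm(w - c.ravel()) for c in centroids])

# A classifier (e.g. a small neural network) is then trained on these
# distance vectors, with labels 1 = face and 0 = non-face.
```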

Relevance: 100.00%

Publisher:

Abstract:

We discuss a variety of object recognition experiments in which human subjects were presented with realistically rendered images of computer-generated three-dimensional objects, with tight control over stimulus shape, surface properties, illumination, and viewpoint, as well as subjects' prior exposure to the stimulus objects. In all experiments recognition performance was: (1) consistently viewpoint dependent; (2) only partially aided by binocular stereo and other depth information; (3) specific to viewpoints that were familiar; (4) systematically disrupted by rotation in depth more than by deforming the two-dimensional images of the stimuli. These results are consistent with recently advanced computational theories of recognition based on view interpolation.

Relevance: 100.00%

Publisher:

Abstract:

Most psychophysical studies of object recognition have focused on the recognition and representation of individual objects that subjects had previously been explicitly trained on. Correspondingly, modeling studies have often employed a 'grandmother'-type representation in which the objects to be recognized were represented by individual units. However, objects in the natural world are commonly members of a class containing a number of visually similar objects, such as faces, for which physiology studies have provided support for a representation based on a sparse population code, which permits generalization from the learned exemplars to novel objects of that class. In this paper, we present results from psychophysical and modeling studies intended to investigate object recognition in natural ('continuous') object classes. In two experiments, subjects were trained to perform subordinate level discrimination in a continuous object class - images of computer-rendered cars - created using a 3D morphing system. By comparing the recognition performance of trained and untrained subjects we could estimate the effects of viewpoint-specific training and infer properties of the object class-specific representation learned as a result of training. We then compared the experimental findings to simulations, building on our recently presented HMAX model of object recognition in cortex, to investigate the computational properties of a population-based object class representation as outlined above. We find experimental evidence, supported by modeling results, that training builds a viewpoint- and class-specific representation that supplements a pre-existing representation with lower shape discriminability but possibly greater viewpoint invariance.

Relevance: 100.00%

Publisher:

Abstract:

Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of the inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.

Relevance: 100.00%

Publisher:

Abstract:

Mosaics have been commonly used as visual maps for undersea exploration and navigation. The position and orientation of an underwater vehicle can be calculated by integrating the apparent motion of the images which form the mosaic. A feature-based mosaicking method is proposed in this paper. The creation of the mosaic is accomplished in four stages: feature selection and matching, detection of the points describing the dominant motion, homography computation, and mosaic construction. In this work we demonstrate that the use of color and texture as discriminative properties of the image can, to a large extent, improve the accuracy of the constructed mosaic. The system is able to provide 3D metric information concerning the vehicle motion, using knowledge of the intrinsic parameters of the camera while integrating the measurements of an ultrasonic sensor. The method has been tested with real images acquired by the GARBI underwater vehicle.
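A minimal sketch of the feature-matching and homography stages using OpenCV's ORB features and RANSAC; the paper's use of color and texture as discriminative cues is not reproduced here:

```python
import cv2
import numpy as np

def pairwise_homography(image_a, image_b, max_features=2000):
    """Estimate the homography mapping image_a onto image_b from matched
    keypoints, rejecting outliers with RANSAC."""
    orb = cv2.ORB_create(max_features)
    kp_a, desc_a = orb.detectAndCompute(image_a, None)
    kp_b, desc_b = orb.detectAndCompute(image_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    return homography

# Successive homographies can then be chained and each frame warped
# (cv2.warpPerspective) into a common reference frame to build the mosaic.
```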