20 results for 3D object recognition

in SAPIENTIA - Universidade do Algarve - Portugal


Relevance:

100.00%

Publisher:

Abstract:

Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Cortical area V1 contains double-opponent colour blobs, as well as simple, complex and end-stopped cells, which provide input for a multiscale line/edge representation, keypoints for dynamic feature routing, and saliency maps for Focus-of-Attention.

Relevance:

100.00%

Publisher:

Abstract:

Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Cortical area V1 contains double-opponent colour blobs, as well as simple, complex and end-stopped cells, which provide input for a multiscale line/edge representation, keypoints for dynamic routing, and saliency maps for Focus-of-Attention. Together, these allow us to segregate faces. Events of different facial views are stored in memory and combined in order to identify the view and recognise the face, including its facial expression. In this paper we show that, with five 2D views and their cortical representations, it is possible to determine left-right and frontal-lateral-profile views and to achieve view-invariant recognition of 3D faces.

Relevance:

100.00%

Publisher:

Abstract:

Object recognition requires that templates with canonical views are stored in memory. Such templates must somehow be normalised. In this paper we present a novel method for obtaining 2D translation, rotation and size invariance. Cortical simple, complex and end-stopped cells provide multi-scale maps of lines, edges and keypoints. These maps are combined such that objects are characterised. Dynamic routing in neighbouring neural layers allows feature maps of input objects and stored templates to converge. We illustrate the construction of group templates and the invariance method for object categorisation and recognition in the context of a cortical architecture, which can be applied in computer vision.
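The normalisation described above is achieved neurally by dynamic routing; the abstract does not give an algorithm. As a rough illustration of the same three 2D invariances (translation, rotation and size), here is a conventional moment-based normalisation of a point set — a sketch under my own assumptions (function name, NumPy implementation), not the paper's routing mechanism:

```python
import numpy as np

def normalise_shape(points):
    """Map a 2-D point cloud to a canonical frame:
    translate to the centroid (translation invariance),
    rotate the principal axis onto x (rotation invariance),
    and scale to unit RMS radius (size invariance)."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)            # remove translation
    cov = pts.T @ pts / len(pts)            # second moments
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    pts = pts @ eigvecs[:, ::-1]            # major axis -> x axis
    rms = np.sqrt((pts ** 2).sum(axis=1).mean())
    return pts / rms                        # remove size
```

After this step, an input object and a stored canonical template can be compared directly; the residual sign/reflection ambiguity of the principal axes would still have to be resolved by template matching.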

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present an improved model for line and edge detection in cortical area V1. This model is based on responses of simple and complex cells, and it is multi-scale with no free parameters. We illustrate the use of the multi-scale line/edge representation in different processes: visual reconstruction or brightness perception, automatic scale selection and object segregation. A two-level object categorization scenario is tested in which pre-categorization is based on coarse scales only and final categorization on coarse plus fine scales. We also present a multi-scale object and face recognition model. Processing schemes are discussed in the framework of a complete cortical architecture. The fact that brightness perception and object recognition may be based on the same symbolic image representation is an indication that the entire (visual) cortex is involved in consciousness.
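Simple- and complex-cell responses in models of this kind are commonly built from quadrature Gabor filters: even/odd simple cells correspond to the real/imaginary parts of a complex Gabor response, and a complex cell to its modulus. The sketch below is a minimal version of that standard construction (my own parameter choices, e.g. sigma = 0.56 x wavelength), not the authors' optimised, parameter-free model:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(wavelength, theta, sigma=None):
    """Complex Gabor kernel: real/imag parts model even/odd simple cells."""
    sigma = sigma or 0.56 * wavelength
    half = int(3 * sigma)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.exp(2j * np.pi * xr / wavelength)

def cell_responses(image, wavelength, theta):
    """Return (simple, complex) responses at one scale and orientation:
    simple = complex-valued Gabor filtering, complex cell = its modulus."""
    k = gabor_kernel(wavelength, theta)
    simple = convolve2d(image, k, mode='same', boundary='symm')
    return simple, np.abs(simple)
```

Scanning the wavelength then yields the multi-scale representation: edges appear as maxima of the odd (imaginary) simple-cell response, lines as maxima of the even (real) response.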

Relevance:

90.00%

Publisher:

Abstract:

Keypoints (junctions) provide important information for focus-of-attention (FoA) and object categorization/recognition. In this paper we analyze the multi-scale keypoint representation, obtained by applying a linear and quasi-continuous scaling to an optimized model of cortical end-stopped cells, in order to study its importance and possibilities for developing a visual, cortical architecture. We show that keypoints, especially those which are stable over larger scale intervals, can provide a hierarchically structured saliency map for FoA and object recognition. In addition, applying non-classical receptive field inhibition to keypoint detection makes it possible to distinguish contour keypoints from texture (surface) keypoints.
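The scale-stability idea can be illustrated without the end-stopped cell model itself: detect a cornerness response at several scales and count, per location, at how many scales it survives. The sketch below substitutes a standard Harris measure for the cortical keypoint operator (a stand-in I chose for brevity; the threshold and the Harris constant 0.04 are my own assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris(image, sigma):
    """Harris cornerness at one scale (derivative-of-Gaussian gradients)."""
    Ix = gaussian_filter(image, sigma, order=(0, 1))
    Iy = gaussian_filter(image, sigma, order=(1, 0))
    Sxx = gaussian_filter(Ix * Ix, 2 * sigma)
    Syy = gaussian_filter(Iy * Iy, 2 * sigma)
    Sxy = gaussian_filter(Ix * Iy, 2 * sigma)
    return Sxx * Syy - Sxy ** 2 - 0.04 * (Sxx + Syy) ** 2

def stability_map(image, sigmas):
    """Per pixel, count at how many scales the cornerness exceeds 10% of
    that scale's maximum; keypoints stable over many scales score high,
    giving a crudely hierarchical saliency map."""
    count = np.zeros_like(image)
    for s in sigmas:
        r = harris(image, s)
        count += (r > 0.1 * r.max()).astype(float)
    return count
```

Thresholding this map at decreasing counts yields the hierarchy: the most scale-stable keypoints first, finer and less stable ones later.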

Relevance:

90.00%

Publisher:

Abstract:

We present a 3D representation that is based on the processing in the visual cortex by simple, complex and end-stopped cells. We improved multiscale methods for line/edge and keypoint detection, including a method for obtaining vertex structure (i.e., T, L and K junctions). We also describe a new disparity model, which allows us to attribute depth to detected lines, edges and keypoints; i.e., the integration results in a 3D "wire-frame" representation suitable for object recognition.
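The disparity model itself is not detailed in the abstract. As a generic illustration of how depth can be attributed to a detected edge, here is a toy scanline matcher (SSD block matching; function name, window size and search range are my own assumptions, not the authors' model):

```python
import numpy as np

def edge_disparity(left, right, row, col, max_d=8, half=3):
    """Estimate disparity at an edge pixel (row, col) of the left image by
    matching a small horizontal patch against the same scanline of the
    right image; depth then follows as baseline * focal / disparity."""
    patch = left[row, col - half:col + half + 1]
    best_d, best_err = 0, np.inf
    for d in range(max_d + 1):
        c = col - d                      # rectified pair: shift leftwards
        if c - half < 0:
            break
        cand = right[row, c - half:c + half + 1]
        err = float(((patch - cand) ** 2).sum())
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

Running this only at line/edge/keypoint positions, rather than densely, is what turns the 2D event maps into the sparse 3D wire-frame the abstract describes.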

Relevance:

80.00%

Publisher:

Abstract:

Doctoral thesis, Electronics and Computer Engineering, Faculdade de Ciência e Tecnologia, Universidade do Algarve, 2007

Relevance:

80.00%

Publisher:

Abstract:

The goal of the project "SmartVision: active vision for the blind" is to develop a small and portable but intelligent and reliable system for assisting the blind and visually impaired while navigating autonomously, both outdoors and indoors. In this paper we present an overview of the prototype, design issues, and its different modules, which integrate a GIS with GPS, Wi-Fi, RFID tags and computer vision. The prototype addresses global navigation by following known landmarks, local navigation with path tracking and obstacle avoidance, and object recognition. The system does not replace the white cane, but extends it beyond its reach. The user-friendly interface consists of a 4-button hand-held box, a vibration actuator in the handle of the cane, and speech synthesis. A future version may also employ active RFID tags for marking navigation landmarks, and speech recognition may complement speech synthesis.

Relevance:

80.00%

Publisher:

Abstract:

Ultrasonic, infrared, laser and other sensors are being applied in robotics. Although combinations of these have allowed robots to navigate, they are only suited for specific scenarios, depending on their limitations. Recent advances in computer vision are turning cameras into useful low-cost sensors that can operate in most types of environments. Cameras enable robots to detect obstacles, recognize objects, obtain visual odometry, detect and recognize people and gestures, among other possibilities. In this paper we present a completely biologically inspired vision system for robot navigation. It comprises stereo vision for obstacle detection, and object recognition for landmark-based navigation. We employ a novel keypoint descriptor which codes responses of cortical complex cells. We also present a biologically inspired saliency component, based on disparity and colour.
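The paper's descriptor codes responses of cortical complex cells at a keypoint. A toy analogue — not the authors' descriptor — is a vector of oriented local Gabor energies, L2-normalised; everything below (orientation count, wavelength, sigma = 0.56 x wavelength) is my own illustrative choice:

```python
import numpy as np

def orientation_descriptor(image, row, col, wavelength=8.0, n_orient=8):
    """Describe a keypoint by the moduli of complex Gabor responses at
    n_orient orientations (a toy complex-cell code), L2-normalised so
    descriptors can be compared with a dot product."""
    sigma = 0.56 * wavelength
    half = int(3 * sigma)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    vec = []
    for i in range(n_orient):
        th = i * np.pi / n_orient
        xr = x * np.cos(th) + y * np.sin(th)
        g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) \
            * np.exp(2j * np.pi * xr / wavelength)
        vec.append(abs((patch * np.conj(g)).sum()))  # complex-cell energy
    v = np.array(vec)
    return v / (np.linalg.norm(v) + 1e-12)
```

Matching such descriptors between a stored landmark view and the current camera frame is what supports the landmark-based navigation described above.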

Relevance:

80.00%

Publisher:

Abstract:

Master's dissertation, Biomedical Sciences, Departamento de Ciências Biomédicas e Medicina, Universidade do Algarve, 2016

Relevance:

40.00%

Publisher:

Abstract:

There are roughly two processing systems: (1) very fast gist vision of entire scenes, completely bottom-up and data driven, and (2) Focus-of-Attention (FoA) with sequential screening of specific image regions and objects. The latter system has to be sequential because unnormalised input objects must be matched against normalised templates of canonical object views stored in memory, which involves dynamic routing of features in the visual pathways.

Relevance:

40.00%

Publisher:

Abstract:

Master's dissertation, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2014

Relevance:

30.00%

Publisher:

Abstract:

Object categorisation is linked to detection, segregation and recognition. In the visual system, these processes are achieved in the ventral "what" and dorsal "where" pathways [3], with bottom-up feature extraction in areas V1, V2, V4 and IT (what) in parallel with top-down attention from PP via MT to V2 and V1 (where). The latter is steered by object templates in memory, i.e. in prefrontal cortex, with a "what" component in PF46v and a "where" component in PF46d.

Relevance:

30.00%

Publisher:

Abstract:

Models of visual perception are based on image representations in cortical area V1 and higher areas which contain many cell layers for feature extraction. Basic simple, complex and end-stopped cells provide input for line, edge and keypoint detection. In this paper we present an improved method for multi-scale line/edge detection based on simple and complex cells. We illustrate the line/edge representation for object reconstruction, and we present models for multi-scale face (object) segregation and recognition that can be embedded into feedforward dorsal and ventral data streams (the “what” and “where” subsystems) with feedback streams from higher areas for obtaining translation, rotation and scale invariance.

Relevance:

30.00%

Publisher:

Abstract:

Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Models of visual perception are based on image representations in cortical area V1 and beyond, which contain many cell layers for feature extraction. Simple, complex and end-stopped cells provide input for line, edge and keypoint detection. Detected events provide a rich, multi-scale object representation, and this representation can be stored in memory in order to identify objects. In this paper, the above context is applied to face recognition. The multi-scale line/edge representation is explored in conjunction with keypoint-based saliency maps for Focus-of-Attention. Recognition rates of up to 96% were achieved by combining frontal and 3/4 views, and recognition was quite robust against partial occlusions.