951 results for Visual Object Recognition
Abstract:
We present a video-based system which interactively captures the geometry of a 3D object in the form of a point cloud, then recognizes and registers known objects in this point cloud in a matter of seconds (fig. 1). In order to achieve interactive speed, we exploit both efficient inference algorithms and parallel computation, often on a GPU. The system can be broken down into two distinct phases: geometry capture, and object inference. We now discuss these in further detail. © 2011 IEEE.
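As a rough illustration of the two-phase structure the abstract describes, here is a minimal, self-contained sketch in Python; the function names, the toy centroid-based registration, and the acceptance threshold are all hypothetical stand-ins, not the authors' implementation, and the GPU-parallel parts are elided.

```python
import numpy as np

THRESHOLD = 0.5  # toy acceptance score, not from the paper

def frame_to_points(depth):
    """Lift nonzero pixels of a depth frame to 3D points (placeholder)."""
    ys, xs = np.nonzero(depth)
    return np.column_stack([xs, ys, depth[ys, xs]])

def capture_geometry(frames):
    """Phase 1: fuse video frames into a single point cloud."""
    return np.vstack([frame_to_points(f) for f in frames])

def register(model_pts, cloud):
    """Toy registration: align centroids, score by similarity of spread."""
    pose = cloud.mean(axis=0) - model_pts.mean(axis=0)  # translation only
    residual = np.linalg.norm(cloud.std(axis=0) - model_pts.std(axis=0))
    return pose, 1.0 / (1.0 + residual)

def infer_objects(cloud, models):
    """Phase 2: recognise and register each known model in the cloud."""
    hits = []
    for name, pts in models.items():
        pose, score = register(pts, cloud)
        if score > THRESHOLD:
            hits.append((name, pose, score))
    return hits
```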
Abstract:
This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach where the object to be segmented is identified by the pose of the cameras instead of user input such as 2D bounding rectangles or brush-strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by graph cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models, which are fed into the next graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where MVS methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
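To make the iteration concrete, below is a much-simplified sketch of the alternation between appearance-model updates and a graph cut, using the PyMaxflow library and per-class Gaussian colour models in place of the paper's full cost function; the epipolar, stereo-correspondence and silhouette-coherency terms are omitted, and names and weights are illustrative.

```python
import numpy as np
import maxflow  # PyMaxflow

def unary_costs(img, fg_mean, bg_mean):
    """Squared-distance costs under simple per-class colour models."""
    fg = ((img - fg_mean) ** 2).sum(axis=-1)
    bg = ((img - bg_mean) ** 2).sum(axis=-1)
    return fg, bg

def segment(img, init_mask, iters=5, pairwise=2.0):
    """img: float RGB in [0, 1]; init_mask: initial foreground guess
    (in the paper this comes from the camera poses, not user input)."""
    mask = init_mask
    for _ in range(iters):
        fg_mean = img[mask].mean(axis=0)    # update appearance models
        bg_mean = img[~mask].mean(axis=0)
        fg_cost, bg_cost = unary_costs(img, fg_mean, bg_mean)
        g = maxflow.Graph[float]()
        nodes = g.add_grid_nodes(mask.shape)
        g.add_grid_edges(nodes, pairwise)   # smoothness (Potts) term
        g.add_grid_tedges(nodes, fg_cost, bg_cost)
        g.maxflow()
        new_mask = g.get_grid_segments(nodes)  # True = foreground side
        if (new_mask == mask).all():        # iterate until convergence
            break
        mask = new_mask
    return mask
```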
Abstract:
Behavioural advantages for imitation of human movements over movements instructed by other visual stimuli are attributed to an ‘action observation-execution matching’ (AOEM) mechanism. Here, we demonstrate that priming/exogenous cueing with a videotaped finger movement stimulus (S1) produces specific congruency effects in reaction times (RTs) of imitative responses to a target movement (S2) at defined stimulus onset asynchronies (SOAs). When contrasted with a moving object at an SOA of 533 ms, only a human movement is capable of inducing an effect reminiscent of ‘inhibition of return’ (IOR), i.e. a significant advantage for imitation of a subsequent incongruent as compared to a congruent movement. When responses are primed by a finger movement at SOAs of 533 and 1,200 ms, inhibition of congruent or facilitation of incongruent responses, respectively, is stronger than with priming by a moving object. This pattern does not depend on whether S2 presents a finger movement or a moving object, so the effects cannot be attributed to visual similarity between S1 and S2. We propose that, whereas priming by either a finger movement or a moving object induces processes of spatial orienting, only observation of a human movement activates AOEM. Thus, S1 immediately elicits an imitative response tendency. As an overt imitation of S1 is inadequate in the present setting, the response is inhibited, which, in turn, modulates congruency effects.
Abstract:
The Teallach project has adapted model-based user-interface development techniques to the systematic creation of user-interfaces for object-oriented database applications. Model-based approaches aim to provide designers with a more principled approach to user-interface development using a variety of underlying models, and tools which manipulate these models. Here we present the results of the Teallach project, describing the tools developed and the flexible design method supported. Distinctive features of the Teallach system include provision of database-specific constructs, comprehensive facilities for relating the different models, and support for a flexible design method in which models can be constructed and related by designers in different orders and in different ways, to suit their particular design rationales. The system then creates the desired user-interface as an independent, fully functional Java application, with automatically generated help facilities.
Abstract:
Background: Prescribing magnification is typically based on distance or near visual acuity. This presumes a constant minimum angle of visual resolution with working distance and therefore enlargement of an object moved to a shorter working distance (relative distance enlargement). This study examines this premise in a visually impaired population. Methods: Distance letter visual acuity was measured prospectively for 380 low vision patients (distance visual acuity between 0.3 and 2.1 logMAR) over the age of 57 years, along with near word visual acuity at an appropriate distance for near lens additions from +4 D to +20 D. Demographic information, the disease causing low vision, contrast sensitivity, visual field and psychological status were also recorded. Results: Distance letter acuity was significantly related to (r = 0.84) but on average 0.1 ± 0.2 logMAR better (1 ± 2 lines on a logMAR chart) than near word acuity at 25 cm with a +4 D lens addition. In 39.8 per cent of patients, near word acuity was more than 0.1 logMAR worse than distance letter acuity. In 11.0 per cent of subjects, near visual acuity was more than 0.1 logMAR better than distance letter acuity. The group with near word acuity worse than distance letter acuity also had lower contrast sensitivity. The group with near word acuity better than distance letter acuity was less likely to have age-related macular degeneration. Smaller print size could be read by reducing working distance (achieved by using higher near lens additions) in 86.1 per cent, although not by as much as predicted by geometric progression in 14.5 per cent. Discussion: Although distance letter and near word acuity are highly related, they are on average 1 logMAR line different and this varies significantly between individuals. Near word acuity did not increase linearly with relative distance enlargement in approximately one in seven visually impaired patients, suggesting that the measurement of visual resolution over a range of working distances will assist appropriate prescribing of magnification aids.
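The premise under test, relative distance enlargement, can be written down explicitly; the following standard low-vision relation (not taken from the paper) assumes a constant minimum angle of resolution and a working distance equal to the focal length of the near addition:

```latex
% Angular size scales inversely with working distance d, and d = 1/F
% for a near addition of power F, so the predicted magnification is
\[
  M_{\mathrm{rel}} \;=\; \frac{d_{\mathrm{old}}}{d_{\mathrm{new}}}
                 \;=\; \frac{F_{\mathrm{new}}}{F_{\mathrm{old}}},
  \qquad
  \text{e.g. } \frac{+20\,\mathrm{D}}{+4\,\mathrm{D}} = 5\times
  \quad (25\,\mathrm{cm} \rightarrow 5\,\mathrm{cm}).
\]
```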
Abstract:
Inhibition of return (IOR) effects, in which participants detect a target in a cued box more slowly than one in an uncued box, suggest that behavior is aided by inhibition of recently attended irrelevant locations. To investigate the controversial question of whether inhibition can be applied to object identity in these tasks, in the present research we presented faces upright or inverted during cue and/or target sequences. IOR was greater when both cue and target faces were upright than when cue and/or target faces were inverted. Because the only difference between the conditions was the ease of facial recognition, this result indicates that inhibition was applied to object identity. Interestingly, inhibition of object identity affected IOR both when encoding a cue face and when retrieving information about a target face. Accordingly, we propose that episodic retrieval of inhibition associated with object identity may mediate behavior in cuing tasks.
Abstract:
Spatial objects may not only be perceived visually but also by touch. We report recent experiments investigating to what extent prior object knowledge acquired in either the haptic or visual sensory modality transfers to a subsequent visual learning task. Results indicate that even mental object representations learnt in one sensory modality may attain a multi-modal quality. These findings seem incompatible with picture-based reasoning schemas but leave open the possibility of modality-specific reasoning mechanisms.
Abstract:
The project “Reference in Discourse” deals with the selection of a specific object from a visual scene in a natural language situation. The goal of this research is to explain this everyday discourse reference task in terms of a concept generation process based on subconceptual visual and verbal information. The system OINC (Object Identification in Natural Communicators) aims at solving this problem in a psychologically adequate way. The difficulties the system encounters with incomplete and deviant descriptions correspond to the data from experiments with human subjects. The results of these experiments are reported.
Abstract:
Most existing color-based tracking algorithms use the statistical color information of the object as tracking cues, without maintaining the spatial structure within a single chromatic image. Recently, research on multilinear algebra has made it possible to preserve these spatial structural relationships in a representation of the image ensemble. In this paper, a third-order color tensor is constructed to represent the object to be tracked. To account for the influence of a changing environment on tracking, biased discriminant analysis (BDA) is extended to tensor biased discriminant analysis (TBDA) for distinguishing the object from the background. At the same time, an incremental scheme for TBDA is developed for online learning of the tensor biased discriminant subspace, which can be used to adapt to appearance variations of both the object and the background. Experimental results show that the proposed method can precisely track objects undergoing large pose, scale and lighting changes, as well as partial occlusion. © 2009 Elsevier B.V.
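The core projection step can be sketched as follows: biased discriminant analysis applied to the mode-k unfoldings of third-order colour tensors, maximising the scatter of background samples around the object-class centre relative to the object's own scatter. This is a plain batch version with illustrative names; the paper's incremental update and the tracking loop itself are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def unfold(t, mode):
    """Mode-k unfolding of a 3-way tensor (e.g. H x W x 3) into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def bda_projection(pos, neg, mode, dim):
    """Biased discriminant projection for mode `mode`.

    pos: list of object tensors, neg: list of background tensors,
    all with identical shapes; returns an (n_mode x dim) basis."""
    P = [unfold(t, mode) for t in pos]
    N = [unfold(t, mode) for t in neg]
    mu = np.mean(P, axis=0)
    Sp = sum((X - mu) @ (X - mu).T for X in P)  # object scatter
    Sn = sum((X - mu) @ (X - mu).T for X in N)  # biased background scatter
    # Maximise tr(W' Sn W) / tr(W' Sp W) via a generalised eigenproblem.
    w, V = eigh(Sn, Sp + 1e-6 * np.eye(len(Sp)))
    return V[:, np.argsort(w)[::-1][:dim]]
```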
Abstract:
A novel approach to normal ECG recognition based on scale-space signal representation is proposed. The approach uses the curvature scale-space (CSS) representation, previously used to match the shapes of visual objects, together with a dynamic programming algorithm for matching the CSS representations of ECG signals. Extraction and matching are fast, and experimental results show that the approach is quite robust for preliminary normal ECG recognition.
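A hedged sketch of the two ingredients follows: an inflection-point, scale-space style representation of a 1D signal, and a dynamic-programming alignment of the resulting feature sequences. The scales, gap cost and function names are assumptions for illustration, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_features(sig, sigmas=(1, 2, 4, 8)):
    """Normalised zero-crossing positions of the smoothed 2nd derivative,
    one list per smoothing scale."""
    feats = []
    for s in sigmas:
        d2 = gaussian_filter1d(sig, sigma=s, order=2)
        zc = np.where(np.diff(np.sign(d2)) != 0)[0] / len(sig)
        feats.append(zc)
    return feats

def dp_match(a, b, gap=0.05):
    """Edit-distance style DP between two sorted position lists."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap
    D[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j - 1] + abs(a[i - 1] - b[j - 1]),
                          D[i - 1, j] + gap,
                          D[i, j - 1] + gap)
    return D[n, m]

def css_distance(x, y):
    """Total DP cost across scales; smaller = more similar signals."""
    return sum(dp_match(f, g) for f, g in zip(css_features(x), css_features(y)))
```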
Abstract:
When visual sensor networks are composed of cameras that can adjust the zoom factor of their own lenses, one must determine the optimal zoom levels for the cameras for a given task. This gives rise to an important trade-off between the overlap of the different cameras’ fields of view, which provides redundancy, and image quality. In an object tracking task, having multiple cameras observe the same area allows for quicker recovery when a camera fails. In contrast, narrow zooms allow for a higher pixel count on regions of interest, leading to increased tracking confidence. In this paper we propose an approach for the self-organisation of redundancy in a distributed visual sensor network, based on decentralised multi-objective online learning using only local information to approximate the global state. We explore the impact of different zoom levels on these trade-offs when tasking omnidirectional cameras, which have a full 360-degree view, with keeping track of a varying number of moving objects. We further show how employing decentralised reinforcement learning enables zoom configurations to be achieved dynamically at runtime according to an operator’s preference for maximising either the proportion of objects tracked, the confidence associated with tracking, or redundancy in expectation of camera failure. We show that explicitly taking account of the level of overlap, even based only on local knowledge, improves resilience when cameras fail. Our results illustrate the trade-off between maintaining high confidence and object coverage, and maintaining redundancy in anticipation of future failure. Our approach provides a fully tunable decentralised method for the self-organisation of redundancy in a changing environment, according to an operator’s preferences.
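As a toy illustration of the decentralised idea (not the paper's algorithm), each camera can be modelled as an independent learner over discrete zoom levels, with a reward that scalarises the competing objectives using operator-chosen weights; the update below is a stateless bandit-style rule, and all names and numbers are assumptions.

```python
import random

ZOOMS = [30, 90, 180, 360]  # candidate fields of view, in degrees

class CameraAgent:
    def __init__(self, weights, eps=0.1, alpha=0.2):
        self.q = {z: 0.0 for z in ZOOMS}  # value estimate per zoom level
        self.w, self.eps, self.alpha = weights, eps, alpha

    def choose(self):
        if random.random() < self.eps:      # occasionally explore
            return random.choice(ZOOMS)
        return max(self.q, key=self.q.get)  # otherwise exploit

    def update(self, zoom, coverage, confidence, overlap):
        """Scalarised multi-objective reward from local observations:
        objects covered, tracking confidence, overlap with neighbours."""
        r = (self.w[0] * coverage + self.w[1] * confidence
             + self.w[2] * overlap)
        self.q[zoom] += self.alpha * (r - self.q[zoom])

# Weighting overlap highly favours redundancy against camera failure;
# weighting confidence highly favours narrow, high-resolution zooms.
agent = CameraAgent(weights=(0.4, 0.3, 0.3))
```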
Abstract:
2000 Mathematics Subject Classification: 62P10, 92C20
Abstract:
Background - Abnormalities in visual processes have been observed in schizophrenia patients and have been associated with alteration of the lateral occipital complex and visual cortex. However, the relationship of these abnormalities with clinical symptomatology is largely unknown. Methods - We investigated the brain activity associated with object perception in schizophrenia. Pictures of common objects were presented to 26 healthy participants (age = 36.9; 11 females) and 20 schizophrenia patients (age = 39.9; 8 females) in an fMRI study. Results - In the healthy sample the presentation of pictures yielded significant activation (pFWE (cluster) < 0.001) of the bilateral fusiform gyrus, bilateral lingual gyrus, and bilateral middle occipital gyrus. In patients, the bilateral fusiform gyrus and bilateral lingual gyrus were significantly activated (pFWE (cluster) < 0.001), but not the middle occipital gyrus. However, significant bilateral activation of the middle occipital gyrus (pFWE (cluster) < 0.05) was revealed when illness duration was controlled for. Depression was significantly associated with increased activation, and anxiety with decreased activation, of the right middle occipital gyrus and several other brain areas in the patient group. No association with positive or negative symptoms was revealed. Conclusions - Illness duration accounts for the weak activation of the middle occipital gyrus in patients during picture presentation. Affective symptoms, but not positive or negative symptoms, influence the activation of the right middle occipital gyrus and other brain areas.
Abstract:
It has been well documented that avoidable traffic accidents occur when motorists miss or ignore traffic signs. With drivers' attention diverted by distractions such as cell phone conversations, missed traffic signs have become more prevalent. Poor weather and other unfriendly driving conditions also mean that motorists are not always alert enough to see every traffic sign on the road. Besides, most cars do not have any form of traffic assistance. Because of heavy traffic and the proliferation of traffic signs on the roads, there is a need for a system that helps the driver not to miss a traffic sign, reducing the probability of an accident. Since visual information is critical for driving, processed video signals from cameras were chosen to assist drivers; these inexpensive cameras can be easily mounted on the automobile. The objective of the present investigation and traffic system development is to recognize traffic signs electronically and alert drivers. For the case study and system development, five important and critical traffic signs were selected: STOP, NO ENTER, NO RIGHT TURN, NO LEFT TURN, and YIELD. The system was evaluated by processing still pictures taken from public roads, and the recognition results were presented in an analysis table indicating correct and false identifications. The system reached an acceptable recognition rate of 80% for all five traffic signs, with a processing time of about three seconds. The capabilities of MATLAB, VLSI design platforms and coding were used to generate a visual warning to complement the visual driver support system with a Field Programmable Gate Array (FPGA) on a XUP Virtex-II Pro Development System.
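For flavour, here is a generic colour-then-shape pipeline of the kind such systems build on, written with OpenCV in Python rather than the MATLAB/FPGA toolchain of the paper; the HSV thresholds, area cut-off and octagon heuristic are illustrative assumptions.

```python
import cv2

def find_red_signs(bgr):
    """Return (label, bounding box) for red sign candidates in a frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue bands.
    mask = (cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)) |
            cv2.inRange(hsv, (170, 80, 80), (180, 255, 255)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hits = []
    for c in contours:
        if cv2.contourArea(c) < 500:  # discard specks
            continue
        poly = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        label = "STOP" if len(poly) == 8 else "red sign"  # octagon cue
        hits.append((label, cv2.boundingRect(c)))
    return hits
```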
Abstract:
Perception and recognition of faces are fundamental cognitive abilities that form a basis for our social interactions. Research has investigated face perception using a variety of methodologies across the lifespan. Habituation, novelty preference, and visual paired comparison paradigms are typically used to investigate face perception in young infants. Storybook recognition tasks and eyewitness lineup paradigms are generally used to investigate face perception in young children. These methodologies have introduced systematic differences, including the use of linguistic information for children but not infants, greater memory load for children than infants, and longer exposure times to faces for infants than for older children, making comparisons across age difficult. Thus, research investigating infant and child perception of faces using common methods, measures, and stimuli is needed to better understand how face perception develops. According to predictions of the Intersensory Redundancy Hypothesis (IRH; Bahrick & Lickliter, 2000, 2002), in early development, perception of faces is enhanced in unimodal visual (i.e., silent dynamic face) rather than bimodal audiovisual (i.e., dynamic face with synchronous speech) stimulation. The current study investigated the development of face recognition in children of three ages: 5–6 months, 18–24 months, and 3.5–4 years, using the novelty preference paradigm and the same stimuli for all age groups. It also assessed the role of modality (unimodal visual versus bimodal audiovisual) and memory load (low versus high) on face recognition. It was hypothesized that face recognition would improve with age and would be enhanced in unimodal visual stimulation with a low memory load. Results demonstrated a developmental trend (F(2, 90) = 5.00, p = 0.009), with older children showing significantly better recognition of faces than younger children. In contrast to predictions, no differences were found as a function of modality of presentation (bimodal audiovisual versus unimodal visual) or memory load (low versus high). This study was the first to demonstrate a developmental improvement in face recognition from infancy through childhood using common methods, measures and stimuli consistent across age.