28 results for visual object categorization
Abstract:
This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach where the object to be segmented is identified by the pose of the cameras instead of user input such as 2D bounding rectangles or brush-strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by Graph-cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models, which are fed into the next Graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where MVS methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
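The pairwise MRF formulation described in this abstract can be sketched in miniature. The Python toy below is not the authors' code: it replaces Graph-cuts with exact brute-force minimisation over a 1-D chain of eight pixels, and the unary costs are invented stand-ins for the combined appearance/epipolar/stereo terms.

```python
from itertools import product

def mrf_energy(labels, unary, w):
    """Pairwise MRF energy on a 1-D chain: data term + Potts smoothness term."""
    data = sum(unary[i][l] for i, l in enumerate(labels))
    smooth = w * sum(labels[i] != labels[i + 1] for i in range(len(labels) - 1))
    return data + smooth

def segment(unary, w):
    """Exact minimisation by enumeration (a stand-in for Graph-cuts at toy sizes)."""
    n = len(unary)
    return min(product((0, 1), repeat=n), key=lambda lab: mrf_energy(lab, unary, w))

# invented unary costs: (cost of background, cost of foreground) per pixel;
# pixel 5 is noisy and weakly prefers background, and the smoothness term corrects it
unary = [(0.1, 0.9)] * 4 + [(0.9, 0.1), (0.4, 0.6), (0.9, 0.1), (0.9, 0.1)]
print(segment(unary, 0.3))  # -> (0, 0, 0, 0, 1, 1, 1, 1)
```

With the smoothness weight set to zero, the noisy pixel flips to background, illustrating why the pairwise term matters for clean silhouettes.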
Abstract:
The current research examined the influence of ingroup/outgroup categorization on brain event-related potentials measured during perceptual processing of own- and other-race faces. White participants performed a sequential matching task with upright and inverted faces belonging either to their own race (White) or to another race (Black) and affiliated with either their own university or another university by a preceding visual prime. Results demonstrated that the right-lateralized N170 component evoked by test faces was modulated by race and by social category: the N170 to own-race faces showed a larger inversion effect (i.e., latency delay for inverted faces) when the faces were categorized as other-university rather than own-university members; the N170 to other-race faces showed no modulation of its inversion effect by university affiliation. These results suggest that neural correlates of structural face encoding (as evidenced by the N170 inversion effects) can be modulated by both visual (racial) and nonvisual (social) ingroup/outgroup status. © 2014 Taylor & Francis.
Abstract:
Behavioural advantages for imitation of human movements over movements instructed by other visual stimuli are attributed to an ‘action observation-execution matching’ (AOEM) mechanism. Here, we demonstrate that priming/exogenous cueing with a videotaped finger movement stimulus (S1) produces specific congruency effects in reaction times (RTs) of imitative responses to a target movement (S2) at defined stimulus onset asynchronies (SOAs). When contrasted with a moving object at an SOA of 533 ms, only a human movement is capable of inducing an effect reminiscent of ‘inhibition of return’ (IOR), i.e. a significant advantage for imitation of a subsequent incongruent as compared to a congruent movement. When responses are primed by a finger movement at SOAs of 533 and 1,200 ms, inhibition of congruent or facilitation of incongruent responses, respectively, is stronger as compared to priming by a moving object. This pattern does not depend on whether S2 presents a finger movement or a moving object, thus effects cannot be attributed to visual similarity between S1 and S2. We propose that, whereas both priming by a finger movement and a moving object induces processes of spatial orienting, solely observation of a human movement activates AOEM. Thus, S1 immediately elicits an imitative response tendency. As an overt imitation of S1 is inadequate in the present setting, the response is inhibited which, in turn, modulates congruency effects.
Abstract:
The Teallach project has adapted model-based user-interface development techniques to the systematic creation of user-interfaces for object-oriented database applications. Model-based approaches aim to provide designers with a more principled approach to user-interface development using a variety of underlying models, and tools which manipulate these models. Here we present the results of the Teallach project, describing the tools developed and the flexible design method supported. Distinctive features of the Teallach system include provision of database-specific constructs, comprehensive facilities for relating the different models, and support for a flexible design method in which models can be constructed and related by designers in different orders and in different ways, to suit their particular design rationales. The system then creates the desired user-interface as an independent, fully functional Java application, with automatically generated help facilities.
Abstract:
Background: Prescribing magnification is typically based on distance or near visual acuity. This presumes a constant minimum angle of visual resolution with working distance and therefore enlargement of an object moved to a shorter working distance (relative distance enlargement). This study examines this premise in a visually impaired population. Methods: Distance letter visual acuity was measured prospectively for 380 low vision patients (distance visual acuity between 0.3 and 2.1 logMAR) over the age of 57 years, along with near word visual acuity at an appropriate distance for near lens additions from +4 D to +20 D. Demographic information, the disease causing low vision, contrast sensitivity, visual field and psychological status were also recorded. Results: Distance letter acuity was significantly related to (r = 0.84) but on average 0.1 ± 0.2 logMAR better (1 ± 2 lines on a logMAR chart) than near word acuity at 25 cm with a +4 D lens addition. In 39.8 per cent of patients, near word acuity was more than 0.1 logMAR worse than distance letter acuity. In 11.0 per cent of subjects, near visual acuity was more than 0.1 logMAR better than distance letter acuity. The group with near word acuity worse than distance letter acuity also had lower contrast sensitivity. The group with near word acuity better than distance letter acuity was less likely to have age-related macular degeneration. Smaller print size could be read by reducing working distance (achieved by using higher near lens additions) in 86.1 per cent, although not by as much as predicted by geometric progression in 14.5 per cent. Discussion: Although distance letter and near word acuity are highly related, they differ on average by one logMAR line and this varies significantly between individuals. Near word acuity did not increase linearly with relative distance enlargement in approximately one in seven visually impaired patients, suggesting that measurement of visual resolution over a range of working distances will assist appropriate prescribing of magnification aids.
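The premise under test, relative distance enlargement, rests on simple thin-lens arithmetic: a +N D near addition focuses at roughly 100/N cm, so a stronger addition shortens the working distance and enlarges the retinal image in proportion. A hedged sketch of that arithmetic (function names are ours, not the paper's):

```python
def working_distance_cm(add_diopters):
    # focal distance of the near lens addition, thin-lens approximation
    return 100.0 / add_diopters

def relative_distance_magnification(ref_add, new_add):
    # enlargement gained by moving the object closer, relative to a reference addition
    return working_distance_cm(ref_add) / working_distance_cm(new_add)

# moving from a +4 D addition (25 cm) to a +20 D addition (5 cm)
print(relative_distance_magnification(4, 20))  # -> 5.0
```

The study's finding is that roughly one in seven patients gains less than this geometric prediction, which is why acuity should be measured across working distances rather than assumed.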
Abstract:
Spatial objects may not only be perceived visually but also by touch. We report recent experiments investigating to what extent prior object knowledge acquired in either the haptic or visual sensory modality transfers to a subsequent visual learning task. Results indicate that even mental object representations learnt in one sensory modality may attain a multi-modal quality. These findings seem incompatible with picture-based reasoning schemas but leave open the possibility of modality-specific reasoning mechanisms.
Abstract:
The project “Reference in Discourse” deals with the selection of a specific object from a visual scene in a natural language situation. The goal of this research is to explain this everyday discourse reference task in terms of a concept generation process based on subconceptual visual and verbal information. The system OINC (Object Identification in Natural Communicators) aims at solving this problem in a psychologically adequate way. The difficulties the system exhibits with incomplete and deviant descriptions correspond to data from experiments with human subjects. The results of these experiments are reported.
Abstract:
Most existing color-based tracking algorithms use the statistical color information of the object as the tracking cue, without maintaining the spatial structure within a single chromatic image. Recently, research on multilinear algebra has made it possible to preserve the spatial structural relationships in a representation of image ensembles. In this paper, a third-order color tensor is constructed to represent the object to be tracked. Considering the influence of environmental change on tracking, biased discriminant analysis (BDA) is extended to tensor biased discriminant analysis (TBDA) for distinguishing the object from the background. At the same time, an incremental scheme for TBDA is developed for online learning of the tensor biased discriminant subspace, which can be used to adapt to appearance variation of both the object and the background. Experimental results show that the proposed method can precisely track objects undergoing large pose, scale and lighting changes, as well as partial occlusion. © 2009 Elsevier B.V.
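The third-order colour tensor mentioned in this abstract is typically manipulated through mode-n unfolding (matricisation), the basic operation underlying tensor discriminant methods. A minimal pure-Python sketch, illustrative only: the column ordering follows one common convention, and the paper's exact formulation may differ.

```python
def mode_n_unfold(tensor, mode):
    """Unfold a 3rd-order tensor (nested lists, shape I x J x K) into a matrix
    whose rows index the chosen mode."""
    I, J, K = len(tensor), len(tensor[0]), len(tensor[0][0])
    if mode == 0:  # rows over i; columns ordered with j varying fastest
        return [[tensor[i][j][k] for k in range(K) for j in range(J)] for i in range(I)]
    if mode == 1:  # rows over j; columns ordered with i varying fastest
        return [[tensor[i][j][k] for k in range(K) for i in range(I)] for j in range(J)]
    # rows over k (the colour channels for an RGB patch)
    return [[tensor[i][j][k] for j in range(J) for i in range(I)] for k in range(K)]

# a 2x2 RGB patch as an I x J x 3 colour tensor
patch = [[[1, 2, 3], [4, 5, 6]],
         [[7, 8, 9], [10, 11, 12]]]
print(mode_n_unfold(patch, 2))  # -> [[1, 7, 4, 10], [2, 8, 5, 11], [3, 9, 6, 12]]
```

Unfolding along the colour mode (mode 2) yields one row per channel, which is the kind of matrix on which per-mode discriminant projections can then be learned.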
Abstract:
When visual sensor networks are composed of cameras that can adjust the zoom factor of their own lens, one must determine the optimal zoom levels for the cameras for a given task. This gives rise to an important trade-off between the overlap of the different cameras’ fields of view, which provides redundancy, and image quality. In an object tracking task, having multiple cameras observe the same area allows for quicker recovery when a camera fails. In contrast, narrow zooms allow for a higher pixel count on regions of interest, leading to increased tracking confidence. In this paper we propose an approach for the self-organisation of redundancy in a distributed visual sensor network, based on decentralised multi-objective online learning using only local information to approximate the global state. We explore the impact of different zoom levels on these trade-offs when tasking omnidirectional cameras, which have a perfect 360-degree view, with keeping track of a varying number of moving objects. We further show how employing decentralised reinforcement learning enables zoom configurations to be achieved dynamically at runtime according to an operator’s preference for maximising either the proportion of objects tracked, the confidence associated with tracking, or redundancy in expectation of camera failure. We show that explicitly taking account of the level of overlap, even based only on local knowledge, improves resilience when cameras fail. Our results illustrate the trade-off between maintaining high confidence and object coverage, and maintaining redundancy in anticipation of future failure. Our approach provides a fully tunable decentralised method for the self-organisation of redundancy in a changing environment, according to an operator’s preferences.
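The operator-preference mechanism this abstract describes can be illustrated with a scalarised multi-objective utility: a weight vector over coverage, confidence and redundancy selects a zoom level. The sketch below is ours, not the authors' method, and the per-zoom estimates are invented stand-ins for values a camera would learn online.

```python
def scalarised_utility(metrics, weights):
    # weighted sum over the operator's objectives (a common multi-objective scalarisation)
    return sum(weights[k] * metrics[k] for k in weights)

# hypothetical per-zoom estimates a single camera might maintain
zoom_metrics = {
    "wide":   {"coverage": 0.9, "confidence": 0.4, "redundancy": 0.8},
    "medium": {"coverage": 0.7, "confidence": 0.6, "redundancy": 0.5},
    "narrow": {"coverage": 0.4, "confidence": 0.9, "redundancy": 0.2},
}

def pick_zoom(weights):
    # each camera picks the zoom maximising its local scalarised utility
    return max(zoom_metrics, key=lambda z: scalarised_utility(zoom_metrics[z], weights))

print(pick_zoom({"coverage": 0.2, "confidence": 0.7, "redundancy": 0.1}))  # -> narrow
print(pick_zoom({"coverage": 0.3, "confidence": 0.2, "redundancy": 0.5}))  # -> wide
```

Changing the weight vector at runtime is what lets the operator trade tracking confidence against redundancy in anticipation of camera failure.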
Abstract:
Previous research (e.g., Jüttner et al, 2013, Developmental Psychology, 49, 161-176) has shown that object recognition may develop well into late childhood and adolescence. The present study extends that research and reveals novel differences in holistic and analytic recognition performance in 7-11 year olds compared to that seen in adults. We interpret our data within Hummel’s hybrid model of object recognition (Hummel, 2001, Visual Cognition, 8, 489-517) that proposes two parallel routes for recognition (analytic vs. holistic) modulated by attention. Using a repetition-priming paradigm, we found in Experiment 1 that children showed no holistic priming, but only analytic priming. Given that holistic priming might be thought to be more ‘primitive’, we confirmed in Experiment 2 that our surprising finding was not because children’s analytic recognition was merely a result of name repetition. Our results suggest a developmental primacy of analytic object recognition. By contrast, holistic object recognition skills appear to emerge with a much more protracted trajectory extending into late adolescence.
Abstract:
In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains.
Abstract:
Background - Abnormalities in visual processes have been observed in schizophrenia patients and have been associated with alteration of the lateral occipital complex and visual cortex. However, the relationship of these abnormalities with clinical symptomatology is largely unknown. Methods - We investigated the brain activity associated with object perception in schizophrenia. Pictures of common objects were presented to 26 healthy participants (age = 36.9; 11 females) and 20 schizophrenia patients (age = 39.9; 8 females) in an fMRI study. Results - In the healthy sample the presentation of pictures yielded significant activation (pFWE (cluster) < 0.001) of the bilateral fusiform gyrus, bilateral lingual gyrus, and bilateral middle occipital gyrus. In patients, the bilateral fusiform gyrus and bilateral lingual gyrus were significantly activated (pFWE (cluster) < 0.001), but not so the middle occipital gyrus. However, significant bilateral activation of the middle occipital gyrus (pFWE (cluster) < 0.05) was revealed when illness duration was controlled for. Depression was significantly associated with increased activation, and anxiety with decreased activation, of the right middle occipital gyrus and several other brain areas in the patient group. No association with positive or negative symptoms was revealed. Conclusions - Illness duration accounts for the weak activation of the middle occipital gyrus in patients during picture presentation. Affective symptoms, but not positive or negative symptoms, influence the activation of the right middle occipital gyrus and other brain areas.
Abstract:
Corticobasal degeneration is a rare, progressive neurodegenerative disease and a member of the 'parkinsonian' group of disorders, which also includes Parkinson's disease, progressive supranuclear palsy, dementia with Lewy bodies and multiple system atrophy. The most common initial symptom is limb clumsiness, usually affecting one side of the body, with or without accompanying rigidity or tremor. Subsequently, the disease affects gait and there is a slow progression to influence ipsilateral arms and legs. Apraxia and dementia are the most common cortical signs. Corticobasal degeneration can be difficult to distinguish from other parkinsonian syndromes but if ocular signs and symptoms are present, they may aid clinical diagnosis. Typical ocular features include increased latency of saccadic eye movements ipsilateral to the side exhibiting apraxia, impaired smooth pursuit movements and visuo-spatial dysfunction, especially involving spatial rather than object-based tasks. Less typical features include reduction in saccadic velocity, vertical gaze palsy, visual hallucinations, sleep disturbance and an impaired electroretinogram. Aspects of primary vision such as visual acuity and colour vision are usually unaffected. Management of the condition to deal with problems of walking, movement, daily tasks and speech problems is an important aspect of the disease.