887 results for Visual Object Recognition


Relevance:

30.00%

Publisher:

Abstract:

"C00-2118-0048."

Relevance:

30.00%

Publisher:

Abstract:

Bibliography: leaf 25.

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

This combined PET and ERP study was designed to identify the brain regions activated when switching and dividing attention between different features of a single object, using matched sensory stimuli and motor responses. The ERP data have previously been reported in this journal [64]. We now present the corresponding PET data. We identified partially overlapping neural networks with paradigms requiring the switching or dividing of attention between the elements of complex visual stimuli. Regions of activation were found in the prefrontal and temporal cortices and cerebellum. Each task activated different prefrontal cortical regions, supporting the view that the functional subspecialisation of the prefrontal and temporal cortices is based on the cognitive operations required rather than on the stimuli themselves. (C) 2003 Elsevier Science B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The spatial character of our reaching movements is extremely sensitive to potential obstacles in the workspace. We recently found that this sensitivity was retained by most patients with left visual neglect when reaching between two objects, despite the fact that they tended to ignore the leftward object when asked to bisect the space between them. This raises the possibility that obstacle avoidance does not require a conscious awareness of the obstacle avoided. We have now tested this hypothesis in a patient with visual extinction following right temporoparietal damage. Extinction is an attentional disorder in which patients fail to report stimuli on the side of space opposite a brain lesion under conditions of bilateral stimulation. Our patient avoided obstacles during reaching, to exactly the same degree, regardless of whether he was able to report their presence. This implicit processing of object location, which may depend on spared superior parietal-lobe pathways, demonstrates that conscious awareness is not necessary for normal obstacle avoidance.

Relevance:

30.00%

Publisher:

Abstract:

There is a growing body of evidence that the processes mediating the allocation of spatial attention within objects may be separable from those governing attentional distribution between objects. In the neglect literature, a related proposal has been made regarding the perception of (within-object) sizes and (between-object) distances. This proposal follows observations that, in size-matching and bisection tasks, neglect is more strongly expressed when patients are required to attend to the sizes of discrete objects than to the (unfilled) distances between objects. These findings are consistent with a partial dissociation between size and distance processing, but a simpler alternative must also be considered. Whilst a neglect patient may fail to explore the full extent of a solid stimulus, the estimation of an unfilled distance requires that both endpoints be inspected before the task can be attempted at all. The attentional cueing implicit in distance estimation tasks might thus account for neglect patients' superior performance on them. We report two bisection studies that address this issue. The first confirmed, amongst patients with left visual neglect, a reliable reduction of rightward error for unfilled gap stimuli as compared with solid lines. The second study assessed the cause of this reduction, deconfounding the effects of stimulus type (lines vs. gaps) and attentional cueing, by applying an explicit cueing manipulation to line and gap bisection tasks. Under these matched cueing conditions, all patients performed similarly on line and gap bisection tasks, suggesting that the reduction of neglect typically observed for gap stimuli may be attributable entirely to cueing effects. We found no evidence that a spatial extent, once fully attended, is judged any differently according to whether it is filled or unfilled.

Relevance:

30.00%

Publisher:

Abstract:

The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as da or tha, was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba], visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. (C) 2004 Wiley Periodicals, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Motion is a powerful cue for figure-ground segregation, allowing the recognition of shapes even if the luminance and texture characteristics of the stimulus and background are matched. In order to investigate the neural processes underlying early stages of the cue-invariant processing of form, we compared the responses of neurons in the striate cortex (V1) of anaesthetized marmosets to two types of moving stimuli: bars defined by differences in luminance, and bars defined solely by the coherent motion of random patterns that matched the texture and temporal modulation of the background. A population of form-cue-invariant (FCI) neurons was identified, which demonstrated similar tuning to the length of contours defined by first- and second-order cues. FCI neurons were relatively common in the supragranular layers (where they corresponded to 28% of the recorded units), but were absent from layer 4. Most had complex receptive fields, which were significantly larger than those of other V1 neurons. The majority of FCI neurons demonstrated end-inhibition in response to long first- and second-order bars, and were strongly direction selective. Thus, even at the level of V1 there are cells whose variations in response level appear to be determined by the shape and motion of the entire second-order object, rather than by its parts (i.e. the individual textural components). These results are compatible with the existence of an output channel from V1 to the ventral stream of extrastriate areas, which already encodes the basic building blocks of the image in an invariant manner.
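For readers unfamiliar with the second type of stimulus described above, the following minimal Python sketch (with hypothetical grid size, bar position and frame count, not values from the study) shows one way a motion-defined bar can be built: the texture inside a bar-shaped region translates coherently from frame to frame, while the background texture is regenerated on every frame, so no single frame contains a luminance-defined contour.

# Hypothetical sketch of a second-order, motion-defined bar: the random
# texture inside the bar region drifts coherently across frames, while the
# background texture is refreshed each frame; the bar is visible only
# through its motion, not through any luminance or texture difference.
import numpy as np

rng = np.random.default_rng(0)
HEIGHT, WIDTH, FRAMES, SHIFT = 64, 64, 8, 2   # illustrative values only

bar = np.zeros((HEIGHT, WIDTH), dtype=bool)
bar[24:40, :] = True                          # bar-shaped region

texture = rng.integers(0, 2, (HEIGHT, WIDTH)).astype(float)
frames = []
for t in range(FRAMES):
    background = rng.integers(0, 2, (HEIGHT, WIDTH)).astype(float)  # refreshed every frame
    texture = np.roll(texture, SHIFT, axis=1)                       # coherent drift inside the bar
    frames.append(np.where(bar, texture, background))

stimulus = np.stack(frames)   # FRAMES x HEIGHT x WIDTH movie with matched mean luminance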

Relevance:

30.00%

Publisher:

Abstract:

Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training. (C) 2004 Elsevier Ltd. All rights reserved.
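The integration and non-integration models are not specified in this abstract; the sketch below is only a minimal illustration of the kind of comparison involved, assuming a multiplicative (FLMP-style) integration rule and a single-channel mixture as the non-integration alternative, with hypothetical unimodal accuracies.

# Illustrative comparison of integration vs. non-integration predictions for
# bimodal (auditory-visual) identification, given hypothetical unimodal accuracies.

def integration(p_auditory, p_visual):
    # Multiplicative (FLMP-style) rule: support from each modality is combined
    # and normalised, so consistent cues reinforce one another.
    support = p_auditory * p_visual
    return support / (support + (1 - p_auditory) * (1 - p_visual))

def non_integration(p_auditory, p_visual, w_auditory=0.5):
    # Single-channel mixture: on any given trial only one modality drives the response.
    return w_auditory * p_auditory + (1 - w_auditory) * p_visual

p_a, p_v = 0.70, 0.60   # hypothetical unimodal identification accuracies
print("integration prediction:    ", round(integration(p_a, p_v), 3))     # 0.778
print("non-integration prediction:", round(non_integration(p_a, p_v), 3)) # 0.65

The multiplicative rule predicts bimodal accuracy above either unimodal accuracy, whereas the mixture rule predicts a value between them; comparing observed bimodal performance against predictions of this general kind is how the two accounts are distinguished.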

Relevance:

30.00%

Publisher:

Abstract:

This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, how they are oriented, or how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large for even very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
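As a rough illustration of why the unconstrained solution space grows so quickly, the toy sketch below (not the paper's first-order-logic formulation, and using invented silhouette "views") enumerates every occupancy assignment of a 2 x 2 voxel grid and keeps those whose two 1D projections match the observed silhouettes; even this tiny example admits several distinct reconstructions.

# Brute-force enumeration of all occupancy patterns of a tiny 2x2 voxel grid
# that are consistent with two hypothetical 1D silhouette projections.
from itertools import product

ROWS, COLS = 2, 2

def project_rows(grid):
    # First "camera", looking along the columns: a row is marked filled
    # if any voxel in that row is occupied.
    return tuple(int(any(grid[r][c] for c in range(COLS))) for r in range(ROWS))

def project_cols(grid):
    # Second "camera", looking along the rows.
    return tuple(int(any(grid[r][c] for r in range(ROWS))) for c in range(COLS))

observed_rows = (1, 1)   # hypothetical silhouettes
observed_cols = (1, 1)

solutions = []
for bits in product((0, 1), repeat=ROWS * COLS):
    grid = [list(bits[r * COLS:(r + 1) * COLS]) for r in range(ROWS)]
    if project_rows(grid) == observed_rows and project_cols(grid) == observed_cols:
        solutions.append(grid)

print(f"{len(solutions)} occupancy patterns are consistent with both views")  # prints 7 here

The count scales steeply with grid size, which is the behaviour the paper quantifies for 5 x 5 voxel spaces.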

Relevance:

30.00%

Publisher:

Abstract:

Capacity limits in visual attention have traditionally been studied using static arrays of elements from which an observer must detect a target defined by a certain visual feature or combination of features. In the current study we use this visual search paradigm, with accuracy as the dependent variable, to examine attentional capacity limits for different visual features undergoing change over time. In Experiment 1, detectability of a single changing target was measured under conditions where the type of change (size, speed, colour), the magnitude of change, the set size and the homogeneity of the unchanging distractors were all systematically varied. Psychometric function slopes were calculated for different experimental conditions, and 'change thresholds' extracted from these slopes were used in Experiment 2, in which multiple supra-threshold changes were made, simultaneously, either to a single stimulus element or to two or three different elements. These experiments give an objective psychometric paradigm for measuring changes in visual features over time. Results favour object-based accounts of visual attention, and show consistent differences in the allocation of attentional capacity to different perceptual dimensions.
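As a concrete illustration of the threshold-extraction step described above, the sketch below fits a logistic psychometric function to simulated accuracy data and reads off a 'change threshold' at 75% correct; the data values, the 75% criterion, and the guess and lapse rates are assumptions for illustration, not figures from the study.

# Fit a logistic psychometric function to (simulated) detection accuracy as a
# function of change magnitude, then invert it to obtain a 'change threshold'.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta, lapse=0.02, guess=0.5):
    # Logistic rising from the guess rate towards (1 - lapse);
    # alpha is the midpoint and beta the slope.
    return guess + (1.0 - guess - lapse) / (1.0 + np.exp(-beta * (x - alpha)))

# Hypothetical data: change magnitudes (arbitrary units) and proportion correct.
magnitudes = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
accuracy   = np.array([0.52, 0.58, 0.71, 0.84, 0.93, 0.97])

(alpha, beta), _ = curve_fit(psychometric, magnitudes, accuracy, p0=[1.5, 2.0])

# Invert the fitted curve to find the magnitude giving 75% correct.
target = 0.75
threshold = alpha - np.log((1.0 - 0.5 - 0.02) / (target - 0.5) - 1.0) / beta
print(f"midpoint = {alpha:.2f}, slope = {beta:.2f}, 75%-correct threshold = {threshold:.2f}")

Thresholds of this kind, one per feature dimension, would then set the supra-threshold change magnitudes used when several changes are presented at once.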

Relevance:

30.00%

Publisher:

Abstract:

By 24 months of age most children show mirror self-recognition. When surreptitiously marked on their forehead and then presented with a mirror, they explore their own head for the unexpected mark. Here we demonstrate that self-recognition in mirrors does not generalize to other visual feedback. We tested 80 children on mirror and live video versions of the task. Whereas 90% of 24-month-olds passed the mirror version, only 35% passed the video version. Seventy percent of 30-month-olds showed video self-recognition, and only by 36 months of age did the pass rate on the video version reach 90%. It remains to be