951 results for Visual Object Recognition
Abstract:
Internship report presented to the Escola Superior de Artes Aplicadas of the Instituto Politécnico de Castelo Branco, in association with the Faculdade de Arquitetura of the Universidade de Lisboa, in fulfilment of the requirements for the degree of Master in Graphic Design.
Abstract:
Doctoral thesis, Artistic Studies (Theatre Studies), Universidade de Lisboa, Faculdade de Letras, 2016.
Abstract:
Perceptual accuracy is known to be influenced by stimulus location within the visual field. In particular, it appears to be enhanced in the lower visual hemifield (VH) for motion and space processing, and in the upper VH for object and face processing. The origins of such asymmetries are attributed to attentional biases across the visual field and to the functional organization of the visual system. In this article, we tested content-dependent perceptual asymmetries in different regions of the visual field. Twenty-five healthy volunteers participated in this study. They performed three visual tests involving perception of shapes, orientation and motion, in the four quadrants of the visual field. The results showed that perceptual accuracy was better in the lower than in the upper visual field for motion perception, and better in the upper than in the lower visual field for shape perception. Orientation perception showed no vertical bias, and no difference was found between the right and left VHs. These findings suggest that the dorsal and ventral visual streams, responsible for motion and shape perception, are biased towards the lower and upper VHs, respectively; this bias depends on the content of the visual information.
Abstract:
Federal Highway Administration, Office of Safety and Traffic Operations Research and Development, McLean, Va.
Abstract:
"January 1995."
Abstract:
"C00-2118-0048."
Abstract:
Bibliography: leaf 25.
Abstract:
Mode of access: Internet.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
This combined PET and ERP study was designed to identify the brain regions activated in switching and dividing attention between different features of a single object, using matched sensory stimuli and motor responses. The ERP data have previously been reported in this journal [64]; we now present the corresponding PET data. We identified partially overlapping neural networks with paradigms requiring the switching or dividing of attention between the elements of complex visual stimuli. Regions of activation were found in the prefrontal and temporal cortices and the cerebellum. Each task produced different prefrontal cortical regions of activation, supporting the view that the functional subspecialisation of the prefrontal and temporal cortices is based on the cognitive operations required rather than on the stimuli themselves.
Abstract:
The spatial character of our reaching movements is extremely sensitive to potential obstacles in the workspace. We recently found that this sensitivity was retained by most patients with left visual neglect when reaching between two objects, despite the fact that they tended to ignore the leftward object when asked to bisect the space between them. This raises the possibility that obstacle avoidance does not require a conscious awareness of the obstacle avoided. We have now tested this hypothesis in a patient with visual extinction following right temporoparietal damage. Extinction is an attentional disorder in which patients fail to report stimuli on the side of space opposite a brain lesion under conditions of bilateral stimulation. Our patient avoided obstacles during reaching, to exactly the same degree, regardless of whether he was able to report their presence. This implicit processing of object location, which may depend on spared superior parietal-lobe pathways, demonstrates that conscious awareness is not necessary for normal obstacle avoidance.
Abstract:
There is a growing body of evidence that the processes mediating the allocation of spatial attention within objects may be separable from those governing attentional distribution between objects. In the neglect literature, a related proposal has been made regarding the perception of (within-object) sizes and (between-object) distances. This proposal follows observations that, in size-matching and bisection tasks, neglect is more strongly expressed when patients are required to attend to the sizes of discrete objects than to the (unfilled) distances between objects. These findings are consistent with a partial dissociation between size and distance processing, but a simpler alternative must also be considered. Whilst a neglect patient may fail to explore the full extent of a solid stimulus, the estimation of an unfilled distance requires that both endpoints be inspected before the task can be attempted at all. The attentional cueing implicit in distance estimation tasks might thus account for their superior performance by neglect patients. We report two bisection studies that address this issue. The first confirmed, amongst patients with left visual neglect, a reliable reduction of rightward error for unfilled gap stimuli as compared with solid lines. The second study assessed the cause of this reduction, deconfounding the effects of stimulus type (lines vs. gaps) and attentional cueing, by applying an explicit cueing manipulation to line and gap bisection tasks. Under these matched cueing conditions, all patients performed similarly on line and gap bisection tasks, suggesting that the reduction of neglect typically observed for gap stimuli may be attributable entirely to cueing effects. We found no evidence that a spatial extent, once fully attended, is judged any differently according to whether it is filled or unfilled.
Abstract:
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as [da] or [ða], was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials: [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control-group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information.
Abstract:
Motion is a powerful cue for figure-ground segregation, allowing the recognition of shapes even if the luminance and texture characteristics of the stimulus and background are matched. In order to investigate the neural processes underlying early stages of the cue-invariant processing of form, we compared the responses of neurons in the striate cortex (V1) of anaesthetized marmosets to two types of moving stimuli: bars defined by differences in luminance, and bars defined solely by the coherent motion of random patterns that matched the texture and temporal modulation of the background. A population of form-cue-invariant (FCI) neurons was identified, which demonstrated similar tuning to the length of contours defined by first- and second-order cues. FCI neurons were relatively common in the supragranular layers (where they corresponded to 28% of the recorded units), but were absent from layer 4. Most had complex receptive fields, which were significantly larger than those of other V1 neurons. The majority of FCI neurons demonstrated end-inhibition in response to long first- and second-order bars, and were strongly direction selective. Thus, even at the level of V1 there are cells whose variations in response level appear to be determined by the shape and motion of the entire second-order object, rather than by its parts (i.e. the individual textural components). These results are compatible with the existence of an output channel from V1 to the ventral stream of extrastriate areas, which already encodes the basic building blocks of the image in an invariant manner.
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems, which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech-synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy, and this in turn improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal conditions, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
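The abstract above does not name its "mathematical models based on integration and non-integration"; in the literature surrounding the Baldi virtual talker, the standard integration model is Massaro's Fuzzy Logical Model of Perception (FLMP). As an illustrative sketch only (the exact model forms used in the study are assumptions here), the two classes of prediction can be contrasted in a few lines of Python:

```python
def flmp(a: float, v: float) -> float:
    """FLMP prediction for a two-alternative judgement: auditory
    support a and visual support v (each in [0, 1]) are multiplied
    and renormalised, so both modalities jointly shape the response."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def single_channel(a: float, v: float, w: float = 0.5) -> float:
    """A simple non-integration alternative: on each trial the
    response follows one modality alone, auditory with probability w,
    visual otherwise, giving a weighted average of the two supports."""
    return w * a + (1 - w) * v

# Strong auditory support (0.9) paired with weak visual support (0.2):
# the multiplicative FLMP rule yields 0.18 / 0.26, roughly 0.69,
# while the non-integration average yields 0.55.
print(flmp(0.9, 0.2))
print(single_channel(0.9, 0.2))
```

The diagnostic difference is that the FLMP is super-additive when both modalities agree (two supports of 0.9 combine to about 0.99), whereas the single-channel average can never exceed the better modality; fitting both to bimodal identification data is what lets a study of this kind decide between integration and non-integration accounts.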