911 results for Higher-level visual processing
Abstract:
Novel imaging techniques are playing an increasingly important role in drug development, providing insight into the mechanism of action of new chemical entities. The data sets obtained by these methods can be large, with complex inter-relationships, but the most appropriate statistical analysis is often uncertain, precisely because of the exploratory way in which the data are collected. We present an example from a clinical trial using magnetic resonance imaging to assess changes in atherosclerotic plaques following treatment with a tool compound with established clinical benefit. We compared two approaches to handling the correlations due to physical location and repeated measurements: two-level and four-level multilevel models. The two methods identified similar structural variables, but the higher-level multilevel models had the advantage of explaining a greater proportion of the variation, and their modeling assumptions appeared to be better satisfied.
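A minimal sketch of the two modelling strategies, assuming a hypothetical long-format table with columns plaque_change, treatment, visit, patient, artery and slice (none of which come from the trial itself), might look as follows; it uses statsmodels' MixedLM, with the deeper nesting approximated by variance components, rather than the authors' actual analysis code.

```python
# Minimal sketch (not the trial's analysis): a two-level versus a deeper
# multilevel model in statsmodels. All column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("plaque_measurements.csv")  # hypothetical long-format data

# Two-level model: repeated measurements nested within patients only.
two_level = smf.mixedlm(
    "plaque_change ~ treatment + visit", df, groups=df["patient"]
).fit()

# Four-level-style model: variance components for artery and slice within each
# patient approximate the nesting patient > artery > slice > repeated visit.
four_level = smf.mixedlm(
    "plaque_change ~ treatment + visit",
    df,
    groups=df["patient"],
    re_formula="1",
    vc_formula={"artery": "0 + C(artery)", "slice": "0 + C(slice)"},
).fit()

print(two_level.summary())
print(four_level.summary())
```

Comparing the two summaries shows how much additional variation the extra levels absorb, which is the sense in which the higher-level model explained a greater proportion of the variation.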
Abstract:
Much is known about the functional mechanisms involved in visual search. Yet the fundamental question of whether the visual system can perform different types of visual analysis at different spatial resolutions remains unsettled. In the visual-attention literature, the distinction between different spatial scales of visual processing corresponds to the distinction between distributed and focused attention. Some authors have argued that singleton detection can be performed in distributed attention, whereas others suggest that even such a simple visual operation involves focused attention. Here we show that microsaccades were spatially biased during singleton discrimination but not during singleton detection. The results support the hypothesis that some coarse visual analysis can be performed in a distributed attention mode.
Abstract:
Previous studies have shown that the human posterior cingulate contains a visual processing area selective for optic flow (CSv). However, other studies in both humans and monkeys have identified a somatotopic motor region (CMA) at the same location. Taken together, these findings suggested that the posterior cingulate might contain a single visuomotor integration region. To test this idea, we used fMRI to identify both visual and motor areas of the posterior cingulate in the same brains and to test the activity of those regions during a visuomotor task. The results indicated that, rather than a single visuomotor region, the posterior cingulate contains adjacent but separate motor and visual regions. CSv lies in the fundus of the cingulate sulcus, while CMA lies in the dorsal bank of the sulcus, slightly superior in terms of stereotaxic coordinates. A surprising and novel finding was that activity in CSv was suppressed during the visuomotor task, despite the visual stimulus being identical to that used to localize the region. This may provide an important clue to the specific role this region plays in using optic flow to control self-motion.
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise go undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, thereby allowing the early recruitment of fight-or-flight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike visual detection, visual orientation discrimination cannot be crossmodally enhanced by sound in the absence of functional striate cortex. This finding demonstrates, on the one hand, that the striate cortex plays a causal role in the crossmodal enhancement of visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes that improve simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.
Abstract:
The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.
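As a rough illustration of the spherical-searchlight idea, the sketch below fits a classifier within a small sphere around every voxel and records its cross-validated accuracy. It is a generic example rather than the authors' pipeline: the file names, labels, and cross-validation scheme are assumptions, and it relies on nilearn's SearchLight.

```python
# Generic sketch of spherical-searchlight decoding (not the study's own code).
# File names, labels and the cross-validation scheme are assumptions.
import numpy as np
from nilearn.decoding import SearchLight
from nilearn.image import load_img
from sklearn.model_selection import KFold

fmri_imgs = load_img("trial_beta_maps_4d.nii.gz")    # hypothetical: one volume per trial
mask_img = load_img("brain_mask.nii.gz")             # hypothetical whole-brain mask
labels = np.loadtxt("motion_direction_labels.txt")   # hypothetical: e.g. 0 = leftward, 1 = rightward

searchlight = SearchLight(
    mask_img,
    radius=6,                # sphere radius in mm
    estimator="svc",         # a linear SVM is fitted within every sphere
    cv=KFold(n_splits=5),
    n_jobs=-1,
)
searchlight.fit(fmri_imgs, labels)

# searchlight.scores_ holds one cross-validated accuracy per voxel; thresholding
# this map highlights regions whose local activity patterns carry directional
# information, analogous to the whole-brain analysis described above.
```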
Abstract:
In an ideal world, all instructors of safety and health courses would be masters of the course subject matter as well as of the theories and practices of effective teaching. In practice, however, most instructors are much stronger in one than in the other. This paper provides an example of how some fundamental knowledge from educational experts can be used to improve a traditional safety course. Is there a problem with the way traditional safety and health (S&H) courses are taught? This author asserts that S&H education, in general, places too much emphasis on the acquisition and comprehension of facts at the expense of helping students develop higher-level cognitive abilities. This paper explains the basis for that assertion and reports an experience of upgrading a traditional fire protection course to include more assignments involving the higher-level ability known in the education community as synthesis.
Abstract:
BACKGROUND: Many patients with Posttraumatic Stress Disorder (PTSD) feel overwhelmed in situations with high levels of sensory input, as in crowded situations with complex sensory characteristics. These difficulties might be related to subtle sensory processing deficits similar to those that have been found for sounds in electrophysiological studies. METHOD: Visual processing was investigated with functional magnetic resonance imaging in trauma-exposed participants with (N = 18) and without PTSD (N = 21) employing a picture-viewing task. RESULTS: Activity observed in response to visual scenes was lower in PTSD participants 1) in the ventral stream of the visual system, including striate and extrastriate, inferior temporal, and entorhinal cortices, and 2) in dorsal and ventral attention systems (P < 0.05, FWE-corrected). These effects could not be explained by the emotional salience of the pictures. CONCLUSIONS: Visual processing was substantially altered in PTSD in the ventral visual stream, a component of the visual system thought to be responsible for object property processing. Together with previous reports of subtle auditory deficits in PTSD, these findings provide strong support for potentially important sensory processing deficits, whose origins may be related to dysfunctional attention processes.
Abstract:
Past research has shown that the gender typicality of applicants’ faces affects leadership selection irrespective of a candidate’s gender: a masculine facial appearance is congruent with masculine-typed leadership roles, so masculine-looking applicants are more likely to be hired than feminine-looking ones. In the present study, we extended this line of research by investigating hiring decisions for both masculine- and feminine-typed professional roles. Furthermore, we used eye tracking to examine the visual exploration of applicants’ portraits. Our results indicate that masculine-looking applicants were favored for the masculine-typed role (leader) and feminine-looking applicants for the feminine-typed role (team member). Eye movement patterns showed that information about gender category and facial appearance was integrated during the first fixations on the portraits. Hiring decisions, however, were not based on this initial analysis but occurred at a second stage, when the portrait was viewed in the context of considering the applicant for a specific job.
Abstract:
Many mental disorders disrupt social skills, yet few studies have examined how the brain processes social information. Functional neuroimaging, neuroconnectivity and electrophysiological studies suggest that orbital frontal cortex plays important roles in social cognition, including the analysis of information from faces, which are important cues in social interactions. Studies in humans and non-human primates show that damage to orbital frontal cortex produces social behavior impairments, including abnormal aggression, but these studies have failed to determine whether damage to this area impairs face processing. In addition, it is not known whether damage early in life is more detrimental than damage in adulthood. This study examined whether orbital frontal cortex is necessary for the discrimination of face identity and facial expressions, and for appropriate behavioral responses to aggressive (threatening) facial expressions. Rhesus monkeys (Macaca mulatta) received selective lesions of orbital frontal cortex as newborns or adults. As adults, these animals were compared with sham-operated controls on their ability to discriminate between faces of individual monkeys and between different facial expressions of emotion. A passive visual paired-comparison task with standardized rhesus monkey face stimuli was designed and used to assess discrimination. In addition, looking behavior toward aggressive expressions was assessed and compared with that of normal control animals. The results showed that lesion of orbital frontal cortex (1) may impair discrimination between faces of individual monkeys, (2) does not impair facial expression discrimination, and (3) changes the amount of time spent looking at aggressive (threatening) facial expressions depending on the context. The effects of early and late lesions did not differ. Thus, orbital frontal cortex appears to be part of the neural circuitry for recognizing individuals and for modulating the response to aggression in faces, and the plasticity of the immature brain does not allow for recovery of these functions when the damage occurs early in life. This study opens new avenues for the assessment of rhesus monkey face processing and the neural basis of social cognition, and allows a better understanding of the nature of the neuropathology in patients with mental disorders that disrupt social behavior, such as autism.
Abstract:
The visual responses of neurons in the cerebral cortex were first adequately characterized in the 1960s by D. H. Hubel and T. N. Wiesel [(1962) J. Physiol. (London) 160, 106-154; (1968) J. Physiol. (London) 195, 215-243] using qualitative analyses based on simple geometric visual targets. Over the past 30 years, it has become common to characterize the properties of these neurons by attempting to give formal descriptions of the transformations they execute on the visual image. Most such models have their roots in the linear-systems approaches pioneered in the retina by C. Enroth-Cugell and J. R. Robson [(1966) J. Physiol. (London) 187, 517-552], but it is clear that purely linear models of cortical neurons are inadequate. We present two related models: one designed to account for the responses of simple cells in primary visual cortex (V1) and one designed to account for the responses of pattern direction selective cells in MT (or V5), an extrastriate visual area thought to be involved in the analysis of visual motion. These models share a common structure that operates in the same way on different kinds of input, and they instantiate the widely held view that computational strategies are similar throughout the cerebral cortex. Implementations of these models for Macintosh microcomputers are available and can be used to explore the models' properties.
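The common structure these models share, a linear receptive-field stage followed by rectification and divisive normalization, can be sketched in a few lines. The toy below is not the authors' Macintosh implementation; the Gabor parameters and the normalization pool are arbitrary stand-ins chosen only to illustrate the linear-nonlinear cascade for a V1 simple cell.

```python
# Toy linear-nonlinear sketch of a V1 simple-cell model (not the authors' code):
# Gabor filtering, half-wave rectification with squaring, divisive normalization.
import numpy as np

def gabor(size=32, sf=0.15, theta=0.0, phase=0.0, sigma=6.0):
    """Gabor receptive field: an oriented sinusoid under a Gaussian envelope."""
    coords = np.arange(size) - size / 2
    x, y = np.meshgrid(coords, coords)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * sf * xr + phase)

def simple_cell_response(image, rf, sigma_norm=1.0):
    """Linear stage -> half-squaring -> divisive normalization (arbitrary constants)."""
    linear = np.sum(rf * image)                # linear receptive-field stage
    halfsq = np.maximum(linear, 0.0) ** 2      # half-wave rectify, then square
    pool = np.mean(image ** 2)                 # crude stand-in for the normalization pool
    return halfsq / (sigma_norm ** 2 + pool)   # divisive normalization

# Example: a vertical grating drives a vertically tuned model cell more strongly
# than an obliquely tuned one, showing orientation selectivity from the linear stage.
coords = np.arange(32) - 16
xx, _ = np.meshgrid(coords, coords)
grating = np.cos(2 * np.pi * 0.15 * xx)

print(simple_cell_response(grating, gabor(theta=0.0)))        # preferred orientation
print(simple_cell_response(grating, gabor(theta=np.pi / 4)))  # non-preferred orientation
```

In the paper's framework, the MT pattern-cell model applies the same cascade to a different kind of input, which is the sense in which the two models are described as sharing one structure.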