983 results for Visual cue integration
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation of one sensory modality leads to an internally generated perceptual experience in another, non-stimulated modality. The phenomenon can be viewed as an abnormal multisensory integration process, because the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with animate and inanimate objects presented either visually alone or audio-visually, in congruent and incongruent audio-visual combinations. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During stimulus presentation, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The difference occurred irrespective of the experimental condition and therefore suggests a global influence on early sensory processing in synesthetes.
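As a rough illustration of the ERP measure reported above, an N1 amplitude can be quantified as the mean voltage of the trial-averaged waveform within an early latency window. This is a hypothetical sketch only: the 150-200 ms window, the units, and the array layout are assumptions, not details taken from the study.

```python
import numpy as np

def n1_mean_amplitude(epochs, times, window=(0.15, 0.20)):
    """Mean amplitude of the N1 component for one electrode.

    epochs: array of shape (n_trials, n_samples), in microvolts;
    times:  sample times in seconds relative to stimulus onset.
    The 150-200 ms window is an assumed N1 latency range, not a
    parameter taken from the study."""
    erp = np.mean(epochs, axis=0)                     # trial-averaged ERP
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].mean())                    # mean voltage in the window
```

In practice such a value would be computed per subject and condition over occipital electrodes and then compared between groups.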
Abstract:
Previous studies have shown that the human posterior cingulate contains a visual processing area selective for optic flow (CSv). However, other studies performed in both humans and monkeys have identified a somatotopic motor region at the same location (CMA). Taken together, these findings suggested the possibility that the posterior cingulate contains a single visuomotor integration region. To test this idea we used fMRI to identify both visual and motor areas of the posterior cingulate in the same brains and to test the activity of those regions during a visuomotor task. Results indicated that, rather than a single visuomotor region, the posterior cingulate contains adjacent but separate motor and visual regions. CSv lies in the fundus of the cingulate sulcus, while CMA lies in the dorsal bank of the sulcus, slightly superior in terms of stereotaxic coordinates. A surprising and novel finding was that activity in CSv was suppressed during the visuomotor task, despite the visual stimulus being identical to that used to localize the region. This may provide an important clue to the specific role played by this region in the utilization of optic flow to control self-motion.
Abstract:
Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape and whether accurate estimation of 3D shape is at all possible. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus that is known to elicit large distance-dependent perceptual bias (random dot stereograms of elliptical cylinders), we show that, contrary to these findings, simply adopting a more naturalistic viewing angle completely eliminates this bias. Using behavioural psychophysics, coupled with a novel surface-based reverse-correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception, and that observers actively exploit and sample this information when it is available.
Abstract:
The purpose of this study was to examine the effects of visual and somatosensory information on body sway in individuals with Down syndrome (DS). Nine adults with DS (19-29 years old) and nine control subjects (CS) (19-29 years old) stood in the upright stance in four experimental conditions: no vision and no touch; vision and no touch; no vision and touch; and vision and touch. In the vision condition, participants looked at a target placed in front of them; in the no vision condition, participants wore a black cotton mask. In the touch condition, participants touched a stationary surface with their right index finger; in the no touch condition, participants kept their arms hanging alongside their bodies. A force plate was used to estimate center of pressure excursion for both anterior-posterior and medial-lateral directions. MANOVA revealed that both the individuals with DS and the control subjects used vision and touch to reduce overall body sway, although individuals with DS still oscillated more than did the CS. These results indicate that adults with DS are able to use sensory information to reduce body sway, and they demonstrate that there is no difference in sensory integration between the individuals with DS and the CS.
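The force-plate measure described above can be sketched as follows: center-of-pressure (COP) traces are commonly derived from the plate's vertical force and moments, and sway is then summarized as the RMS excursion in each direction. The sign and axis conventions below are assumptions for illustration, not the study's actual processing pipeline.

```python
import numpy as np

def cop_excursion(fz, mx, my):
    """Center-of-pressure (COP) traces from force-plate data, summarized
    as the RMS excursion about the mean in each direction.

    fz: vertical force (N); mx, my: moments about the plate axes (N*m).
    The sign conventions and the ML/AP axis mapping are assumptions
    for illustration only."""
    fz, mx, my = (np.asarray(a, dtype=float) for a in (fz, mx, my))
    cop_ml = -my / fz  # medial-lateral COP position (m)
    cop_ap = mx / fz   # anterior-posterior COP position (m)
    rms = lambda s: float(np.sqrt(np.mean((s - s.mean()) ** 2)))
    return rms(cop_ml), rms(cop_ap)
```

Comparing such RMS values across the four vision/touch conditions would quantify how much each sensory cue reduces overall body sway.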
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Background: Aging is characterized by a decline in postural control performance, which depends on a coherent and stable coupling between sensory information and motor action. Changes in postural control in older adults may therefore be related to changes in this coupling. In addition, physical activity appears to improve postural control performance in older adults, possibly through changes in the coupling between sensory information and motor action. Objective: the purpose of this study was to verify the coupling between visual information and body sway in active and sedentary elderly adults. Methods: Sixteen sedentary elderly (SE), 16 active elderly (AE) and 16 young adults (YA) were asked to stand upright inside a moving room in two experimental conditions: (1) discrete movement and (2) continuous movement of the room. Results: In the continuous condition, the coupling between the movement of the room and body sway was stronger and more stable for SE and AE than for YA. In the discrete condition, SE showed larger body displacement than AE and YA. Conclusions: SE have more difficulty discriminating and integrating sensory information than AE and YA, indicating that physical activity may improve sensory integration. Copyright (C) 2005 S. Karger AG, Basel.
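The coupling between room motion and body sway described above is often quantified as a gain and relative phase at the room's driving frequency. A minimal sketch, assuming a single continuous trial sampled at a fixed rate; the function name and the FFT-based approach are illustrative, not the study's method:

```python
import numpy as np

def coupling_at_frequency(room, sway, fs, f_stim):
    """Gain and relative phase between moving-room displacement and
    body sway at the room's driving frequency.

    Hypothetical sketch: real analyses typically use cross-spectral
    or coherence estimates averaged over trials."""
    n = len(room)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))  # FFT bin nearest the stimulus frequency
    r = np.fft.rfft(room)[k]
    s = np.fft.rfft(sway)[k]
    gain = float(np.abs(s) / np.abs(r))   # sway amplitude relative to room amplitude
    phase = float(np.angle(s / r))        # radians; negative means sway lags the room
    return gain, phase
```

A stronger, more stable coupling would show up as a higher gain and a less variable phase across trials.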
Abstract:
Three-dimensional kinematic analysis of line of gaze, arm and ball was used to describe the visual and motor behaviour of male adolescents diagnosed with attention deficit hyperactivity disorder (ADHD). The ADHD participants were tested when both on (ADHD-On) and off (ADHD-Off) their medication and compared to age-matched normal controls in a modified table tennis task that required tracking the ball and hitting to cued right and left targets. Long-duration information was provided by a pre-cue, in which the target was illuminated approximately 2 s before the serve, and short-duration information by an early-cue illuminated about 350 ms after the serve, leaving approximately 500 ms to select the target and perform the action. The ADHD groups differed significantly from the control group in both the pre-cue and early-cue conditions in being less accurate, in having a later onset and duration of pursuit tracking, and a higher frequency of gaze on and off the ball. The use of medication significantly reduced the gaze frequency of the ADHD participants, but surprisingly this did not lead to an increase in pursuit tracking, suggesting a barrier was reached beyond which ball flight information could not be processed. The control and ADHD groups did not differ in arm movement onset, duration and velocity in the short-duration early-cue condition; in the long-duration pre-cue condition, however, the ADHD group's movement onset time and arm velocity differed significantly from controls. The results show that the ADHD groups were able to process short-duration information without experiencing adverse effects on their motor behaviour; however, long-duration information contributed to irregular movement control.
Abstract:
The maintenance of a given body orientation is achieved through a complex relation between sensory information and muscle activity. Therefore, the purpose of this study was to review the role of visual, somatosensory, vestibular and auditory information in the maintenance and control of posture. Method: a search of papers from the last 24 years was performed in the PubMed and CAPES databases, using the following keywords: postural control, sensory information, vestibular system, visual system, somatosensory system, auditory system and haptic system. Results: the influence of each sensory system, and of their integration, on the maintenance and control of posture was analyzed. Conclusion: the literature shows that the sensory channels provide redundant information; the central nervous system therefore selects the main source for postural control.
Abstract:
Dyslexic children, besides difficulties in mastering literacy, also show poor postural control, which might be related to how sensory cues coming from different sensory channels are integrated into proper motor activity. Therefore, the aim of this study was to examine the relationship between sensory information and body sway, with visual and somatosensory information manipulated independently and concurrently, in dyslexic children. Thirty dyslexic and 30 non-dyslexic children were asked to stand as still as possible inside a moving room, either with eyes closed or open and either lightly touching a movable surface or not, for 60 seconds under five experimental conditions: (1) no vision and no touch; (2) moving room; (3) moving bar; (4) moving room and stationary touch; and (5) stationary room and moving bar. Body sway magnitude and the relationship between room/bar movement and body sway were examined. Results showed that dyslexic children swayed more than non-dyslexic children in all sensory conditions. Moreover, in trials with conflicting vision and touch manipulation, dyslexic children's sway was less coherent with the stimulus manipulation than that of non-dyslexic children. Finally, dyslexic children showed higher body sway variability and applied greater force while touching the bar than non-dyslexic children. Based on these results, we suggest that dyslexic children are able to use visual and somatosensory information to control their posture and use the same underlying neural control processes as non-dyslexic children. However, dyslexic children show poorer performance and more variability when relating visual and somatosensory information to motor action, even during a task that does not require active cognitive and motor involvement. Furthermore, in sensory conflict conditions, dyslexic children showed less coherent and more variable body sway.
These results suggest that dyslexic children have difficulties in multisensory integration, perhaps because they struggle to integrate sensory cues coming from multiple sources. © 2013 Viana et al.
Abstract:
Postural sway variability was evaluated in Parkinson’s disease (PD) patients at different stages of disease. Twenty PD patients were grouped into two groups (unilateral, 14; bilateral, 6) according to disease severity. The results showed no significant differences in postural sway variability between the groups (p ≥ 0.05). Postural sway variability was higher in the antero-posterior direction and with the eyes closed. Significant differences between the unilateral and bilateral groups were observed in clinical tests (UPDRS, Berg Balance Scale, and retropulsion test; p ≤ 0.05, all). Postural sway variability was unaffected by disease severity, indicating that neurological mechanisms for postural control still function at advanced stages of disease. Postural sway instability appears to occur in the antero-posterior direction to compensate for the stooped posture. The eyes-closed condition during upright stance appears to be challenging for PD patients because of the associated sensory integration deficit. Finally, objective measures such as postural sway variability may be more reliable than clinical tests to evaluate changes in balance control in PD patients.
Abstract:
The aim of this work is to analyse principles of contour integration in the human visual system. The perceptual binding of neighbouring parts of a visual scene into a whole is characterized by two propositions grounded in Gestalt theory, which describe complementary local mechanisms of contour integration. The first principle of contour integration states that local similarity of elements in a feature other than orientation is not sufficient for contour detection; rather, an additional statistical feature difference between contour elements and their surround must be present to enable contour detection. The second principle of contour integration asserts that collinear alignment of contour elements is sufficient for contour integration, and that when it is present, contour integration performance is robust even if the local feature-carrying elements vary largely at random in other features and thus show no neighbourhood similarity relation along the contour. As empirical support for the two proposed principles of contour integration, three experiments are reported, which first demonstrate the subordinate role of global contour features such as closure in contour detection and then examine the importance of local mechanisms for contour integration using the features of collinearity and spatial frequency, as well as the specific way in which these two features interact. The first experiment examines the global feature of closure and shows that closed contours are not detected more effectively than open contours.
The second experiment demonstrates the robustness of contours defined by collinearity against random variation of spatial frequency along the contour and in the background, as well as the impossibility of contour integration based on neighbourhood similarity of contour elements when similarity is realized via equal spatial frequencies rather than collinear orientation. The third experiment shows that a redundant combination of collinear orientation with a statistical difference in spatial frequency yields considerable visibility gains in contour detection. Given the strength of this summation effect, it is proposed that combining multiple cues engages additional cortical mechanisms that support contour detection. The results of the three experiments are placed in the context of current research on object perception, and their implications for the postulated general principles of visual grouping in contour integration are discussed. Using phenomenological examples with features other than orientation and spatial frequency, it is shown that the principles found can claim generality for the processing of contours in the visual system.
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise be undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of flight-or-fight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike with visual detection, sound cannot crossmodally enhance visual orientation discrimination in the absence of functional striate cortex. 
This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.
Abstract:
Flowers attract honeybees using colour and scent signals. Bimodality (having both scent and colour) in flowers leads to increased visitation rates, but how the signals influence each other in a foraging situation is still quite controversial. We studied four basic questions: When faced with conflicting scent and colour information, will bees choose by scent and ignore the “wrong” colour, or vice versa? To get to the bottom of this question, we trained bees on scent-colour combination AX (rewarded) versus BY (unrewarded) and tested them on AY (previously rewarded colour and unrewarded scent) versus BX (previously rewarded scent and unrewarded colour). It turned out that the result depends on stimulus quality: if the colours are very similar (unsaturated blue and blue-green), bees choose by scent. If they are very different (saturated blue and yellow), bees choose by colour. We used the same scents, lavender and rosemary, in both cases. Our second question was: Are individual bees hardwired to use colour and ignore scent (or vice versa), or can this behaviour be modified, depending on which cue is more readily available in the current foraging context? To study this question, we picked colour-preferring bees and gave them extra training on scent-only stimuli. Afterwards, we tested if their preference had changed, and if they still remembered the scent stimulus they had originally used as their main cue. We came to the conclusion that a colour preference can be reversed through scent-only training. We also gave scent-preferring bees extra training on colour-only stimuli, and tested for a change in their preference. The number of animals tested was too small for statistical tests (n = 4), but a common tendency suggested that colour-only training leads to a preference for colour. A preference to forage by a certain sensory modality therefore appears to be not fixed but flexible, and adapted to the bee’s surroundings. 
Our third question was: Do bees learn bimodal stimuli as the sum of their parts (elemental learning), or as a new stimulus which is different from the sum of the components’ parts (configural learning)? We trained bees on bimodal stimuli, then tested them on the colour components only, and the scent components only. We performed this experiment with a similar colour set (unsaturated blue and blue-green, as above), and a very different colour set (saturated blue and yellow), but used lavender and rosemary for scent stimuli in both cases. Our experiment yielded unexpected results: with the different colours, the results were best explained by elemental learning, but with the similar colour set, bees exhibited configural learning. Still, their memory of the bimodal compound was excellent. Finally, we looked at reverse-learning. We reverse-trained bees with bimodal stimuli to find out whether bimodality leads to better reverse-learning compared to monomodal stimuli. We trained bees on AX (rewarded) versus BY (unrewarded), then on AX (unrewarded) versus BY (rewarded), and finally on AX (rewarded) and BY (unrewarded) again. We performed this experiment with both colour sets, always using the same two scents (lavender and rosemary). It turned out that bimodality does not help bees “see the pattern” and anticipate the switch. Generally, bees trained on the different colour set performed better than bees trained on the similar colour set, indicating that stimulus salience influences reverse-learning.
Abstract:
Cognitive functioning is based on binding processes, by which different features and elements of neurocognition are integrated and coordinated. Binding is an essential ingredient of, for instance, Gestalt perception. We have implemented a paradigm of causality perception based on the work of Albert Michotte, in which 2 identical discs move from opposite sides of a monitor, steadily toward, and then past one another. Their coincidence generates an ambiguous percept of either "streaming" or "bouncing," which the subjects (34 schizophrenia spectrum patients and 34 controls with mean age 27.9 y) were instructed to report. The latter perception is a marker of the binding processes underlying perceived causality (type I binding). In addition to this visual task, acoustic stimuli were presented at different times during the task (150 ms before and after visual coincidence), which can modulate perceived causality. This modulation by intersensory and temporally delayed stimuli is viewed as a different type of binding (type II). We show here, using a mixed-effects hierarchical analysis, that type II binding distinguishes schizophrenia spectrum patients from healthy controls, whereas type I binding does not. Type I binding may even be excessive in some patients, especially those with positive symptoms; Type II binding, however, was generally attenuated in patients. The present findings point to ways in which the disconnection (or Gestalt) hypothesis of schizophrenia can be refined, suggesting more specific markers of neurocognitive functioning and potential targets of treatment.
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis.
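The max-criterion mentioned above (A < AV > V) simply requires the audio-visual response to exceed the stronger of the two unimodal responses. A minimal sketch with hypothetical per-region activation values; the region names and numbers are illustrative only, not results from the study:

```python
def meets_max_criterion(a, av, v):
    """Max criterion for multisensory integration: the audio-visual (AV)
    response must exceed the stronger of the two unimodal responses,
    i.e. A < AV > V."""
    return av > max(a, v)

# Hypothetical mean activation estimates (e.g. beta weights) per region,
# ordered as (A, AV, V); values are made up for illustration.
regions = {
    "superior_temporal": (0.8, 1.5, 1.1),          # AV beats both: passes
    "ventral_occipito_temporal": (0.4, 0.9, 1.2),  # AV < V: fails
}
flags = {name: meets_max_criterion(*vals) for name, vals in regions.items()}
```

Applying such a test to each candidate region from the ICA overlap maps would reproduce the kind of nine-out-of-twelve tally reported above.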