870 results for multisensory perception


Relevance: 60.00%

Abstract:

Asperger Syndrome (AS) belongs to the autism spectrum disorders, in which difficulties in both verbal and non-verbal communication are at the core of the impairment. Social communication requires a complex interplay of affective, linguistic-cognitive and perceptual processes. In the four studies included in the current thesis, some of the linguistic and perceptual factors that are important for face-to-face communication were studied using behavioural methods. In all four studies, the results obtained from individuals with AS were compared with those of typically developed, age-, gender- and IQ-matched controls. First, the language skills of school-aged children were characterized in detail with standardized tests measuring different aspects of receptive and expressive language (Study I). The children with AS performed worse than the controls in following complex verbal instructions. Next, the visual perception of facial expressions of emotion with varying degrees of visual detail was examined (Study II). Adults with AS showed impaired recognition of facial expressions based on very low spatial frequencies, which are important for processing global information. Multisensory perception was then investigated through audiovisual speech perception (Studies III and IV). Adults with AS perceived audiovisual speech qualitatively differently from typically developed adults, although both groups were equally accurate in recognizing auditory and visual speech presented alone. Finally, the effect of attention on audiovisual speech perception was studied by recording eye gaze behaviour (Study III) and by examining the voluntary control of visual attention (Study IV). The groups did not differ in eye gaze behaviour or in the voluntary control of visual attention. The results of the study series demonstrate that many factors underpinning face-to-face social communication are atypical in AS. In contrast with previous assumptions of intact language abilities, the current results show that children with AS have difficulties in understanding complex verbal instructions. Furthermore, the studies make clear that deviations in the perception of global features in faces expressing emotion, as well as in the multisensory perception of speech, are likely to impair face-to-face social communication.

Relevance: 60.00%

Abstract:

Synesthetes taste touch, or see colors and shapes when they hear music or smell a scent. Forms as unusual as weekday-color, touch-smell or pain-color synesthesias have also been found. The ability that neuroscientists and philosophers call "binding", namely coupling multiple stimuli that are processed in different brain areas and combining them into a unified representation, an experienced unity of consciousness, is present in every healthy person. Synesthetes, however, are people whose brains are capable of "hyperbinding", or hypercoherent experience, because substantially more such couplings arise in them. The phenomenon of synesthesia has been known for several centuries, yet it remains a puzzle. Until recently, researchers believed that such phenomena rested merely on unusually dense neuronal wiring between sensory brain regions. From current research, however, one can conclude that the cause of synesthesia is not solely a strengthened connection between two sensory channels. According to our own studies, the sensory stimulus itself, together with its hard-wired sensory pathways, is not necessary for triggering the synesthetic experience. What plays a fundamental role is instead the stimulus's meaning for the synesthete. For the assumption that semantics is decisive for synesthetic perception to hold, synesthetic associations would have to be fairly flexible. And that is exactly what was found: normally very stable synesthetic associations can, under certain conditions, be transferred to new inducers. A further investigation concerned the newly discovered swimming-style-color synesthesia, which emerges not only when synesthetes swim but also when they think about swimming. Even the names of these characteristic movements can trigger their color sensations as soon as they appear in a fitting context. As is known from other examples in brain research, frequently used neuronal pathways become increasingly reinforced over time. So if a synesthete frequently encounters particular stimuli and thereby experiences a corresponding concurrent sensation, this can over time also change his or her brain anatomy, so that the corresponding structural connections emerge. The proposed explanation is thus consistent with the findings to date. The present dissertation illustrates how uniformly and coherently perception, motor processes, emotions and thought (sensory and cognitive processes) are interconnected in the phenomenon of synesthesia. The synesthetic, non-conceptual concurrent experience accompanies the conceptual content of its inducer. In a similar way, we ascribe ordinary, non-synesthetic phenomenal properties to particular concepts. Synesthesia simply expresses such interconnections in an impressive way and integrates manifold experience more strongly.

Relevance: 60.00%

Abstract:

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
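The abstract does not spell out the segmentation statistic, but studies in this tradition typically rely on transitional probabilities between adjacent syllables, which drop at word boundaries. A minimal sketch of that computation (the syllable stream and "words" below are illustrative, not the study's materials):

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate forward transitional probabilities P(next | current)
    over adjacent syllable pairs in a syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {
        (a, b): count / first_counts[a]
        for (a, b), count in pair_counts.items()
    }

# Illustrative stream built from two "words" (mi-gi-ni, pa-to-ku).
stream = "mi gi ni pa to ku mi gi ni mi gi ni pa to ku".split()
tps = transitional_probabilities(stream)

# Within-word transitions have high TP; TP drops across word
# boundaries, which is the cue learners are assumed to exploit.
print(tps[("mi", "gi")])   # 1.0   (within word)
print(tps[("ni", "pa")])   # ~0.67 (word boundary)
```

Under this scheme, replacing an incongruent audio-visual pair with its illusory McGurk percept changes which syllable pairs the learner registers, and hence the transitional probabilities available for segmentation.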

Relevance: 40.00%

Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is rendered ambiguous by noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not identify a clear mechanism by which the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. To better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By measuring fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.

Relevance: 30.00%

Abstract:

"This letter aims to highlight the multisensory integration weighting mechanisms that may account for the results in studies investigating haptic feedback in laparoscopic surgery. The current lack of multisensory theoretical knowledge in laparoscopy is evident, and “a much better understanding of how multimodal displays in virtual environments influence human performance is required” ...publisher website

Relevance: 30.00%

Abstract:

Multisensory stimuli can improve performance, facilitating reaction times (RTs) on sensorimotor tasks. This benefit is referred to as the redundant signals effect (RSE) and can exceed predictions based on probability summation, indicative of integrative processes. Although an RSE exceeding probability summation has been repeatedly observed in humans and nonprimate animals, data from nonhuman primates performing similar protocols are scant and inconsistent; existing paradigms have instead focused on saccadic eye movements. Moreover, the extant results in monkeys leave unresolved how stimulus synchronicity and intensity impact performance. Two trained monkeys performed a simple detection task involving arm movements to auditory, visual, or synchronous auditory-visual multisensory pairs. RSEs in excess of predictions based on probability summation were observed and must therefore follow from neural response interactions. Parametric variation of auditory stimulus intensity revealed that in both animals RT facilitation was limited to situations where the auditory stimulus intensity was below or up to 20 dB above perceptual threshold, despite the visual stimulus always being suprathreshold. With auditory intensities 30-40 dB above threshold, no RT facilitation was obtained, and even behavioral costs were observed. The present study demonstrates the feasibility and suitability of behaving monkeys for investigating links between psychophysical and neurophysiological instantiations of multisensory interactions.
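The probability-summation benchmark referred to here is commonly formalized as Miller's race-model inequality, P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V); an RSE violating this bound cannot arise from statistical facilitation alone. A sketch of that test on simulated data (the RT distributions below are invented for illustration, not the study's data):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av,
                         quantiles=np.arange(0.05, 1.0, 0.05)):
    """Check Miller's race-model inequality at a grid of time points.

    Returns the time points where the multisensory CDF exceeds the
    probability-summation bound CDF_A(t) + CDF_V(t), i.e. where the
    RSE cannot be explained by statistical facilitation alone.
    """
    ts = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), quantiles)
    cdf = lambda rts, t: np.mean(rts <= t)
    bound = np.array([min(1.0, cdf(rt_a, t) + cdf(rt_v, t)) for t in ts])
    av = np.array([cdf(rt_av, t) for t in ts])
    return ts[av > bound]

# Illustrative simulated reaction times (ms); a real analysis would
# use the per-trial RTs from the detection task.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(300, 40, 200)
rt_av = rng.normal(245, 30, 200)  # faster than either unisensory mean
print(race_model_violation(rt_a, rt_v, rt_av))
```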

Relevance: 30.00%

Abstract:

Synesthesia entails a special kind of sensory perception, in which stimulation of one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with animate and inanimate objects presented either visually or audio-visually, in audio-visually congruent and incongruent combinations. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites in synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
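The abstract does not state how N1 amplitude was quantified; a common convention, assumed in the sketch below, is to average epochs into an ERP and take the mean voltage in a fixed N1 window over the channels of interest (the 150-200 ms window and the synthetic data are illustrative):

```python
import numpy as np

def n1_amplitude(epochs, times, window=(0.15, 0.20)):
    """Mean ERP voltage in an N1 time window.

    epochs: array (n_trials, n_times) for one channel group, in volts.
    times:  array (n_times,) of sample times in seconds.
    """
    erp = epochs.mean(axis=0)                       # trial average -> ERP
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()                         # mean amplitude in window

# Illustrative: 100 trials of 1 s epochs sampled at 250 Hz.
times = np.arange(250) / 250.0
epochs = np.random.default_rng(1).normal(0.0, 1e-6, (100, 250))
print(n1_amplitude(epochs, times))
```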

Relevance: 30.00%

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
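The max-criterion (A < AV > V) simply requires the audiovisual response to exceed the larger of the two unisensory responses. A minimal sketch over per-region condition estimates (region names and beta values are hypothetical, not the study's data):

```python
def meets_max_criterion(beta_a, beta_v, beta_av):
    """Max-criterion for multisensory integration: AV > max(A, V),
    i.e. the audiovisual response exceeds both unisensory responses."""
    return beta_av > max(beta_a, beta_v)

# Hypothetical condition estimates (e.g., GLM betas) for two regions.
regions = {
    "posterior STS":   (0.8, 0.9, 1.4),  # integrative: AV beats both
    "early visual V1": (0.1, 1.2, 1.1),  # driven by vision alone
}
for name, (a, v, av) in regions.items():
    print(name, meets_max_criterion(a, v, av))
```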

Relevance: 30.00%

Abstract:

We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3,360 practice trials, distributed over 12 days (Experiment 1) or 6 days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness.

Relevance: 20.00%

Abstract:

This short paper presents a means of capturing non-spatial information (specifically, understanding of places) for use in a Virtual Heritage application. This research is part of the Digital Songlines Project, which is developing protocols, methodologies and a toolkit to facilitate the collection and sharing of Indigenous cultural heritage knowledge using virtual reality. Within the context of this project, most of the cultural activities relate to celebrating life, and to the Australian Aboriginal people land is the heart of life. Australian Indigenous art, stories, dances, songs and rituals celebrate country as their focus or basis. To the Aboriginal people the term "Country" means much more than a place or a nation; rather, "Country" is a living entity with a past, a present and a future, and they talk about it in the same way as they talk about their mother. The landscape is seen to have a spiritual connection in a way seldom understood by non-Indigenous persons; this paper introduces an attempt to understand such empathy and relationship and to reproduce it in a virtual environment.

Relevance: 20.00%

Abstract:

In daily activities people use a number of available means for the achievement of balance, such as the use of the hands and the co-ordination of balance. One of the approaches that explains this relationship between perception and action is the ecological theory, which is based on the work of a) Bernstein (1967), who posed the problem of "the degrees of freedom"; b) Gibson (1979), who set out a theory of perception and of the way in which information is picked up from the environment in order for a certain movement to be achieved; c) Newell (1986), who proposed that movement derives from the interaction of the constraints imposed by the environment and the organism; and d) Kugler, Kelso and Turvey (1982), who showed how "the degrees of freedom" are connected and interact. According to the above-mentioned theories, the development of movement co-ordination can result from the different constraints imposed on the organism-environment system. The close relation between environmental and organismic constraints, as well as their interaction, is responsible for the movement system that will be activated. These constraints, apart from shaping the co-ordination of specific movements, can be a rate-limiting factor, to a certain degree, in the acquisition and mastering of a new skill. This framework can be an essential tool for the study of catching an object (e.g., a ball). The importance of this line of study becomes obvious given that the movements involved in catching an object are representative of everyday actions and characteristic of the interaction between perception and action.