144 results for Auditory perception.

in Queensland University of Technology - ePrints Archive


Relevance:

60.00%

Publisher:

Abstract:

Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a large body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unexplored. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.

Relevance:

60.00%

Publisher:

Abstract:

In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

Relevance:

60.00%

Publisher:

Abstract:

This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

Relevance:

60.00%

Publisher:

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects, which were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).

Relevance:

30.00%

Publisher:

Abstract:

Pavlovian fear conditioning is an evolutionarily conserved and extensively studied form of associative learning and memory. In mammals, the lateral amygdala (LA) is an essential locus for Pavlovian fear learning and memory. Despite significant progress unraveling the cellular mechanisms responsible for fear conditioning, very little is known about the anatomical organization of neurons encoding fear conditioning in the LA. One key question is how fear conditioning to different sensory stimuli is organized in LA neuronal ensembles. Here we show that Pavlovian fear conditioning, formed through either the auditory or visual sensory modality, activates a similar density of LA neurons expressing a learning-induced phosphorylated extracellular signal-regulated kinase (p-ERK1/2). While the size of the neuron population specific to either memory was similar, the anatomical distribution differed. Several discrete sites in the LA contained a small but significant number of p-ERK1/2-expressing neurons specific to either sensory modality. The sites were anatomically localized to different levels of the longitudinal plane and were independent of both memory strength and the relative size of the activated neuronal population, suggesting that some portion of the memory trace for auditory and visually cued fear conditioning is allocated differently in the LA. Presenting the visual stimulus by itself did not activate the same p-ERK1/2 neuron density or pattern, confirming that the novelty of light alone cannot account for the specific pattern of activated neurons after visual fear conditioning. Together, these findings reveal an anatomical distribution of visual and auditory fear conditioning at the level of neuronal ensembles in the LA.

Relevance:

30.00%

Publisher:

Abstract:

Feedforward inhibition deficits have been consistently demonstrated in a range of neuropsychiatric conditions using prepulse inhibition (PPI) of the acoustic startle eye-blink reflex when assessing sensorimotor gating. While PPI can be recorded in acutely decerebrated rats, behavioural, pharmacological and psychophysiological studies suggest the involvement of a complex neural network extending from brainstem nuclei to higher order cortical areas. The current functional magnetic resonance imaging study investigated the neural network underlying PPI and its association with electromyographically (EMG) recorded PPI of the acoustic startle eye-blink reflex in 16 healthy volunteers. A sparse imaging design was employed to model signal changes in blood oxygenation level-dependent (BOLD) responses to acoustic startle probes that were preceded by a prepulse at 120 ms or 480 ms stimulus onset asynchrony or without prepulse. Sensorimotor gating was EMG confirmed for the 120-ms prepulse condition, while startle responses in the 480-ms prepulse condition did not differ from startle alone. Multiple regression analysis of BOLD contrasts identified activation in pons, thalamus, caudate nuclei, left angular gyrus and bilaterally in anterior cingulate, associated with EMG-recorded sensorimotor gating. Planned contrasts confirmed increased pons activation for startle alone vs 120-ms prepulse condition, while increased anterior superior frontal gyrus activation was confirmed for the reverse contrast. Our findings are consistent with a primary pontine circuitry of sensorimotor gating that interconnects with inferior parietal, superior temporal, frontal and prefrontal cortices via thalamus and striatum. PPI processes in the prefrontal, frontal and superior temporal cortex were functionally distinct from sensorimotor gating.

Relevance:

30.00%

Publisher:

Abstract:

This study was designed to identify the neural networks underlying automatic auditory deviance detection in 10 healthy subjects using functional magnetic resonance imaging. We measured blood oxygenation level-dependent contrasts derived from the comparison of blocks of stimuli presented as a series of standard tones (50 ms duration) alone versus blocks that contained rare duration-deviant tones (100 ms) that were interspersed among a series of frequent standard tones while subjects were watching a silent movie. Possible effects of scanner noise were assessed by a “no tone” condition. In line with previous positron emission tomography and EEG source modeling studies, we found temporal lobe and prefrontal cortical activation that was associated with auditory duration mismatch processing. Data were also analyzed employing an event-related hemodynamic response model, which confirmed activation in response to duration-deviant tones bilaterally in the superior temporal gyrus and prefrontally in the right inferior and middle frontal gyri. In line with previous electrophysiological reports, mismatch activation of these brain regions was significantly correlated with age. These findings suggest a close relationship of the event-related hemodynamic response pattern with the corresponding electrophysiological activity underlying the event-related “mismatch negativity” potential, a putative measure of auditory sensory memory.

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

This short paper presents a means of capturing non-spatial information (specifically, understanding of places) for use in a Virtual Heritage application. This research is part of the Digital Songlines Project, which is developing protocols, methodologies and a toolkit to facilitate the collection and sharing of Indigenous cultural heritage knowledge using virtual reality. Within the context of this project, most of the cultural activities relate to celebrating life, and to the Australian Aboriginal people land is the heart of life. Australian Indigenous art, stories, dances, songs and rituals celebrate country as their focus or basis. To the Aboriginal people the term “Country” means far more than a place or a nation; rather, “Country” is a living entity with a past, a present and a future, and they talk about it in the same way as they talk about their mother. The landscape is seen to have a spiritual connection in a view seldom understood by non-Indigenous persons; this paper introduces an attempt to understand such empathy and relationship and to reproduce it in a virtual environment.

Relevance:

20.00%

Publisher:

Abstract:

In daily activities people use a number of available means to achieve balance, such as the use of the hands and the co-ordination of balance. One approach that explains this relationship between perception and action is the ecological theory, which is based on the work of a) Bernstein (1967), who posed the problem of “the degrees of freedom”; b) Gibson (1979), who developed the theory of perception and the way in which information is received from the environment in order for a certain movement to be achieved; c) Newell (1986), who proposed that movement derives from the interaction of the constraints imposed by the environment and the organism; and d) Kugler, Kelso and Turvey (1982), who showed the way in which “the degrees of freedom” are connected and interact. According to these theories, the development of movement co-ordination results from the different constraints imposed on the organism-environment system. The close relation between environmental and organismic constraints, as well as their interaction, determines which movement system will be activated. Apart from shaping the co-ordination of specific movements, these constraints can also be a rate-limiting factor, to a certain degree, in the acquisition and mastering of a new skill. This framework can be an essential tool for the study of catching an object (e.g., a ball). The importance of this study is clear given that the movements involved in catching an object are representative of everyday actions and characteristic of the interaction between perception and action.