18 results for auditory cues
at Duke University
Abstract:
Remembering past events, or episodic retrieval, consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies that compared imagery and retrieval have used modality-nonspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed this issue by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality, allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute to both imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex, and angular gyrus, showed a pattern common to imagery/retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate, and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks. Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex, and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent auditory stimuli; and integrating stimuli from the sensory organs (e.g., the ears) with stimuli delivered through brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general question in learning of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses: (1) the brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism; or (2) the brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: nearly all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
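The core of this analysis can be sketched in a few lines (a minimal sketch with simulated data; the variable names, frequency ranges, and statistical test are illustrative assumptions, not the study's actual pipeline): quantify each site's stimulation-induced bias as the change in the proportion of "higher" reports on stimulated versus unstimulated trials, then ask whether that bias tracks the site's best frequency relative to the reference frequency.

```python
import numpy as np
from scipy.stats import pearsonr

def stim_bias(choices_stim, choices_nostim):
    """Stimulation-induced bias at one IC site: the change in the
    proportion of 'higher' judgments when stimulation is added.
    Inputs are arrays of 0/1 ('lower'/'higher') choices."""
    return np.mean(choices_stim) - np.mean(choices_nostim)

# Simulated per-site data (illustrative only).
rng = np.random.default_rng(0)
reference_hz = 1000.0
bfs, biases = [], []
for _ in range(172):  # n = 172 IC locations, as in the abstract
    bf = rng.uniform(200, 5000)  # site's best frequency (Hz)
    # Toy effect: stimulation pushes judgments toward the side of
    # the reference on which the site's best frequency falls.
    p_higher = 0.5 + 0.1 * np.sign(bf - reference_hz)
    bfs.append(bf)
    biases.append(stim_bias(rng.random(100) < p_higher,
                            rng.random(100) < 0.5))

# A coarse but significant correlation, as the abstract describes.
r, p = pearsonr(np.log(np.array(bfs) / reference_hz), np.array(biases))
print(f"r = {r:.2f}, p = {p:.3g}")
```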
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
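The attribution step of this frequency-tagging approach can be sketched as follows (a toy simulation; the AM rates, trial length, and firing rate are invented for illustration): when concurrent sounds are amplitude-modulated at different rates, the spectral power of a neuron's spike train at each tag rate indicates which source the spikes are entraining to.

```python
import numpy as np

fs = 1000.0                    # spike-train bin rate (Hz)
t = np.arange(0, 2.0, 1 / fs)  # one 2-s trial
f_tag_a, f_tag_b = 22.0, 30.0  # hypothetical AM rates of the two sounds

# Toy spike train entrained to source A: spiking probability follows
# source A's amplitude modulation (peak rate 40 spikes/s).
rate = 40 * (1 + np.cos(2 * np.pi * f_tag_a * t)) / 2
rng = np.random.default_rng(1)
spikes = (rng.random(t.size) < rate / fs).astype(float)

# Spectral power at each tag frequency attributes the spikes to a source.
spec = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power_a = spec[np.argmin(np.abs(freqs - f_tag_a))]
power_b = spec[np.argmin(np.abs(freqs - f_tag_b))]
print("spikes entrained to source", "A" if power_a > power_b else "B")
```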
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
While cochlear implants (CIs) usually provide high levels of speech recognition in quiet, speech recognition in noise remains challenging. To overcome these difficulties, it is important to understand how implanted listeners separate a target signal from interferers. Stream segregation has been studied extensively in both normal and electric hearing, as a function of place of stimulation. However, the effects of pulse rate, independent of place, on the perceptual grouping of sequential sounds in electric hearing have not yet been investigated. A rhythm detection task was used to measure stream segregation. The results of this study suggest that while CI listeners can segregate streams based on differences in pulse rate alone, the amount of stream segregation observed decreases as the base pulse rate increases. Further investigation of the perceptual dimensions encoded by the pulse rate and the effect of sequential presentation of different stimulation rates on perception could be beneficial for the future development of speech processing strategies for CIs.
Abstract:
Cells sense cues in their surrounding microenvironment. These cues are converted into intracellular signals and transduced to the nucleus in order for the cell to respond and adapt its function. Within the nucleus, structural changes occur that ultimately lead to changes in gene expression. In this study, we explore the structural changes of the nucleus of human mesenchymal stem cells in response to topographical cues. We use a controlled nanotopography to drive shape changes in the cell nucleus, and measure the changes with both fluorescence microscopy and a novel light scattering technique. The nucleus changes shape dramatically in response to the nanotopography, and in a manner dependent on the mechanical properties of the substrate. The kinetics of the nuclear deformation follows an unexpected trajectory: rather than changing shape gradually in response to the topography, the nucleus changes shape rapidly and markedly once the cytoskeleton attains an aligned and elongated morphology, on a time scale of several hours.
Abstract:
BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein and the myosin binding subunit of myosin II phosphatase). Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.
Abstract:
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
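One simple way to picture a read-out in which both the site and the level of SC activity contribute (a schematic sketch with invented tuning functions, not the study's actual algorithm) is template matching: the decoder picks the location whose predicted population pattern best matches the observed one, making it sensitive both to where activity sits in the map and to its overall level.

```python
import numpy as np

az_grid = np.linspace(-40, 40, 81)  # candidate azimuths (deg)
pref = np.linspace(-40, 40, 50)     # neurons' preferred azimuths

def visual_rates(az):
    """Map-like code: circumscribed receptive fields form an activity bump."""
    return 80 * np.exp(-0.5 * ((pref - az) / 8) ** 2)

def auditory_rates(az):
    """Rate-like code: open-ended, monotonic tuning that peaks peripherally."""
    return 80 / (1 + np.exp(-(az * np.sign(pref)) / 10))

def decode(rates, code):
    """Read-out using both the site and the level of activity: choose the
    azimuth whose predicted population pattern best matches the observed
    one (template matching)."""
    errs = [np.sum((code(az) - rates) ** 2) for az in az_grid]
    return az_grid[int(np.argmin(errs))]

print(decode(visual_rates(15.0), visual_rates))      # -> ~15 deg
print(decode(auditory_rates(15.0), auditory_rates))  # -> ~15 deg
```

Because the same procedure can evaluate either a circumscribed bump or an open-ended monotonic pattern, a single population can in principle support both coding formats.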
Abstract:
Understanding animals' spatial perception is a critical step toward discerning their cognitive processes. The spatial sense is multimodal and based on both the external world and mental representations of that world. Navigation in each species depends upon its evolutionary history, physiology, and ecological niche. We carried out foraging experiments on wild vervet monkeys (Chlorocebus pygerythrus) at Lake Nabugabo, Uganda, to determine the types of cues used to detect food and whether associative cues could be used to find hidden food. Our first and second sets of experiments differentiated between vervets' use of global spatial cues (including the arrangement of feeding platforms within the surrounding vegetation) and/or local layout cues (the position of platforms relative to one another), relative to the use of goal-object cues on each platform. Our third experiment provided an associative cue to the presence of food with global spatial, local layout, and goal-object cues disguised. Vervets located food above chance levels when goal-object cues and associative cues were present, and visual signals were the predominant goal-object cues that they attended to. With sample sizes and methods similar to those of previous studies on New World monkeys, vervets were not able to locate food using only global spatial cues and local layout cues, unlike all five species of platyrrhines thus far tested. Relative to these platyrrhines, the spatial location of food may need to stay the same for a longer time period before vervets encode this information, and goal-object cues may be more salient for them in small-scale space.
Abstract:
Olfactory cues play an integral, albeit underappreciated, role in mediating vertebrate social and reproductive behaviour. These cues fluctuate with the signaller's hormonal condition, coincident with and informative about relevant aspects of its reproductive state, such as pubertal onset, change in season and, in females, timing of ovulation. Although pregnancy dramatically alters a female's endocrine profiles, which can be further influenced by fetal sex, the relationship between gestation and olfactory cues is poorly understood. We therefore examined the effects of pregnancy and fetal sex on volatile genital secretions in the ring-tailed lemur (Lemur catta), a strepsirrhine primate possessing complex olfactory mechanisms of reproductive signalling. While pregnant, dams altered and dampened their expression of volatile chemicals, with compound richness being particularly reduced in dams bearing sons. These changes were comparable in magnitude with other, published chemical differences among lemurs that are salient to conspecifics. Such olfactory 'signatures' of pregnancy may help guide social interactions, potentially promoting mother-infant recognition, reducing intragroup conflict or counteracting behavioural mechanisms of paternity confusion; cues that also advertise fetal sex may additionally facilitate differential sex allocation.
Abstract:
Research on future episodic thought has produced compelling theories and results in cognitive psychology, cognitive neuroscience, and clinical psychology. In experiments aimed at integrating these with basic concepts and methods from autobiographical memory research, 76 undergraduates remembered past and imagined future positive and negative events that had or would have a major impact on them. Correlations of the online ratings of visual and auditory imagery, emotion, and other measures demonstrated that individuals used the same processes to the same extent to remember past and construct future events. These measures predicted the theoretically important metacognitive judgment of past reliving and future "preliving" in similar ways. On standardized tests of reactions to traumatic events, scores for future negative events were much higher than scores for past negative events. The scores for future negative events were in the range that would qualify for a diagnosis of posttraumatic stress disorder (PTSD); the test was replicated (n = 52) to check for order effects. Consistent with earlier work, future events had less sensory vividness. Thus, the imagined symptoms of future events were unlikely to be caused by sensory vividness. In a second experiment, to confirm this, 63 undergraduates produced numerous added details between 2 constructions of the same negative future events; deficits in rated vividness were removed with no increase in scores on the standardized tests of reactions to traumatic events. Neuroticism predicted individuals' reactions to negative past events but did not predict imagined reactions to future events. This set of novel methods and findings is interpreted in the contexts of the literatures of episodic future thought, autobiographical memory, PTSD, and classic schema theory.
Abstract:
We sought to map the time course of autobiographical memory retrieval, including brain regions that mediate phenomenological experiences of reliving and emotional intensity. Participants recalled personal memories to auditory word cues during event-related functional magnetic resonance imaging (fMRI). Participants pressed a button when a memory was accessed, maintained and elaborated the memory, and then gave subjective ratings of emotion and reliving. A novel fMRI approach based on timing differences capitalized on the protracted reconstructive process of autobiographical memory to segregate brain areas contributing to initial access and later elaboration and maintenance of episodic memories. The initial period engaged hippocampal, retrosplenial, and medial and right prefrontal activity, whereas the later period recruited visual, precuneus, and left prefrontal activity. Emotional intensity ratings were correlated with activity in several regions, including the amygdala and the hippocampus during the initial period. Reliving ratings were correlated with activity in visual cortex and ventromedial and inferior prefrontal regions during the later period. Frontopolar cortex was the only brain region sensitive to emotional intensity across both periods. Results were confirmed by time-locked averages of the fMRI signal. The findings indicate dynamic recruitment of emotion-, memory-, and sensory-related brain regions during remembering and their dissociable contributions to phenomenological features of the memories.
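The timing-difference approach can be pictured as a GLM with two regressors per trial (a schematic sketch with invented timings and a crude hemodynamic response function; the study's actual design matrix is not given here): a boxcar from cue onset to the button press models initial access, and a second boxcar from the press to the end of the recall period models later elaboration and maintenance.

```python
import numpy as np

TR, n_scans = 2.0, 300
scan_times = np.arange(n_scans) * TR

def hrf(t, peak=6.0):
    """Crude gamma-like hemodynamic response (illustrative only)."""
    h = (t / peak) ** 2 * np.exp(-t / 2.0)
    return h / h.max()

def regressor(onsets, durations):
    """Boxcar over the given windows, convolved with the HRF."""
    box = np.zeros(n_scans)
    for on, dur in zip(onsets, durations):
        box[(scan_times >= on) & (scan_times < on + dur)] = 1.0
    return np.convolve(box, hrf(np.arange(0.0, 30.0, TR)))[:n_scans]

# Invented trial timings (s): cue onset, button press (memory accessed),
# and the end of the elaboration/maintenance period.
cue_on = np.array([20.0, 80.0, 140.0])
press = np.array([25.0, 87.0, 143.0])
trial_end = press + 12.0

access = regressor(cue_on, press - cue_on)         # initial access
elaboration = regressor(press, trial_end - press)  # later elaboration
X = np.column_stack([access, elaboration, np.ones(n_scans)])  # GLM design
print(X.shape)  # (300, 3)
```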
Abstract:
OBJECTIVE: The authors sought to increase understanding of the brain mechanisms involved in cigarette addiction by identifying neural substrates modulated by visual smoking cues in nicotine-deprived smokers. METHOD: Event-related functional magnetic resonance imaging (fMRI) was used to detect brain activation after exposure to smoking-related images in a group of nicotine-deprived smokers and a nonsmoking comparison group. Subjects viewed a pseudo-random sequence of smoking images, neutral nonsmoking images, and rare targets (photographs of animals). Subjects pressed a button whenever a rare target appeared. RESULTS: In smokers, the fMRI signal was greater after exposure to smoking-related images than after exposure to neutral images in mesolimbic dopamine reward circuits known to be activated by addictive drugs (right posterior amygdala, posterior hippocampus, ventral tegmental area, and medial thalamus) as well as in areas related to visuospatial attention (bilateral prefrontal and parietal cortex and right fusiform gyrus). In nonsmokers, no significant differences in fMRI signal following exposure to smoking-related and neutral images were detected. In most regions studied, both subject groups showed greater activation following presentation of rare target images than after exposure to neutral images. CONCLUSIONS: In nicotine-deprived smokers, both reward and attention circuits were activated by exposure to smoking-related images. Smoking cues are processed like rare targets in that they activate attentional regions. These cues are also processed like addictive drugs in that they activate mesolimbic reward regions.
Abstract:
A sample of 124 words was used to cue autobiographical memories in 120 adults varying in age from 20 to 73 years. Individual words reliably cued autobiographical memories of different ages with different speeds. For all age groups, words rated high in imagery produced older memories and faster reaction times.
Abstract:
If and only if each single cue uniquely defines its target, an independence model based on fragment theory can predict the strength of a combined dual cue from the strengths of its single-cue components. If the single cues do not each uniquely define their target, no single monotonic function can predict the strength of the dual cue from its components; rather, what matters is the number of possible targets. The probability of generating a target word was .19 for rhyme cues, .14 for category cues, and .97 for rhyme-plus-category dual cues. Moreover, some pairs of cues had probabilities of producing their targets of .03 when used individually and 1.00 when used together, whereas other pairs had moderate probabilities individually and together. The results, which are interpreted in terms of multiple constraints limiting the number of responses, show why rhymes, which play a minimal role in laboratory studies of memory, are common in real-world mnemonics.
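As a worked check of these figures (assuming the standard form of an independence prediction, under which a dual cue succeeds whenever either component alone would; the abstract does not spell out the formula):

```latex
P_{\text{dual}} = P_{\text{rhyme}} + P_{\text{category}} - P_{\text{rhyme}}\,P_{\text{category}}
                = .19 + .14 - (.19)(.14) \approx .30
```

The observed dual-cue probability of .97 far exceeds this prediction, which is the sense in which no monotonic combination of the single-cue strengths can account for the dual-cue result when the cues do not each uniquely define their target.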
Abstract:
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals present in the FEF, facilitating access to a common motor output pathway.