909 results for auditory
Abstract:
Three experiments measured constancy in speech perception, using natural-speech messages or noise-band vocoder versions of them. The eight vocoder bands had equally log-spaced center frequencies and the shapes of corresponding “auditory” filters. Consequently, the bands had the temporal envelopes that arise in these auditory filters when the speech is played. The “sir” or “stir” test words were distinguished by degrees of amplitude modulation, and played in the context: “next you’ll get _ to click on.” Listeners identified test words appropriately, even in the vocoder conditions where the speech had a “noise-like” quality. Constancy was assessed by comparing the identification of test words with low or high levels of room reflections across conditions where the context had either a low or a high level of reflections. Constancy was obtained with both the natural and the vocoded speech, indicating that the effect arises through temporal-envelope processing. Two further experiments assessed perceptual weighting of the different bands, both in the test word and in the context. The resulting weighting functions both increase monotonically with frequency, following the spectral characteristics of the test word’s [s]. It is suggested that these two weighting functions are similar because they both come about through the perceptual grouping of the test word’s bands.
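The band-splitting and envelope-modulation scheme described above can be sketched as follows. This is a generic noise-band vocoder using Butterworth bands rather than the auditory-filter shapes the study used; the band count, frequency edges, and filter order are illustrative, not the study's values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7000.0, seed=0):
    """Noise-band vocoder sketch: split speech into equally log-spaced
    bands, extract each band's temporal envelope, and use it to modulate
    noise restricted to the same band. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # equally log-spaced edges
    noise = rng.standard_normal(len(speech))
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))        # temporal envelope of this band
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += env * carrier
    return out
```

Because only the per-band envelopes survive, the result keeps the temporal-envelope information the abstract identifies as sufficient for the constancy effect, while the fine structure is replaced by noise.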
Abstract:
Background: Jargon aphasia with neologisms (i.e., novel nonword utterances) is a challenging language disorder that lacks a definitive theoretical description as well as clear treatment recommendations (Marshall, 2006). Aim: The aims of this two-part investigation were to determine the source of neologisms in an individual with jargon aphasia (FF), to identify potential facilitatory semantic and/or phonological cuing effects in picture naming, and to determine whether the timing of the cues relative to the target picture mediated the cuing advantage. Methods and Procedures: FF’s underlying linguistic deficits were determined using several cognitive and linguistic tests. A series of computerized naming experiments using a modified version of the 175-item Philadelphia Naming Test (Roach, Schwartz, Martin, Grewal, & Brecher, 1996) manipulated the cue type (semantic versus phonological) and relatedness (related versus unrelated). In a follow-up experiment, the relative timing of phonological cues was manipulated to test the effect of timing on the cuing advantage. The accuracy of naming responses and error patterns were analyzed. Outcome and Results: FF’s performance on the linguistic and cognitive test battery revealed a severe naming impairment with relatively spared word and nonword repetition, auditory comprehension of words and monitoring, and fairly well preserved semantic abilities. This performance profile was used to evaluate various explanations for neologisms, including a loss of phonological codes, monitoring failure, and impairments in the semantic system. The primary locus of his deficit appears to involve the connection from semantics to phonology, specifically when word production involves accessing phonological forms following semantic access. FF showed a significant cuing advantage only for phonological cues in picture naming, particularly when the cue preceded or coincided with the onset of the target picture.
Conclusions: When integrated with previous findings, the results from this study suggest that the core deficit in this and at least some other individuals with jargon aphasia lies in the connection from semantics to phonology. The facilitative advantage of phonological cues could potentially be exploited in future clinical and research studies to test the effectiveness of these cues for enhancing naming performance in individuals like FF.
Abstract:
Cognitive control mechanisms—such as inhibition—decrease the likelihood that goal-directed activity is ceded to irrelevant events. Here, we use the action of auditory distraction to show how retrieval from episodic long-term memory is affected by competitor inhibition. Typically, a sequence of to-be-ignored spoken distracters drawn from the same semantic category as a list of visually-presented to-be-recalled items impairs free recall performance. In line with competitor inhibition theory (Anderson, 2003), free recall was worse for items on a probe trial if they were a repeat of distracter items presented during the previous (prime) trial (Experiment 1). This effect was only produced when the distracters were dominant members of the same category as the to-be-recalled items on the prime. For prime trials in which distracters were low-dominant members of the to-be-remembered item category or were unrelated to that category—and hence not strong competitors for retrieval—positive priming was found (Experiments 2 & 3). These results are discussed in terms of inhibitory approaches to negative priming and memory retrieval.
Abstract:
Constrained principal component analysis (CPCA) with a finite impulse response (FIR) basis set was used to reveal functionally connected networks and their temporal progression over a multistage verbal working memory trial in which memory load was varied. Four components were extracted, and all showed statistically significant sensitivity to the memory load manipulation. Additionally, two of the four components sustained this peak activity, both for approximately 3 s (Components 1 and 4). The functional networks that showed sustained activity were characterized by increased activations in the dorsal anterior cingulate cortex, right dorsolateral prefrontal cortex, and left supramarginal gyrus, and decreased activations in the primary auditory cortex and "default network" regions. The functional networks that did not show sustained activity were instead dominated by increased activation in occipital cortex, dorsal anterior cingulate cortex, sensori-motor cortical regions, and superior parietal cortex. The response shapes suggest that although all four components appear to be invoked at encoding, the two sustained-peak components are likely to be additionally involved in the delay period. Our investigation provides a unique view of the contributions made by a network of brain regions over the course of a multiple-stage working memory trial.
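The analysis pipeline named above — constrain the data to the variance predictable from an FIR design matrix, then extract components from that constrained portion — can be sketched generically. This is a minimal illustration of the CPCA idea, not the authors' code; matrix sizes and the least-squares estimator are assumptions.

```python
import numpy as np

def cpca_fir(Y, G, n_components=4):
    """Constrained PCA sketch: regress data Y (time x voxels) onto an
    FIR design matrix G (time x predictors), then apply PCA (via SVD)
    to the predicted portion GB. Returns component loadings over voxels
    and predictor weights, whose shape over FIR time bins estimates each
    component's response (e.g., sustained vs. transient)."""
    B = np.linalg.lstsq(G, Y, rcond=None)[0]   # FIR regression weights
    GB = G @ B                                 # variance constrained to the design
    U, s, Vt = np.linalg.svd(GB, full_matrices=False)
    V = Vt[:n_components].T                    # component loadings (voxels)
    scores = U[:, :n_components] * s[:n_components]
    P = np.linalg.lstsq(G, scores, rcond=None)[0]  # predictor weights
    return V, P
```

The predictor weights P play the role of the "response shapes" discussed in the abstract: a component whose weights stay elevated over several seconds of FIR bins would correspond to the sustained-peak components described.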
Abstract:
Ongoing debate in the literature concerns whether there is a link between contagious yawning and the human mirror neuron system (hMNS). One way of examining this issue is with the use of the electroencephalogram (EEG) to measure changes in mu activation during the observation of yawns. Mu oscillations are seen in the alpha bandwidth of the EEG (8–12 Hz) over sensorimotor areas. Previous work has shown that mu suppression is a useful index of hMNS activation and is sensitive to individual differences in empathy. In two experiments, we presented participants with videos of either people yawning or control stimuli. We found greater mu suppression for yawns than for controls over right motor and premotor areas, particularly for those scoring higher on traits of empathy. In a third experiment, auditory recordings of yawns were compared against electronically scrambled versions of the same yawns. We observed greater mu suppression for yawns than for the controls over right lateral premotor areas. Again, these findings were driven by those scoring highly on empathy. The results from these experiments support the notion that the hMNS is involved in contagious yawning, emphasise the link between contagious yawning and empathy, and stress the importance of good control stimuli.
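Mu suppression of the kind used as an index here is commonly quantified as the log ratio of 8–12 Hz power during a condition relative to a baseline, with negative values indicating suppression. The sketch below follows that common convention; the spectral estimator (Welch) and segment lengths are illustrative, not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import welch

def mu_suppression(cond, base, fs, band=(8.0, 12.0)):
    """Mu-suppression index sketch: log(power_condition / power_baseline)
    in the mu band over sensorimotor electrodes. Negative values indicate
    suppression (greater desynchronization during the condition)."""
    def band_power(x):
        f, p = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
        lo, hi = band
        mask = (f >= lo) & (f <= hi)
        return p[mask].sum()  # total power in the mu band
    return np.log(band_power(cond) / band_power(base))
```

Computed per electrode, this index can then be compared across stimulus types (yawns vs. controls) and correlated with empathy scores, as in the experiments above.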
Abstract:
Wernicke’s aphasia (WA) is a condition which results in severely disrupted language comprehension following a lesion to the left temporo-parietal region. A phonological analysis deficit has traditionally been held to be at the root of the comprehension impairment in WA, a view consistent with current functional neuroimaging which finds areas in the superior temporal cortex responsive to phonological stimuli. However, behavioural evidence to support the link between a phonological analysis deficit and auditory comprehension has not yet been shown. This study extends seminal work by Blumstein et al. (1977) to investigate the relationship between acoustic-phonological perception, measured through phonological discrimination, and auditory comprehension in a case series of Wernicke’s aphasia participants. A novel adaptive phonological discrimination task was used to obtain reliable thresholds of the phonological perceptual distance required between nonwords before they could be discriminated. Wernicke’s aphasia participants showed significantly elevated thresholds compared to age and hearing matched control participants. Acoustic-phonological thresholds correlated strongly with auditory comprehension abilities in Wernicke’s aphasia. In contrast, nonverbal semantic skills showed no relationship with auditory comprehension. The results are evaluated in the context of recent neurobiological models of language and suggest that impaired acoustic-phonological perception underlies the comprehension impairment in Wernicke’s aphasia and favour models of language which propose a leftward asymmetry in phonological analysis.
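Adaptive threshold procedures of the kind mentioned above typically adjust the stimulus difference trial by trial based on the listener's responses. As an illustration only — the study's actual procedure is not described in detail here — a classic two-down/one-up staircase (Levitt, 1971), which converges on the 70.7%-correct level, looks like this:

```python
def staircase(respond, start=10.0, step=1.0, n_reversals=8):
    """Two-down/one-up adaptive staircase sketch: the stimulus level
    (e.g., phonological distance between nonwords) shrinks after two
    consecutive correct responses and grows after an error. The mean of
    the reversal levels estimates the discrimination threshold.
    `respond(level) -> bool` models the listener. Generic illustration;
    the study's task may have used different rules and step sizes."""
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:               # two correct -> make it harder
                correct_run = 0
                if direction == +1:            # descending after ascending
                    reversals.append(level)
                direction = -1
                level = max(level - step, step)
        else:                                  # error -> make it easier
            correct_run = 0
            if direction == -1:                # ascending after descending
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)     # threshold estimate
```

Run against a simulated listener, the track oscillates around the listener's true threshold and the reversal mean recovers it.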
Abstract:
Background: Word deafness is a rare condition where pathologically degraded speech perception results in impaired repetition and comprehension but otherwise intact linguistic skills. Although impaired linguistic systems in aphasias resulting from damage to the neural language system (here termed central impairments) have been consistently shown to be amenable to external influences such as linguistic or contextual information (e.g. cueing effects in naming), it is not known whether similar influences can be shown for aphasia arising from damage to a perceptual system (here termed peripheral impairments). Aims: This study aimed to investigate the extent to which pathologically degraded speech perception could be facilitated or disrupted by providing visual as well as auditory information. Methods and Procedures: In three word repetition tasks, the participant with word deafness (AB) repeated words under different conditions: words were repeated in the context of a pictorial or written target, a distractor (semantic, unrelated, rhyme or phonological neighbour) or a blank page (nothing). Accuracy and error types were analysed. Results: AB was impaired at repetition in the blank condition, confirming her degraded speech perception. Repetition was significantly facilitated when accompanied by a picture or written example of the word and significantly impaired by the presence of a written rhyme. Errors in the blank condition were primarily formal, whereas errors in the rhyme condition were primarily miscues (saying the distractor word rather than the target). Conclusions: Cross-modal input can both facilitate and further disrupt repetition in word deafness. The cognitive mechanisms behind these findings are discussed. Both top-down influence from the lexical layer on perceptual processes and intra-lexical competition within the lexical layer may play a role.
Abstract:
The effects of auditory distraction in memory tasks have been examined to date with procedures that minimize participants’ control over their own memory processes. Surprisingly little attention has been paid to metacognitive control factors which might affect memory performance. In this study, we investigate the effects of auditory distraction on metacognitive control of memory in recognition tasks, utilizing the metacognitive framework of Koriat and Goldsmith (1996) to determine whether strategic regulation of memory accuracy is impacted by auditory distraction. Results replicated previous findings in showing that auditory distraction impairs memory performance in tasks minimizing participants’ metacognitive control (forced-report test). However, the results also revealed that when metacognitive control is allowed (free-report tests), auditory distraction impacts upon a range of metacognitive indices. In the present study, auditory distraction undermined the accuracy of metacognitive monitoring (resolution), reduced confidence in the responses provided and, correspondingly, increased participants’ propensity to withhold responses in free-report recognition. Crucially, changes in metacognitive processes were related to impairment in free-report recognition performance, as the use of the ‘don’t know’ option under distraction led to a reduction in the number of correct responses volunteered in free-report tests. Overall, the present results show how auditory distraction exerts its influence on memory performance via both memory and metamemory processes.
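In the Koriat and Goldsmith (1996) framework referred to above, free-report performance is decomposed into quantity (correct volunteered responses out of all items), accuracy (correct out of volunteered), and report rate (proportion of items answered rather than withheld). A minimal sketch of these standard definitions, with a hypothetical data layout:

```python
def free_report_indices(responses):
    """Quantity-accuracy indices for a free-report test, after Koriat &
    Goldsmith (1996). `responses` is a list of (volunteered, correct)
    booleans per item — an assumed data layout for illustration."""
    total = len(responses)
    volunteered = [correct for vol, correct in responses if vol]
    quantity = sum(volunteered) / total               # correct / all items
    accuracy = (sum(volunteered) / len(volunteered)   # correct / volunteered
                if volunteered else float("nan"))
    report_rate = len(volunteered) / total            # answered / all items
    return {"quantity": quantity, "accuracy": accuracy, "report": report_rate}
```

The pattern the abstract reports — more withheld responses under distraction lowering the number of correct responses volunteered — would show up here as a reduced report rate pulling quantity down even when accuracy is maintained.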
Abstract:
Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with visually or auditory-visually presented animate and inanimate objects, shown in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
Language processing plays a crucial role in language development, providing the ability to assign structural representations to input strings (e.g., Fodor, 1998). In this paper we aim to contribute to the study of children's processing routines, examining the operations underlying the auditory processing of relative clauses in children compared to adults. English-speaking children (6–8;11) and adults participated in the study, which employed a self-paced listening task with a final comprehension question. The aim was to determine (i) the role of number agreement in object relative clauses in which the subject and object NPs differ in terms of number properties, and (ii) the role of verb morphology (active vs. passive) in subject relative clauses. Even though children's off-line accuracy was not always comparable to that of adults, analyses of reaction time results support the view that children have the same structural processing reflexes observed in adults.
Abstract:
Wernicke’s aphasia occurs following a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory-verbal comprehension is significantly impaired in Wernicke’s aphasia but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used fMRI to investigate the neural basis of written word and picture semantic processing in Wernicke’s aphasia, with the wider aim of examining how the semantic system is altered following damage to the classical comprehension regions. Twelve participants with Wernicke’s aphasia and twelve control participants performed semantic animate-inanimate judgements and a visual height judgement baseline task. Whole brain and ROI analysis in Wernicke’s aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke’s aphasia group displayed an “over-activation” in comparison to control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke’s aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results concord with models which indicate that the anterior temporal lobes are crucial for multimodal semantic processing and that these regions may be accessed without support from classic posterior comprehension regions.
Abstract:
As the fidelity of virtual environments (VE) continues to increase, the possibility of using them as training platforms is becoming increasingly realistic for a variety of application domains, including military and emergency personnel training. In the past, there was much debate on whether the acquisition and subsequent transfer of spatial knowledge from VEs to the real world is possible, or whether the differences in medium during training would essentially be an obstacle to truly learning geometric space. In this paper, the authors present various cognitive and environmental factors that not only contribute to this process, but also interact with each other to a certain degree, leading to a variable exposure time requirement in order for the process of spatial knowledge acquisition (SKA) to occur. The cognitive factors that the authors discuss include a variety of individual user differences such as: knowledge and experience; cognitive gender differences; aptitude and spatial orientation skill; and finally, cognitive styles. Environmental factors discussed include: size, spatial layout complexity, and landmark distribution. It may seem obvious that, since every individual's brain is unique (not only through experience, but also through genetic predisposition), a one-size-fits-all approach to training would be illogical. Furthermore, considering that various cognitive differences may further emerge when a certain stimulus is present (e.g. complex environmental space), it would make even more sense to understand how these factors can impact spatial memory, and to try to adapt the training session by providing visual/auditory cues as well as by changing the exposure time requirements for each individual. The impact of this research domain is important to VE training in general; however, within service and military domains, guaranteeing appropriate spatial training is critical in order to ensure that disorientation does not occur in a life-or-death scenario.
Abstract:
Recent evidence from animal and adult human subjects has demonstrated potential benefits to cognition from flavonoid supplementation. This study aimed to investigate whether these cognitive benefits extended to a sample of school-aged children. Using a cross-over design, with a washout of at least seven days between drinks, fourteen 8- to 10-year-old children consumed either a flavonoid-rich blueberry drink or a matched vehicle. Two hours after consumption, subjects completed a battery of five cognitive tests comprising the Go-NoGo, Stroop, Rey’s Auditory Verbal Learning Task, Object Location Task, and a Visual N-back. In comparison to vehicle, the blueberry drink produced significant improvements in the delayed recall of a previously learned list of words, showing for the first time a cognitive benefit for acute flavonoid intervention in children. However, performance on a measure of proactive interference indicated that the blueberry intervention led to a greater negative impact of previously memorised words on the encoding of a set of new words. There was no benefit of our blueberry intervention for measures of attention, response inhibition or visuo-spatial memory. While findings are mixed, the improvements in delayed recall found in this pilot study suggest that, following acute flavonoid-rich blueberry interventions, school-aged children encode memory items more effectively.
Abstract:
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain as stable control with the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
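The common spatial patterns (CSP) step mentioned at the end finds spatial filters that maximise band-power variance for one class (e.g., one motor-imagery condition) while minimising it for the other, by solving a generalised eigenvalue problem on the two class covariance matrices. A textbook sketch of that computation, not the paper's exact pipeline:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2):
    """CSP sketch: X1, X2 are arrays of trials (trials x channels x
    samples) for two classes. Solves C1 w = lambda (C1 + C2) w; filters
    are returned sorted by the variance they capture for class 1."""
    def avg_cov(X):
        C = np.mean([np.cov(trial) for trial in X], axis=0)
        return C / np.trace(C)            # normalise per-class covariance
    C1, C2 = avg_cov(X1), avg_cov(X2)
    evals, W = eigh(C1, C1 + C2)          # ascending generalised eigenvalues
    return W[:, ::-1], evals[::-1]        # descending: class-1-dominant first
```

Inspecting the filter weights across channels is what allows comparisons like the one above, where the mean optimal filter for music feedback appeared more diffuse and weaker than for visual feedback.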
Abstract:
The treatment of auditory-verbal short-term memory (STM) deficits in aphasia is a growing avenue of research (Martin & Reilly, 2012; Murray, 2012). STM treatment requires time precision, which is suited to computerised delivery. We have designed software which provides STM treatment for aphasia. The treatment is based on matching listening span tasks (Howard & Franklin, 1990), aiming to improve the temporal maintenance of multi-word sequences (Salis, 2012). The person listens to pairs of word lists that differ in word order and decides if the pairs are the same or different. This approach does not require speech output and is suitable for persons with aphasia who have limited or no output. We describe the software and how reviews from clinicians shaped its design.
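The trial structure described above — pairs of word lists that are either identical or differ only in word order — can be sketched as follows. This is a guess at the task logic for illustration; the vocabulary, span, and swap rule are assumptions, not details from the software.

```python
import random

def make_trial(vocab, span, different, seed=0):
    """Matching listening span trial sketch: build a word list and a
    second list that is either identical ("same") or has two positions
    swapped ("different"). Illustrative only."""
    rng = random.Random(seed)
    first = rng.sample(vocab, span)       # distinct words, random order
    second = list(first)
    if different:
        i, j = rng.sample(range(span), 2)
        second[i], second[j] = second[j], second[i]  # swap two positions
    return first, second
```

Because only word order changes between the lists, a correct same/different decision requires maintaining the temporal sequence of the words, which is the ability the treatment targets.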