83 results for auditory hallucinations
Abstract:
Background: Word deafness is a rare condition in which pathologically degraded speech perception results in impaired repetition and comprehension but otherwise intact linguistic skills. Although impaired linguistic systems in aphasias resulting from damage to the neural language system (here termed central impairments) have been consistently shown to be amenable to external influences such as linguistic or contextual information (e.g. cueing effects in naming), it is not known whether similar influences can be shown for aphasia arising from damage to a perceptual system (here termed peripheral impairments). Aims: This study aimed to investigate the extent to which pathologically degraded speech perception could be facilitated or disrupted by providing visual as well as auditory information. Methods and Procedures: In three word repetition tasks, the participant with word deafness (AB) repeated words under different conditions: words were repeated in the context of a pictorial or written target, a distractor (semantic, unrelated, rhyme or phonological neighbour) or a blank page (nothing). Accuracy and error types were analysed. Results: AB was impaired at repetition in the blank condition, confirming her degraded speech perception. Repetition was significantly facilitated when accompanied by a picture or written example of the word and significantly impaired by the presence of a written rhyme. Errors in the blank condition were primarily formal, whereas errors in the rhyme condition were primarily miscues (saying the distractor word rather than the target). Conclusions: Cross-modal input can both facilitate and further disrupt repetition in word deafness. The cognitive mechanisms behind these findings are discussed. Both top-down influence from the lexical layer on perceptual processes and intra-lexical competition within the lexical layer may play a role.
Abstract:
Neural field models describe the coarse-grained activity of populations of interacting neurons. Because of the laminar structure of real cortical tissue they are often studied in two spatial dimensions, where they are well known to generate rich patterns of spatiotemporal activity. Such patterns have been interpreted in a variety of contexts ranging from the understanding of visual hallucinations to the generation of electroencephalographic signals. Typical patterns include localized solutions in the form of traveling spots, as well as intricate labyrinthine structures. These patterns are naturally defined by the interface between low and high states of neural activity. Here we derive the equations of motion for such interfaces and show, for a Heaviside firing rate, that the normal velocity of an interface is given in terms of a non-local Biot-Savart type interaction over the boundaries of the high activity regions. This exact, but dimensionally reduced, system of equations is solved numerically and shown to be in excellent agreement with the full nonlinear integral equation defining the neural field. We develop a linear stability analysis for the interface dynamics that allows us to understand the mechanisms of pattern formation that arise from instabilities of spots, rings, stripes and fronts. We further show how to analyze neural field models with linear adaptation currents, and determine the conditions for the dynamic instability of spots that can give rise to breathers and traveling waves.
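The dynamics described above can be illustrated with a minimal numerical sketch of an Amari-type neural field with a Heaviside firing rate. Everything here is an illustrative assumption (difference-of-exponentials "Mexican hat" kernel, periodic domain, Euler time stepping, parameter values); it is not the authors' model, but it shows how the interface between low and high activity states arises as the level set u = theta.

```python
# Minimal 2D Amari-type neural field sketch (illustrative assumptions:
# Mexican-hat kernel, Heaviside firing rate, periodic domain, Euler stepping).
import numpy as np

N, L = 128, 20.0             # grid points per axis, domain size
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)

# Difference-of-exponentials connectivity kernel (local excitation,
# lateral inhibition); precompute its FFT for fast periodic convolution.
w = np.exp(-r) - 0.17 * np.exp(-0.42 * r)
w_hat = np.fft.fft2(np.fft.ifftshift(w)) * dx**2

theta, dt = 0.1, 0.1         # firing threshold, time step
u = 2.0 * np.exp(-r**2)      # localized initial bump of activity

for _ in range(200):
    f = (u > theta).astype(float)                    # Heaviside firing rate
    conv = np.real(np.fft.ifft2(w_hat * np.fft.fft2(f)))
    u += dt * (-u + conv)                            # u_t = -u + w * H(u - theta)

# The interface is the level set u = theta; the area it encloses is a
# simple scalar summary of the high-activity region.
interface_area = np.sum(u > theta) * dx**2
print(interface_area)
```

The FFT-based convolution exploits the periodic boundary conditions; the interface equations derived in the paper replace this full 2D integration with dynamics defined only on the u = theta boundary.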
Abstract:
The effects of auditory distraction in memory tasks have been examined to date with procedures that minimize participants' control over their own memory processes. Surprisingly little attention has been paid to metacognitive control factors which might affect memory performance. In this study, we investigate the effects of auditory distraction on metacognitive control of memory, examining the effects of auditory distraction in recognition tasks utilizing the metacognitive framework of Koriat and Goldsmith (1996), to determine whether strategic regulation of memory accuracy is impacted by auditory distraction. Results replicated previous findings in showing that auditory distraction impairs memory performance in tasks minimizing participants' metacognitive control (forced-report test). However, the results also revealed that when metacognitive control is allowed (free-report tests), auditory distraction impacts a range of metacognitive indices. In the present study, auditory distraction undermined the accuracy of metacognitive monitoring (resolution), reduced confidence in responses provided and, correspondingly, increased participants' propensity to withhold responses in free-report recognition. Crucially, changes in metacognitive processes were related to impairment in free-report recognition performance, as the use of the 'don't know' option under distraction led to a reduction in the number of correct responses volunteered in free-report tests. Overall, the present results show how auditory distraction exerts its influence on memory performance via both memory and metamemory processes.
Abstract:
Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with animate and inanimate objects presented visually or audio-visually, in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
Language processing plays a crucial role in language development, providing the ability to assign structural representations to input strings (e.g., Fodor, 1998). In this paper we aim at contributing to the study of children's processing routines, examining the operations underlying the auditory processing of relative clauses in children compared to adults. English-speaking children (6–8;11) and adults participated in the study, which employed a self-paced listening task with a final comprehension question. The aim was to determine (i) the role of number agreement in object relative clauses in which the subject and object NPs differ in terms of number properties, and (ii) the role of verb morphology (active vs. passive) in subject relative clauses. Even though children's off-line accuracy was not always comparable to that of adults, analyses of reaction times results support the view that children have the same structural processing reflexes observed in adults.
Abstract:
Wernicke’s aphasia occurs following a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory-verbal comprehension is significantly impaired in Wernicke’s aphasia but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used fMRI to investigate the neural basis of written word and picture semantic processing in Wernicke’s aphasia, with the wider aim of examining how the semantic system is altered following damage to the classical comprehension regions. Twelve participants with Wernicke’s aphasia and twelve control participants performed semantic animate-inanimate judgements and a visual height judgement baseline task. Whole brain and ROI analysis in Wernicke’s aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke’s aphasia group displayed an “over-activation” in comparison to control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke’s aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results concord with models which indicate that the anterior temporal lobes are crucial for multimodal semantic processing and that these regions may be accessed without support from classic posterior comprehension regions.
Abstract:
As the fidelity of virtual environments (VE) continues to increase, the possibility of using them as training platforms is becoming increasingly realistic for a variety of application domains, including military and emergency personnel training. In the past, there was much debate on whether the acquisition and subsequent transfer of spatial knowledge from VEs to the real world is possible, or whether the differences in medium during training would essentially be an obstacle to truly learning geometric space. In this paper, the authors present various cognitive and environmental factors that not only contribute to this process, but also interact with each other to a certain degree, leading to a variable exposure time requirement in order for the process of spatial knowledge acquisition (SKA) to occur. The cognitive factors that the authors discuss include a variety of individual user differences, such as knowledge and experience; cognitive gender differences; aptitude and spatial orientation skill; and, finally, cognitive styles. Environmental factors discussed include size, spatial layout complexity, and landmark distribution. It may seem obvious that, since every individual's brain is unique (not only through experience but also through genetic predisposition), a one-size-fits-all approach to training would be illogical. Furthermore, considering that various cognitive differences may further emerge when a certain stimulus is present (e.g. a complex environmental space), it would make even more sense to understand how these factors can impact spatial memory, and to try to adapt the training session by providing visual/auditory cues as well as by changing the exposure time requirements for each individual. The impact of this research domain is important to VE training in general; however, within service and military domains, guaranteeing appropriate spatial training is critical to ensure that disorientation does not occur in a life-or-death scenario.
Abstract:
Recent evidence from animal and adult human subjects has demonstrated potential benefits to cognition from flavonoid supplementation. This study aimed to investigate whether these cognitive benefits extended to a sample of school-aged children. Using a cross-over design, with a washout of at least seven days between drinks, fourteen 8- to 10-year-old children consumed either a flavonoid-rich blueberry drink or a matched vehicle. Two hours after consumption, subjects completed a battery of five cognitive tests comprising the Go-NoGo, Stroop, Rey's Auditory Verbal Learning Task, Object Location Task, and a Visual N-back. In comparison to the vehicle, the blueberry drink produced significant improvements in the delayed recall of a previously learned list of words, showing for the first time a cognitive benefit of acute flavonoid intervention in children. However, performance on a measure of proactive interference indicated that the blueberry intervention led to a greater negative impact of previously memorised words on the encoding of a set of new words. There was no benefit of our blueberry intervention for measures of attention, response inhibition or visuo-spatial memory. While findings are mixed, the improvements in delayed recall found in this pilot study suggest that, following acute flavonoid-rich blueberry interventions, school-aged children encode memory items more effectively.
Abstract:
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain as stable control with the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
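The participant-specific spatial filters mentioned above can be computed via common spatial patterns (CSP), which reduces to a generalized eigenproblem on class-wise covariance matrices. The sketch below is a minimal illustration on synthetic two-class "epochs" (all data, dimensions, and the synthetic variance structure are assumptions for demonstration), not the study's pipeline.

```python
# Sketch of common spatial patterns (CSP) via a generalized eigenproblem,
# using synthetic two-class EEG-like epochs (all values illustrative).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_channels, n_samples, n_trials = 8, 256, 20

def epochs(scale):
    # Synthetic trials in which the first four channels carry
    # class-specific variance controlled by `scale`.
    e = rng.standard_normal((n_trials, n_channels, n_samples))
    e[:, :4, :] *= scale
    return e

def mean_cov(e):
    # Average trace-normalized spatial covariance across trials.
    covs = [t @ t.T / np.trace(t @ t.T) for t in e]
    return np.mean(covs, axis=0)

C1, C2 = mean_cov(epochs(3.0)), mean_cov(epochs(1.0))

# Solve C1 w = lambda (C1 + C2) w; the extreme eigenvalues give the most
# discriminative spatial filters (columns of W, ascending eigenvalue order).
vals, W = eigh(C1, C1 + C2)
top_filter = W[:, -1]        # filter maximizing the class-1 variance ratio
print(vals[-1])
```

Each eigenvalue lies in (0, 1) and is the fraction of variance a filter captures for class 1; filters at both extremes are usually retained for feature extraction.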
Abstract:
The treatment of auditory-verbal short-term memory (STM) deficits in aphasia is a growing avenue of research (Martin & Reilly, 2012; Murray, 2012). STM treatment requires time precision, which is suited to computerised delivery. We have designed software, which provides STM treatment for aphasia. The treatment is based on matching listening span tasks (Howard & Franklin, 1990), aiming to improve the temporal maintenance of multi-word sequences (Salis, 2012). The person listens to pairs of word-lists that differ in word-order and decides if the pairs are the same or different. This approach does not require speech output and is suitable for persons with aphasia who have limited or no output. We describe the software and how its review from clinicians shaped its design.
Abstract:
Background Atypical self-processing is an emerging theme in autism research, suggested by a lower self-reference effect in memory and atypical neural responses to visual self-representations. Most research on physical self-processing in autism uses visual stimuli. However, the self is a multimodal construct, and it is therefore essential to test self-recognition in other sensory modalities as well. Self-recognition in the auditory modality remains relatively unexplored and has not been tested in relation to autism and related traits. This study investigates self-recognition in the auditory and visual domains in the general population and tests whether it is associated with autistic traits. Methods Thirty-nine neurotypical adults participated in a two-part study. In the first session, each participant's voice was recorded and face was photographed and morphed respectively with voices and faces from unfamiliar identities. In the second session, participants performed a 'self-identification' task, classifying each morph as a 'self' voice (or face) or an 'other' voice (or face). All participants also completed the Autism Spectrum Quotient (AQ). For each sensory modality, the slope of the self-recognition curve was used as the individual self-recognition metric. These two self-recognition metrics were tested for association with each other and with autistic traits. Results The fifty percent 'self' response was reached at a higher percentage of self in the auditory domain compared to the visual domain (t = 3.142; P < 0.01). No significant correlation was noted between self-recognition bias across sensory modalities (τ = −0.165, P = 0.204). Higher recognition bias for self-voice was observed in individuals higher in autistic traits (τ AQ = 0.301, P = 0.008). No such correlation was observed between recognition bias for self-face and autistic traits (τ AQ = −0.020, P = 0.438).
Conclusions Our data show that recognition bias for physical self-representation is not related across sensory modalities. Further, individuals with higher autistic traits were better able to discriminate self from other voices, but this relation was not observed with self-face. The narrow self-other overlap in the auditory domain seen in individuals with high autistic traits could arise from the enhanced perceptual processing of auditory stimuli often observed in individuals with autism.
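The slope-based self-recognition metric described above can be sketched by fitting a logistic psychometric curve to the proportion of 'self' responses across morph levels. The data and parameter names below are simulated illustrations, not the study's values: x0 is the 50% 'self' point and k the slope (recognition sharpness).

```python
# Sketch of a slope-based self-recognition metric: fit a logistic
# psychometric function to 'self' response rates across morph levels
# (simulated, noiseless data; parameter names are illustrative).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # x0: morph level giving 50% 'self' responses; k: slope of the curve
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_pct = np.linspace(0, 100, 11)             # % self in each morph
p_self = logistic(morph_pct, 60.0, 0.15)        # simulated response curve

(x0_hat, k_hat), _ = curve_fit(logistic, morph_pct, p_self, p0=[50.0, 0.1])
print(x0_hat, k_hat)  # recovered 50% point and slope
```

A steeper fitted slope k corresponds to sharper self-other discrimination, while a shift in x0 toward higher self-percentages corresponds to the recognition bias compared across modalities in the abstract.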
Abstract:
Three experiments examine the role of articulatory motor planning in experiencing an involuntary musical recollection (an “earworm”). Experiment 1 shows that interfering with articulatory motor programming by chewing gum reduces both the number of voluntary and the number of involuntary—unwanted—musical thoughts. This is consistent with other findings that chewing gum interferes with voluntary processes such as recollections from verbal memory, the interpretation of ambiguous auditory images, and the scanning of familiar melodies, but is not predicted by theories of thought suppression, which assume that suppression is made more difficult by concurrent tasks or cognitive loads. Experiment 2 shows that chewing the gum affects the experience of “hearing” the music and cannot be ascribed to a general effect on thinking about a tune only in abstract terms. Experiment 3 confirms that the reduction of musical recollections by chewing gum is not the consequence of a general attentional or dual-task demand. The data support a link between articulatory motor programming and the appearance in consciousness of both voluntary and unwanted musical recollections.
Abstract:
Possible impairments of memory in end-stage renal disease (ESRD) were investigated in two experiments. In Experiment 1, in which stimulus words were presented visually, participants were tested on conceptual or perceptual memory tasks, with retrieval being either explicit or implicit. Compared with healthy controls, ESRD patients were impaired when memory required conceptual but not when it required perceptual processing, regardless of whether retrieval was explicit or implicit. An impairment of conceptual implicit memory (priming) in the ESRD group represented a previously unreported deficit compared to healthy aging. There were no significant differences between pre- and immediate post-dialysis memory performance in ESRD patients on any of the tasks. In Experiment 2, in which presentation was auditory, patients again performed worse than controls on an explicit conceptual memory task. We conclude that the type of processing required by the task (conceptual vs. perceptual) is more important than the type of retrieval (explicit vs. implicit) in memory failures in ESRD patients, perhaps because temporal brain regions are more susceptible to the effects of the illness than are posterior regions.
Abstract:
Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. “moo”, “splash”). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage may be due to a stronger connection between semantic information and phonological forms.
Abstract:
Adults diagnosed with autism spectrum disorder (ASD) show a reduced sensitivity (degree of selective response) to social stimuli such as human voices. In order to determine whether this reduced sensitivity is a consequence of years of poor social interaction and communication or is present prior to significant experience, we used functional MRI to examine cortical sensitivity to auditory stimuli in infants at high familial risk for later emerging ASD (HR group, N = 15), and compared this to infants with no family history of ASD (LR group, N = 18). The infants (aged between 4 and 7 months) were presented with voice and environmental sounds while asleep in the scanner and their behaviour was also examined in the context of observed parent-infant interaction. Whereas LR infants showed early specialisation for human voice processing in right temporal and medial frontal regions, the HR infants did not. Similarly, LR infants showed stronger sensitivity than HR infants to sad vocalisations in the right fusiform gyrus and left hippocampus. Also, in the HR group only, there was an association between each infant's degree of engagement during social interaction and the degree of voice sensitivity in key cortical regions. These results suggest that at least some infants at high-risk for ASD have atypical neural responses to human voice with and without emotional valence. Further exploration of the relationship between behaviour during social interaction and voice processing may help better understand the mechanisms that lead to different outcomes in at risk populations.