976 results for Auditory perception.


Relevance:

30.00%

Publisher:

Abstract:

Background: Word deafness is a rare condition in which pathologically degraded speech perception results in impaired repetition and comprehension but otherwise intact linguistic skills. Although impaired linguistic systems in aphasias resulting from damage to the neural language system (here termed central impairments) have been consistently shown to be amenable to external influences such as linguistic or contextual information (e.g., cueing effects in naming), it is not known whether similar influences can be shown for aphasia arising from damage to a perceptual system (here termed peripheral impairments). Aims: This study aimed to investigate the extent to which pathologically degraded speech perception could be facilitated or disrupted by providing visual as well as auditory information. Methods and Procedures: In three word repetition tasks, the participant with word deafness (AB) repeated words under different conditions: words were repeated in the context of a pictorial or written target, a distractor (semantic, unrelated, rhyme or phonological neighbour), or a blank page (nothing). Accuracy and error types were analysed. Results: AB was impaired at repetition in the blank condition, confirming her degraded speech perception. Repetition was significantly facilitated when accompanied by a picture or written example of the word, and significantly impaired by the presence of a written rhyme. Errors in the blank condition were primarily formal, whereas errors in the rhyme condition were primarily miscues (saying the distractor word rather than the target). Conclusions: Cross-modal input can both facilitate and further disrupt repetition in word deafness. The cognitive mechanisms behind these findings are discussed: both top-down influence from the lexical layer on perceptual processes and intra-lexical competition within the lexical layer may play a role.

Relevance:

30.00%

Publisher:

Abstract:

Synesthesia entails a special kind of sensory perception, in which stimulation in one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with animate and inanimate objects presented visually or audio-visually, in congruent and incongruent combinations. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with the best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
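The N1 measurement described above follows standard ERP practice: single-trial epochs are averaged per group and condition, and the N1 is quantified within an early post-stimulus window over the electrodes of interest. A minimal sketch of that computation on synthetic data; the sampling rate, time window, trial count, and electrode indices are illustrative assumptions, not the study's parameters.

    import numpy as np

    # Synthetic epochs: (n_trials, n_channels, n_samples); 32 channels,
    # 500 Hz sampling, epochs from -100 ms to +500 ms around stimulus onset.
    fs = 500
    t = np.arange(-0.1, 0.5, 1 / fs)
    rng = np.random.default_rng(0)
    epochs = rng.normal(0.0, 5.0, (200, 32, t.size))   # microvolts

    # Average across trials to obtain the event-related potential.
    erp = epochs.mean(axis=0)                          # (n_channels, n_samples)

    # Quantify N1 as the most negative deflection in an 80-150 ms window
    # over occipital channels (indices here are placeholders).
    occipital = [28, 29, 30, 31]
    window = (t >= 0.08) & (t <= 0.15)
    n1 = erp[occipital][:, window].min(axis=1).mean()
    print(f"Mean occipital N1 amplitude: {n1:.2f} uV")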

Relevance:

30.00%

Publisher:

Abstract:

Open-set word and sentence speech-perception test scores are commonly used as a measure of hearing abilities in children and adults using cochlear implants and/or hearing aids. These tests are usually presented auditorily with a verbal response. In the case of children, scores are typically lower and more variable than for adults with hearing impairments using similar devices. It is difficult to interpret children's speech-perception scores without considering the effects of lexical knowledge and speech-production abilities on their responses. This study postulated a simple mathematical model to describe the effects of hearing, lexical knowledge, and speech production on the perception test scores for monosyllabic words by children with impaired hearing. Thirty-three primary-school children with impaired hearing, fitted with hearing aids and/or cochlear implants, were evaluated using speech-perception, reading-aloud, speech-production, and language measures. These various measures were incorporated in the mathematical model, which revealed that performance in an open-set word-perception test in the auditory-alone mode is strongly dependent on residual hearing levels, lexical knowledge, and speech-production abilities. Further applications of the model provided an estimate of the effect of each component on the overall speech-perception score for each child.
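The abstract does not give the model's functional form. A common assumption for this kind of decomposition is that the observed word score is the product of independent component probabilities; the sketch below uses that assumed form for illustration only, with invented names and numbers, and is not necessarily the model used in the study.

    # Hedged sketch: decompose an open-set word score as a product of
    # independent component probabilities (assumed form, illustrative only).

    def predicted_word_score(p_hearing: float, p_lexical: float,
                             p_production: float) -> float:
        """Probability of a correct spoken response to a heard word,
        assuming the three stages succeed independently."""
        return p_hearing * p_lexical * p_production

    # Example: strong lexical knowledge cannot compensate for poor audibility.
    print(predicted_word_score(p_hearing=0.5, p_lexical=0.9, p_production=0.95))
    # -> 0.4275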

Relevance:

30.00%

Publisher:

Abstract:

User-centred design and evaluation methods have become increasingly important tools for fostering better usability in the field of virtual environments (VEs). Although designers and developers still tend to focus on technological advancement and strive to design impressive multimodal, multisensory interfaces, there is growing awareness within development teams that producing usable and useful interfaces requires keeping users in mind during design and validating new designs from the users' perspective. In this paper, we describe a user study carried out to validate a newly developed haptically enabled virtual training system. Taking into consideration individual differences in human performance and in the adoption and acceptance of haptic and audio-visual I/O devices, we address how well users learn, perform, adapt to, and perceive object-assembly training. We also explore user experience and interaction with the system, and discuss how multisensory feedback affects user performance, perception, and acceptance. Finally, we discuss how to better design VEs that enhance users' perception, interaction, and motor activity.

Relevance:

30.00%

Publisher:

Abstract:

Two experiments evaluated an operant procedure for establishing stimulus control using auditory and electrical stimuli as a baseline for measuring the electrical current threshold of electrodes implanted in the cochlea. Twenty-one prelingually deaf children who use cochlear implants learned a Go/No Go auditory discrimination task (i.e., pressing a button in the presence of the stimulus but not in its absence). When the simple discrimination baseline became stable, the electrical current was manipulated in descending and ascending series according to an adapted staircase method. Thresholds were determined for three electrodes, one at each cochlear location (basal, medial, and apical). Stimulus control was maintained within a certain range of decreasing electrical current but was eventually disrupted. Increasing the current recovered stimulus control, thus allowing the determination of a range of electrical currents that could be defined as the threshold. The present study demonstrated the feasibility of the operant procedure combined with a psychophysical method for threshold assessment, thus contributing to the routine fitting and maintenance of cochlear implants within the limitations of a hospital setting.
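The adapted staircase described above (descending until stimulus control is disrupted, ascending until it recovers) can be sketched as a simple one-up/one-down rule. This is an illustrative sketch, not the clinical protocol: the starting level, step size, units, and stopping rule are all assumptions.

    import random

    def staircase_threshold(detects, start=200, step=10, n_reversals=6):
        """Simple 1-up/1-down staircase: decrease the level after each
        detection, increase it after each miss, and estimate the threshold
        as the mean of the reversal levels. `detects(level)` returns True
        if the child responds (button press) at that stimulation level.
        Parameters are placeholders, not clinical values."""
        level, direction, reversals = start, -1, []
        while len(reversals) < n_reversals:
            new_direction = -1 if detects(level) else +1
            if new_direction != direction:       # response pattern flipped
                reversals.append(level)
                direction = new_direction
            level += direction * step
        return sum(reversals) / len(reversals)

    # Toy observer with a true threshold of 150 units and occasional lapses.
    true_threshold = 150
    print(staircase_threshold(
        lambda lv: lv > true_threshold or random.random() < 0.05))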

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of different types of processed speech on intelligibility and on simultaneous visual-motor performance were examined. The goal was to extend the generalizability of speech perception results to environments outside the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing a simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that the intelligibility of natural speech was better than that of synthetic speech, and that synthetic speech was better perceived than cell phone speech. The visual-motor methodology provided independent, supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. Cell phone speech allowed better simultaneous pursuit-rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources; additional measures, such as attentional demands and performance on simultaneous tasks, are also important in characterizing the perception of different kinds of speech in complex listening environments.

Relevance:

30.00%

Publisher:

Abstract:

New technology in the Freedom (R) speech processor for cochlear implants was developed to improve how incoming acoustic sound is processed; this applies not only to new users, but also to previous generations of cochlear implants. Aim: To identify the contribution of this technology, applied to the Nucleus 22 (R), to speech perception tests in silence and in noise, and to audiometric thresholds. Methods: A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra (R) was revised and optimized before starting the tests. Troubleshooting was used to identify malfunctions. To identify the contribution of the Freedom (R) technology for the Nucleus 22 (R), auditory thresholds and speech perception tests were measured in free field in soundproof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare conditions. Results: The Freedom (R) technology applied to the Nucleus 22 (R) showed a statistically significant difference in all speech perception tests and audiometric thresholds. Conclusion: The Freedom (R) technology improved the speech perception performance and audiometric thresholds of patients with the Nucleus 22 (R).
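The paired comparison reported here is the Wilcoxon signed-rank test, which scipy.stats.wilcoxon implements. A minimal sketch with made-up paired scores; the numbers are illustrative, not the study's data.

    from scipy.stats import wilcoxon

    # Illustrative paired speech-perception scores (% correct) for the same
    # patients with the older Spectra (R) map and the Freedom (R) technology.
    spectra = [42, 55, 38, 60, 47, 51, 33, 58, 44, 49]
    freedom = [50, 62, 45, 66, 55, 57, 40, 63, 52, 54]

    # Signed-rank test on the within-patient differences.
    stat, p = wilcoxon(spectra, freedom)
    print(f"Wilcoxon W = {stat}, p = {p:.4f}")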

Relevance:

30.00%

Publisher:

Abstract:

This study investigated whether there are differences in the speech-evoked auditory brainstem response among children with Typical Development (TD), (Central) Auditory Processing Disorder ((C)APD), and Language Impairment (LI). The speech-evoked auditory brainstem response was tested in 57 children (ages 6-12). The children were placed into three groups: TD (n = 18), (C)APD (n = 18), and LI (n = 21). Speech-evoked ABRs were elicited using the five-formant syllable /da/. Three dimensions were defined for analysis: timing, harmonics, and pitch. A comparative analysis of the responses between the typically developing children and the children with (C)APD and LI revealed abnormal encoding of the speech acoustic features characteristic of speech perception in children with (C)APD and LI, although the two groups differed in their abnormalities. While the children with (C)APD may have had greater difficulty distinguishing stimuli based on timing cues, the children with LI had the additional difficulty of distinguishing speech harmonics, which are important to the identification of speech sounds. These data suggest that an inefficient representation of crucial components of speech sounds may contribute to the difficulties with language processing found in children with LI. Furthermore, these findings may indicate that the neural processes mediated by the auditory brainstem differ among children with auditory processing and speech-language disorders.
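The three analysis dimensions map onto standard signal measures: timing is read from peak latencies in the response waveform, while pitch and harmonic encoding are read from the magnitude spectrum at the fundamental frequency and its multiples. A minimal sketch of the spectral part on a synthetic response; the sampling rate, duration, and fundamental frequency are illustrative assumptions, not the study's recording parameters.

    import numpy as np

    fs = 12000                                 # sampling rate (Hz), illustrative
    t = np.arange(0, 0.17, 1 / fs)             # ~170 ms response window
    f0 = 100                                   # assumed fundamental (Hz)

    # Synthetic brainstem response: F0 plus two harmonics buried in noise.
    rng = np.random.default_rng(1)
    resp = (1.0 * np.sin(2 * np.pi * f0 * t)
            + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
            + 0.25 * np.sin(2 * np.pi * 3 * f0 * t)
            + rng.normal(0, 0.5, t.size))

    # Magnitude spectrum: pitch encoding ~ energy at F0, harmonic encoding
    # ~ energy at integer multiples of F0.
    spectrum = np.abs(np.fft.rfft(resp)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for k in (1, 2, 3):
        idx = np.argmin(np.abs(freqs - k * f0))
        print(f"H{k} ({k * f0} Hz): {spectrum[idx]:.3f}")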

Relevance:

30.00%

Publisher:

Abstract:

The auditory brainstem implant (ABI) was first developed to help neurofibromatosis type 2 patients. Its use has recently been extended to adults with non-tumor etiologies and to children with profound hearing loss who were not candidates for a cochlear implant (CI). Although the results have been extensively reported, the stimulation parameters behind the outcomes have received less attention. Objective: The aim of this study is to describe the audiologic outcomes and the MAP parameters in ABI adults and children at our center. Methods: Retrospective chart review. Five adults and four children were implanted with the ABI24M from September 2005 to June 2009. Of the adult patients, four had neurofibromatosis type 2, and one had postmeningitic deafness with complete ossification of both cochleae. Three of the children had cochlear malformation or dysplasia, and one had a completely ossified cochlea due to meningitis. MAP parameters as well as the intraoperative electrical auditory brainstem responses were collected. Evaluation was performed after at least six months of device use and included free-field hearing thresholds and, in the adult patients, speech perception tests; for the children, the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) and the ESP were used to evaluate the development of auditory skills, along with the MUSS to evaluate speech production. Results: The number of active electrodes that did not cause any non-auditory sensation varied from three to nineteen. All patients were programmed with the SPEAK strategy, and the pulse widths varied from 100 to 300 μs. Free-field thresholds with warble tones varied from a very soft auditory sensation at 70 dB HL at 250 Hz to a pure tone average of 45 dB HL. Speech perception varied from none to 60% open-set recognition of sentences in silence in the adult population, and from no auditory sensation at all to a slight improvement in IT-MAIS/MAIS scores in the children. Conclusion: We observed that the ABI may be a good option for offering some hearing to both adults and children, although in children the results might not be enough to ensure oral language development. Programming the speech processor in children demands greater care from the audiologist.
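The pure tone average reported in the results is conventionally the mean of the thresholds at 500, 1000, and 2000 Hz. A small sketch of that computation; the threshold values are illustrative, not from the study.

    def pure_tone_average(thresholds_dbhl):
        """Conventional PTA: mean of the thresholds at 500, 1000 and 2000 Hz."""
        return sum(thresholds_dbhl[f] for f in (500, 1000, 2000)) / 3

    # Illustrative free-field warble-tone thresholds (dB HL) for one ABI user.
    print(pure_tone_average({250: 70, 500: 50, 1000: 45, 2000: 40}))  # -> 45.0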

Relevance:

30.00%

Publisher:

Abstract:

We used fMRI to investigate the neuronal correlates of encoding and recognizing heard and imagined melodies. Ten participants were shown lyrics of familiar verbal tunes; they either heard the tune along with the lyrics, or they had to imagine it. In a subsequent surprise recognition test, they had to identify the titles of tunes that they had heard or imagined earlier. The functional data showed substantial overlap during melody perception and imagery, including secondary auditory areas. During imagery compared with perception, an extended network including pFC, SMA, intraparietal sulcus, and cerebellum showed increased activity, in line with the increased processing demands of imagery. Functional connectivity of anterior right temporal cortex with frontal areas was increased during imagery compared with perception, indicating that these areas form an imagery-related network. Activity in right superior temporal gyrus and pFC was correlated with the subjective rating of imagery vividness. Similar to the encoding phase, the recognition task recruited overlapping areas, including inferior frontal cortex associated with memory retrieval, as well as left middle temporal gyrus. The results present new evidence for the cortical network underlying goal-directed auditory imagery, with a prominent role of the right pFC both for the subjective impression of imagery vividness and for on-line mental monitoring of imagery-related activity in auditory areas.

Relevance:

30.00%

Publisher:

Abstract:

Auditory imagery for songs was studied in two groups of patients with left or right temporal-lobe excision for control of epilepsy, and in a group of matched normal control subjects. Two tasks were used. In the perceptual task, subjects saw the text of a familiar song and simultaneously heard it sung. On each trial they judged whether the second of two capitalized lyrics was higher or lower in pitch than the first. The imagery task was identical in all respects except that no song was presented, so that subjects had to generate an auditory image of the song. The results indicated that all subjects found the imagery task more difficult than the perceptual task, but patients with right temporal-lobe damage performed significantly worse on both tasks than either patients with left temporal-lobe lesions or normal control subjects. These results support the idea that imagery arises from activation of a neural substrate shared with perceptual mechanisms, and provide evidence for a right temporal-lobe specialization for this type of auditory imaginal processing.

Relevance:

30.00%

Publisher:

Abstract:

Neuropsychological studies have suggested that imagery processes may be mediated by neuronal mechanisms similar to those used in perception. To test this hypothesis, and to explore the neural basis for song imagery, 12 normal subjects were scanned using the water bolus method to measure cerebral blood flow (CBF) during the performance of three tasks. In the control condition subjects saw pairs of words on each trial and judged which word was longer. In the perceptual condition subjects also viewed pairs of words, this time drawn from a familiar song; simultaneously they heard the corresponding song, and their task was to judge the change in pitch of the two cued words within the song. In the imagery condition, subjects performed precisely the same judgment as in the perceptual condition, but with no auditory input. Thus, to perform the imagery task correctly an internal auditory representation must be accessed. Paired-image subtraction of the resulting pattern of CBF, together with matched MRI for anatomical localization, revealed that both perceptual and imagery tasks produced similar patterns of CBF changes, as compared to the control condition, in keeping with the hypothesis. More specifically, both perceiving and imagining songs are associated with bilateral neuronal activity in the secondary auditory cortices, suggesting that processes within these regions underlie the phenomenological impression of imagined sounds. Other CBF foci elicited in both tasks include areas in the left and right frontal lobes and in the left parietal lobe, as well as the supplementary motor area. This latter region implicates covert vocalization as one component of musical imagery. Direct comparison of imagery and perceptual tasks revealed CBF increases in the inferior frontal polar cortex and right thalamus. We speculate that this network of regions may be specifically associated with retrieval and/or generation of auditory information from memory.
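Paired-image subtraction compares the CBF volume from a task condition with that from the control condition voxel by voxel, within subject, before averaging across subjects. A minimal sketch of the subtraction step on synthetic volumes; the dimensions and values are illustrative, and real analyses add registration, smoothing, and statistical thresholding.

    import numpy as np

    rng = np.random.default_rng(2)
    shape = (12, 64, 64, 40)                # 12 subjects x illustrative volume
    control = rng.normal(50.0, 5.0, shape)            # CBF, control condition
    imagery = control + rng.normal(1.0, 5.0, shape)   # CBF, imagery condition

    # Within-subject difference images, then the group-average difference map;
    # voxels with large positive values are candidate imagery-related foci.
    mean_diff = (imagery - control).mean(axis=0)
    print(mean_diff.shape)                  # (64, 64, 40)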

Relevance:

30.00%

Publisher:

Abstract:

The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sang in tune correlated significantly with self-reports of vividness for auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.
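The reported link is a correlation between the percentage of in-tune imitation trials and self-rated imagery vividness. The abstract does not name the correlation statistic; the sketch below assumes a Pearson correlation via scipy.stats.pearsonr, with invented values.

    from scipy.stats import pearsonr

    # Illustrative values: percent of imitation trials sung in tune and
    # mean self-reported imagery vividness (BAIS-style rating scale).
    percent_in_tune = [35, 50, 62, 48, 70, 55, 80, 40, 66, 75]
    vividness = [3.1, 4.0, 4.8, 3.9, 5.5, 4.2, 6.1, 3.4, 5.0, 5.8]

    r, p = pearsonr(percent_in_tune, vividness)
    print(f"r = {r:.2f}, p = {p:.4f}")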