993 results for SPEECH-PERCEPTION


Relevance:

60.00%

Publisher:

Abstract:

Introduction: The cochlear implant (CI) is widely accepted as a form of intervention and (re)habilitation for severe and profound hearing loss across age groups. However, unilateral CI users report difficulties such as sound localization and speech understanding in noise, caused by the abnormal pattern of sensory stimulation. To provide the benefits of binaural hearing, bilateral stimulation is recommended, either through a bilateral CI or by fitting a hearing aid (HA) contralaterally to the CI. The latter condition is referred to as bimodal stimulation, since two modes of stimulation operate concurrently: electric (CI) and acoustic (HA). The literature offers insufficient data on the pediatric population to clarify or demonstrate how the auditory cortex develops under bimodal hearing; notably, no studies in children were found. Objective: To characterize the cortical auditory evoked potential (CAEP) P1-N1-P2 complex in users of bimodal stimulation and to verify whether it correlates with speech perception tests. Methods: A descriptive case-series study in which the CAEP was recorded in five children using bimodal stimulation, following the methodology proposed by Ventura (2008) and using the Smart EP USB Jr system (Intelligent Hearing Systems). The speech sound /da/ was presented in free field, and the exam was performed in three conditions: CI only, CI plus HA, and HA only. The cortical potential data were analyzed after two judges experienced in evoked potentials marked the presence or absence of the P1-N1-P2 components. Results: CAEPs were recorded in all children in all test conditions, and a correlation with the auditory speech perception tests was observed. Recording CAEPs proved to be a feasible procedure for assessing children with bimodal stimulation, although the data are still insufficient to support its use in the evaluation and indication of bilateral CIs.
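To make the component-marking step concrete, here is a minimal Python sketch of automated P1-N1-P2 peak picking in an averaged CAEP waveform. The latency windows and sampling rate are illustrative assumptions; the study itself relied on visual marking by two experienced judges, not automation.

```python
import numpy as np

# Hypothetical latency windows (ms) for child CAEP components; these are
# illustrative, not the windows used in the study described above.
WINDOWS = {"P1": (50, 150), "N1": (150, 250), "P2": (200, 300)}

def pick_components(avg_waveform: np.ndarray, fs: int = 1000) -> dict:
    """Locate P1/N1/P2 peaks in an averaged CAEP waveform (one channel, µV).

    P1 and P2 are taken as the largest positive deflections in their
    windows; N1 as the largest negative deflection in its window.
    """
    t_ms = np.arange(len(avg_waveform)) / fs * 1000.0
    peaks = {}
    for name, (lo, hi) in WINDOWS.items():
        mask = (t_ms >= lo) & (t_ms <= hi)
        segment = avg_waveform[mask]
        idx = segment.argmin() if name == "N1" else segment.argmax()
        peaks[name] = {"latency_ms": t_ms[mask][idx],
                       "amplitude_uV": segment[idx]}
    return peaks
```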

Relevance:

60.00%

Publisher:

Abstract:

Introduction: Children with phonological disorder (PD) have difficulty with speech perception and with processing acoustic stimuli presented rapidly and in sequence. Perception of the complex sounds of speech depends on the integrity of the encoding process carried out by the auditory nervous system. The brainstem auditory evoked potential elicited by complex stimuli (PEATEc, the speech-evoked ABR) makes it possible to investigate the neural representation of sounds at subcortical levels and to obtain direct information on how the sound structure of a spoken syllable is encoded in the auditory system. This potential, however, is believed to be influenced by both bottom-up and top-down processes; what remains unknown is how much, and in what way, each of these processes modifies PEATEc responses. One way to investigate the actual influence of top-down and bottom-up factors on PEATEc results is to stimulate the two processes separately, through auditory training and speech-language therapy. Objective: To verify the impact of sensory (bottom-up) and cognitive (top-down) stimulation, applied separately, on the different domains of the PEATEc electrophysiological response. Method: Eleven children diagnosed with PD, aged 7 years to 10 years 11 months, underwent behavioral and electrophysiological assessment and were then divided into a bottom-up (B-U) group (n = 6) and a top-down (T-D) group (n = 5). Bottom-up stimulation targeted sensory skills, trained with computer software. Top-down stimulation used tasks designed to stimulate cognitive skills, delivered through the Speech-Language Stimulation Program (PEF). Both types of stimulation were applied once a week, in sessions of approximately 45 minutes, for 12 weeks. Results: After bottom-up stimulation, the B-U group improved in the onset and harmonics domains and in the overall score. After top-down stimulation, the T-D group improved in the onset, spectrotemporal, envelope-boundary, and harmonics domains, as well as in the overall score. Conclusion: The results show that sensory (bottom-up) and cognitive (top-down) stimulation had distinct impacts on the PEATEc electrophysiological response.
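As an illustration of how one PEATEc domain might be quantified, the sketch below measures spectral amplitude at F0 and its harmonics in the sustained (frequency-following) portion of a response. The windowing, sampling rate, and harmonic count are assumptions, not the study's scoring protocol.

```python
import numpy as np

def harmonic_amplitudes(ffr: np.ndarray, fs: int, f0: float,
                        n_harmonics: int = 5) -> dict:
    """Spectral amplitude at F0 and its harmonics in a speech-evoked ABR.

    `ffr` is the sustained portion of the averaged response; `f0` is the
    fundamental of the eliciting syllable (e.g. ~100 Hz for a male /da/).
    """
    windowed = ffr * np.hanning(len(ffr))          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed)) / len(ffr)
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    out = {}
    for k in range(1, n_harmonics + 1):
        target = k * f0
        out[f"H{k} ({target:.0f} Hz)"] = spectrum[np.argmin(np.abs(freqs - target))]
    return out
```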

Relevance:

60.00%

Publisher:

Abstract:

Background: The results from previous studies have indicated that a pre-attentive component of the event-related potential (ERP), the mismatch negativity (MMN), may be an objective measure of the automatic auditory processing of phonemes and words. Aims: This article reviews the relationship between the MMN data and psycholinguistic models of spoken word processing, in order to determine whether the MMN may be used to objectively pinpoint spoken word processing deficits in individuals with aphasia. Main Contribution: This article outlines the ways in which the MMN data support psycholinguistic models currently used in the clinical management of aphasic individuals. Furthermore, the cell assembly model of the neurophysiological mechanisms underlying spoken word processing is discussed in relation to the MMN and psycholinguistic models. Conclusions: The MMN data support current theoretical psycholinguistic and neurophysiological models of spoken word processing. Future MMN studies that include normal and aphasic populations will further elucidate the role that the MMN may play in the clinical management of aphasic individuals.
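The MMN is conventionally derived as the deviant-minus-standard difference wave; here is a minimal sketch, assuming a single fronto-central channel and an illustrative latency window and sampling rate.

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, fs: int = 500,
                  window_ms=(100, 250)) -> float:
    """Mean amplitude of the deviant-minus-standard difference wave
    in a typical MMN latency window (fronto-central channel assumed).

    Negative values indicate the presence of an MMN.
    """
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    t_ms = np.arange(len(diff)) / fs * 1000.0
    mask = (t_ms >= window_ms[0]) & (t_ms <= window_ms[1])
    return float(diff[mask].mean())
```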

Relevance:

60.00%

Publisher:

Abstract:

Although developmental increases in the size of the position effect within a mispronunciation detection task have been interpreted as consistent with a view of the lexical restructuring process as protracted, the position effect itself might not be reliable. The current research examined the effects of position and clarity of acoustic-phonetic information on sensitivity to mispronounced onsets in 5- and 6-year-olds and adults. Both children and adults showed a position effect only when mispronunciations also differed in the amount of relevant acoustic-phonetic information. Adults' sensitivity to mispronounced second-syllable onsets also reflected the availability of acoustic-phonetic information. The implications of these findings are discussed in relation to the lexical restructuring hypothesis. (c) 2006 Elsevier Inc. All rights reserved.
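Sensitivity in detection tasks of this kind is commonly indexed with signal-detection d'; the article does not state which measure was used, so the sketch below is a generic illustration with a standard log-linear correction.

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection sensitivity (d') for a mispronunciation detection
    task, with a log-linear correction so that perfect hit or false-alarm
    rates do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g. d_prime(45, 5, 8, 42) -> sensitivity to mispronounced onsets
```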

Relevance:

60.00%

Publisher:

Abstract:

The current study implements a speech perception experiment that interrogates local perceptions of Spanish varieties in Miami. Participants (N=292) listened to recordings of three Spanish varieties (Peninsular, Highland Colombian, and Post-Castro Cuban) and were given background information about the speakers, including the parents’ country of origin. In certain cases, the parents’ national-origin label matched the country of origin of the speaker, but otherwise the background information and voices were mismatched. The manipulation distinguishes perceptions determined by bottom-up cues (dialect) from top-down ones (social information). Participants then rated each voice for a range of personal characteristics and answered hypothetical questions about the speakers’ employment, family, and income. Results show clear top-down effects of the social information that often drive perceptions up or down depending on the traits themselves. Additionally, the data suggest differences in perceptions between Hispanic/non-Hispanic and Cuban/non-Cuban participants, although the Cuban participants do not drive the Hispanic participants’ perceptions.
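A hedged sketch of how the matched/mismatched guise factor could be coded and summarized; the column names, ratings, and matched pairs are hypothetical illustrations, not the study's data.

```python
import pandas as pd

# Hypothetical long-format ratings: one row per participant x voice x trait.
ratings = pd.DataFrame({
    "voice_dialect": ["Peninsular", "Cuban", "Colombian", "Cuban"],
    "label_origin":  ["Spain", "Cuba", "Cuba", "Colombia"],
    "trait":         ["intelligent"] * 4,
    "rating":        [5, 4, 3, 4],
})

MATCHED = {("Peninsular", "Spain"), ("Cuban", "Cuba"),
           ("Colombian", "Colombia")}
ratings["guise"] = ratings.apply(
    lambda r: "matched" if (r.voice_dialect, r.label_origin) in MATCHED
    else "mismatched", axis=1)

# A top-down effect of social information shows up as a difference between
# matched and mismatched guises for the same voices.
print(ratings.groupby(["trait", "guise"])["rating"].mean())
```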

Relevance:

60.00%

Publisher:

Abstract:

In the present study, Korean-English bilingual (KEB) and Korean monolingual (KM) children, between the ages of 8 and 13 years, and KEB adults, ages 18 and older, were examined with one speech perception task, called the Nonsense Syllable Confusion Matrix (NSCM) task (Allen, 2005), and two production tasks, called the Nonsense Syllable Imitation Task (NSIT) and the Nonword Repetition Task (NRT; Dollaghan & Campbell, 1998). The present study examined (a) which English sounds on the NSCM task were identified less well, presumably due to interference from Korean phonology, in bilinguals learning English as a second language (L2) and in monolinguals learning English as a foreign language (FL); (b) which English phonemes on the NSIT were more challenging for bilinguals and monolinguals to produce; (c) whether perception on the NSCM task is related to production on the NSIT, or to phonological awareness, as measured by the NRT; and (d) whether perception and production differ in three age-language status groups (i.e., KEB children, KEB adults, and KM children) and in three proficiency subgroups of KEB children (i.e., English-dominant, ED; balanced, BAL; and Korean-dominant, KD). In order to determine English proficiency in each group, language samples were extensively and rigorously analyzed using the software Systematic Analysis of Language Transcripts (SALT). Length of samples in complete and intelligible utterances, number of different and total words (NDW and NTW, respectively), speech rate in words per minute (WPM), and number of grammatical errors, mazes, and abandoned utterances were measured and compared among the three initial groups and the three proficiency subgroups. Results of the language sample analysis (LSA) showed significant group differences only between the KEBs and the KM children, but not between the KEB children and adults. Nonetheless, compared to normative means (from a sample length- and age-matched database provided by SALT), the KEB adult group and the KD subgroup produced English at significantly slower speech rates than expected for monolingual, English-speaking counterparts. Two existing models of bilingual speech perception and production—the Speech Learning Model or SLM (Flege, 1987, 1992) and the Perceptual Assimilation Model or PAM (Best, McRoberts, & Sithole, 1988; Best, McRoberts, & Goodell, 2001)—were considered to see if they could account for the perceptual and production patterns evident in the present study. The English sounds selected as stimuli for the NSCM task and the NSIT were 10 consonants, /p, b, k, g, f, θ, s, z, ʧ, ʤ/, and 3 vowels, /I, ɛ, æ/, which were used to create 30 nonsense syllables with a consonant-vowel structure. Based on phonetic or phonemic differences between the two languages, English sounds were categorized either as familiar sounds—namely, English sounds that are similar, but not identical, to L1 Korean, including /p, k, s, ʧ, ɛ/—or as unfamiliar sounds—namely, English sounds that are new to L1, including /b, g, f, θ, z, ʤ, I, æ/.
The results of the NSCM task showed that (a) consonants were perceived correctly more often than vowels, (b) familiar sounds were perceived correctly more often than unfamiliar ones, and (c) familiar consonants were perceived correctly more often than unfamiliar ones, across the three age-language status groups and across the three proficiency subgroups; and (d) the KEB children perceived correctly more often than the KEB adults, the KEB children and adults perceived correctly more often than the KM children, and the ED and BAL subgroups perceived correctly more often than the KD subgroup. The results of the NSIT showed that (a) consonants were produced more accurately than vowels, and (b) familiar sounds were produced more accurately than unfamiliar ones, across the three age-language status groups. Also, (c) familiar consonants were produced more accurately than unfamiliar ones in the KEB and KM child groups, and (d) unfamiliar vowels were produced more accurately than the familiar one in the KEB child group, but the reverse was true in the KEB adult and KM child groups. The KEB children produced sounds correctly significantly more often than the KM children and the KEB adults, though the percent-correct differences were smaller than for perception. Production differences were not found among the three proficiency subgroups. Perception on the NSCM task was compared to production on the NSIT and the NRT. Weak positive correlations were found between perception and production (NSIT) for unfamiliar consonants and unfamiliar sounds, whereas a weak negative correlation was found for unfamiliar vowels. Several correlations between perceptual performance on the NSCM task and overall production performance on the NRT were significant: for unfamiliar consonants, unfamiliar vowels, unfamiliar sounds, consonants, vowels, and overall performance on the NSCM task. Nonetheless, no significant correlation was found between production on the NSIT and on the NRT; evidently these are two very different production tasks, with immediate imitation of single syllables on the NSIT yielding high performance for all groups. The findings of the present study suggest that (a) perception and production of L2 consonants differ from those of vowels; (b) perception and production of L2 sounds involve an interaction of sound type and familiarity; (c) a weak relation exists between perception and production performance for unfamiliar sounds; and (d) L2 experience generally predicts perceptual and production performance. The present study yields several conclusions. The first is that familiarity of sounds is an important influence on L2 learning, as claimed by both the SLM and the PAM. In the present study, familiar sounds were perceived and produced correctly more often than unfamiliar ones in most cases, in keeping with the PAM, though experienced L2 learners (i.e., the KEB children) produced unfamiliar vowels better than the familiar one, in keeping with the SLM. Nonetheless, the second conclusion is that neither the SLM nor the PAM consistently and thoroughly explains the results of the present study, because both theories assume that the influence of L1 on the perception of L2 consonants and vowels works in the same way as for their production. The third and fourth conclusions are two proposed arguments: that perception and production of consonants differ from those of vowels, and that sound type interacts with familiarity and L2 experience. These two arguments best explain the current findings.
These findings may help us to develop educational curricula for bilingual individuals listening to and articulating English. Further, the extensive analysis of spontaneous speech in the present study should contribute to the specification of parameters for normal language development and function in Korean-English bilingual children and adults.
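For readers unfamiliar with confusion-matrix scoring, here is a minimal sketch of per-stimulus percent correct on a task like the NSCM, plus a perception-production correlation of the kind reported; all numbers are illustrative, and the SciPy dependency is an assumption.

```python
import numpy as np
from scipy import stats

def percent_correct(confusions: np.ndarray) -> np.ndarray:
    """Per-stimulus percent correct from a confusion matrix whose rows are
    presented syllables and whose columns are responses (identification
    counts): diagonal over row totals."""
    return 100.0 * np.diag(confusions) / confusions.sum(axis=1)

# Illustrative perception-production relation across five sounds:
perception = [92, 75, 60, 88, 70]   # % correct on the NSCM task (made up)
production = [95, 80, 55, 90, 65]   # % correct on the NSIT (made up)
r, p = stats.pearsonr(perception, production)
print(f"r = {r:.2f}, p = {p:.3f}")
```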

Relevance:

60.00%

Publisher:

Abstract:

The present study characterized two fiber pathways important for language, the superior longitudinal fasciculus/arcuate fasciculus (SLF/AF) and the frontal aslant tract (FAT), and related these tracts to speech, language, and literacy skill in children five to eight years old. We used Diffusion Tensor Imaging (DTI) to characterize the fiber pathways and administered several language assessments. The FAT was identified for the first time in children. Results showed no age-related change in the integrity of the FAT, but did show age-related change in the left (but not right) SLF/AF. Moreover, the integrity of the right (but not left) FAT was related to phonology, but not to audiovisual speech perception, articulation, language, or literacy. Both the left and right SLF/AF were related to language measures, specifically receptive and expressive language and language content. These findings are important for understanding the neurobiology of language in the developing brain, and can be incorporated within contemporary dorsal-ventral-motor models of language.

Relevance:

60.00%

Publisher:

Abstract:

Early intervention is the key to spoken language for hearing-impaired children. A diagnosis of severe hearing loss in young children raises the urgent question of the optimal type of hearing device. As there are no recent data comparing selection criteria for a specific hearing device, the goal of the Hearing Evaluation of Auditory Rehabilitation Devices (hEARd) project (Coninx & Vermeulen, 2012) was to collect and analyze interlingually comparable normative data on the speech perception performance of children with hearing aids and children with cochlear implants (CI). METHOD: In various institutions for hearing rehabilitation in Belgium, Germany, and the Netherlands, the Adaptive Auditory Speech Test (AAST) was used within the hEARd project to determine the speech perception abilities of hearing-impaired children of kindergarten and school age. Results of the speech audiometric procedures were matched to the unaided hearing loss values of children using hearing aids and compared to the results of children using CIs. In total, 277 data sets of hearing-impaired children were analyzed. Results of children using hearing aids were grouped by their unaided hearing loss values, following the World Health Organization's (WHO) grading of hearing impairment: mild (25-40 dB HL), moderate (41-60 dB HL), severe (61-80 dB HL), and profound (more than 80 dB HL). RESULTS: AAST speech recognition results in quiet showed significantly better performance for the CI group than for the profoundly impaired hearing aid users, as well as the severely impaired hearing aid users. However, the CI users' performance in speech perception in noise did not differ from that of the hearing aid users. Analyses of the collected data showed that children with a CI perform equivalently in speech perception in quiet to children using hearing aids who have a "moderate" hearing impairment.
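The WHO grading used to group the hearing aid users maps directly onto a small classifier; a sketch follows, using the thresholds stated above and resolving the severe/profound boundary at 80 dB HL.

```python
def who_grade(unaided_hl_db: float) -> str:
    """WHO grading of hearing impairment, as used to group the
    hearing-aid users in the hEARd data set:
    mild 25-40, moderate 41-60, severe 61-80, profound >80 dB HL."""
    if unaided_hl_db < 25:
        return "no impairment"
    if unaided_hl_db <= 40:
        return "mild"
    if unaided_hl_db <= 60:
        return "moderate"
    if unaided_hl_db <= 80:
        return "severe"
    return "profound"

# e.g. who_grade(72) -> "severe"
```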

Relevance:

40.00%

Publisher:

Abstract:

Perceptual voice analysis is a subjective process. However, despite reports of varying degrees of intrajudge and interjudge reliability, it is widely used in clinical voice evaluation. One way to improve the reliability of this procedure is to provide judges with signals that serve as external standards, so that comparisons can be made against these anchor signals. The present study used a Klatt speech synthesizer to create a set of speech signals with varying degrees of three different voice qualities, based on a Cantonese sentence. The primary objective of the study was to determine, via a perceptual study, whether different abnormal voice qualities could be synthesized using the built-in synthesis parameters. The second objective was to determine the relationship between the acoustic characteristics of the synthesized signals and perceptual judgment. Twenty Cantonese-speaking speech pathologists with at least three years of clinical experience in perceptual voice evaluation were asked to undertake two tasks. The first was to decide whether the voice quality of each synthesized signal was normal. The second was to decide whether the abnormal signals should be described as rough, breathy, or vocal fry. The results showed that signals generated with a small degree of aspiration noise were perceived as breathy, while signals with a small degree of flutter or double pulsing were perceived as rough. When the flutter or double pulsing increased further, tremor and vocal fry, rather than roughness, were perceived. Furthermore, the amount of aspiration noise, flutter, or double pulsing required for the male voice stimuli differed from that required for the female voice stimuli at a similar level of perceived breathiness and roughness. These findings showed that changes in perceived vocal quality can be achieved by systematic modification of synthesis parameters. This opens up the possibility of using synthesized voice signals as external standards, or anchors, to improve the reliability of clinical perceptual voice evaluation. (C) 2002 Acoustical Society of America.
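A sketch of how such a graded stimulus set might be organized. The parameter names follow the KLSYN88 convention (AH = aspiration amplitude, FL = flutter, DI = double pulsing); treat both the names and the step values as assumptions rather than the study's exact settings.

```python
from itertools import product

# Illustrative stimulus grid: each voice-quality cue is stepped
# independently, mirroring the graded manipulation of breathiness
# (aspiration noise), roughness/tremor (flutter), and roughness/vocal
# fry (double pulsing) described above.
ASPIRATION_DB = [0, 10, 20, 30]   # AH: higher -> breathier
FLUTTER = [0, 25, 50, 75]         # FL: small -> rough, large -> tremor
DOUBLE_PULSING = [0, 25, 50, 75]  # DI: small -> rough, large -> vocal fry

stimuli = [
    {"AH": ah, "FL": fl, "DI": di}
    for ah, fl, di in product(ASPIRATION_DB, FLUTTER, DOUBLE_PULSING)
]
print(f"{len(stimuli)} synthesis configurations")  # 64 in this sketch
```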

Relevance:

40.00%

Publisher:

Abstract:

The primary goal of this project is to study the ability of adult cochlear implant users to perceive emotion through speech alone. A secondary goal is to study the development of emotion perception in normal-hearing children, to serve as a baseline for comparing the emotion perception abilities of similarly aged children with impaired hearing.

Relevance:

40.00%

Publisher:

Abstract:

This paper reviews a study investigating how a hearing-impaired person can learn to discriminate speech distorted by a low-pass filter in a sensory aid.
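A minimal sketch of the distortion being studied, assuming a Butterworth design and an illustrative cutoff frequency (the paper does not specify the filter).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def low_pass_speech(signal: np.ndarray, fs: int,
                    cutoff_hz: float = 1000.0) -> np.ndarray:
    """Simulate the sensory aid's distortion: remove energy above the
    cutoff with a zero-phase 4th-order Butterworth low-pass filter.

    The filter order and cutoff are illustrative assumptions.
    """
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)
```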


Relevance:

40.00%

Publisher:

Abstract:

A speech message played several metres from the listener in a room is usually heard to have much the same phonetic content as it does when played nearby, even though the different amounts of reflected sound make the temporal envelopes of these signals very different. To study this ‘constancy’ effect, listeners heard speech messages and speech-like sounds comprising 8 auditory-filter-shaped noise bands whose temporal envelopes corresponded to those in these filters when the speech message is played. The ‘contexts’ were “next you’ll get _to click on”, into which a “sir” or “stir” test word was inserted. These test words were drawn from an 11-step continuum formed by amplitude modulation. Listeners identified the test words appropriately, even in the 8-band conditions where the speech had a ‘robotic’ quality. Constancy was assessed by comparing the influence of room reflections on the test word across conditions in which the context had either the same level of room reflections (i.e. from the same, far distance) or a much lower level (i.e. from nearby). Constancy effects were obtained with both the natural and the 8-band speech. The results are considered in terms of the degree of ‘matching’ between the context’s and the test word’s bands.
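The 8-band stimuli are essentially noise-vocoded speech; a minimal sketch follows. The band edges, filter order, and envelope extraction method are assumptions (the study used auditory-filter-shaped bands).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech: np.ndarray, fs: int, edges_hz) -> np.ndarray:
    """Replace each band of the speech with noise carrying that band's
    temporal envelope, as in the 8-band 'robotic' speech described above.

    `edges_hz` lists band edges (9 edges for 8 bands); their placement
    along an auditory-filter scale is left to the caller.
    """
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech), dtype=float)
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))           # band's temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += envelope * carrier                  # envelope-modulated noise
    return out
```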

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information; as nonverbal cues, they also prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and on the fixation of body parts. We hypothesized that aphasic patients, whose verbal comprehension is restricted, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects watched videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present or absent), gaze direction (towards the speaker or the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction, revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
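A sketch of how cumulative and mean fixation durations per factor cell might be computed from an eye-tracking log; the column layout and values are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation recorded by the tracker.
fixations = pd.DataFrame({
    "group": ["aphasic", "control", "aphasic", "control"],
    "gesture": ["present", "present", "absent", "absent"],
    "gaze_direction": ["speaker", "speaker", "listener", "speaker"],
    "roi": ["face", "face", "body", "hands"],
    "duration_ms": [420, 510, 180, 240],
})

# Cumulative and mean fixation duration per factor cell, as analysed above.
summary = (fixations
           .groupby(["group", "gesture", "gaze_direction", "roi"])["duration_ms"]
           .agg(cumulative="sum", mean="mean"))
print(summary)
```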
