975 results for Visual word recognition
Abstract:
Even though pediatric hearing aid (HA) users listen most often to female talkers, clinically-used speech tests primarily consist of adult male talkers' speech. Potential effects of age and/or gender of the talker on speech perception of pediatric HA users were examined using two speech tests, hVd-vowel identification and CNC word recognition, and using speech materials spoken by four talker types (adult males, adult females, 10-12 year old girls, and 5-7 year old girls). For the nine pediatric HA users tested, word scores for the male talker's speech were higher than those for the female talkers, indicating that talker type can affect word recognition scores and that clinical tests may over-estimate everyday speech communication abilities of pediatric HA users.
Abstract:
Background: Deficits in reading comprehension have been reported in individuals with specific language impairment (SLI), Down syndrome (DS) and autism spectrum disorders (ASD). Methods: In this review (based on a search of the ISI Web of Knowledge database to 2011), the Simple View of Reading is used as a framework for considering reading comprehension in these groups. Conclusions: There is substantial evidence for reading comprehension impairments in SLI and growing evidence that weaknesses in this domain are common in DS and ASD. Further, in these groups reading comprehension is typically more impaired than word recognition. However, there is also evidence that some children and adolescents with DS, ASD and a history of SLI develop reading comprehension and word recognition skills at or above the age-appropriate level. This review of the literature indicates that factors including word recognition, oral language, nonverbal ability and working memory may explain reading comprehension difficulties in SLI, DS and ASD. In addition, it highlights methodological issues, implications of poor reading comprehension and fruitful areas for future research.
Reading comprehension in autism spectrum disorders: The role of oral language and social functioning
Abstract:
Reading comprehension is an area of difficulty for many individuals with autism spectrum disorders (ASD). According to the Simple View of Reading, word recognition and oral language are both important determinants of reading comprehension ability. We provide a novel test of this model in 100 adolescents with ASD of varying intellectual ability. Further, we explore whether reading comprehension is additionally influenced by individual differences in social behaviour and social cognition in ASD. Adolescents with ASD aged 14-16 years completed assessments indexing word recognition, oral language, reading comprehension, social behaviour and social cognition. Regression analyses show that both word recognition and oral language explain unique variance in reading comprehension. Further, measures of social behaviour and social cognition predict reading comprehension after controlling for the variance explained by word recognition and oral language. This indicates that word recognition, oral language and social impairments may constrain reading comprehension in ASD.
Abstract:
ERPs were elicited to (1) words, (2) pseudowords derived from these words, and (3) nonwords with no lexical neighbors, in a task involving listening to immediately repeated auditory stimuli. There was a significant early (P200) effect of phonotactic probability in the first auditory presentation, which discriminated words and pseudowords from nonwords; and a significant somewhat later (N400) effect of lexicality, which discriminated words from pseudowords and nonwords. There was no reliable effect of lexicality in the ERPs to the second auditory presentation. We conclude that early sublexical phonological processing differed according to phonotactic probability of the stimuli, and that lexically-based redintegration occurred for words but did not occur for pseudowords or nonwords. Thus, in online word recognition and immediate retrieval, phonological and/or sublexical processing plays a more important role than lexical level redintegration.
Abstract:
Purpose: Previously, anthocyanin-rich blueberry treatments have shown positive effects on cognition in both animals and human adults. However, little research has considered whether these benefits transfer to children. Here we describe an acute time-course and dose–response investigation considering whether these cognitive benefits extend to children. Methods: Using a double-blind cross-over design, on three occasions children (n = 21; 7–10 years) consumed placebo (vehicle) or blueberry drinks containing 15 or 30 g freeze-dried wild blueberry (WBB) powder. A cognitive battery including tests of verbal memory, word recognition, response interference, response inhibition and levels of processing was performed at baseline, and 1.15, 3 and 6 h following treatment. Results: Significant WBB-related improvements included final immediate recall at 1.15 h, delayed word recognition sustained over each period, and accuracy on cognitively demanding incongruent trials in the interference task at 3 h. Importantly, across all measures, cognitive performance improved, consistent with a dose–response model, with the best performance following 30 g WBB and the worst following vehicle. Conclusion: Findings demonstrate WBB-related cognitive improvements in 7- to 10-year-old children. These effects would seem to be particularly sensitive to the cognitive demand of the task.
Abstract:
Automatic speech recognition by machine has been a target of researchers for the past five decades. Over this period there have been numerous advances, for example in the recognition of isolated words (commands), which currently achieves very high recognition rates. However, we are still far from developing a system whose performance approaches that of a human being (automatic continuous speech recognition). One of the great challenges in continuous speech recognition is the large number of patterns: modern languages such as English, French, Spanish and Portuguese have approximately 500,000 words, or patterns, to be identified. The purpose of this study is to use units smaller than the word, such as phonemes, syllables and diphones, as the basis for speech recognition, aiming to recognize any word without necessarily using whole words as recognition units. The main goal is to reduce the restriction imposed by the excessive number of patterns. To validate this proposal, the system was tested on speaker-dependent isolated word recognition. The phonemic characteristics of Brazilian Portuguese were used to develop the hierarchical decision system, whose decisions are made by SVM (Support Vector Machine) classifiers. The main speech features were obtained from the Wavelet Packet Transform; MFCC (Mel-Frequency Cepstral Coefficient) descriptors are also used in this work. It was concluded that the proposed method showed good results in the recognition of vowels, consonants (syllables) and words when compared with other existing methods in the literature.
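As a rough illustration of the kind of pipeline described above (spectral features for sub-word units fed to SVM classifiers), the following minimal sketch uses plain MFCC vectors and a single SVM. It is not the thesis implementation: it omits the Wavelet Packet features and the decision hierarchy, it assumes librosa and scikit-learn are available, and the wav files and unit labels are hypothetical.

```python
# Minimal sketch, not the thesis implementation: isolated sub-word unit
# recognition with MFCC features and an SVM classifier.
# Assumes librosa and scikit-learn; the wav files and labels are hypothetical.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_vector(path, sr=16000, n_mfcc=13):
    """Load an utterance and summarize it as a fixed-length MFCC vector
    (mean and standard deviation of each coefficient over time)."""
    y, sr = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

# Hypothetical training data: (recording, sub-word unit label) pairs.
corpus = [("ka_01.wav", "ka"), ("ka_02.wav", "ka"),
          ("pa_01.wav", "pa"), ("pa_02.wav", "pa")]
X = np.stack([mfcc_vector(path) for path, _ in corpus])
labels = [label for _, label in corpus]

clf = SVC(kernel="rbf")  # one such classifier per node of a decision hierarchy
clf.fit(X, labels)
print(clf.predict([mfcc_vector("unknown.wav")]))
```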
Abstract:
Down syndrome (DS) is one of the most frequent causes of intellectual disability, affecting one in every 600 to 1000 live births. Studies have demonstrated that people with DS have a lower capacity for short-term memory (STM) and working memory (WM), which affects their ability to learn new words and to follow spoken instructions, especially when these involve multiple pieces of information or consecutive orders/orientations. It seems that the basis of the learning process, as happens with language and mathematics comprehension and reasoning, relies on the STM and WM systems. Individuals with DS are increasingly included in mainstream education, and yet very little research has been conducted to investigate the influence of memory development and the type of enrollment (regular school vs. special school). This study investigated the relationship between the type of school enrollment and performance on STM tests, as well as the relationship of this performance with early stimulation (ES). The tests used in the first study were the digit span, free recall, word recognition and subtests of the Wechsler Intelligence Scale for Children, Third Edition (WISC-III). Individuals enrolled in regular schools had higher scores on the digit span test and the WISC-III subtests. In the free recall and recognition tests, no differences were found. This study indicates that the type of enrollment might influence the memory development of individuals with DS and clearly points to the need for future investigations. In the second study, the tests used were the digit span, free word recall and subtests of the WISC-III. The results showed better performance by adults who had received ES before six months of age. The studies showed improvement in STM both in people who attended or were attending regular school and in those who benefited from ES before six months of age. However, some issues still need to be better understood. What is the relation between this stimulation and the individual's education? Since ES may reflect greater family involvement with the individual, what is the role of the emotional components of this involvement in the cognitive improvement? These and other questions are part of the continuation of this study.
Abstract:
Given the widespread use of computers, the visual pattern recognition task has been automated in order to handle the huge number of available digital images. Many applications use image processing techniques as well as feature extraction and visual pattern recognition algorithms in order to identify people, to make the disease diagnosis process easier, to classify objects, and so on, based on digital images. Among the features that can be extracted and analyzed from images is the shape of objects or regions. In some cases, shape is the only feature that can be extracted from the image with relatively high accuracy. In this work we present some of the most important shape analysis methods and compare their performance when applied to three well-known shape image databases. Finally, we propose the development of a new shape descriptor based on the Hough Transform.
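As a rough illustration only, and not the descriptor proposed in this work, the sketch below shows one simple way Hough Transform output can be turned into a shape feature: a normalized histogram of the orientations of lines detected on the edge map. It assumes OpenCV (cv2) and NumPy, and the input file name is hypothetical.

```python
# Minimal sketch of a Hough-based shape feature (not the proposed descriptor).
# Assumes OpenCV and NumPy; "shape.png" is a hypothetical binary shape image.
import cv2
import numpy as np

def hough_angle_histogram(image_path, n_bins=18):
    """Describe a shape by the distribution of line orientations found by
    the standard Hough Transform on its edge map."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=40)
    hist = np.zeros(n_bins)
    if lines is not None:
        for rho, theta in lines[:, 0]:
            hist[int(theta / np.pi * n_bins) % n_bins] += 1
    # Normalize so the descriptor does not depend on the number of lines found.
    return hist / max(hist.sum(), 1.0)

print(hough_angle_histogram("shape.png"))
```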
Abstract:
A new implantable hearing system, the direct acoustic cochlear stimulator (DACS), is presented. This system is based on the principle of a power-driven stapes prosthesis and is intended for the treatment of severe mixed hearing loss due to advanced otosclerosis. It consists of an implantable electromagnetic transducer, which transfers acoustic energy directly to the inner ear, and an audio processor worn externally behind the implanted ear. The device is implanted using a specially developed retromeatal microsurgical approach. After removal of the stapes, a conventional stapes prosthesis is attached to the transducer and placed in the oval window to allow direct acoustical coupling to the perilymph of the inner ear. In order to restore the natural sound transmission of the ossicular chain, a second stapes prosthesis is placed in parallel to the first one into the oval window and attached to the patient's own incus, as in a conventional stapedectomy. Four patients were implanted with an investigational DACS device. The hearing threshold of the implanted ears before implantation ranged from 78 to 101 dB (air conduction, pure tone average, 0.5-4 kHz) with air-bone gaps of 33-44 dB in the same frequency range. Postoperatively, substantial improvements in sound field thresholds, speech intelligibility as well as in the subjective assessment of everyday situations were found in all patients. Two years after the implantations, monosyllabic word recognition scores in quiet at 75 dB improved by 45-100 percentage points when using the DACS. Furthermore, hearing thresholds were already improved by the second stapes prosthesis alone by 14-28 dB (pure tone average 0.5-4 kHz, DACS switched off). No device-related serious medical complications occurred and all patients have continued to use their device on a daily basis for over 2 years. Copyright (c) 2008 S. Karger AG, Basel.
Abstract:
OBJECTIVE To confirm the clinical efficacy and safety of a direct acoustic cochlear implant. STUDY DESIGN Prospective multicenter study. SETTING The study was performed at 3 university hospitals in Europe (Germany, The Netherlands, and Switzerland). PATIENTS Fifteen patients with severe-to-profound mixed hearing loss because of otosclerosis or previous failed stapes surgery. INTERVENTION Implantation with a Codacs direct acoustic cochlear implant investigational device (ID) combined with a stapedotomy with a conventional stapes prosthesis. MAIN OUTCOME MEASURES Preoperative and postoperative (3 months after activation of the investigational direct acoustic cochlear implant) audiometric evaluation measuring conventional pure tone and speech audiometry, tympanometry, aided thresholds in sound field, and hearing difficulty by the Abbreviated Profile of Hearing Aid Benefit questionnaire. RESULTS The preoperative and postoperative air and bone conduction thresholds did not change significantly with implantation of the investigational direct acoustic cochlear implant. The mean sound field thresholds (0.25-8 kHz) improved significantly by 48 dB. The word recognition scores (WRS) at 50, 65, and 80 dB SPL improved significantly by 30.4%, 75%, and 78.2%, respectively, after implantation with the investigational direct acoustic cochlear implant compared with the preoperative unaided condition. The difficulty in hearing, measured by the Abbreviated Profile of Hearing Aid Benefit, decreased by 27% after implantation with the investigational direct acoustic cochlear implant. CONCLUSION Patients with moderate-to-severe mixed hearing loss because of otosclerosis can benefit substantially from the Codacs investigational device.
Abstract:
Several issues concerning the current use of speech interfaces are discussed, and the design and development of a speech interface that enables air traffic controllers to command and control their terminals by voice is presented. Special emphasis is placed on the comparison between laboratory experiments and field experiments, in which a set of ergonomics-related effects is detected that cannot be observed in controlled laboratory experiments. The paper presents both objective and subjective performance obtained in the field evaluation of the system with student controllers at an air traffic control (ATC) training facility. The system exhibits high word recognition test rates (0.4% error in Spanish and 1.5% in English) and low command error rates (6% error in Spanish and 10.6% error in English in the field tests). Subjective impressions have also been positive, encouraging future development and integration phases in the Spanish ATC terminals designed by Aeropuertos Españoles y Navegación Aérea (AENA).
Abstract:
Knowledge of the stage composition and the temporal dynamics of human cognitive operations is critical for building theories of higher mental activity. This information has been difficult to acquire, even with different combinations of techniques such as refined behavioral testing, electrical recording/interference, and metabolic imaging studies. Verbal object comprehension was studied herein in a single individual, by using three tasks (object naming, auditory word comprehension, and visual word comprehension), two languages (English and Farsi), and four techniques (stimulus manipulation, direct cortical electrical interference, electrocorticography, and a variation of the technique of direct cortical electrical interference to produce time-delimited effects, called timeslicing), in a subject in whom indwelling subdural electrode arrays had been placed for clinical purposes. Electrical interference at a pair of electrodes on the left lateral occipitotemporal gyrus interfered with naming in both languages and with comprehension in the language tested (English). The naming and comprehension deficit resulted from interference with processing of verbal object meaning. Electrocorticography indices of cortical activation at this site during naming started 250–300 msec after visual stimulus presentation. By using the timeslicing technique, which varies the onset of electrical interference relative to the behavioral task, we found that completion of processing for verbal object meaning varied from 450 to 750 msec after current onset. This variability was found to be a function of the subject’s familiarity with the objects.
Abstract:
Hearing loss in the elderly leads to difficulty in speech perception. The test commonly used in speech audiometry is the measurement of the maximum speech recognition score (IR-Max) at a single speech presentation level. However, a more appropriate procedure would be to perform the test at several presentation levels, since the percentage of correct responses depends on the speech level at the time of testing and is related to the degree and configuration of the hearing loss. Imprecision in obtaining the IR-Max may lead to an erroneous diagnostic hypothesis and to failure of the hearing-loss intervention process. Objective: To verify the influence of the speech presentation level on the speech recognition test in elderly individuals with sensorineural hearing loss and different audiometric configurations. Methods: Sixty-four elderly individuals participated, totaling 120 ears (61 female and 59 male), aged between 60 and 88 years, divided into groups: G1, 23 ears with a flat configuration; G2, 55 ears with a sloping configuration; G3, 42 ears with a steeply sloping configuration. Inclusion criteria were: mild to severe sensorineural hearing loss; not a hearing aid user, or less than two months of hearing aid use; and absence of cognitive impairment. The following procedures were carried out: measurement of the speech recognition threshold (SRT), of the speech recognition score at several presentation levels, and of the most comfortable level (MCL) and uncomfortable level (UCL) for speech. Lists of 11 monosyllables were used in order to shorten the test. Statistical analysis consisted of analysis of variance (ANOVA) and the Tukey test. Results: The sloping configuration was the most frequent. Individuals with a flat configuration showed the highest mean speech recognition scores. Considering the whole sample, 27.27% of individuals with a flat configuration reached the IR-Max at the MCL, as did 38.18% of those with a sloping configuration and 26.19% of those with a steeply sloping configuration. The IR-Max was found at the UCL in 40.90% of individuals with a flat configuration, 45.45% with a sloping configuration and 28.20% with a steeply sloping configuration. The highest and lowest mean scores, respectively, were found at: G1, 30 and 40 dB SL; G2, 50 and 10 dB SL; G3, 45 and 10 dB SL. There is no single speech presentation level to be used for all audiometric configurations; however, the sensation levels that yielded the highest mean scores were: G1, 20 to 30 dB SL; G2, 20 to 50 dB SL; G3, 45 dB SL. The MCL and UCL-5 dB for speech were not effective in determining the IR-Max. Conclusions: The presentation level influenced speech recognition performance for monosyllables in elderly individuals with sensorineural hearing loss across all audiometric configurations. Moderate hearing loss and the sloping audiometric configuration were the most frequent in this population, followed by the steeply sloping and flat configurations.
Abstract:
Dysphasia is a severe and persistent impairment of the acquisition and development of oral language. Students affected by it struggle to become skilled readers and are at high risk of academic failure. Although this very complex disorder is studied in several fields (health and education, among others), few studies have specifically examined whether dysphasic students possess derivational morphological knowledge. Yet, for a number of years, many researchers have argued that this knowledge, which concerns the form of words and their formation rules, can serve as an additional, helpful strategy for students with a phonological deficit, such as dysphasic students. This is the framework of the present study, whose general objective is to assess the derivational morphological knowledge of French-speaking dysphasic beginning readers in elementary school. To this end, three morphological tasks (a relation judgement task, a derivation task and a plausibility task) were administered to three groups of participants: a group of dysphasic students (D=30) and two control groups, namely students of the same chronological age (CA, n=30) and younger students at the same reading level (CL, n=30). Our results show that all three groups of participants took advantage of the morphological units contained in the items to succeed on the proposed tasks, with the dysphasic students scoring below the CA group but comparably to the CL group. However, these results do not entirely match the developmental continuum of derivational morphological knowledge established by Tyler and Nagy (1989). Moreover, no effect of affixation type (prefixed vs. suffixed items) was observed. The results allow us to propose avenues for remedial teaching interventions aimed at teaching derivational morphology to students with reading difficulties, such as the dysphasic participants in this study.