979 results for word recognition


Relevance:

60.00%

Publisher:

Abstract:

ERPs were elicited to (1) words, (2) pseudowords derived from these words, and (3) nonwords with no lexical neighbors, in a task involving listening to immediately repeated auditory stimuli. There was a significant early (P200) effect of phonotactic probability in the first auditory presentation, which discriminated words and pseudowords from nonwords; and a significant somewhat later (N400) effect of lexicality, which discriminated words from pseudowords and nonwords. There was no reliable effect of lexicality in the ERPs to the second auditory presentation. We conclude that early sublexical phonological processing differed according to phonotactic probability of the stimuli, and that lexically-based redintegration occurred for words but did not occur for pseudowords or nonwords. Thus, in online word recognition and immediate retrieval, phonological and/or sublexical processing plays a more important role than lexical level redintegration.


Purpose: Anthocyanin-rich blueberry treatments have previously shown positive effects on cognition in both animals and human adults. However, little research has considered whether these benefits transfer to children. Here we describe an acute time-course and dose–response investigation of whether these cognitive benefits extend to children. Methods: Using a double-blind cross-over design, on three occasions children (n = 21; 7–10 years) consumed a placebo (vehicle) or blueberry drinks containing 15 or 30 g freeze-dried wild blueberry (WBB) powder. A cognitive battery including tests of verbal memory, word recognition, response interference, response inhibition, and levels of processing was performed at baseline and at 1.15, 3, and 6 h following treatment. Results: Significant WBB-related improvements included final immediate recall at 1.15 h, delayed word recognition sustained over each period, and accuracy on cognitively demanding incongruent trials in the interference task at 3 h. Importantly, across all measures, cognitive performance improved consistently with a dose–response model, with the best performance following 30 g WBB and the worst following vehicle. Conclusion: The findings demonstrate WBB-related cognitive improvements in 7- to 10-year-old children. These effects appear to be particularly sensitive to the cognitive demand of the task.


This paper reports a single case of ipsilesional left neglect dyslexia and interprets it according to the three-level model of visual word recognition proposed by Caramazza and Hillis (1990). The three levels reflect a progression from the physical stimulus to an abstract representation of a word. RR was not impaired at the first, retinocentric, level, which represents the individual features of letters within a word according to the location of the word in the visual field: She made the same number of errors to words presented in her left visual field as in her right visual field. A deficit at this level should also mean the patient neglects all stimuli. This did not occur with RR: She did not neglect when naming the items in rows of objects and rows of geometric symbols. In addition, although she displayed significant neglect dyslexia when making visual matching judgements on pairs of words and nonwords, she did not do so to pairs of nonsense letter shapes, shapes which display the same level of visual complexity as letters in words. RR was not impaired at the third, graphemic, level, which represents the ordinal positions of letters within a word: She continued to neglect the leftmost (spatial) letter of words presented in mirror-reversed orientation and she did not neglect in oral spelling. By elimination, these results suggest RR's deficit affects a spatial reference frame where the representational space is bounded by the stimulus: A stimulus-centred level of representation. We define five characteristics of a stimulus-centred deficit, as manifest in RR. First, it is not the case that neglect dyslexia occurs because the remaining letters in a string attract or capture attention away from the leftmost letter(s). Second, the deficit is continuous across the letter string. Third, perceptually significant features, such as spaces, define potential words. Fourth, the whole, rather than part, of a letter is neglected. Fifth, category information is preserved. 
It is concluded that the Caramazza-Hillis model accounts well for RR's data, although we conclude that neglect dyslexia can be present when a more general visuospatial neglect is absent.


The right cerebral hemisphere has long been argued to lack phonological processing capacity. Recently, however, a sex difference in the cortical representation of phonology has been proposed, suggesting discrete left hemisphere lateralization in males and more distributed, bilateral representation of function in females. To evaluate this hypothesis and shed light on sex differences in the phonological processing capabilities of the left and right hemispheres, we conducted two experiments. Experiment 1 assessed phonological activation implicitly (masked homophone priming), testing 52 (M = 25, F = 27; mean age 19.23 years, SD 1.64 years) strongly right-handed participants. Experiment 2 subsequently assessed the explicit recruitment of phonology (rhyme judgement), testing 50 (M = 25, F = 25; mean age 19.67 years, SD 2.05 years) strongly right-handed participants. In both experiments the orthographic overlap between stimulus pairs was strictly controlled using DICE [Brew, C., & McKelvie, D. (1996). Word-pair extraction for lexicography. In K. Oflazer & H. Somers (Eds.), Proceedings of the second international conference on new methods in language processing (pp. 45–55). Ankara: VCH], such that pairs shared (a) high orthographic and phonological similarity (e.g., not–KNOT); (b) high orthographic and low phonological similarity (e.g., pint–HINT); (c) low orthographic and high phonological similarity (e.g., use–EWES); or (d) low orthographic and low phonological similarity (e.g., kind–DONE). As anticipated, high orthographic similarity facilitated both left and right hemisphere performance, whereas the left hemisphere showed greater facility when phonological similarity was high. This difference in hemispheric processing of phonological representations was especially pronounced in males, whereas female performance was far less sensitive to visual field of presentation across both implicit and explicit phonological tasks. 
As such, the findings offer behavioural evidence indicating that though both hemispheres are capable of orthographic analysis, phonological processing is discretely lateralised to the left hemisphere in males, but available in both the left and right hemisphere in females.
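Because DICE scores the orthographic overlap between word pairs, a minimal sketch may help. The following is an illustrative Dice-coefficient computation over letter bigrams, in the spirit of Brew and McKelvie (1996); it is not necessarily the exact procedure used in the study.

```python
# Illustrative Dice similarity over letter bigrams; the study's exact
# DICE implementation (Brew & McKelvie, 1996) may differ in detail.
def bigrams(word):
    """Adjacent letter pairs of a word, e.g. 'knot' -> ['kn', 'no', 'ot']."""
    w = word.lower()
    return [w[i:i + 2] for i in range(len(w) - 1)]

def dice(w1, w2):
    """2 * shared bigrams / total bigrams, counted with multiplicity."""
    b1, b2 = bigrams(w1), bigrams(w2)
    if not b1 and not b2:
        return 1.0
    pool = list(b2)
    shared = 0
    for g in b1:
        if g in pool:       # match each bigram at most once
            pool.remove(g)
            shared += 1
    return 2 * shared / (len(b1) + len(b2))
```

On the stimulus pairs above, dice("not", "KNOT") is high (0.8) while dice("use", "EWES") is 0.0, mirroring the high- and low-orthographic-similarity conditions.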


This paper investigates the interplay between proficiency and gender in the use of communication strategies. Sixty male and female Iranian university students of English took part in the experiment and performed two tasks: word recognition and picture-story narration. The results indicate that proficiency had the more perceptible effect on the frequency and types of communication strategies used. Task type also had a strong effect on the number and type of strategies chosen. Gender did not yield any significant results except among female participants at the low proficiency level, which was attributed to the subject of study and the formal educational system.


This paper introduces a basic framework for a rehabilitation motion practice system that detects 3D motion trajectories with the Microsoft Kinect (MSK) sensor system, and proposes a cost-effective 3D motion matching algorithm. The system displays a reference 3D motion from its database that the player (patient) tries to follow. The player's motion is traced by the MSK sensor system and then compared with the reference motion to evaluate how well the player follows it. In this system, the 3D motion matching algorithm is the key to an accurate evaluation of the player's performance. Although similarity measurement of 3D trajectories is one of the most important tasks in 3D motion analysis, existing methods are still limited. Recent research focuses on full-length 3D trajectory data sets; however, not every point on a trajectory plays the same role or carries the same meaning. We therefore developed a new cost-effective method that uses only a small number of features, called the 'signature', a flexible descriptor computed from the region of 'elbow points'. As a result, our proposed method runs faster than methods that use the full-length trajectory information. The similarity of trajectories is measured on the signature using an alignment method such as dynamic time warping (DTW), continuous dynamic time warping (CDTW), or the longest common sub-sequence (LCSS) method. In the experimental studies, we applied the MSK sensor system to detect, trace, and match the 3D motion of the human body. This application was taken as a system for guiding rehabilitation practice, evaluating how well the practice motion was performed by comparing the patient's motion, traced by the MSK system, with a pre-defined reference motion in a database. To evaluate the accuracy of our 3D motion matching algorithm, we compared our method with two other methods on an Australian sign word dataset. Our matching algorithm outperforms the others in matching 3D motion, and it can serve as a base framework for various 3D motion-based applications at low cost and with high accuracy.
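The DTW alignment mentioned above can be sketched as follows. This minimal example compares raw 3D point sequences; the paper applies alignment to its 'signature' descriptors, so the data here are purely illustrative.

```python
# Minimal dynamic time warping (DTW) between two 3D point sequences.
# Points are (x, y, z) tuples; the paper would feed signature features
# extracted around 'elbow points' instead of raw trajectory samples.
from math import dist

def dtw(a, b):
    """DTW distance: cumulative cost of the best monotonic alignment."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])          # Euclidean step cost
            D[i][j] = cost + min(D[i - 1][j],        # insertion
                                 D[i][j - 1],        # deletion
                                 D[i - 1][j - 1])    # match
    return D[n][m]

# Identical trajectories align at zero cost:
# dtw([(0, 0, 0), (1, 0, 0)], [(0, 0, 0), (1, 0, 0)]) -> 0.0
```

A smaller DTW distance means the patient's motion follows the reference more closely; CDTW or LCSS could be substituted for robustness to sampling rate or outliers.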


Automatic speech recognition by machine has been a target of researchers for the past five decades. Over this period there have been numerous advances, for example in the recognition of isolated words (commands), which currently achieves very high recognition rates. However, we are still far from developing a system whose performance approaches that of a human being (automatic continuous speech recognition). One of the great challenges in continuous speech recognition research is the large number of patterns: modern languages such as English, French, Spanish, and Portuguese have approximately 500,000 words, or patterns, to be identified. The purpose of this study is to use units smaller than the word, such as phonemes, syllables, and diphones, as the basis for speech recognition, aiming to recognize any word without necessarily using whole-word patterns. The main goal is to reduce the restriction imposed by the excessive number of patterns. To validate this proposal, the system was tested on isolated word recognition in the speaker-dependent case. The phonemic characteristics of Brazilian Portuguese were used to develop the hierarchical decision system, whose decisions are made using SVM (Support Vector Machine) classifiers. The main speech features were obtained from the Wavelet Packet Transform; MFCC (Mel-Frequency Cepstral Coefficient) descriptors are also used in this work. It was concluded that the proposed method showed good results in the recognition of vowels, consonants (syllables), and words when compared with other existing methods in the literature.
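As a rough illustration of the sub-word idea, recognized syllable sequences can be mapped back to words through a pronunciation lexicon. The tiny lexicon and greedy decoder below are invented for illustration only; they are not the paper's hierarchical SVM system.

```python
# Hypothetical sketch: assembling words from recognized sub-word units
# (syllables) via a lexicon lookup. Lexicon entries and syllable labels
# are invented Brazilian Portuguese examples, not the paper's data.
LEXICON = {
    ("ca", "sa"): "casa",
    ("fa", "la"): "fala",
    ("pa", "la", "vra"): "palavra",
}

def decode(syllables):
    """Greedy longest-match decoding of a syllable stream into words."""
    words, i = [], 0
    while i < len(syllables):
        for length in range(len(syllables) - i, 0, -1):
            key = tuple(syllables[i:i + length])
            if key in LEXICON:
                words.append(LEXICON[key])
                i += length
                break
        else:
            i += 1  # skip a syllable no lexicon entry starts with
    return words
```

Because the recognizer only needs models for a few dozen syllables rather than roughly 500,000 word patterns, the pattern-count restriction discussed above is relaxed.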


Down syndrome (DS) is one of the most frequent causes of intellectual disability, affecting one in every 600 to 1,000 live births. Studies have demonstrated that people with DS have a lower capacity for short-term memory (STM) and working memory (WM), which affects their ability to learn new words and to follow spoken instructions, especially those involving multiple pieces of information or consecutive orders/orientations. The basis of the learning process, as with language and mathematics comprehension and reasoning, seems to lie in the STM and WM systems. Individuals with DS are increasingly included in mainstream education, and yet very little research has been conducted to investigate the influence of memory development and the type of enrollment (regular school or special school). This study investigated the relationship between the type of school enrollment and performance on STM tests, as well as the relationship of this performance with early stimulation (ES). The tests used in the first study were digit span, free recall, word recognition, and subtests of the Wechsler Intelligence Scale for Children – Third Edition (WISC-III). Individuals enrolled in regular schools had higher scores on the digit span test and the WISC-III subtests; in the free recall and recognition tests, no differences were found. This indicates that the type of enrollment may influence the memory development of individuals with DS and clearly points to the need for further investigation. In the second study, the tests used were digit span, free word recall, and WISC-III subtests. The results showed better performance by adults who had received ES before six months of age. The studies showed improved STM both in people who attended or were attending regular school and in those who benefited from ES before six months of age. However, some issues still need to be better understood. What is the relationship between this stimulation and the individual's education? Since ES may reflect greater family involvement with the individual, what is the role of the emotional components derived from this involvement in the cognitive improvement? These and other questions are part of the continuation of this study.


The aim of this work was to verify the effect of teaching echoic behavior on picture naming in four children, aged eight to nine years, with prelingual hearing impairment who use cochlear implants. The design adopted was: (a) pre-training that taught the matching-to-sample task; (b) pre-tests that selected three words to teach; (c) teaching of auditory-visual conditional relations; (d) a naming post-test; (e) teaching of echoic behavior with orofacial cues; and (f) a second naming post-test. In the pre-test all participants achieved lower percentages of correct responses in naming (60%–80%) and echoic behavior (20%–50%) than in word recognition (86%–93%). All participants learned the auditory-visual relations. For two participants, the improvement in the naming test occurred after the selection-based auditory training; for the other two, it occurred only after the teaching of echoic behavior. Analysis of the data showed that listening and speaking performances are established independently and require specific teaching conditions; in this study, even though the result did not generalize to all participants, the highest point-to-point correspondence in naming was obtained following the teaching of echoic behavior.


A new implantable hearing system, the direct acoustic cochlear stimulator (DACS), is presented. This system is based on the principle of a power-driven stapes prosthesis and is intended for the treatment of severe mixed hearing loss due to advanced otosclerosis. It consists of an implantable electromagnetic transducer, which transfers acoustic energy directly to the inner ear, and an audio processor worn externally behind the implanted ear. The device is implanted using a specially developed retromeatal microsurgical approach. After removal of the stapes, a conventional stapes prosthesis is attached to the transducer and placed in the oval window to allow direct acoustic coupling to the perilymph of the inner ear. In order to restore the natural sound transmission of the ossicular chain, a second stapes prosthesis is placed in parallel to the first one into the oval window and attached to the patient's own incus, as in a conventional stapedectomy. Four patients were implanted with an investigational DACS device. The hearing threshold of the implanted ears before implantation ranged from 78 to 101 dB (air conduction, pure tone average, 0.5-4 kHz) with air-bone gaps of 33-44 dB in the same frequency range. Postoperatively, substantial improvements in sound-field thresholds and speech intelligibility, as well as in the subjective assessment of everyday situations, were found in all patients. Two years after implantation, monosyllabic word recognition scores in quiet at 75 dB improved by 45-100 percentage points when using the DACS. Furthermore, hearing thresholds were already improved by the second stapes prosthesis alone by 14-28 dB (pure tone average 0.5-4 kHz, DACS switched off). No device-related serious medical complications occurred, and all patients have continued to use their device on a daily basis for over 2 years. Copyright (c) 2008 S. Karger AG, Basel.


OBJECTIVE To confirm the clinical efficacy and safety of a direct acoustic cochlear implant. STUDY DESIGN Prospective multicenter study. SETTING The study was performed at 3 university hospitals in Europe (Germany, The Netherlands, and Switzerland). PATIENTS Fifteen patients with severe-to-profound mixed hearing loss because of otosclerosis or previous failed stapes surgery. INTERVENTION Implantation with a Codacs direct acoustic cochlear implant investigational device (ID) combined with a stapedotomy with a conventional stapes prosthesis. MAIN OUTCOME MEASURES Preoperative and postoperative (3 months after activation of the investigational direct acoustic cochlear implant) audiometric evaluation measuring conventional pure tone and speech audiometry, tympanometry, aided thresholds in sound field, and hearing difficulty by the Abbreviated Profile of Hearing Aid Benefit questionnaire. RESULTS The preoperative and postoperative air and bone conduction thresholds did not change significantly after implantation of the investigational direct acoustic cochlear implant. The mean sound field thresholds (0.25-8 kHz) improved significantly by 48 dB. The word recognition scores (WRS) at 50, 65, and 80 dB SPL improved significantly by 30.4%, 75%, and 78.2%, respectively, after implantation with the investigational direct acoustic cochlear implant compared with the preoperative unaided condition. The difficulty in hearing, measured by the Abbreviated Profile of Hearing Aid Benefit, decreased by 27% after implantation with the investigational direct acoustic cochlear implant. CONCLUSION Patients with moderate-to-severe mixed hearing loss because of otosclerosis can benefit substantially from using the Codacs investigational device.


Several issues concerning the current use of speech interfaces are discussed, and the design and development of a speech interface that enables air traffic controllers to command and control their terminals by voice is presented. Special emphasis is placed on the comparison between laboratory experiments and field experiments, in which a set of ergonomics-related effects is detected that cannot be observed in controlled laboratory experiments. The paper presents both objective and subjective performance obtained in the field evaluation of the system with student controllers at an air traffic control (ATC) training facility. The system exhibits high word recognition rates in tests (0.4% error in Spanish and 1.5% in English) and low command error rates (6% in Spanish and 10.6% in English in the field tests). The subjective impression has also been positive, encouraging future development and integration phases in the Spanish ATC terminals designed by Aeropuertos Españoles y Navegación Aérea (AENA).
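Word recognition error rates such as those above are conventionally computed from the edit distance between recognized and reference word sequences. The sketch below shows a standard word-error-rate calculation; it is an assumption about the scoring, not the paper's exact evaluation protocol.

```python
# Standard word error rate (WER) via word-level edit distance
# (substitutions, insertions, deletions), normalized by reference length.
# The ATC phrases below are invented examples, not the study's corpus.
def wer(ref, hyp):
    """Edit distance between word sequences, divided by reference length."""
    r, h = ref.split(), hyp.split()
    n, m = len(r), len(h)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i                      # all deletions
    for j in range(m + 1):
        D[0][j] = j                      # all insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = D[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            D[i][j] = min(sub, D[i - 1][j] + 1, D[i][j - 1] + 1)
    return D[n][m] / max(n, 1)

# A perfect transcript scores 0.0; one dropped word in five scores 0.2.
```

A 0.4% word error rate, for instance, corresponds to four misrecognized words per thousand reference words.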


Neuronal models predict that retrieval of specific event information reactivates brain regions that were active during encoding of this information. Consistent with this prediction, this positron-emission tomography study showed that remembering that visual words had been paired with sounds at encoding activated some of the auditory brain regions that were engaged during encoding. After word-sound encoding, activation of auditory brain regions was also observed during visual word recognition when there was no demand to retrieve auditory information. Collectively, these observations suggest that information about the auditory components of multisensory event information is stored in auditory responsive cortex and reactivated at retrieval, in keeping with classical ideas about “redintegration,” that is, the power of part of an encoded stimulus complex to evoke the whole experience.


Hearing loss in the elderly leads to difficulty in speech perception. The test commonly used in speech audiometry is the measurement of the maximum speech recognition score (IR-Max) at a single speech presentation level. However, a more adequate procedure would be to perform the test at several intensities, since the score depends on the speech level at the time of testing and is related to the degree and configuration of the hearing loss. Imprecision in obtaining the IR-Max may lead to an erroneous diagnostic hypothesis and to failure of the hearing-loss intervention process. Objective: To verify the influence of the speech presentation level on the speech recognition test in elderly people with sensorineural hearing loss and different audiometric configurations. Methods: Sixty-four elderly subjects participated, 120 ears (61 female and 59 male), aged between 60 and 88 years, divided into groups: G1, 23 ears with flat configuration; G2, 55 ears with sloping configuration; G3, 42 ears with steeply sloping configuration. Inclusion criteria were: mild to severe sensorineural hearing loss, non-use of a hearing aid (or use for less than two months), and absence of cognitive alterations. The following procedures were carried out: measurement of the speech recognition threshold (SRT), of the speech recognition score (SRS) at several intensities, and of the most comfortable level (MCL) and uncomfortable level (UCL) for speech. Lists of 11 monosyllables were used to shorten the test. Statistical analysis comprised analysis of variance (ANOVA) and the Tukey test. Results: The sloping configuration was the most frequent. Individuals with a flat configuration showed the highest mean speech recognition score. Considering the total evaluated, 27.27% of individuals with flat configuration showed the IR-Max at the MCL, as did 38.18% with sloping and 26.19% with steeply sloping configuration. The IR-Max was found at the UCL in 40.90% of individuals with flat configuration, 45.45% with sloping, and 28.20% with steeply sloping configuration. Respectively, the highest and lowest mean scores were found at: G1, 30 and 40 dB SL; G2, 50 and 10 dB SL; G3, 45 and 10 dB SL. There is no single speech presentation level to be used for all audiometric configurations; however, the sensation levels that yielded the highest mean scores were: G1, 20 to 30 dB SL; G2, 20 to 50 dB SL; G3, 45 dB SL. The MCL and UCL minus 5 dB for speech were not effective in determining the IR-Max. Conclusions: The presentation level influenced monosyllable speech recognition performance in elderly people with sensorineural hearing loss for all audiometric configurations. Moderate hearing loss and the sloping audiometric configuration were the most frequent in this population, followed by the steeply sloping and flat configurations.