158 results for Vowels
Abstract:
Fifth-grade children were given a series of word reading tasks. First, two sets of 16 disyllabic words with medial VCV spellings and a long initial vowel were selected, varying in frequency but with similar word-initial segments. Nonwords were derived from these sets of words by exchanging initial onsets. Children read these nonwords in a first testing session. In a second session, children were given the Woodcock Word Identification Test and the set of analogue words from which the nonwords were derived. Initial analyses examined only nonwords derived from words that were correctly read. Both sets of nonwords were more likely to be read with a long initial vowel than with a short initial vowel, although this tendency was stronger in nonwords derived from high-frequency words. Furthermore, Word Identification ability showed a strong relationship with the preference for long initial vowels in this type of disyllabic nonword, both for nonwords derived from known analogues and for nonwords derived from words that children could not read correctly. This preference was also correlated with the preference for context-sensitive grapheme-phoneme correspondences in the reading of ambiguous monosyllabic nonwords. These results have strong implications for current theories of word reading.
Abstract:
Objective: Stimulability is the ability to produce an adequate sound under specific conditions. This study aimed to describe the stimulability of Brazilian Portuguese-speaking children with and without phonological disorders for the production of liquid sounds with the aid of visual and tactile cues. Patients and Methods: The study sample included 36 children between 5;0 and 11;6 years of age, 18 with phonological disorder and 18 without any speech-language disorders. Stimulability was measured for syllable imitation. The stimulability test employed includes 63 syllables with the sounds [l], [(sic)], and [(sic)], as well as seven oral vowels. If the subject was unable to imitate a sound, a visual cue was given. When necessary, a tactile cue was also given. Results: The sound [(sic)] required greater use of sensory cues. Children with phonological disorder needed a greater number of cues. Conclusion: The use of sensory cues seemed to facilitate sound stimulability, making it possible for the children with phonological disorder to accurately produce the sounds modeled. Copyright (C) 2009 S. Karger AG, Basel
Abstract:
Objective: To assess, in patients undergoing glossectomy, the influence of the palatal augmentation prosthesis on speech intelligibility and on the acoustic spectrographic characteristics of the formants of oral vowels in Brazilian Portuguese, specifically the first 3 formants (F1 [/a,e,u/], F2 [/o,o,u/], and F3 [/a,o/]). Design: Speech evaluation with and without a palatal augmentation prosthesis using blinded randomized listener judgments. Setting: Tertiary referral center. Patients: Thirty-six patients (33 men and 3 women) aged 30 to 80 (mean [SD], 53.9 [10.5]) years underwent glossectomy (14, total glossectomy; 12, total glossectomy and partial mandibulectomy; 6, hemiglossectomy; and 4, subtotal glossectomy) with use of the augmentation prosthesis for at least 3 months before inclusion in the study. Main Outcome Measures: Spontaneous speech intelligibility (assessed by expert listeners using a 4-category scale) and spectrographic formant assessment. Results: We found a statistically significant improvement of spontaneous speech intelligibility and of the average number of correctly identified syllables with the use of the prosthesis (P < .05). Statistically significant differences occurred for the F1 values of the vowels /a,e,u/; for F2 values, there was a significant difference for the vowels /o,o,u/; and for F3 values, there was a significant difference for the vowels /a,o/ (P < .001). Conclusions: The palatal augmentation prosthesis improved the intelligibility of spontaneous speech and syllables for patients who underwent glossectomy. It also increased the F2 and F3 values for all vowels and the F1 values for the vowels /o,o,u/. This effect brought the values of many vowel formants closer to normal.
Abstract:
Profound hearing loss is a disability that affects personality, and when it occurs in teenagers before language acquisition, these bio-psychosocial conflicts can be exacerbated, requiring careful evaluation when selecting candidates for cochlear implantation. Aim: To evaluate speech perception in adolescents with profound hearing loss who use cochlear implants. Study Design: Prospective. Materials and Methods: Twenty-five individuals with severe or profound pre-lingual hearing loss who underwent cochlear implantation during adolescence (between 10 years and 17 years and 11 months of age) completed speech perception tests before implantation and 2 years after device activation. For comparison and analysis we used the results of the four-choice test, the vowel recognition test, and sentence recognition tests in closed and open settings. Results: In the four-choice test, the average percentage of correct answers rose from 46.9% before implantation to 86.1% after 24 months of device use; in the vowel recognition test, the average rose from 45.13% to 83.13%; and in the sentence recognition tests, the averages rose from 19.3% to 60.6% in the closed setting and from 1.08% to 20.47% in the open setting. Conclusion: All patients, although with mixed results, achieved statistically significant improvement in all speech tests employed.
Abstract:
The objective of the study was to comparatively analyze the jitter and shimmer values of the spoken voice among women in menacme and menopausal women using or not using hormonal replacement therapy (HRT). Forty-five women were studied, divided into the following groups: Control Group (CG), 15 women aged 20-40 years with regular menstrual cycles who did not take hormonal contraceptives; Treated Group (TG), 15 women aged 45-60 years with at least 2 years of menopause, under continuous HRT with 1 mg estradiol valerate + 90 µg norgestimate per day for at least 6 months; Untreated Group (UG), 15 women aged 45-60 years with at least 2 years of menopause who did not use HRT. Mean age was 30.3, 54.5, and 56.5 years for CG, TG, and UG, respectively. All subjects underwent acoustic analysis of jitter and shimmer for the sustained vowels /e/ and /i/. Mean jitter values were 0.56%, 0.64%, and 0.56% for the vowel /e/ and 0.88%, 0.79%, and 0.68% for the vowel /i/ for CG, TG, and UG, respectively. Mean shimmer values were 4.17%, 4.38%, and 4.77% for the vowel /e/ and 5.19%, 4.59%, and 5.37% for the vowel /i/ for CG, TG, and UG, respectively. There were no significant differences between the groups studied. The results obtained with this methodology suggest that there were no significant differences in jitter and shimmer for the sustained vowels /i/ and /e/ between menopausal women using or not using HRT, or between young women and menopausal women, treated or not.
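For reference, local jitter and shimmer are conventionally defined as the mean absolute difference between consecutive glottal periods (or peak amplitudes) divided by their mean, expressed as a percentage. A minimal sketch of that standard computation (not the analysis software actually used in the study) could look like:

```python
import numpy as np

def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local(amplitudes):
    """Local shimmer (%): mean absolute difference between consecutive
    cycle peak amplitudes, divided by the mean amplitude."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
```

In practice, the period and amplitude sequences would first be extracted from the sustained vowel recording by a pitch-tracking step, which is omitted here.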
Abstract:
The present research investigated attentional blink startle modulation at lead intervals of 60, 240 and 3500 ms. Letters printed in Gothic or standard fonts, which differed in rated interest, but not valence, served as lead stimuli. Experiment 1 established that identifying letters as vowels/consonants took longer than reading the letters and that performance in both tasks was slower if letters were printed in Gothic font. In Experiment 2, acoustic blink-eliciting stimuli were presented 60, 240 and 3500 ms after onset of the letters in Gothic and in standard font and during intertrial intervals. Half the participants (Group Task) were asked to identify the letters as vowels/consonants whereas the others (Group No-Task) did not perform a task. Relative to control responses, blinks during letters were facilitated at the 60 and 3500 ms lead intervals and inhibited at the 240 ms lead interval for both conditions in Group Task. Differences in blink modulation across lead intervals were found in Group No-Task only during Gothic letters, with blinks at the 3500 ms lead interval facilitated relative to control blinks. The present results confirm previous findings indicating that attentional processes can modulate startle at very short lead intervals. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Given the importance of syllables in the development of reading, spelling, and phonological awareness, information is needed about how children syllabify spoken words. To what extent is syllabification affected by knowledge of spelling, to what extent by phonology, and which phonological factors are influential? In Experiment 1, six- and seven-year-old children did not show effects of spelling on oral syllabification, performing similarly on words such as habit and rabbit. Spelling influenced the syllabification of older children and adults, with the results suggesting that knowledge of spelling must be well entrenched before it begins to affect oral syllabification. Experiment 2 revealed influences of phonological factors on syllabification that were similar across age groups. Young children, like older children and adults, showed differences between words with short and long vowels (e.g., lemon vs. demon) and words with sonorant and obstruent intervocalic consonants (e.g., melon vs. wagon). (C) 2002 Elsevier Science (USA). All rights reserved.
Abstract:
Primary objective: To investigate the speed and accuracy of tongue movements during speech exhibited by a sample of children with dysarthria following severe traumatic brain injury (TBI), using electromagnetic articulography (EMA). Methods and procedures: Four children, aged between 12.75 and 17.17 years, with dysarthria following TBI, were assessed using the AG-100 electromagnetic articulography system (Carstens Medizinelektronik). The movement trajectories of receiver coils affixed to each child's tongue were examined during consonant productions, together with a range of quantitative kinematic parameters. Each child's results were individually compared against the mean values obtained by a group of eight control children (mean age 14.67 years, SD 1.60). Main outcomes and results: All four TBI children were perceived to exhibit reduced rates of speech and increased word durations. Objective EMA analysis revealed that two of the TBI children exhibited significantly longer consonant durations compared with the control group, resulting from different underlying mechanisms relating to speed generation capabilities and distances travelled. The other two TBI children did not exhibit increased initial consonant movement durations, suggesting that the vowels and/or final consonants may have contributed to the increased word durations. Conclusions and clinical implications: The finding of different underlying articulatory kinematic profiles has important implications for the treatment of speech rate disturbances in children with dysarthria following TBI.
Abstract:
Introduction – The analysis of the shape, or morphometry, of anatomical structures such as the vocal tract can be performed from two-dimensional (2D) images as well as from volumetric (3D) magnetic resonance (MR) acquisitions. This imaging technique has seen growing use in the study of speech production. Objectives – To demonstrate how vocal tract morphometry can be performed from magnetic resonance imaging, and to present normal anatomical patterns during the production of the vowels [i a u] as well as two pathological articulatory patterns in a simulated context. Methods – The images considered were collected from 2D (Turbo Spin-Echo) and 3D (FLASH Gradient-Echo) MR acquisitions in four subjects during production of the vowels under study; additionally, two articulatory disorders were evaluated using the same MR protocol. Vocal tract morphometry was extracted using manual techniques (for extraction of five articulatory measures) and automatic techniques (for volume determination) of image processing and analysis. Results – It was possible to analyze the entire vocal tract, including the position and shape of the articulators, based on five measures describing the positioning of these organs during vowel production. These measurements made it possible to identify the strategies most commonly adopted in the production of each sound, namely the articulatory posture and the variation of each measure for each of the subjects under study. In the inter-subject spoken voice context, the variability in the estimated vocal tract volumes for each sound was notable, in particular the increase in vocal tract volume in the articulatory disorder of sigmatism. Conclusion – MR imaging is undoubtedly a promising technique for the study of speech: harmless, non-invasive, and providing reliable information on vocal tract morphometry.
Abstract:
The tongue is the most important and dynamic articulator in speech formation, because of its anatomical aspects (particularly the large volume of this muscular organ compared with the surrounding organs of the vocal tract) and also because of the wide range of movements and the flexibility involved. In speech communication research, a variety of techniques have been used for measuring three-dimensional vocal tract shapes. More recently, magnetic resonance imaging (MRI) has become common, mainly because this technique allows the collection of a set of static and dynamic images that can represent the entire vocal tract along any orientation. Over the years, different anatomical organs of the vocal tract have been modelled, namely 2D and 3D tongue models, using parametric or statistical modelling procedures. Our aim is to present and describe some 3D models reconstructed from MRI data for one subject uttering sustained articulations of some typical Portuguese sounds. Thus, we present a 3D database of the tongue obtained by stack combinations with the subject articulating Portuguese vowels. This 3D knowledge of the speech organs could be very important, especially for clinical purposes (for example, the assessment of articulatory impairments following tongue surgery in speech rehabilitation) and also for a better understanding of the acoustic theory of speech formation.
Abstract:
The mechanisms of speech production are complex and have been attracting attention from researchers in both the medical and computer vision fields. Within the speech production mechanism, the study of the articulators is a complex issue, since they have a high degree of freedom during this process, namely the tongue, which makes its control and observation problematic. In this work, the tongue's shape during the articulation of the oral vowels of European Portuguese is automatically characterized using statistical modeling on MR images. A point distribution model is built from a set of images collected during artificially sustained articulations of European Portuguese sounds, which can extract the main characteristics of the motion of the tongue. The model built in this work allows a clearer understanding of the dynamic speech events involved in sustained articulations. The tongue shape model can also be useful for speech rehabilitation purposes, specifically to recognize compensatory movements of the articulators during speech production.
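A point distribution model of the kind described is typically built by averaging aligned landmark configurations and extracting the principal modes of variation with PCA. A minimal sketch under those standard assumptions (landmark alignment, e.g. by Procrustes analysis, is assumed already done; this is not the authors' implementation):

```python
import numpy as np

def build_pdm(shapes, n_modes=2):
    """Build a simple point distribution model from aligned contours:
    mean shape plus principal modes of variation.
    `shapes` is an (n_samples, n_points*2) array of landmark coordinates."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Eigen-decomposition of the covariance gives the modes of variation
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    modes = eigvecs[:, order[:n_modes]]
    return mean_shape, modes

def synthesize(mean_shape, modes, b):
    """Generate a plausible shape from mode weights b: x = mean + P @ b."""
    return mean_shape + modes @ b
```

Varying the weight vector `b` within a few standard deviations of each mode then sweeps out the family of plausible tongue contours captured by the training images.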
Abstract:
As a natural form of human-machine interaction, gesture recognition entails a strong research component in areas such as computer vision and machine learning. Gesture recognition is an area with very diverse applications, offering users a more natural and simpler way to communicate with computer-based systems without the need for extra devices. Thus, the main goal of research on gesture recognition applied to human-machine interaction is to create systems that can identify specific gestures and use them to convey information or to control devices. To that end, vision-based interfaces for gesture recognition need to detect the hand quickly and robustly and be capable of recognizing gestures in real time. Nowadays, vision-based gesture recognition systems rely on specific solutions, built to solve a particular problem and configured to work in a particular way. This research project studied and implemented sufficiently generic solutions, using machine learning algorithms, allowing their application in a wide range of human-machine interface systems for real-time gesture recognition. The proposed solution, the Gesture Learning Module Architecture (GeLMA), makes it simple to define a set of commands that can be based on static and dynamic gestures and that can easily be integrated and configured for use in a range of applications. It is a low-cost system, easy to train and use, built entirely from code libraries.
The experiments carried out showed that the system achieved an accuracy of 99.2% for static gesture recognition and an average accuracy of 93.7% for dynamic gesture recognition. To validate the proposed solution, two complete systems were implemented. The first is a real-time system capable of helping a referee officiate a robotic soccer game. The proposed solution combines a vision-based gesture recognition system with the definition of a formal language, CommLang Referee, which we named the Referee Command Language Interface System (ReCLIS). The system identifies commands based on a set of static and dynamic gestures performed by the referee, which are then sent to a computer interface that transmits the corresponding information to the robots. The second is a real-time system capable of interpreting a subset of Portuguese Sign Language. The experiments showed that the system was able to reliably recognize the vowels in real time. Although the implemented solution was only trained to recognize the five vowels, the system is easily extensible to recognize the rest of the alphabet. The experiments also showed that the core of vision-based interaction systems can be the same for all applications, thus facilitating their implementation. The proposed solution also has the advantage of being sufficiently generic and a solid foundation for the development of gesture-recognition-based systems that can easily be integrated with any human-machine interface application. The formal interface definition language can be redefined, and the system can easily be configured and trained with a different set of gestures so as to be integrated into the final solution.
Abstract:
Our aim was to analyse the impact of the characteristics of words used in spelling programmes, and of the nature of the instructional guidelines, on the evolution from grapho-perceptive writing to phonetic writing in preschool children. The participants were 50 5-year-old children, divided into five groups equivalent in intelligence, phonological skills and spelling. All the children knew the vowels and the consonants B, D, P, R, T, V, F, M and C, but did not use them in spelling. Their spelling was evaluated in a pre- and post-test with 36 words beginning with the known consonants. Between the two tests, they underwent a writing programme designed to lead them to use the letters P and T to represent the initial phonemes of words. The groups differed in the kind of words used in training (words whose initial syllable matches the name of the initial letter, for Exp. G1 and Exp. G2, versus words whose initial syllable is similar to the sound of the initial letter, for Exp. G3 and Exp. G4). They also differed in the instructions used to lead them to think about the relations between the initial phoneme of words and the initial consonant (instructions designed to make the children think about letter names, for Exp. G1 and Exp. G3, versus instructions designed to make the children think about letter sounds, for Exp. G2 and Exp. G4). The fifth group was a control group. All the children evolved to syllabic phonetisation spellings. There were no differences between groups in the total number of phonetisations, but we found some differences between groups in the quality of the phonetisations.
Abstract:
"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area many researchers are working on, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. So, the primary goal of gesture recognition research is to create systems which can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection, and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, being a solid foundation for the development of any vision-based sign language recognition user interface system.
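The paper's actual classifiers and hand-feature datasets are not specified in the abstract; as a purely illustrative sketch, classifying a static gesture from a precomputed feature vector can be done with something as simple as a k-nearest-neighbour vote (the feature values and labels below are hypothetical, not the authors' data):

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training examples (Euclidean distance) -- a minimal stand-in for the
    statistical classifiers used in vision-based gesture recognition."""
    dists = np.linalg.norm(train_X - x, axis=1)   # distance to each example
    nearest = np.argsort(dists)[:k]               # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority label

# Hypothetical 2-D hand features labelled with two vowel signs
train_X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
train_y = np.array(["A", "A", "E", "E"])
```

In a real pipeline, the feature vectors would come from a hand detection and feature extraction stage; the classification step itself stays this simple regardless of the feature set.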