Abstract:
Williams syndrome (WS) is a neurodevelopmental genetic disorder, often referred to as characterized by a dissociation between verbal and non-verbal abilities, although a growing number of studies dispute this proposal. Indeed, although individuals with WS have traditionally been reported as displaying increased speech fluency, this topic has not been fully addressed in research. In previous studies carried out with a small group of individuals with WS, we reported speech breakdowns during conversational and autobiographical narratives suggestive of language difficulties. In the current study, we characterized the speech fluency profile using an ecologically based measure: a narrative task (story generation) was collected from a group of individuals with WS (n = 30) and a typically developing group (n = 39) matched in mental age. Oral narratives were elicited using a picture stimulus, the Cookie Theft picture from the Boston Diagnostic Aphasia Examination. All narratives were analyzed according to the typology and frequency of fluency breakdowns (non-stuttered and stuttered disfluencies). Oral narratives in the WS group differed from those of the typically developing group, mainly due to a significant increase in the frequency of disfluencies, particularly hesitations, repetitions and pauses. This is the first evidence of disfluencies in WS obtained with an ecologically based task (an oral narrative task), suggesting that these speech disfluencies may represent a significant marker of language problems in WS. (C) 2011 Elsevier Ltd. All rights reserved.
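A typology-and-frequency analysis of fluency breakdowns can be illustrated with a minimal sketch. The annotation convention (pauses marked "(.)", filled hesitations as plain tokens, repetitions as immediate word repeats) and the detection rules are assumptions for illustration, not the coding scheme used in the study.

```python
from collections import Counter

# Hypothetical annotation: pauses are marked "(.)", filled hesitations
# appear as plain tokens, and repetitions are immediate word repeats.
HESITATIONS = {"uh", "um", "er", "hm"}

def count_disfluencies(transcript):
    """Count pauses, hesitations and repetitions in an annotated transcript."""
    counts = Counter()
    prev = None
    for tok in transcript.lower().split():
        if tok == "(.)":
            counts["pause"] += 1
            continue  # a pause does not reset the repetition context
        if tok in HESITATIONS:
            counts["hesitation"] += 1
        elif tok == prev:
            counts["repetition"] += 1
        prev = tok
    return counts

sample = "the boy um is (.) is taking uh the the cookie"
print(count_disfluencies(sample))
```

Per-category frequencies like these, normalized by narrative length, are the kind of measure a group comparison would be run on.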
Abstract:
Graduate Program in History - FCHS
Abstract:
The aim of this study was to describe the trajectory and outcomes of speech-language therapy in Prader-Willi syndrome through a longitudinal case study of an 8-year-old boy, followed over four years of speech-language therapy. The therapy sessions were filmed, and documental analysis was carried out on information from the child's records regarding anamnesis, evaluation and speech-language therapy reports, and multidisciplinary evaluations. The child presented typical characteristics of Prader-Willi syndrome, such as obesity, hyperphagia, anxiety, behavioral problems and episodes of self-aggression. Speech-language pathology evaluation showed orofacial hypotonia, sialorrhea, hypernasal voice, cognitive deficits, oral comprehension difficulties, and communication through gestures and unintelligible isolated words. Initially, speech-language therapy aimed to promote language development, emphasizing social interaction through recreational activities. As the case evolved, the main focus became the development of conversation and narrative abilities. Improvements were observed in attention, symbolic play, social contact and behavior. Moreover, there was an increase in vocabulary, evolution in oral comprehension, and development of narrative abilities. Hence, speech-language pathology intervention in the case described was effective at different linguistic levels, regarding phonological, syntactic, lexical and pragmatic abilities.
Abstract:
Studies of cortical auditory evoked potentials using speech stimuli in normal-hearing individuals are important for understanding how stimulus complexity influences the characteristics of the cortical potential generated. OBJECTIVE: To characterize the cortical auditory evoked potential and the P3 auditory cognitive potential with vocalic and consonantal contrast stimuli in normal-hearing individuals. METHOD: 31 individuals with no risk of hearing, neurologic or language alterations, aged between 7 and 30 years, participated in this study. The cortical auditory evoked potentials and the P3 auditory cognitive potential were recorded at the Fz and Cz active channels using consonantal (/ba/-/da/) and vocalic (/i/-/a/) speech contrasts. DESIGN: A cross-sectional prospective cohort study. RESULTS: We found a statistically significant difference between the speech contrast used and the latencies of the N2 (p = 0.00) and P3 (p = 0.00) components, as well as between the active channel considered (Fz/Cz) and the P3 latency and amplitude values. These correlations did not occur for the exogenous components N1 and P2. CONCLUSION: The speech stimulus contrast, vocalic or consonantal, must be taken into account in the analysis of the cortical auditory evoked potential, N2 component, and the auditory cognitive P3 potential.
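Component latencies such as N2 and P3 are conventionally read off as the largest deflection of the expected polarity inside a predefined time window of the averaged waveform. A minimal sketch; the sampling rate, windows, and toy waveform below are invented for illustration, not the study's recording parameters.

```python
def peak_latency(samples, fs_hz, window_ms, polarity=1):
    """Latency (ms) of the largest deflection of the given polarity in a window."""
    lo, hi = (int(ms * fs_hz / 1000) for ms in window_ms)
    idx = max(range(lo, hi), key=lambda i: polarity * samples[i])
    return idx * 1000 / fs_hz

# Toy averaged waveform sampled at 1000 Hz: a negative dip ("N2")
# at 200 ms and a positive peak ("P3") at 300 ms.
waveform = [0.0] * 600
waveform[200] = -4.0
waveform[300] = 5.0
print(peak_latency(waveform, 1000, (250, 500)))               # P3 search window
print(peak_latency(waveform, 1000, (150, 250), polarity=-1))  # N2 search window
```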
Abstract:
This study aimed to assess speech perception and communication skills in adolescents aged between 8 and 18 years who received cochlear implants for pre- and peri-lingual deafness.
Abstract:
Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects were found to spend the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor was performance correlated with a behavioral measure of autistic-like traits. However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered the eyes or mouth. Both videos elicited learning of word boundaries that was equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those in Experiment 1, suggest that neither the eyes nor mouth are critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.
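Segmentation studies in this paradigm typically expose listeners to an artificial stream in which the main statistical cue to word boundaries is the transitional probability between adjacent syllables. As an illustration only (the syllable inventory and threshold below are invented, not taken from the study), boundaries can be posited wherever transitional probability dips:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for every adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Posit a word boundary wherever the transitional probability dips."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy stream built from two "words", pa-bi-ku and ti-bu-do.
stream = "pa bi ku ti bu do pa bi ku pa bi ku ti bu do ti bu do".split()
print(segment(stream))
```

Within-word transitions occur with probability 1.0 in this toy stream, while cross-boundary transitions are lower, so thresholding recovers the two words.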
Abstract:
Nadeina set out to develop methods of speech development in Russian as a mother tongue, focusing on improving diction, training in voice quality control, intonation control, the removal of dialect, and speech etiquette. She began with training in the receptive skills of language, i.e. reading and listening, since the interpretation of someone else's language plays an important role in language production. Her studies of students' reading speed showed that it varies between 40 and 120 words per minute, which is normally considered very slow. She discovered a strong correlation between speed of reading and speaking skills: the slower a person reads, the worse their ability to speak, and she designed exercises to improve reading skills. Nadeina also believes that listening to other people's speech is very important, both to analyse its content and in some cases as an example, so listening skills need to be developed. Many people have poor pronunciation habits acquired as children. On the basis of speech samples from young Russians (male and female, aged 17-22), Nadeina analysed the commonest speech faults: nasalisation, hesitation and hemming at the end of sense-groups, etc. Using a group of twenty listeners, she looked for a correlation between how voice quality is perceived and certain voice quality parameters, e.g. pitch range, tremulousness, fluency, whispering, harshness, sonority, tension and audible breath. She found that the fewer non-linguistic segmental variations appeared in the speech, the more attractive it was rated. The results are included in a textbook aimed at helping people to improve their oral skills and to communicate ideas to an audience. She believes this will assist Russian officials in their attempts to communicate their ideas to different social spheres, and also foreigners learning Russian.
Abstract:
Whereas semantic, logical, and narrative features of verbal humor are well-researched, phonological and prosodic dimensions of verbal funniness are hardly explored. In a 2 × 2 design we varied rhyme and meter in humorous couplets. Rhyme and meter enhanced funniness ratings and supported faster processing. Rhyming couplets also elicited more intense and more positive affective responses, increased subjective comprehensibility and more accurate memory. The humor effect is attributed to special rhyme and meter features distinctive of humoristic poetry in several languages. Verses that employ these formal features make an artful use of typical poetic vices of amateurish poems written for birthday parties or other occasions. Their metrical patterning sounds “mechanical” rather than genuinely “poetic”; they also disregard rules for “good” rhymes. The processing of such verses is discussed in terms of a metacognitive integration of their poetically deviant features into an overall effect of processing ease. The study highlights the importance of nonsemantic rhetorical features in language processing.
Abstract:
Amyotrophic Lateral Sclerosis (ALS) is a severe disease which dramatically reduces the speech communication skills of patients as the disease progresses. The present study is devoted to defining accurate and objective estimates that characterize the loss of communication skills, to help clinicians and therapists monitor disease progression and decide on rehabilitation interventions. The proposed methodology is based on a perceptual (neuromorphic) definition of speech dynamics, concentrating on the character and duration of vowel sounds. We present the results of a longitudinal study carried out on an ALS patient over one year. The discussion addresses future actions.
Abstract:
This final-degree project is a study of concentrated solar power (CSP) in all its aspects. Its technologies have been analyzed, as well as the innovations that may occur in the coming years. A study of the current costs of this type of power generation has also been carried out, together with an analysis of the cost reductions these technologies may experience. To allow a later comparison with photovoltaic solar power, a chapter has been dedicated exclusively to that technology to establish its current state. In addition, a SWOT analysis has been carried out on the markets that at first sight appear most favorable and hold the greatest potential for the development of CSP. Finally, to present the comparison between CSP and photovoltaic solar power, an economic viability analysis of two plants using these technologies has been developed, to determine under which scenarios each of them is more profitable. Conclusions drawn from the development of this work are included at the end.
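Economic-viability comparisons between power plants commonly reduce to a levelized cost of electricity (LCOE) calculation: discounted lifetime costs divided by discounted lifetime output. A minimal sketch; all plant figures below are illustrative placeholders, not data from this project.

```python
def lcoe(capex, opex_per_year, annual_mwh, years, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs over output."""
    costs = capex + sum(opex_per_year / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    output = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / output  # currency units per MWh

# Hypothetical plants: a CSP plant with storage vs. a photovoltaic plant.
csp = lcoe(capex=600e6, opex_per_year=15e6, annual_mwh=400_000,
           years=25, discount_rate=0.07)
pv = lcoe(capex=250e6, opex_per_year=5e6, annual_mwh=350_000,
          years=25, discount_rate=0.07)
print(f"CSP: {csp:.0f}/MWh  PV: {pv:.0f}/MWh")
```

Scenario analysis then amounts to re-running the comparison while varying capacity factors, discount rates, or capital costs.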
Abstract:
The term "speech synthesis" has been used for diverse technical approaches. In this paper, some of the approaches used to generate synthetic speech in a text-to-speech system are reviewed, and some of the basic motivations for choosing one method over another are discussed. It is important to keep in mind, however, that speech synthesis models are needed not just for speech generation but to help us understand how speech is created, or even how articulation can explain language structure. General issues such as the synthesis of different voices, accents, and multiple languages are discussed as special challenges facing the speech synthesis community.
Abstract:
The conversion of text to speech is seen as an analysis of the input text to obtain a common underlying linguistic description, followed by synthesis of the output speech waveform from this fundamental specification. Hence, the comprehensive linguistic structure serving as the substrate for an utterance must be discovered by analysis of the text. The pronunciation of individual words in unrestricted text is determined by morphological analysis or letter-to-sound conversion, followed by specification of the word-level stress contour. In addition, many text character strings, such as titles, numbers, and acronyms, are abbreviations for normal words, whose full forms must be derived. To further refine these pronunciations and to discover the prosodic structure of the utterance, each word's part of speech must be computed, followed by phrase-level parsing. From this structure the prosodic structure of the utterance can be determined, which is needed in order to specify the durational framework and fundamental frequency contour of the utterance. In discourse contexts, several factors, such as the marking of new and old information, contrast, and pronominal reference, can be used to further modify the prosodic specification. Once the prosodic correlates have been computed and the segmental sequence is assembled, a complete input suitable for speech synthesis has been determined. Lastly, multilingual systems utilizing rule frameworks are mentioned, and future directions are characterized.
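The front-end stages described above (abbreviation expansion, lexicon lookup, letter-to-sound fallback) can be sketched as follows. The abbreviation table, lexicon entries, and grapheme-to-phoneme rules are toy assumptions, not those of any production system.

```python
# Toy text-normalization and pronunciation front end for a TTS pipeline.
ABBREVIATIONS = {"dr.": "doctor", "st.": "street", "no.": "number"}
LEXICON = {"doctor": "D AA1 K T ER0", "smith": "S M IH1 TH"}

def normalize(text):
    """Expand abbreviations into ordinary words."""
    return [ABBREVIATIONS.get(w.lower(), w.lower()) for w in text.split()]

def letter_to_sound(word):
    """Fallback grapheme-to-phoneme rules: a crude one-letter-per-phone map."""
    g2p = {"a": "AE", "e": "EH", "i": "IH", "o": "AO", "u": "AH"}
    return " ".join(g2p.get(ch, ch.upper()) for ch in word)

def pronounce(text):
    """Lexicon lookup with letter-to-sound fallback, as in the pipeline above."""
    return [LEXICON.get(w, letter_to_sound(w)) for w in normalize(text)]

print(pronounce("Dr. Smith"))   # both words found in the lexicon
print(pronounce("Elm St."))     # "elm" falls back to letter-to-sound
```

A real system would follow this with part-of-speech tagging, phrase-level parsing, and prosody assignment, as the text describes.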
Abstract:
The integration of speech recognition with natural language understanding raises issues of how to adapt natural language processing to the characteristics of spoken language; how to cope with errorful recognition output, including the use of natural language information to reduce recognition errors; and how to use information from the speech signal, beyond just the sequence of words, as an aid to understanding. This paper reviews current research addressing these questions in the Spoken Language Program sponsored by the Advanced Research Projects Agency (ARPA). I begin by reviewing some of the ways that spontaneous spoken language differs from standard written language and discuss methods of coping with the difficulties of spontaneous speech. I then look at how systems cope with errors in speech recognition and at attempts to use natural language information to reduce recognition errors. Finally, I discuss how prosodic information in the speech signal might be used to improve understanding.
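One common realization of using natural language information to reduce recognition errors is rescoring the recognizer's n-best list with a language model. A minimal sketch; the bigram table, scores, and weights below are invented for illustration.

```python
import math

def rescore(nbest, lm_score, acoustic_weight=1.0, lm_weight=1.0):
    """Re-rank recognizer hypotheses by combined acoustic + LM log score."""
    scored = [(acoustic_weight * ac + lm_weight * lm_score(hyp), hyp)
              for hyp, ac in nbest]
    return max(scored)[1]

# Hypothetical bigram LM favoring grammatical word sequences.
BIGRAMS = {("recognize", "speech"): 0.2, ("wreck", "a"): 0.05,
           ("a", "nice"): 0.1, ("nice", "beach"): 0.05}

def lm_score(hypothesis):
    words = hypothesis.split()
    return sum(math.log(BIGRAMS.get(pair, 1e-4))
               for pair in zip(words, words[1:]))

# Acoustic log-scores slightly prefer the implausible hypothesis,
# but the language model overturns the ranking.
nbest = [("wreck a nice beach", -10.0), ("recognize speech", -10.5)]
print(rescore(nbest, lm_score))
```

The same scheme extends to lattice rescoring and to richer language models than bigrams.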
Abstract:
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.
Abstract:
Speech interface technology, which includes automatic speech recognition, synthetic speech, and natural language processing, is beginning to have a significant impact on business and personal computer use. Today, powerful and inexpensive microprocessors and improved algorithms are driving commercial applications in computer command, consumer, data entry, speech-to-text, telephone, and voice verification. Robust speaker-independent recognition systems for command and navigation in personal computers are now available; telephone-based transaction and database inquiry systems using both speech synthesis and recognition are coming into use. Large-vocabulary speech interface systems for document creation and read-aloud proofing are expanding beyond niche markets. Today's applications represent a small preview of a rich future for speech interface technology that will eventually replace keyboards with microphones and loudspeakers to give easy accessibility to increasingly intelligent machines.