959 results for Speech language therapy
Abstract:
Subjects with autism often show language difficulties, but it is unclear how these relate to neurophysiological anomalies of cortical speech processing. We used combined EEG and fMRI in 13 subjects with autism and 13 control participants and show that in autism, gamma and theta cortical activity do not engage synergistically in response to speech. In the group with autism, theta activity in left auditory cortex fails both to track speech modulations and to down-regulate gamma oscillations. This deficit predicts the severity of both verbal impairment and autism symptoms in the affected sample. Finally, we found that oscillation-based connectivity between auditory and other language cortices is altered in autism. These results suggest that the verbal disorder in autism could be associated with an altered balance of slow and fast auditory oscillations, and that this anomaly could compromise the mapping between sensory input and higher-level cognitive representations.
Abstract:
Language extinction as a consequence of language shift is a widespread social phenomenon that affects several million people all over the world today. An important task for social sciences research should therefore be to gain an understanding of language shifts, especially as a way of forecasting the extinction or survival of threatened languages, i.e., determining whether or not the subordinate language will survive in communities with a dominant and a subordinate language. Modeling is usually a very difficult task in the social sciences, particularly when it comes to forecasting the values of variables; cellular automata theory, however, can help us overcome this traditional difficulty. The purpose of this article is to investigate language shifts in the speech behavior of individuals using the methodology of cellular automata theory. The findings on the dynamics of social impacts in the field of social psychology and the empirical data from language surveys on the use of Catalan in Valencia allowed us to define a cellular automaton and carry out a set of simulations with it. The simulation results highlighted the key factors in the progression or reversal of a language shift, and these factors allowed us to forecast the future of a threatened language in a bilingual community.
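The dynamics described above can be sketched as a toy cellular automaton. This is a minimal illustration, not the authors' model: the grid size, the prestige parameter, and the Abrams–Strogatz-style transition probabilities are all assumptions, and the survey-derived values used in the article are not reproduced here.

```python
import random

# Toy language-shift cellular automaton. All parameters are assumptions
# for illustration; the article's actual transition rules and
# survey-derived values are not reproduced here.
SIZE = 20          # side length of the toroidal grid
PRESTIGE_A = 0.6   # assumed relative status of the dominant language A
STEPS = 50

def step(grid):
    """One synchronous update: each speaker may shift language with a
    probability driven by prestige and the local density of speakers."""
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            # the eight neighbours, with wrap-around edges
            neigh = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            dens_a = neigh.count("A") / len(neigh)
            if grid[i][j] == "B" and random.random() < PRESTIGE_A * dens_a:
                new[i][j] = "A"
            elif grid[i][j] == "A" and random.random() < (1 - PRESTIGE_A) * (1 - dens_a):
                new[i][j] = "B"
    return new

random.seed(0)
grid = [[random.choice("AB") for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(STEPS):
    grid = step(grid)

share_b = sum(row.count("B") for row in grid) / SIZE ** 2
print(f"share of subordinate-language speakers after {STEPS} steps: {share_b:.2f}")
```

Sweeping the prestige parameter and the initial proportion of subordinate-language speakers is the kind of simulation experiment that shows whether the subordinate language dies out or survives.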
Abstract:
Background: How do listeners manage to recognize words in an unfamiliar language? The physical continuity of the signal, which lacks real silent pauses between words, makes this a difficult task. However, there are multiple cues that can be exploited to localize word boundaries and segment the acoustic signal. In the present study, word stress was manipulated together with statistical information and placed on different syllables within trisyllabic nonsense words to explore how these cues combine in an online word segmentation task. Results: The behavioral results showed that words were segmented better when stress was placed on the final syllable than when it was placed on the middle or first syllable. The electrophysiological results showed an increase in the amplitude of the P2 component, which appeared to be sensitive to word stress and its location within words. Conclusion: The results demonstrate that listeners can integrate specific prosodic and distributional cues when segmenting speech. An ERP component related to word-stress cues was identified: stressed syllables elicited larger P2 amplitudes than unstressed ones.
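The statistical information manipulated above is typically operationalized as the transitional probability between adjacent syllables. A minimal sketch with a hypothetical syllable stream, not the study's stimuli:

```python
import random
from collections import Counter

# Word segmentation from transitional probabilities (TPs). The stream is
# hypothetical: three trisyllabic nonsense words in random order.
words = ["pabiku", "tibudo", "golatu"]
random.seed(1)
stream = [random.choice(words) for _ in range(200)]

# split each word into its three CV syllables and flatten the stream
syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(a, b):
    """Transitional probability P(next = b | current = a)."""
    return pair_counts[(a, b)] / first_counts[a]

print(tp("pa", "bi"))  # within-word transition -> 1.0
print(tp("ku", "ti"))  # across a word boundary -> roughly 1/3
# a learner can posit word boundaries wherever the TP dips
```

Because "pa" only ever occurs word-initially before "bi", its within-word TP is exactly 1.0, while the TP across word boundaries is diluted over the three possible following words.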
Abstract:
Background: Mesial temporal lobe epilepsy (MTLE) is the most common type of focal epilepsy in adults and can be successfully cured by surgery. One of the main complications of this surgery, however, is a decline in language abilities, whose magnitude is related to the degree of language lateralization to the left hemisphere. Most fMRI paradigms used to determine language dominance in epileptic populations have used active language tasks; these paradigms are sometimes too complex and may result in patient underperformance. Only a few studies have used purely passive tasks, such as listening to standard speech. Methods: In the present study we characterized language lateralization in patients with MTLE using a rapid and passive semantic language task. We used functional magnetic resonance imaging (fMRI) to study 23 patients [12 with left (LMTLE) and 11 with right mesial temporal lobe epilepsy (RMTLE)] and 19 healthy right-handed controls with a 6-minute semantic task in which subjects passively listened to groups of sentences (SEN) and pseudo-sentences (PSEN). A lateralization index (LI) was computed using a priori regions of interest in the temporal lobe. Results: The significant contrasts produced activations in both temporal lobes for all participants. 81.8% of RMTLE patients and 79% of healthy individuals had a bilateral language representation for this particular task; however, 50% of LMTLE patients presented an atypical right-hemispheric dominance in the LI. More importantly, the degree of right lateralization in LMTLE patients was correlated with the age of epilepsy onset. Conclusions: The simple, rapid, passive, non-collaboration-dependent task described in this study produces robust activation in the temporal lobe in both patients and controls and is capable of revealing a pattern of atypical language organization in LMTLE patients.
Furthermore, we observed that the atypical right-lateralization pattern in LMTLE patients was associated with an earlier age at epilepsy onset. These results are in line with the idea that early onset of epileptic activity is associated with larger neuroplastic changes.
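A lateralization index of the kind used above is commonly computed as (L − R) / (L + R) over an activation measure in homologous left and right regions of interest. A minimal sketch with illustrative voxel counts; the study's thresholds and data are not reproduced here:

```python
# Lateralization index from suprathreshold activation in homologous
# left and right regions of interest. Voxel counts are illustrative,
# not the study's data.
def lateralization_index(left, right):
    """LI = (L - R) / (L + R); +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = perfectly bilateral."""
    if left + right == 0:
        raise ValueError("no suprathreshold activation in either ROI")
    return (left - right) / (left + right)

li = lateralization_index(420, 180)  # e.g. suprathreshold voxel counts
print(f"LI = {li:.2f}")  # -> LI = 0.40, i.e. left-dominant
```

In practice a cutoff (often |LI| below some value such as 0.2) is chosen to label a subject "bilateral"; the specific cutoff used in the study is not given here.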
Abstract:
OBJECTIVE: To identify and quantify sources of variability in scores on the Speech, Spatial, and Qualities of Hearing Scale (SSQ) and its short forms among normal-hearing and hearing-impaired subjects using a French-language version of the SSQ. DESIGN: Multiple-regression analyses of SSQ scores were performed using age, gender, years of education, hearing loss, and hearing-loss asymmetry as predictors. Similar analyses were performed for each subscale (Speech, Spatial, and Qualities), for several SSQ short forms, and for differences in subscale scores. STUDY SAMPLE: One hundred normal-hearing subjects (NHS) and 230 hearing-impaired subjects (HIS). RESULTS: Hearing loss in the better ear and hearing-loss asymmetry were the two main predictors of scores on the overall SSQ, the three main subscales, and the SSQ short forms. The greatest difference between the NHS and HIS was observed for the Speech subscale, and even the NHS showed scores well below the maximum of 10. An age effect was observed mostly on the Speech subscale items, and the number of years of education had a significant influence on several Spatial and Qualities subscale items. CONCLUSION: Strong similarities between SSQ scores obtained across different populations and languages, and between the SSQ and its short forms, underline their potential for international use.
Abstract:
Language acquisition is a complex process that requires the synergistic involvement of different cognitive functions, including extracting and storing the words of the language and their embedded rules for the progressive acquisition of grammatical information. As has been shown in other fields that study learning processes, synchronization mechanisms between neuronal assemblies might play a key role during language learning. In particular, studying these dynamics may help uncover whether different oscillatory patterns sustain more item-based learning of words and rule-based learning from speech input. We therefore tracked the modulation of oscillatory neural activity during initial exposure to an artificial language containing embedded rules. We analyzed both spectral power variations, as a measure of local neuronal ensemble synchronization, and phase coherence patterns, as an index of the long-range coordination of these local groups of neurons. Synchronized activity in the gamma band (20–40 Hz), previously reported to be related to the engagement of selective attention, showed a clear dissociation of local power and phase coherence between distant regions. In this frequency range, local synchrony characterized the subjects who were focused on word identification and was accompanied by increased coherence in the theta band (4–8 Hz). Only those subjects who were able to learn the embedded rules showed increased gamma-band phase coherence between frontal, temporal, and parietal regions.
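Long-range phase coherence of the kind analyzed above is often quantified by the phase-locking value (PLV), the mean resultant length of the phase differences between two signals. A minimal sketch with synthetic phase series, not the study's EEG data, and not necessarily the exact coherence measure the authors used:

```python
import cmath
import math
import random

# Phase-locking value (PLV): mean resultant length of the phase
# differences between two signals. Phase series below are synthetic.
def plv(phases_a, phases_b):
    """0 = no phase locking, 1 = perfect locking."""
    n = len(phases_a)
    s = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(s) / n

t = [2 * math.pi * k / 100 for k in range(100)]

# a constant phase lag gives perfect locking
locked = plv(t, [p + 0.5 for p in t])

# unrelated phases give locking near zero
random.seed(0)
unlocked = plv(t, [random.uniform(0, 2 * math.pi) for _ in t])

print(f"constant lag: PLV = {locked:.2f}")  # -> 1.00
print(f"random phase: PLV = {unlocked:.2f}")
```

In EEG practice the instantaneous phases would come from a band-pass filter plus Hilbert transform (or a wavelet transform) applied to each channel before the PLV is taken across trials or time.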
Abstract:
Performance-based studies on the psychological nature of linguistic competence can conceal significant differences in the brain processes that underlie native versus nonnative knowledge of language. Here we report results on the brain activity of very proficient early bilinguals performing a lexical decision task that illustrate this point. Two groups of Spanish–Catalan early bilinguals (Spanish-dominant and Catalan-dominant) were asked to decide whether a given form was a Catalan word or not. The nonwords were based on real words, with one vowel changed. In the experimental stimuli, the vowel change involved a Catalan-specific contrast that previous research had shown to be difficult for Spanish natives to perceive. In the control stimuli, the vowel switch involved contrasts common to Spanish and Catalan. The results indicated that the groups of bilinguals did not differ in their behavioral and event-related brain potential measurements for the control stimuli; both groups made very few errors and showed a larger N400 component for control nonwords than for control words. However, significant differences were observed across groups for the experimental stimuli: specifically, Spanish-dominant bilinguals showed great difficulty in rejecting experimental nonwords. Indeed, these participants not only showed very high error rates for these stimuli, but also failed to show an error-related negativity effect in their erroneous nonword decisions. However, both groups of bilinguals showed a larger correct-related negativity when making correct decisions about the experimental nonwords. The results suggest that although some aspects of a second-language system may show a remarkable lack of plasticity (such as the acquisition of some foreign contrasts), first-language representations seem to be more dynamic in their capacity to adapt and incorporate new information.
Abstract:
In this paper, we present the Melodic Analysis of Speech method (MAS) that enables us to carry out complete and objective descriptions of a language's intonation, from a phonetic (melodic) point of view as well as from a phonological point of view. It is based on the acoustic-perceptive method by Cantero (2002), which has already been used in research on prosody in different languages. In this case, we present the results of its application in Spanish and Catalan.
Abstract:
Given the structural and acoustical similarities between speech and music, and possible overlapping cerebral structures in speech and music processing, a possible relationship between musical aptitude and linguistic abilities, especially in terms of second language pronunciation skills, was investigated. Moreover, the laterality effect of the mother tongue was examined with both adults and children by means of dichotic listening scores. Finally, two event-related potential studies sought to reveal whether children with advanced second language pronunciation skills and higher general musical aptitude differed from children with less-advanced pronunciation skills and less musical aptitude in accuracy when preattentively processing mistuned triads and music/speech sound durations. The results showed a significant relationship between musical aptitude, English language pronunciation skills, chord discrimination ability, and sound-change-evoked brain activation in response to musical stimuli (durational differences and triad contrasts). Regular music practice may also have a modulatory effect on the brain’s linguistic organization and cause altered hemispheric functioning in those who have regularly practised music for years. Based on the present results, it is proposed that language skills, both in production and discrimination, are interconnected with perceptual musical skills.
Abstract:
During the process of language development, one of the most important tasks children face is identifying the grammatical category to which the words of their language belong. This is essential in order to form grammatically correct utterances. How do children proceed in order to classify the words of their language and assign them to their corresponding grammatical category? The present study investigates the usefulness of phonological information for the categorization of nouns in English, given that phonology is the first source of information available to prelinguistic infants, who lack access to semantic information or complex morphosyntactic information. We analyse four different corpora containing linguistic samples of English-speaking mothers addressing their children in order to explore how reliably words are represented in mothers’ speech according to several phonological criteria. The results of the analysis confirm the prediction that most of the words to which English-learning infants are exposed during the first two years of life can be accounted for in terms of their phonological resemblance.
Abstract:
This dissertation considers the segmental durations of speech from the viewpoint of speech technology, especially speech synthesis. The idea is that better models of segmental durations lead to higher naturalness and better intelligibility, the key factors for the usability and generality of synthesized speech technology. Even though the studies are based on Finnish corpora, the approaches apply to other languages as well, possibly because most of the studies included in this dissertation concern universal effects taking place at utterance boundaries. The methods developed and used here are likewise suitable for studies of other languages. This study is based on two corpora: news-reading speech and sentences read aloud. One corpus is read by a 39-year-old male, while the other contains several speakers in various situations. The use of two corpora serves two purposes: it allows a comparison of the corpora and offers a broader view of the matters of interest. The dissertation begins with an overview of the phonemes and the quantity system of the Finnish language. In particular, we cover the intrinsic durations of phonemes and phoneme categories, as well as the difference in duration between short and long phonemes. The phoneme categories are introduced to manage the variability of speech segments. In this dissertation we cover the boundary-adjacent effects on segmental durations. In initial positions of utterances we find that there seems to be initial shortening in Finnish, but the result depends on the level of detail and on the individual phoneme. On the phoneme level, the shortening or lengthening affects only the very first phonemes at the beginning of an utterance; on the word level, however, the effect seems on average to shorten the whole first word. We establish the effect of final lengthening in Finnish.
The effect in Finnish had long been an open question, and Finnish was the last missing piece needed to establish final lengthening as a universal phenomenon. Final lengthening is studied from various angles, and it is shown not to be a mere effect of prominence or an artifact of a speech corpus with high inter- and intra-speaker variation. The effect of final lengthening seems to extend from the final word to the penultimate word; on the phoneme level it reaches a much wider area than the initial effect. We also present a normalization method suitable for corpus studies of segmental durations. The method uses an utterance-level normalization approach to capture the pattern of segmental durations within each utterance, which limits the impact of various problematic variations within the corpora. The normalization is used in a study on final lengthening to show that the results are not caused by variation in the material. The dissertation also presents an implementation of speech synthesis on a mobile platform and evaluates its performance. We find that the rule-based synthesis method itself runs in real time as software, but the signal generation process slows the system down beyond real time. Future prospects of speech synthesis on limited platforms are discussed. Finally, the dissertation considers ethical issues in the development of speech technology. The main focus is on the development of speech synthesis with high naturalness, but the problems and solutions are applicable to other speech technology approaches as well.
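The utterance-level normalization described above can be sketched as a per-utterance z-score, which removes between-utterance differences in speaking rate and speaker. The exact formula of the dissertation's method is not reproduced here, and the duration values below are illustrative:

```python
import statistics

# Utterance-level normalization of segmental durations: each duration is
# z-scored against the mean and standard deviation of its own utterance.
# Duration values below are illustrative, in milliseconds.
def normalize_utterance(durations_ms):
    mu = statistics.mean(durations_ms)
    sigma = statistics.pstdev(durations_ms)
    return [(d - mu) / sigma for d in durations_ms]

fast_utt = [40, 55, 50, 90]     # fast speaker, lengthened final segment
slow_utt = [80, 110, 100, 180]  # slow speaker, same relative pattern

for utt in (fast_utt, slow_utt):
    print([round(z, 2) for z in normalize_utterance(utt)])
```

After normalization the two utterances show the same pattern of z-scores, with the final segment standing out in both, so a final-lengthening effect can be compared across speakers and speaking rates without the raw-duration differences dominating.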
Abstract:
The human language-learning ability persists throughout life, indicating considerable flexibility at the cognitive and neural level. This ability spans from expanding the vocabulary of the mother tongue to acquiring a new language with its lexicon and grammar. The present thesis consists of five studies that tap both of these aspects of adult language learning, using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) during language processing and language learning tasks. The thesis shows that learning novel phonological word forms, either in the native tongue or when exposed to a foreign phonology, activates the brain in similar ways. The results also show that novel native words readily become integrated in the mental lexicon. Several studies in the thesis highlight the left temporal cortex as an important brain region for learning and accessing phonological forms. Incidental learning of foreign phonological word forms engaged functionally distinct temporal-lobe areas that reflected, respectively, short-term memory processes and more stable learning that persisted to the next day. In a study where explicitly trained items were tracked for ten months, enhanced naming-related temporal and frontal activation one week after learning was predictive of good long-term memory. The results suggest that memory maintenance is an active process that depends on mechanisms of reconsolidation, and that these processes vary considerably between individuals. The thesis places special emphasis on studying language learning in the context of language production. The neural foundation of language production has been studied considerably less than that of language perception, especially at the sentence level. A well-known paradigm in language production studies is picture naming, which is also used as a clinical tool in neuropsychology.
This thesis shows that accessing the meaning and the phonological form of a depicted object are subserved by different neural implementations. Moreover, a comparison between action and object naming from identical images indicated that the grammatical class of the retrieved word (verb, noun) is less important than the visual content of the image. In the present thesis, picture naming was further modified into a novel paradigm in order to probe sentence-level speech production in a newly learned miniature language. Neural activity related to grammatical processing did not differ between the novel language and the mother tongue, but stronger neural activation for the novel language was observed during the planning of the upcoming output, likely related to more demanding lexical retrieval and short-term memory. In sum, the thesis aimed to examine language learning by combining different linguistic domains, such as phonology, semantics, and grammar, in a dynamic description of language processing in the human brain.
Abstract:
The flow of information within the modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual’s ability to handle text or speech input. For the majority of us this presents no problem, but there are some individuals who would benefit from other means of conveying information, e.g. signed information flow. Over recent decades, results from various disciplines have suggested a common background and common processing for sign and speech, and this was one of the key issues I wanted to investigate further in this thesis. The basis of this thesis lies firmly within speech research, which is why I wanted to design analogues of widely used speech perception tests for signers – to find out whether the results for signers would be the same as those in speakers’ perception tests. One of the key findings within biology – and more precisely in its effects on speech and communication research – is the mirror neuron system. That finding has enabled us to form new theories about the evolution of communication, and it all seems to converge on the hypothesis that all human communication has a common core. In this thesis, speech and sign are discussed as equal and analogous counterparts of communication, and all research methods used for speech are modified for sign. Both speech and sign are thus investigated using similar test batteries, and both production and perception are studied separately. An additional framework for studying production is given by gesture research using cry sounds; results of cry-sound research are then compared to results from children acquiring sign language. These results show that individuality manifests itself from very early on in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production and re-learning production when the apparatus has been changed.
Normal production is studied in both speech and sign, and the effects of changed articulation are studied with regard to speech; both of these studies use carrier sentences. Furthermore, sign production is studied by giving the informants the opportunity for spontaneous production. The production data from the signing informants are also used as the basis for the sign synthesis stimuli in the sign perception test battery. Speech and sign perception were studied using the informants’ forced-choice answers in identification and discrimination tasks; these answers were then compared across language modalities. Three different informant groups participated in the sign perception tests: native signers, sign language interpreters, and Finnish adults with no knowledge of any signed language. This made it possible to investigate which of the characteristics found in the results were due to the language per se and which were due to the change of modality itself. As the analogous test batteries yielded similar results across different informant groups, some common threads could be observed. From very early on in the acquisition of speech and sign, the results were highly individual; however, the results were the same within one individual when the same test was repeated. This individuality manifested in similar patterns across language modalities and, on some occasions, across language groups. As both modalities yield similar answers to analogous study questions, this has led us to provide methods for basic input for sign language applications, i.e. signing avatars. It has also given us answers to questions about the precision of the animation and its intelligibility for users – what parameters govern the intelligibility of synthesized speech or sign, and how precise the animation or synthetic speech must be in order to be intelligible.
The results also lend additional support to the well-known observation that intelligibility is not the same as naturalness; in some cases, as shown in the design of the sign perception test battery, naturalness decreases intelligibility. This must also be taken into consideration when designing applications. All in all, the results from each of the test batteries, whether for signers or speakers, yield strikingly similar patterns, which provides yet further support for a common core of all human communication. Thus, we can modify and deepen the phonetic framework models of human communication based on the knowledge obtained from the test batteries in this thesis.
Abstract:
This article is a systematic review of the available literature on the benefits that cognitive behavioral therapy (CBT) offers patients with implanted cardioverter defibrillators (ICDs) and confirms its effectiveness. After receiving the device, some patients fear that it will malfunction, or they remain in a constant state of tension due to sudden electrical discharges, and they develop symptoms of anxiety and depression. A search with the key words “anxiety”, “depression”, “implantable cardioverter”, “cognitive behavioral therapy” and “psychotherapy” was carried out in early January 2013. The sources for the search were ISI Web of Knowledge, PubMed, and PsycINFO. A total of 224 articles were retrieved: 155 from PubMed and 69 from ISI Web of Knowledge. Of these, 16 were written in a foreign language and 47 were duplicates, leaving 161 references for analysis of the abstracts. A total of 19 articles were eliminated after analysis of the abstracts and 13 after full-text reading, and 11 articles were selected for the review. The articles selected for the literature review covered studies conducted over a period of 13 years (1998-2011); by methodological design, there were 1 cross-sectional study, 1 prospective observational study, 2 clinical trials, 4 case-control studies, and 3 case studies. The criterion used for selection of the 11 articles was the effectiveness of the CBT intervention in decreasing anxiety and depression in patients with an ICD, expressed as a ratio. The research indicated that CBT has been effective in the treatment of ICD patients with depressive and anxiety symptoms. Research also showed that young women represent a risk group for which further study is needed. Because the number of references on this theme was small, further studies should be carried out.