52 results for Speech Processing
in Helda - Digital Repository of University of Helsinki
Abstract:
Comprehension of a complex acoustic signal - speech - is vital for human communication, with numerous brain processes required to convert the acoustics into an intelligible message. In four studies in the present thesis, cortical correlates for different stages of speech processing in a mature linguistic system of adults were investigated. In two further studies, developmental aspects of cortical specialisation and its plasticity in adults were examined. In the present studies, electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of the mismatch negativity (MMN) response elicited by changes in repetitive unattended auditory events and the phonological mismatch negativity (PMN) response elicited by unexpected speech sounds in attended speech inputs served as the main indicators of cortical processes. Changes in speech sounds elicited the MMNm, the magnetic equivalent of the electric MMN, that differed in generator loci and strength from those elicited by comparable changes in non-speech sounds, suggesting intra- and interhemispheric specialisation in the processing of speech and non-speech sounds at an early automatic processing level. This neuronal specialisation for the mother tongue was also reflected in the more efficient formation of stimulus representations in auditory sensory memory for typical native-language speech sounds compared with those formed for unfamiliar, non-prototype speech sounds and simple tones. Further, adding a speech or non-speech sound context to syllable changes was found to modulate the MMNm strength differently in the left and right hemispheres. Following the acoustic-phonetic processing of speech input, phonological effort related to the selection of possible lexical (word) candidates was linked with distinct left-hemisphere neuronal populations. In summary, the results suggest functional specialisation in the neuronal substrates underlying different levels of speech processing. Subsequently, plasticity of the brain's mature linguistic system was investigated in adults, in whom representations for an aurally-mediated communication system, Morse code, were found to develop within the same hemisphere where representations for the native-language speech sounds were already located. Finally, recording and localization of the MMNm response to changes in speech sounds was successfully accomplished in newborn infants, encouraging future MEG investigations on, for example, the state of neuronal specialisation at birth.
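To make the central measure concrete: the MMN discussed throughout this abstract is typically quantified as the difference between the averaged brain response to rare deviant sounds and the averaged response to the repeated standard sounds in an unattended oddball stream. The sketch below illustrates that computation on hypothetical single-channel EEG epochs; it is not the authors' analysis pipeline, and the array layout and latency window are assumptions.

```python
# Illustrative sketch only (not the authors' analysis pipeline): the MMN is
# commonly quantified as the difference between the averaged response to
# deviant sounds and the averaged response to standard sounds.
import numpy as np

def mismatch_negativity(epochs, labels, sfreq, window=(0.100, 0.250)):
    """Return the deviant-minus-standard difference wave and its mean
    amplitude in an assumed MMN latency window.

    epochs : array of shape (n_trials, n_samples), baseline-corrected
             single-channel EEG epochs time-locked to sound onset.
    labels : sequence of "standard" / "deviant" strings, one per trial.
    sfreq  : sampling rate in Hz.
    """
    labels = np.asarray(labels)
    standard = epochs[labels == "standard"].mean(axis=0)
    deviant = epochs[labels == "deviant"].mean(axis=0)
    difference = deviant - standard        # MMN appears as a negativity here
    start, stop = (int(t * sfreq) for t in window)
    return difference, difference[start:stop].mean()
```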
Abstract:
Speech rhythm is an essential part of speech processing. It is the outcome of the workings of a combination of linguistic and non-linguistic parameters, many of which also have other functions in speech. This study focusses on the acoustic and auditive realization of two linguistic parameters of rhythm: (1) sentence stress, and (2) speech rate and pausing. The aim was to find out how well Finnish comprehensive school pupils realize these two parameters in English and how native speakers of English react to Finnish pupils' English rhythm. The material was elicited by means of a story-telling task and questionnaires. Three female and three male pupils representing different levels of oral skills in English were selected as the experimental group. The control group consisted of two female and two male native speakers of English. The stories were analysed acoustically and auditorily with respect to interstress intervals, weak forms, fundamental frequency, pausing, and speech as well as articulation rate. In addition, 52 native speakers of English were asked to rate the intelligibility of the Finnish pupils' English with respect to speech rhythm and give their attitudes on what the pupils sounded like. Results showed that Finnish pupils can produce isochronous interstress intervals in English, but that too large a proportion of these intervals contain pauses. A closer analysis of the pauses revealed that Finnish pupils pause too frequently and in inappropriate places when they speak English. Frequent pausing was also found to cause slow speech rates. The findings of the fundamental frequency (F0) measurements indicate that Finnish pupils tend to make a slightly narrower F0 difference between stressed and unstressed syllables than the native speakers of English. Furthermore, Finnish pupils appear to know how to reduce the duration and quality of unstressed sounds, but they fail to do it frequently enough. Native listeners gave lower intelligibility and attitude scores to pupils with more anomalous speech rhythm. Finnish pupils' rhythm anomalies seemed to derive from various learning- or learner-related factors rather than from the differences between English and Finnish. This study demonstrates that pausing may be a more important component of English speech rhythm than sentence stress as far as Finnish adolescents are concerned and that interlanguage development is affected by various factors and characterised by jumps or periods of stasis. Other theoretical, methodological and pedagogical implications of the results are also discussed.
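For readers unfamiliar with the rate measures mentioned above, the distinction that drives the slow-speech finding is that speech rate includes pauses while articulation rate excludes them. The sketch below shows one conventional way to compute both; the function and its example inputs are illustrative and not taken from the thesis.

```python
# Illustrative sketch of the two rate measures; the formulas are a common
# convention, not necessarily the exact definitions used in the thesis.

def speech_and_articulation_rate(n_syllables, total_duration_s, pause_durations_s):
    """Return (speech_rate, articulation_rate) in syllables per second.

    speech rate       = syllables / total duration   (pauses included)
    articulation rate = syllables / phonation time   (pauses excluded)
    """
    phonation_time = total_duration_s - sum(pause_durations_s)
    return n_syllables / total_duration_s, n_syllables / phonation_time

# Frequent pausing lowers the speech rate even when the articulation rate is normal:
print(speech_and_articulation_rate(60, 30.0, [1.2, 0.8, 2.0, 1.5]))
# -> (2.0, about 2.45)
```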
Abstract:
Autism and Asperger syndrome (AS) are neurodevelopmental disorders characterised by deficient social and communication skills, as well as restricted, repetitive patterns of behaviour. The language development in individuals with autism is significantly delayed and deficient, whereas in individuals with AS, the structural aspects of language develop quite normally. Both groups, however, have semantic-pragmatic language deficits. The present thesis investigated auditory processing in individuals with autism and AS. In particular, the discrimination of and orienting to speech and non-speech sounds was studied, as well as the abstraction of invariant sound features from speech-sound input. Altogether five studies were conducted with auditory event-related brain potentials (ERP); two studies also included a behavioural sound-identification task. In three studies, the subjects were children with autism, in one study children with AS, and in one study adults with AS. In children with autism, even the early stages of sound encoding were deficient. In addition, these children had altered sound-discrimination processes characterised by enhanced spectral but deficient temporal discrimination. The enhanced pitch discrimination may partly explain the auditory hypersensitivity common in autism, and it may compromise the filtering of relevant auditory information from irrelevant information. Indeed, it was found that when sound discrimination required abstracting invariant features from varying input, children with autism maintained their superiority in pitch processing, but lost it in vowel processing. Finally, involuntary orienting to sound changes was deficient in children with autism in particular with respect to speech sounds. This finding is in agreement with previous studies on autism suggesting deficits in orienting to socially relevant stimuli. In contrast to children with autism, the early stages of sound encoding were fairly unimpaired in children with AS. However, sound discrimination and orienting were rather similarly altered in these children as in those with autism, suggesting correspondences in the auditory phenotype in these two disorders which belong to the same continuum. Unlike children with AS, adults with AS showed enhanced processing of duration changes, suggesting developmental changes in auditory processing in this disorder.
Abstract:
Speech has both auditory and visual components (heard speech sounds and seen articulatory gestures). During all perception, selective attention facilitates efficient information processing and enables concentration on high-priority stimuli. Auditory and visual sensory systems interact at multiple processing levels during speech perception and, further, the classical motor speech regions seem also to participate in speech perception. Auditory, visual, and motor-articulatory processes may thus work in parallel during speech perception, their use possibly depending on the information available and the individual characteristics of the observer. Because of their subtle speech perception difficulties possibly stemming from disturbances at elemental levels of sensory processing, dyslexic readers may rely more on motor-articulatory speech perception strategies than do fluent readers. This thesis aimed to investigate the neural mechanisms of speech perception and selective attention in fluent and dyslexic readers. We conducted four functional magnetic resonance imaging experiments, during which subjects perceived articulatory gestures, speech sounds, and other auditory and visual stimuli. Gradient echo-planar images depicting blood oxygenation level-dependent contrast were acquired during stimulus presentation to indirectly measure brain hemodynamic activation. Lip-reading activated the primary auditory cortex, and selective attention to visual speech gestures enhanced activity within the left secondary auditory cortex. Attention to non-speech sounds enhanced auditory cortex activity bilaterally; this effect showed modulation by sound presentation rate. A comparison between fluent and dyslexic readers' brain hemodynamic activity during audiovisual speech perception revealed stronger activation of predominantly motor speech areas in dyslexic readers during a contrast test that allowed exploration of the processing of phonetic features extracted from auditory and visual speech. The results show that visual speech perception modulates hemodynamic activity within auditory cortex areas once considered unimodal, and suggest that the left secondary auditory cortex specifically participates in extracting the linguistic content of seen articulatory gestures. They are strong evidence for the importance of attention as a modulator of auditory cortex function during both sound processing and visual speech perception, and point out the nature of attention as an interactive process (influenced by stimulus-driven effects). Further, they suggest heightened reliance on motor-articulatory and visual speech perception strategies among dyslexic readers, possibly compensating for their auditory speech perception difficulties.
Abstract:
We use parallel weighted finite-state transducers to implement a part-of-speech tagger, which obtains state-of-the-art accuracy when used to tag the Europarl corpora for Finnish, Swedish and English. Our system consists of a weighted lexicon and a guesser combined with a bigram model factored into two weighted transducers. We use both lemmas and tag sequences in the bigram model, which guarantees reliable bigram estimates.
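The abstract names the tagger's components (weighted lexicon, guesser, tag bigram model) but not the decoding itself. As a rough illustration of how such weights combine, the sketch below replaces the weighted finite-state transducer composition with an explicit Viterbi search over negative log-probability weights and conditions on tags only (the thesis also uses lemmas); the toy lexicon, guesser and bigram weights are invented.

```python
# Toy illustration of the scoring idea behind a weighted-lexicon + tag-bigram
# tagger. The thesis composes parallel weighted finite-state transducers; this
# sketch uses a plain Viterbi search over negative log-probability weights.
LEXICON = {               # word -> {tag: weight}, weight ~ -log P(word | tag)
    "the": {"DET": 0.1},
    "dog": {"NOUN": 0.3, "VERB": 2.5},
    "barks": {"VERB": 0.4, "NOUN": 2.0},
}
BIGRAM = {                # (previous tag, tag) -> weight ~ -log P(tag | prev)
    ("<s>", "DET"): 0.2, ("<s>", "NOUN"): 1.0, ("<s>", "VERB"): 1.5,
    ("DET", "NOUN"): 0.1, ("DET", "VERB"): 2.0,
    ("NOUN", "VERB"): 0.3, ("NOUN", "NOUN"): 1.5,
    ("VERB", "NOUN"): 1.0, ("VERB", "VERB"): 2.0,
}
GUESSER = {"NOUN": 1.0, "VERB": 1.2, "DET": 3.0}   # fallback for unknown words

def tag(words):
    """Viterbi decoding: minimise summed lexicon + bigram weights."""
    best = {"<s>": (0.0, [])}                      # tag -> (cost, tag sequence)
    for word in words:
        emissions = LEXICON.get(word, GUESSER)     # unknown words use the guesser
        best = {
            t: min(
                (cost + BIGRAM.get((prev, t), 5.0) + e_cost, seq + [t])
                for prev, (cost, seq) in best.items()
            )
            for t, e_cost in emissions.items()
        }
    return min(best.values())[1]

print(tag(["the", "dog", "barks"]))                # -> ['DET', 'NOUN', 'VERB']
```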
Abstract:
Asperger Syndrome (AS) belongs to autism spectrum disorders where both verbal and non-verbal communication difficulties are at the core of the impairment. Social communication requires a complex use of affective, linguistic-cognitive and perceptual processes. In the four studies included in the current thesis, some of the linguistic and perceptual factors that are important for face-to-face communication were studied using behavioural methods. In all four studies the results obtained from individuals with AS were compared with typically developed age, gender and IQ matched controls. First, the language skills of school-aged children were characterized in detail with standardized tests that measured different aspects of receptive and expressive language (Study I). The children with AS were found to be worse than the controls in following complex verbal instructions. Next, the visual perception of facial expressions of emotion with varying degrees of visual detail was examined (Study II). Adults with AS were found to have impaired recognition of facial expressions on the basis of very low spatial frequencies which are important for processing global information. Following that, multisensory perception was investigated by looking at audiovisual speech perception (Studies III and IV). Adults with AS were found to perceive audiovisual speech qualitatively differently from typically developed adults, although both groups were equally accurate in recognizing auditory and visual speech presented alone. Finally, the effect of attention on audiovisual speech perception was studied by registering eye gaze behaviour (Study III) and by studying the voluntary control of visual attention (Study IV). The groups did not differ in eye gaze behaviour or in the voluntary control of visual attention. The results of the study series demonstrate that many factors underpinning face-to-face social communication are atypical in AS. In contrast with previous assumptions about intact language abilities, the current results show that children with AS have difficulties in understanding complex verbal instructions. Furthermore, the study makes clear that deviations in the perception of global features in faces expressing emotions as well as in the multisensory perception of speech are likely to harm face-to-face social communication.
Abstract:
The number of drug substances in formulation development in the pharmaceutical industry is increasing. Some of these are amorphous drugs and have glass transition below ambient temperature, and thus they are usually difficult to formulate and handle. One reason for this is the reduced viscosity, related to the stickiness of the drug, that makes them complicated to handle in unit operations. Thus, the aim in this thesis was to develop a new processing method for a sticky amorphous model material. Furthermore, model materials were characterised before and after formulation, using several characterisation methods, to understand more precisely the prerequisites for physical stability of the amorphous state against crystallisation. The model materials used were monoclinic paracetamol and citric acid anhydrate. Amorphous materials were prepared by melt quenching or by ethanol evaporation methods. The melt blends were found to have slightly higher viscosity than the ethanol evaporated materials. However, melt produced materials crystallised more easily upon consecutive shearing than ethanol evaporated materials. The only material that did not crystallise during shearing was a 50/50 (w/w, %) blend, regardless of the preparation method, and it was physically stable for at least two years in dry conditions. Shearing at varying temperatures was established to measure the physical stability of amorphous materials in processing and storage conditions. The actual physical stability of the blends was better than that of the pure amorphous materials at ambient temperature. Molecular mobility was not related to the physical stability of the amorphous blends, observed as crystallisation. Molecular mobility of the 50/50 blend derived from a spectral linewidth as a function of temperature using solid state NMR correlated better with the molecular mobility derived from a rheometer than that of differential scanning calorimetry data. Based on the results obtained, the effects of molecular interactions, thermodynamic driving force and miscibility of the blends are discussed as the key factors to stabilise the blends. The stickiness was found to be affected by glass transition and viscosity. Ultrasound extrusion and cutting were successfully tested to increase the processability of the sticky material. Furthermore, it was found to be possible to process the physically stable 50/50 blend in a supercooled liquid state instead of a glassy state. The method was not found to accelerate the crystallisation. This may open up new possibilities to process amorphous materials that are otherwise impossible to manufacture into solid dosage forms.
Abstract:
In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real time analysis of the process data enables better control of the processes to obtain a high quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical. Thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of the dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects of either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
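As an illustration of the calibration step described above (partial least squares regression mapping in-line NIR spectra to coating thickness), the sketch below uses scikit-learn's PLSRegression on synthetic data; the spectra, reference thicknesses and number of latent variables are placeholders, not values from the thesis.

```python
# Hedged sketch of a PLS calibration of the kind described: predicting coating
# thickness from in-line NIR spectra. Synthetic data stand in for real spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Calibration set: one preprocessed NIR spectrum (absorbance at 200 wavelengths)
# per sample, plus a reference coating thickness in micrometres.
X_cal = rng.normal(size=(40, 200))
y_cal = rng.uniform(20, 120, size=40)

pls = PLSRegression(n_components=3)   # latent variables chosen by cross-validation in practice
pls.fit(X_cal, y_cal)

# In-line use: predict thickness from spectra acquired during coating.
X_inline = rng.normal(size=(5, 200))
predicted_thickness = pls.predict(X_inline).ravel()
print(predicted_thickness)
```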
Abstract:
The thesis is about Finnish freestyle rap, a modern form of oral poetry in which improvised rap lyrics are produced at the moment of performance. The research material consists of interviews with seven freestyle experts and a DVD of freestyle rap from the Finnish Rap Championships 2005. The thesis has three goals: describing the different variations or subgenres of freestyle, outlining the esthetics and register of freestyle, and examining the composition of freestyle. The research material plays a central role in the thesis, but the theoretical framework is also wide: its cornerstones are John Miles Foley's register analysis, oral-formulaic theory and William Levelt's psycholinguistic theory on the planning of speech. Freestyle is divided into three partially overlapping genres with slight variation in conventions. The genres are freestyle battle, cypher, and live freestyle. Freestyle's esthetics are composed of certain qualities and preferences: the rhyme is a syllabically repeating vocal rhyme. The rapping must be synchronised with the background rhythm and the rhyme comes at the end of the line. The lyrics must be somehow anchored to the surroundings and the moment of performance. The style of rapping, i.e. flow, has to be personal one way or the other. Individual formulas are used to some extent in freestyle composition. When planning the lines, all other content is subordinate to the rhyme, whose phonetic processing is the primary act in the planning of freestyle. This finding calls Levelt's theory into question.
Abstract:
This dissertation consists of four articles and an introduction. The five parts address the same topic, nonverbal predication in Erzya, from different perspectives. The work is at the same time linguistic typology and Uralic studies. The findings based on a large corpus of empirical Erzya data, which was collected using several different methods and included recordings of the spoken language, made it possible for the present study to apply, then test and finally discuss the previous theories based on cross-linguistic data. Erzya makes use of multiple predication patterns which vary from the totally analytic to the morphologically very complex. Nonverbal predicate clause types are classified on the basis of propositional acts in clauses denoting class-membership, identity, property and location. The predicates of these clauses are nouns, adjectives and locational expressions, respectively. The following three predication strategies in Erzya nonverbal predication can be identified: i. the zero-copula construction, ii. the predicative suffix construction and iii. the copula construction. It has been suggested that verbs and nouns cannot be clearly distinguished on morphological grounds when functioning as predicates in Erzya. This study shows that even though predicativity must not be considered a sufficient tool for defining parts of speech in any language, the Erzya lexical classes of adjective, noun and verb can be distinguished from each other also in predicate position. The relative frequency and degree of obligation for using the predicative suffix construction decrease when moving left to right on the scale verb > adjective/locative > noun (> identificational statement). The predicative suffix is the main pattern in the present tense over the whole domain of nonverbal predication in Standard Erzya, but if it is replaced it is most likely to be with a zero-copula construction in a nominal predication. This study exploits the theory of (a)symmetry for the first time in order to describe verbal vs. nonverbal predication. It is shown that the asymmetry of paradigms and constructions differentiates the lexical classes. Asymmetrical structures are motivated by functional level asymmetry. Variation in predication as such adds to the complexity of the grammar. When symmetric structures are employed, the functional complexity of grammar decreases, even though morphological complexity increases. The genre affects the employment of predication strategies in Erzya. There are differences in the relative frequency of the patterns, and some patterns are totally lacking from some of the data. The clearest difference is that the past tense predicative suffix construction occurs relatively frequently in Standard Erzya, while it occurs infrequently in the other data. Also, the predicative suffixes of the present tense are used more regularly in written Standard Erzya than in any other genre. The genre also affects the incidence of the translative in uľ(ń)ems copula constructions. In translations from Russian to Erzya the translative case is employed relatively frequently in comparison to other data. This study reveals differences between the two Mordvinic languages, Erzya and Moksha. The predicative suffixes (bound person markers) of the present tense are used more regularly in Moksha in all kinds of nonverbal predicate clauses compared to Erzya. It should further be observed that identificational statements are encoded with a predicative suffix in Moksha, but seldom in Erzya. Erzya clauses are more frequently encoded using zero-constructions, displaying agreement in number only.
Abstract:
This study investigated questions related to half-occlusion processing in human stereoscopic vision: (1) How does the depth location of a half-occluding figure affect the depth localization of adjacent monocular objects? (2) Is three-dimensional slant around the vertical axis (geometric effect) affected by half-occlusion constraints? and (3) How are the half-occlusion constraints and surface formation processes manifested in stereoscopic capture? Our results showed that the depth localization of binocular objects affects the depth localization of discrete monocular objects. We also showed that the visual system has a preference for a frontoparallel surface interpretation if the half-occlusion configuration allows multiple interpretation alternatives. When the surface formation was constrained by textures, our results showed that a process of rematching spreading determines the resulting perception and that the spreading can be limited by illusory contours that support the presence of binocularly unmatched figures. The unmatched figures could be present if the inducing figures producing the illusory surface contained binocular image differences that provided cues for quantitative da Vinci stereopsis. These findings provide evidence of the significant role of half-occlusions in stereoscopic processing.
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing either with precise temporal (EEG and MEG) or spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, they were used to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working-memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
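The frequency changes reported above come from oddball blocks in which occasional deviant tones differ from a repeated standard by a fixed percentage. The sketch below generates such a trial sequence; the deviant probability, block length and seed are illustrative choices, not parameters from the thesis.

```python
# Illustrative sketch of the oddball manipulation described in the abstract: a
# stream of standard tones with occasional deviants whose frequency differs by
# a fixed percentage. Probabilities and frequencies are placeholders.
import random

def oddball_sequence(n_trials, standard_hz, change_percent, deviant_prob=0.1, seed=1):
    """Return a list of (frequency_hz, label) pairs for one oddball block."""
    random.seed(seed)
    deviant_hz = standard_hz * (1 + change_percent / 100.0)
    sequence = []
    for _ in range(n_trials):
        if random.random() < deviant_prob:
            sequence.append((deviant_hz, "deviant"))
        else:
            sequence.append((standard_hz, "standard"))
    return sequence

# A 20% frequency change on a 1000 Hz standard gives 1200 Hz deviants.
block = oddball_sequence(n_trials=500, standard_hz=1000.0, change_percent=20.0)
print(sum(1 for _, label in block if label == "deviant"), "deviants of", len(block))
```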
Abstract:
Humans are a social species with the internal capability to process social information from other humans. To understand others' behavior and to react accordingly, it is necessary to infer their internal states, emotions and aims, which are conveyed by subtle nonverbal bodily cues such as postures, gestures, and facial expressions. This thesis investigates the brain functions underlying the processing of such social information. Studies I and II of this thesis explore the neural basis of perceiving pain from another person's facial expressions by means of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). In Study I, observing another's facial expression of pain activated the affective pain system (previously associated with self-experienced pain) in accordance with the intensity of the observed expression. The strength of the response in the anterior insula was also linked to the observer's empathic abilities. The cortical processing of facial pain expressions advanced from the visual to temporal-lobe areas at similar latencies (around 300–500 ms) to those previously shown for emotional expressions such as fear or disgust. Study III shows that perceiving a yawning face is associated with middle and posterior STS activity, and that the contagiousness of a yawn correlates negatively with amygdalar activity. Study IV explored the brain correlates of interpreting social interaction between two members of the same species, in this case human and canine. Observing interaction engaged brain activity in a very similar manner for both species. Moreover, the body- and object-sensitive brain areas of dog experts differentiated interaction from noninteraction in both humans and dogs, whereas in the control subjects, similar differentiation occurred only for humans. Finally, Study V shows the engagement of the brain area associated with biological motion when exposed to the sounds produced by a single human being walking. However, a more complex pattern of activation with the walking sounds of several persons suggests that as the social situation becomes more complex, so does the brain response. Taken together, these studies demonstrate the roles of distinct cortical and subcortical brain regions in the perception and sharing of others' internal states via facial and bodily gestures, and the connection of brain responses to behavioral attributes.
Abstract:
The auditory system can detect occasional changes (deviants) in acoustic regularities without the need for subjects to focus their attention on the sound material. Deviant detection is reflected in the elicitation of the mismatch negativity component (MMN) of the event-related potentials. In the studies presented in this thesis, the MMN is used to investigate the auditory abilities for detecting similarities and regularities in sound streams. To investigate the limits of these processes, professional musicians have been tested in some of the studies. The results show that auditory grouping is already more advanced in musicians than in nonmusicians and that the auditory system of musicians can, unlike that of nonmusicians, detect a numerical regularity of always four tones in a series. These results suggest that sensory auditory processing in musicians is not only a fine tuning of universal abilities, but is also qualitatively more advanced than in nonmusicians. In addition, the relationship between the auditory change-detection function and perception is examined. It is shown that, contrary to the generally accepted view, MMN elicitation does not necessarily correlate with perception. The outcome of the auditory change-detection function can be implicit and the implicit knowledge of the sound structure can, after training, be utilized for behaviorally correct intuitive sound detection. These results illustrate the automatic character of the sensory change detection function.