Abstract:
Comprehension of a complex acoustic signal - speech - is vital for human communication, with numerous brain processes required to convert the acoustics into an intelligible message. In four studies in the present thesis, cortical correlates for different stages of speech processing in a mature linguistic system of adults were investigated. In two further studies, developmental aspects of cortical specialisation and its plasticity in adults were examined. In the present studies, electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of the mismatch negativity (MMN) response elicited by changes in repetitive unattended auditory events and the phonological mismatch negativity (PMN) response elicited by unexpected speech sounds in attended speech inputs served as the main indicators of cortical processes. Changes in speech sounds elicited the MMNm, the magnetic equivalent of the electric MMN, that differed in generator loci and strength from those elicited by comparable changes in non-speech sounds, suggesting intra- and interhemispheric specialisation in the processing of speech and non-speech sounds at an early automatic processing level. This neuronal specialisation for the mother tongue was also reflected in the more efficient formation of stimulus representations in auditory sensory memory for typical native-language speech sounds compared with those formed for unfamiliar, non-prototype speech sounds and simple tones. Further, adding a speech or non-speech sound context to syllable changes was found to modulate the MMNm strength differently in the left and right hemispheres. Following the acoustic-phonetic processing of speech input, phonological effort related to the selection of possible lexical (word) candidates was linked with distinct left-hemisphere neuronal populations. In summary, the results suggest functional specialisation in the neuronal substrates underlying different levels of speech processing. 
Subsequently, plasticity of the brain's mature linguistic system was investigated in adults, in whom representations for an aurally mediated communication system, Morse code, were found to develop within the same hemisphere where representations for the native-language speech sounds were already located. Finally, recording and localization of the MMNm response to changes in speech sounds was successfully accomplished in newborn infants, encouraging future MEG investigations of, for example, the state of neuronal specialisation at birth.
Abstract:
Autism and Asperger syndrome (AS) are neurodevelopmental disorders characterised by deficient social and communication skills, as well as restricted, repetitive patterns of behaviour. The language development in individuals with autism is significantly delayed and deficient, whereas in individuals with AS, the structural aspects of language develop quite normally. Both groups, however, have semantic-pragmatic language deficits. The present thesis investigated auditory processing in individuals with autism and AS. In particular, the discrimination of and orienting to speech and non-speech sounds was studied, as well as the abstraction of invariant sound features from speech-sound input. Altogether five studies were conducted with auditory event-related brain potentials (ERP); two studies also included a behavioural sound-identification task. In three studies, the subjects were children with autism, in one study children with AS, and in one study adults with AS. In children with autism, even the early stages of sound encoding were deficient. In addition, these children had altered sound-discrimination processes characterised by enhanced spectral but deficient temporal discrimination. The enhanced pitch discrimination may partly explain the auditory hypersensitivity common in autism, and it may compromise the filtering of relevant auditory information from irrelevant information. Indeed, it was found that when sound discrimination required abstracting invariant features from varying input, children with autism maintained their superiority in pitch processing, but lost it in vowel processing. Finally, involuntary orienting to sound changes was deficient in children with autism in particular with respect to speech sounds. This finding is in agreement with previous studies on autism suggesting deficits in orienting to socially relevant stimuli. In contrast to children with autism, the early stages of sound encoding were fairly unimpaired in children with AS. 
However, sound discrimination and orienting were altered rather similarly in these children as in those with autism, suggesting correspondences in the auditory phenotype of these two disorders, which belong to the same continuum. Unlike children with AS, adults with AS showed enhanced processing of duration changes, suggesting developmental changes in auditory processing in this disorder.
Abstract:
Humans are a social species with the internal capability to process social information from other humans. To understand others' behavior and to react accordingly, it is necessary to infer their internal states, emotions, and aims, which are conveyed by subtle nonverbal bodily cues such as postures, gestures, and facial expressions. This thesis investigates the brain functions underlying the processing of such social information. Studies I and II of this thesis explore the neural basis of perceiving pain from another person's facial expressions by means of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). In Study I, observing another's facial expression of pain activated the affective pain system (previously associated with self-experienced pain) in accordance with the intensity of the observed expression. The strength of the response in the anterior insula was also linked to the observer's empathic abilities. The cortical processing of facial pain expressions advanced from the visual to temporal-lobe areas at similar latencies (around 300–500 ms) to those previously shown for emotional expressions such as fear or disgust. Study III shows that perceiving a yawning face is associated with middle and posterior STS activity, and that the contagiousness of a yawn correlates negatively with amygdalar activity. Study IV explored the brain correlates of interpreting social interaction between two members of the same species, in this case humans and dogs. Observing interaction engaged brain activity in a very similar manner for both species. Moreover, the body- and object-sensitive brain areas of dog experts differentiated interaction from non-interaction in both humans and dogs, whereas in the control subjects similar differentiation occurred only for humans. Finally, Study V shows the engagement of the brain area associated with biological motion when subjects were exposed to the sounds produced by a single human being walking.
However, a more complex pattern of activation with the walking sounds of several persons suggests that as the social situation becomes more complex, so does the brain response. Taken together, these studies demonstrate the roles of distinct cortical and subcortical brain regions in the perception and sharing of others' internal states via facial and bodily gestures, and the connection of brain responses to behavioral attributes.
Abstract:
The auditory system can detect occasional changes (deviants) in acoustic regularities without the need for subjects to focus their attention on the sound material. Deviant detection is reflected in the elicitation of the mismatch negativity component (MMN) of the event-related potentials. In the studies presented in this thesis, the MMN is used to investigate the auditory abilities for detecting similarities and regularities in sound streams. To investigate the limits of these processes, professional musicians have been tested in some of the studies. The results show that auditory grouping is already more advanced in musicians than in nonmusicians and that the auditory system of musicians can, unlike that of nonmusicians, detect a numerical regularity of always four tones in a series. These results suggest that sensory auditory processing in musicians is not only a fine tuning of universal abilities, but is also qualitatively more advanced than in nonmusicians. In addition, the relationship between the auditory change-detection function and perception is examined. It is shown that, contrary to the generally accepted view, MMN elicitation does not necessarily correlate with perception. The outcome of the auditory change-detection function can be implicit and the implicit knowledge of the sound structure can, after training, be utilized for behaviorally correct intuitive sound detection. These results illustrate the automatic character of the sensory change detection function.
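The deviant-detection measure described above is conventionally obtained by subtracting the averaged standard response from the averaged deviant response. A minimal NumPy sketch of that computation (the function name, epoch arrays, and latency window are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, times, window=(0.1, 0.25)):
    """Compute the MMN difference wave and its peak amplitude/latency.

    standard_epochs, deviant_epochs : (n_trials, n_samples) arrays of single-trial ERPs
    times  : (n_samples,) latencies in seconds
    window : latency window (s) in which to search for the MMN peak
    """
    std_erp = standard_epochs.mean(axis=0)   # averaged response to standards
    dev_erp = deviant_epochs.mean(axis=0)    # averaged response to deviants
    diff = dev_erp - std_erp                 # deviant-minus-standard difference wave
    mask = (times >= window[0]) & (times <= window[1])
    peak_idx = np.argmin(diff[mask])         # MMN is a negative deflection
    return diff, diff[mask][peak_idx], times[mask][peak_idx]
```

In a real oddball recording the epochs would come from an unattended sound sequence in which deviants occur infrequently among repeating standards, as in the studies summarized above.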
Abstract:
In a musical context, the pitch of sounds is encoded according to domain-general principles not confined to music or even to audition overall but common to other perceptual and cognitive processes (such as multiple pattern encoding and feature integration), and to domain-specific and culture-specific properties related to a particular musical system only (such as the pitch steps of the Western tonal system). The studies included in this thesis shed light on the processing stages during which pitch encoding occurs on the basis of both domain-general and music-specific properties, and elucidate the putative brain mechanisms underlying pitch-related music perception. Study I showed, in subjects without formal musical education, that the pitch and timbre of multiple sounds are integrated as unified object representations in sensory memory before attentional intervention. Similarly, multiple pattern pitches are simultaneously maintained in non-musicians' sensory memory (Study II). These findings demonstrate the degree of sophistication of pitch processing at the sensory memory stage, requiring neither attention nor any special expertise of the subjects. Furthermore, music- and culture-specific properties, such as the pitch steps of the equal-tempered musical scale, are automatically discriminated in sensory memory even by subjects without formal musical education (Studies III and IV). The cognitive processing of pitch according to culture-specific musical-scale schemata hence occurs as early as at the sensory-memory stage of pitch analysis. Exposure and cortical plasticity seem to be involved in musical pitch encoding. For instance, after only one hour of laboratory training, the neural representations of pitch in the auditory cortex are altered (Study V). However, faulty brain mechanisms for attentive processing of fine-grained pitch steps lead to inborn deficits in music perception and recognition such as those encountered in congenital amusia (Study VI). 
These findings suggest that predispositions for exact pitch-step discrimination together with long-term exposure to music govern the acquisition of the automatized schematic knowledge of the music of a particular culture that even non-musicians possess.
Abstract:
The neural basis of visual perception can be understood only when the sequence of cortical activity underlying successful recognition is known. The early steps in this processing chain, from the retina to the primary visual cortex, are highly local, and the perception of more complex shapes requires integration of the local information. In Study I of this thesis, the progression from local to global visual analysis was assessed by recording cortical magnetoencephalographic (MEG) responses to arrays of elements that either did or did not form global contours. The results demonstrated two spatially and temporally distinct stages of processing: the first, emerging 70 ms after stimulus onset around the calcarine sulcus, was sensitive to local features only, whereas the second, starting at 130 ms across the occipital and posterior parietal cortices, reflected the global configuration. To explore the links between cortical activity and visual recognition, Studies II–III presented subjects with recognition tasks of varying levels of difficulty. The occipito-temporal responses from 150 ms onwards were closely linked to recognition performance, in contrast to the 100-ms mid-occipital responses. The averaged responses increased gradually as a function of recognition performance, and further analysis (Study III) showed the single-response strengths to be graded as well. Study IV addressed the attention dependence of the different processing stages: occipito-temporal responses peaking around 150 ms depended on the content of the visual field (faces vs. houses), whereas the later and more sustained activity was strongly modulated by the observers' attention. Hemodynamic responses paralleled the pattern of the more sustained electrophysiological responses. Study V assessed the temporal processing capacity of the human object-recognition system: once the luminance, contrast, and size of the object were sufficient, processing speed was not limited by such low-level factors.
Taken together, these studies demonstrate several distinct stages in the cortical activation sequence underlying the object recognition chain, reflecting the level of feature integration, difficulty of recognition, and direction of attention.
Abstract:
In the present work, the effects of stimulus repetition and change in a continuous stimulus stream on the processing of somatosensory information in the human brain were studied. Human scalp-recorded somatosensory event-related potentials (ERPs) and magnetoencephalographic (MEG) responses rapidly diminished with stimulus repetition when mechanical or electric stimuli were applied to the fingers. In contrast, when the ERPs and multi-unit activity (MUA) were recorded directly from the primary (SI) and secondary (SII) somatosensory cortices in a monkey, there was no marked decrement in the somatosensory responses as a function of stimulus repetition. These results suggest that this rate effect is not due to response diminution in the SI and SII cortices. Apparently, the responses to the first stimulus after a long "silent" period are enhanced due to unspecific initial orientation, originating in more broadly distributed and/or deeper neural structures, perhaps in the prefrontal cortices. With fast repetition rates, not only the late unspecific but also some early specific somatosensory ERPs were diminished in amplitude. The fast decrease of the ERPs as a function of stimulus repetition is mainly due to the disappearance of the orientation effect and, with faster repetition rates, additionally to stimulus-specific refractoriness. A sudden infrequent change in the continuous stimulus stream also enhanced somatosensory MEG responses to electric stimuli applied to different fingers. These responses were quite similar to those elicited by the deviant stimuli alone when the frequent standard stimuli were omitted. This enhancement was evidently due to release from refractoriness, because the neural structures generating the responses to the infrequent deviants had more time to recover from refractoriness than the respective structures for the standards.
Infrequent deviant mechanical stimuli among frequent standard stimuli also enhanced somatosensory ERPs and, in addition, elicited a new negative wave which did not occur in the deviants-alone condition. This extra negativity could be recorded in response to deviations both in the stimulation site and in the frequency of the vibratory stimuli. This response is probably a somatosensory analogue of the auditory mismatch negativity (MMN), which has been suggested to reflect a neural mismatch process between the sensory input and the sensory memory trace.
Abstract:
Cognitive impairments of attention, memory, and executive functions are a fundamental feature of the pathophysiology of schizophrenia. Neurophysiological and neurochemical changes in the auditory cortex have been shown to underlie cognitive impairments in schizophrenia patients. The functional state of the neural substrate of auditory information processing can be probed objectively and non-invasively with auditory event-related potentials (ERPs) and event-related fields (ERFs). In the current work, we explored neurochemical effects on the neural origins of auditory information processing in relation to schizophrenia. By means of ERPs/ERFs, we aimed to determine how the neural substrates of auditory information processing are modulated by antipsychotic medication in schizophrenia spectrum patients (Studies I, II) and by neuropharmacological challenges in healthy human subjects (Studies III, IV). First, with auditory ERPs we investigated the effects of olanzapine (Study I) and risperidone (Study II) in a group of patients with schizophrenia spectrum disorders. After 2 and 4 weeks of treatment, olanzapine had no significant effects on the mismatch negativity (MMN) and P300, which have been suggested to reflect preattentive and attention-dependent information processing, respectively. After 2 weeks of treatment, risperidone had no significant effect on P300; however, it reduced the P200 amplitude. This effect of risperidone on the neural resources responsible for P200 generation could be partly explained through the action of dopamine. Subsequently, we used simultaneous EEG/MEG to investigate the effects of memantine (Study III) and methylphenidate (Study IV) in healthy subjects. We found that memantine modulates the MMN response without changing other ERP components.
This could be interpreted as an influence of memantine, through NMDA receptors, on the auditory change-detection mechanism, with the processing of auditory stimuli remaining otherwise unchanged. Further, we found that methylphenidate does not modulate the MMN response. This finding could indicate no association between catecholaminergic activity and the electrophysiological measures of preattentive auditory discrimination processes reflected in the MMN. However, methylphenidate decreased the P200 amplitude. This could be interpreted as a modulation of auditory information processing, as reflected in P200, by the dopaminergic and noradrenergic systems. Taken together, our set of studies indicates a complex pattern of neurochemical influences produced by antipsychotic drugs on the neural substrate of auditory information processing in patients with schizophrenia spectrum disorders, and by pharmacological challenges in healthy subjects, as studied with ERPs and ERFs.
Abstract:
Al-5 wt pct Si alloy is processed by upset forging in the temperature range 300 K to 800 K and in the strain rate range 0.02 to 200 s⁻¹. The hardness and tensile properties of the product have been studied. A "safe" window in the strain-rate–temperature field has been identified for processing of this alloy to obtain maximum tensile ductility in the product. For the above strain rate range, the temperature range of processing is 550 K to 700 K for obtaining high ductility in the product. On the basis of the microstructure and the ductility of the product, the temperature–strain-rate regimes of damage due to cavity formation at particles and wedge cracking have been isolated for this alloy. The tensile fracture features recorded on the product specimens are in conformity with the above damage mechanisms. A high-temperature treatment above ≈600 K followed by fairly fast cooling gives solid-solution strengthening in the alloy at room temperature.
Abstract:
It has been suggested that semantic information processing is modularized according to the input form (e.g., visual, verbal, non-verbal sound). A great deal of research has concentrated on detecting a separate verbal module. It has also traditionally been assumed in linguistics that the meaning of a single clause is computed before integration into a wider context. Recent research has called these views into question. The present study explored whether it is reasonable to assume separate verbal and nonverbal semantic systems in the light of evidence from event-related potentials (ERPs). The study also provided information on whether the context influences the processing of a single clause before the local meaning is computed. The focus was on an ERP component called the N400. Its amplitude is assumed to reflect the effort required to integrate an item into the preceding context. For instance, if a word is anomalous in its context, it will elicit a larger N400. The N400 has been observed in experiments using both verbal and nonverbal stimuli. The contents of a single sentence were not hypothesized to influence the N400 amplitude; only the combined contents of the sentence and the picture were. The subjects (n = 17) viewed pictures on a computer screen while hearing sentences through headphones. Their task was to judge the congruency of the picture and the sentence. There were four conditions: 1) the picture and the sentence were congruent and sensible, 2) the picture and the sentence were congruent, but the sentence ended anomalously, 3) the picture and the sentence were incongruent but sensible, 4) the picture and the sentence were incongruent and anomalous. Stimuli from the four conditions were presented in a semi-randomized sequence while the subjects' electroencephalogram (EEG) was recorded. ERPs were computed for each of the four conditions. The amplitude of the N400 effect was largest for the incongruent sentence–picture pairs.
The anomalously ending sentences did not elicit a larger N400 than the sensible sentences. The results suggest that there is no separate verbal semantic system, and that the meaning of a single clause is not processed independently of its context.
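The condition-wise comparison described above can be sketched numerically: the N400 effect is commonly quantified as the mean amplitude of the averaged ERP in roughly the 300–500 ms window, computed separately for each condition. A minimal sketch, assuming hypothetical epoch arrays keyed by condition label (names and window are illustrative, not from the study):

```python
import numpy as np

def n400_mean_amplitude(epochs, times, window=(0.3, 0.5)):
    """Mean amplitude of the averaged ERP within the N400 latency window.

    epochs : (n_trials, n_samples) array of single-trial ERPs for one condition
    times  : (n_samples,) latencies in seconds
    """
    erp = epochs.mean(axis=0)                      # condition average
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

def compare_conditions(condition_epochs, times):
    """Return the N400 mean amplitude per condition (dict of label -> value)."""
    return {name: n400_mean_amplitude(ep, times)
            for name, ep in condition_epochs.items()}
```

A more negative value in, say, an "incongruent" condition than in a "congruent" one would correspond to the larger N400 effect reported for the incongruent sentence–picture pairs.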
Abstract:
A method is presented to find nonstationary random seismic excitations, with a constraint on the mean square value, such that the response variance of a given linear system is maximized. It is also possible to incorporate the dominant input frequency into the analysis. The excitation is taken to be the product of a deterministic enveloping function and a zero-mean Gaussian stationary random process. The power spectral density function of this process is determined such that the response variance is maximized. Numerical results are presented for a single-degree-of-freedom system and for an earth embankment modeled as a shear beam.
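The core idea can be illustrated numerically for the stationary part of the problem: a linear system's response variance is the integral of |H(ω)|² S(ω) over frequency, so under a fixed mean-square constraint on the excitation (∫S(ω)dω fixed), the variance is maximized by concentrating the spectral power near the system's resonance. A sketch for a single-degree-of-freedom oscillator (the natural frequency, damping ratio, and grid are illustrative, not the paper's embankment model):

```python
import numpy as np

def H2(w, w0=2 * np.pi, zeta=0.05):
    """Squared magnitude of the SDOF frequency response (unit mass):
    |H(w)|^2 = 1 / ((w0^2 - w^2)^2 + (2*zeta*w0*w)^2)."""
    return 1.0 / ((w0**2 - w**2) ** 2 + (2.0 * zeta * w0 * w) ** 2)

w = np.linspace(0.01, 4 * np.pi, 2000)   # frequency grid (rad/s)
dw = w[1] - w[0]
power = 1.0                               # mean-square constraint: integral of S(w) dw

# Candidate 1: flat PSD spreading the allowed power over the whole band.
S_flat = np.full_like(w, power / (w.size * dw))
var_flat = np.sum(H2(w) * S_flat) * dw

# Candidate 2: PSD concentrating all allowed power at the resonance peak of |H|^2.
S_peak = np.zeros_like(w)
S_peak[np.argmax(H2(w))] = power / dw
var_peak = np.sum(H2(w) * S_peak) * dw
```

With a lightly damped oscillator, `var_peak` exceeds `var_flat` by a wide margin, which is why the optimal PSD in problems of this type tends to cluster power near the system's resonant frequency; the dominant-input-frequency constraint mentioned above would shift or limit where that power may be placed.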