987 results for sound processing
The mismatch negativity (MMN) response to complex tones and spoken words in individuals with aphasia
Abstract:
Background: The mismatch negativity (MMN) is a fronto-centrally distributed event-related potential (ERP) that is elicited by any discriminable auditory change. It is an ideal neurophysiological tool for measuring the auditory processing skills of individuals with aphasia because it can be elicited even in the absence of attention. Previous MMN studies have shown that acoustic processing of tone or pitch deviance is relatively preserved in aphasia, whereas the basic acoustic processing of speech stimuli can be impaired (e.g., auditory discrimination). However, no MMN study has yet investigated the higher levels of auditory processing, such as language-specific phonological and/or lexical processing, in individuals with aphasia. Aims: The aim of the current study was to investigate the MMN response of normal and language-disordered subjects to tone stimuli and speech stimuli that incorporate the basic auditory processing (acoustic, acoustic-phonetic) levels of non-speech and speech sound processing, and also the language-specific phonological and lexical levels of spoken word processing. Furthermore, this study aimed to correlate the aphasic MMN data with language performance on a variety of tasks specifically targeted at the different levels of spoken word processing. Methods & Procedures: Six adults with aphasia (71.7 ± 3.0 years) and six healthy age-, gender-, and education-matched controls (72.2 ± 5.4 years) participated in the study. All subjects were right-handed and native speakers of English. Each subject was presented with complex harmonic tone stimuli, differing in pitch or duration, and consonant-vowel (CV) speech stimuli (non-word /de:/ versus real word /deI/). The probability of the deviant for each tone or speech contrast was 10%. The subjects were also presented with the same stimuli in behavioural discrimination tasks, and were administered a language assessment battery to measure their auditory comprehension skills. Outcomes & Results: The aphasic subjects demonstrated attenuated MMN responses to complex tone duration deviance and to speech stimuli (words and non-words), and their responses to the frequency, duration, and real-word deviant stimuli were found to correlate strongly with performance on the auditory comprehension section of the Western Aphasia Battery (WAB). Furthermore, deficits in attentional lexical decision skills demonstrated by the aphasic subjects correlated with a word-related enhancement demonstrated during the automatic MMN paradigm, providing evidence to support the word advantage effect, thought to reflect the activation of language-specific memory traces in the brain for words. Conclusions: These results indicate that the MMN may be used as a technique for investigating general and more specific auditory comprehension skills of individuals with aphasia, using speech and/or non-speech stimuli, independent of the individual's attention. The combined use of the objective MMN technique and current clinical language assessments may result in improved rehabilitative management of aphasic individuals.
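For readers unfamiliar with the passive oddball design described in the Methods, the sketch below illustrates how such a stimulus sequence (90% standards, 10% deviants) might be generated. The stimulus labels, the no-repetition constraint, and all parameter values are illustrative assumptions, not details taken from this study.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch of an oddball stimulus sequence: frequent standards
// with deviants at a nominal 10% probability. The constraint that two
// deviants never occur back-to-back is a common MMN convention (assumed
// here, not stated in the abstract); it makes the realized deviant rate
// fall slightly below the nominal probability.
public class OddballSequence {

    static List<String> build(int nTrials, double deviantProb, long seed) {
        Random rng = new Random(seed);
        List<String> sequence = new ArrayList<>();
        boolean lastWasDeviant = true; // forces the sequence to open with standards
        for (int i = 0; i < nTrials; i++) {
            boolean deviant = !lastWasDeviant && rng.nextDouble() < deviantProb;
            sequence.add(deviant ? "deviant" : "standard");
            lastWasDeviant = deviant;
        }
        return sequence;
    }

    public static void main(String[] args) {
        List<String> trials = build(500, 0.10, 42L);
        long deviants = trials.stream().filter("deviant"::equals).count();
        System.out.println("Deviants: " + deviants + " of " + trials.size());
    }
}
```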
Abstract:
SoundCipher is a software library, written in the Java language, that adds music and sound features to the Processing environment, which is widely used by media artists but is otherwise oriented toward computational graphics. This article introduces the SoundCipher library and its features, describes its influences and design intentions, and positions it within the field of computer music programming tools. SoundCipher enables the rich history of algorithmic music techniques to be accessible within one of today’s most popular media art platforms. It also provides an accessible means for learning to create algorithmic music and sound programs.
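As a flavour of what the library makes possible, here is a minimal algorithmic-music sketch in the style of SoundCipher's documented examples. The `arb.soundcipher` import and the `playNote(pitch, dynamic, duration)` call follow the library's published usage, but treat the exact signatures as an assumption rather than a definitive reference.

```java
// Minimal Processing (Java) sketch; assumes the SoundCipher library is
// installed. playNote(pitch, dynamic, duration) takes MIDI-style values,
// per the library's published examples (signatures assumed, not verified).
import arb.soundcipher.*;

SoundCipher sc = new SoundCipher(this);
float[] scale = {60, 62, 65, 67, 70}; // C pentatonic pitches

void setup() {
  frameRate(4); // draw() fires four times per second: four notes per second
}

void draw() {
  // Simple algorithmic music: pick a random pitch from the scale each frame.
  sc.playNote(scale[(int) random(scale.length)], 80, 0.2f);
}
```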
Abstract:
The auditory system can detect occasional changes (deviants) in acoustic regularities without the need for subjects to focus their attention on the sound material. Deviant detection is reflected in the elicitation of the mismatch negativity component (MMN) of the event-related potentials. In the studies presented in this thesis, the MMN is used to investigate the auditory abilities for detecting similarities and regularities in sound streams. To investigate the limits of these processes, professional musicians have been tested in some of the studies. The results show that auditory grouping is already more advanced in musicians than in nonmusicians and that the auditory system of musicians can, unlike that of nonmusicians, detect a numerical regularity of always four tones in a series. These results suggest that sensory auditory processing in musicians is not only a fine tuning of universal abilities, but is also qualitatively more advanced than in nonmusicians. In addition, the relationship between the auditory change-detection function and perception is examined. It is shown that, contrary to the generally accepted view, MMN elicitation does not necessarily correlate with perception. The outcome of the auditory change-detection function can be implicit and the implicit knowledge of the sound structure can, after training, be utilized for behaviorally correct intuitive sound detection. These results illustrate the automatic character of the sensory change detection function.
Abstract:
The assumption that ignoring irrelevant sound in a serial recall situation is identical to ignoring a non-target channel in dichotic listening is challenged. Dichotic listening is open to moderating effects of working memory capacity (Conway et al., 2001) whereas irrelevant sound effects (ISE) are not (Beaman, 2004). A right ear processing bias is apparent in dichotic listening, whereas the bias is to the left ear in the ISE (Hadlington et al., 2004). Positron emission tomography (PET) imaging data (Scott et al., 2004, submitted) show bilateral activation of the superior temporal gyrus (STG) in the presence of intelligible, but ignored, background speech and right hemisphere activation of the STG in the presence of unintelligible background speech. It is suggested that the right STG may be involved in the ISE and a particularly strong left ear effect might occur because of the contralateral connections in audition. It is further suggested that left STG activity is associated with dichotic listening effects and may be influenced by working memory span capacity. The relationship of this functional and neuroanatomical model to known neural correlates of working memory is considered.
Abstract:
The Retrieval-Induced Forgetting (RIF) paradigm includes three phases: (a) study/encoding of category exemplars, (b) practicing retrieval of a sub-set of those category exemplars, and (c) recall of all exemplars. At the final recall phase, recall of items that belong to the same categories as those items that undergo retrieval-practice, but that do not undergo retrieval-practice, is impaired. The received view is that this is because retrieval of target category-exemplars (e.g., ‘Tiger’ in the category Four-legged animal) requires inhibition of non-target category-exemplars (e.g., ‘Dog’ and ‘Lion’) that compete for retrieval. Here, we used the RIF paradigm to investigate whether ignoring auditory items during the retrieval-practice phase modulates the inhibitory process. In two experiments, RIF was present when retrieval-practice was conducted in quiet and when conducted in the presence of spoken words that belonged to a category other than that of the items that were targets for retrieval-practice. In contrast, RIF was abolished when words that either were identical to the retrieval-practice words or were only semantically related to the retrieval-practice words were presented as background speech. The results suggest that the act of ignoring speech can reduce inhibition of the non-practiced category-exemplars, thereby eliminating RIF, but only when the spoken words are competitors for retrieval (i.e., belong to the same semantic category as the to-be-retrieved items).
Abstract:
Patients with schizophrenia are impaired in many aspects of auditory processing, but indirect evidence suggests that intensity perception is intact. However, because the extraction of meaning from dynamic intensity relies on structures that appear to be altered in schizophrenia, we hypothesized that the perception of auditory looming is impaired as well. Twenty inpatients with schizophrenia and 20 control participants, matched for age, gender, and education, gave intensity ratings of rising (looming) and falling intensity sounds with different mean intensities. Intensity change was overestimated in looming as compared with receding sounds in both groups. However, healthy individuals showed a stronger effect at higher mean intensity, in keeping with previous findings, while patients with schizophrenia lacked this modulation. We discuss how this might support the notion of a more general deficit in extracting emotional meaning from different sensory cues, including intensity and pitch.
Abstract:
While the bathymetric echosounder is the essential device for carrying out hydrographic surveys, other external sensors are also indispensable (positioning system, motion unit, sound velocity profiler). Because sound does not travel in a straight line across the whole bathymetric swath, its measurement and processing are highly sensitive to conditions throughout the water column. DORIS is an operational answer to sound velocity profile processing.
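The sensitivity to the water column comes from refraction: as sound speed changes with depth, a beam bends according to Snell's law, cos(a1)/c1 = cos(a2)/c2 for grazing angles. The sketch below traces one ray through a layered profile to illustrate the effect; the profile values are invented for illustration and are unrelated to DORIS's actual processing.

```java
// Illustrative sketch: trace a ray through a layered sound velocity profile
// using Snell's law for grazing angles, cos(a1)/c1 = cos(a2)/c2.
// Profile values are invented for demonstration, not real survey data.
public class RayTrace {
    public static void main(String[] args) {
        double[] depth = {0, 10, 50, 100};          // layer tops (m)
        double[] speed = {1510, 1500, 1490, 1485};  // sound speed per layer (m/s)
        double angle = Math.toRadians(45);          // grazing angle at the transducer
        double x = 0;                               // across-track offset (m)
        for (int i = 0; i < depth.length - 1; i++) {
            double dz = depth[i + 1] - depth[i];
            x += dz / Math.tan(angle);              // straight segment within the layer
            // Refract at the layer boundary: cos(a2) = cos(a1) * c2 / c1.
            // The clamp guards against a turning point (cos > 1) in this sketch.
            double cosNext = Math.min(1.0, Math.cos(angle) * speed[i + 1] / speed[i]);
            angle = Math.acos(cosNext);
        }
        System.out.printf("Ray reaches %.1f m depth at %.1f m across-track%n",
                depth[depth.length - 1], x);
    }
}
```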
Abstract:
Sound tagging has been studied for years. Among all sound types, music, speech, and environmental sound are the three most active research areas. This survey aims to provide an overview of the state-of-the-art development in these areas. We discuss the meaning of tagging in the different sound areas at the beginning of the journey. Some examples of sound tagging applications are introduced in order to illustrate the significance of this research. Typical tagging techniques include manual, automatic, and semi-automatic approaches. After reviewing work in music, speech, and environmental sound tagging, we compare them and state the research progress to date. Research gaps are identified for each research area, and the common features of, and differences between, the three areas are discovered as well. Published datasets, tools used by researchers, and evaluation measures frequently applied in the analysis are listed. In the end, we summarise the worldwide distribution of countries dedicated to sound tagging research over the years.
Abstract:
Organizations make increasing use of social media in order to compete for customer awareness and to improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field at the present time. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigations. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online data of social media in order to reconstruct digital social signals; matching the signals with consumers’ life events; and configuring individualized goods and service offerings tailored to the individual needs of customers. We further contribute to tying together the loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
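Read as a pipeline, the four sequential DSSP activities suggest a natural software skeleton. The sketch below is purely hypothetical: the paper defines a concept, not an API, and every type and method name here is invented for illustration.

```java
import java.util.List;

// Hypothetical skeleton of the four sequential DSSP activities described
// above. All type and method names are invented for illustration; the
// DSSP concept itself prescribes no API.
interface DsspPipeline {
    List<RawPost> sense(String socialMediaSource);   // 1. sensing digital social signals
    List<Signal> decode(List<RawPost> posts);        // 2. reconstructing signals from online data
    List<LifeEvent> match(List<Signal> signals);     // 3. matching signals with life events
    List<Offer> configure(List<LifeEvent> events);   // 4. configuring individualized offerings

    // Convenience: run all four activities in order.
    default List<Offer> run(String source) {
        return configure(match(decode(sense(source))));
    }
}

record RawPost(String author, String text) {}
record Signal(String author, String event, double confidence) {}
record LifeEvent(String customer, String eventType) {}
record Offer(String customer, String product) {}
```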
Abstract:
In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an on-line questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available in order that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies.
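The 1-second length normalization mentioned above is simple to reproduce for new stimuli. The sketch below shows one way to force a mono PCM buffer to exactly one second; the sample rate and the zero-padding choice are illustrative assumptions, not the authors' procedure.

```java
// Illustrative sketch: force a mono PCM buffer to exactly one second by
// truncating or zero-padding. Whether the original norms used padding,
// truncation, or resampling is not stated in the abstract; this is an
// assumption for demonstration.
public class LengthNormalize {

    static short[] toOneSecond(short[] samples, int sampleRate) {
        short[] out = new short[sampleRate];         // one second of samples
        int n = Math.min(samples.length, sampleRate);
        System.arraycopy(samples, 0, out, 0, n);     // copy; any tail stays zero
        return out;
    }

    public static void main(String[] args) {
        short[] clip = new short[66_150];            // 1.5 s at 44.1 kHz
        short[] norm = toOneSecond(clip, 44_100);
        System.out.println("Normalized length: " + norm.length + " samples");
    }
}
```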
Abstract:
Comprehension of a complex acoustic signal - speech - is vital for human communication, with numerous brain processes required to convert the acoustics into an intelligible message. In four studies in the present thesis, cortical correlates for different stages of speech processing in a mature linguistic system of adults were investigated. In two further studies, developmental aspects of cortical specialisation and its plasticity in adults were examined. In the present studies, electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of the mismatch negativity (MMN) response elicited by changes in repetitive unattended auditory events and the phonological mismatch negativity (PMN) response elicited by unexpected speech sounds in attended speech inputs served as the main indicators of cortical processes. Changes in speech sounds elicited the MMNm, the magnetic equivalent of the electric MMN, that differed in generator loci and strength from those elicited by comparable changes in non-speech sounds, suggesting intra- and interhemispheric specialisation in the processing of speech and non-speech sounds at an early automatic processing level. This neuronal specialisation for the mother tongue was also reflected in the more efficient formation of stimulus representations in auditory sensory memory for typical native-language speech sounds compared with those formed for unfamiliar, non-prototype speech sounds and simple tones. Further, adding a speech or non-speech sound context to syllable changes was found to modulate the MMNm strength differently in the left and right hemispheres. Following the acoustic-phonetic processing of speech input, phonological effort related to the selection of possible lexical (word) candidates was linked with distinct left-hemisphere neuronal populations. In summary, the results suggest functional specialisation in the neuronal substrates underlying different levels of speech processing. Subsequently, plasticity of the brain's mature linguistic system was investigated in adults, in whom representations for an aurally-mediated communication system, Morse code, were found to develop within the same hemisphere where representations for the native-language speech sounds were already located. Finally, recording and localization of the MMNm response to changes in speech sounds was successfully accomplished in newborn infants, encouraging future MEG investigations on, for example, the state of neuronal specialisation at birth.
Abstract:
Autism and Asperger syndrome (AS) are neurodevelopmental disorders characterised by deficient social and communication skills, as well as restricted, repetitive patterns of behaviour. The language development in individuals with autism is significantly delayed and deficient, whereas in individuals with AS, the structural aspects of language develop quite normally. Both groups, however, have semantic-pragmatic language deficits. The present thesis investigated auditory processing in individuals with autism and AS. In particular, the discrimination of and orienting to speech and non-speech sounds was studied, as well as the abstraction of invariant sound features from speech-sound input. Altogether five studies were conducted with auditory event-related brain potentials (ERP); two studies also included a behavioural sound-identification task. In three studies, the subjects were children with autism, in one study children with AS, and in one study adults with AS. In children with autism, even the early stages of sound encoding were deficient. In addition, these children had altered sound-discrimination processes characterised by enhanced spectral but deficient temporal discrimination. The enhanced pitch discrimination may partly explain the auditory hypersensitivity common in autism, and it may compromise the filtering of relevant auditory information from irrelevant information. Indeed, it was found that when sound discrimination required abstracting invariant features from varying input, children with autism maintained their superiority in pitch processing, but lost it in vowel processing. Finally, involuntary orienting to sound changes was deficient in children with autism in particular with respect to speech sounds. This finding is in agreement with previous studies on autism suggesting deficits in orienting to socially relevant stimuli. In contrast to children with autism, the early stages of sound encoding were fairly unimpaired in children with AS. However, sound discrimination and orienting were rather similarly altered in these children as in those with autism, suggesting correspondences in the auditory phenotype in these two disorders which belong to the same continuum. Unlike children with AS, adults with AS showed enhanced processing of duration changes, suggesting developmental changes in auditory processing in this disorder.
Abstract:
It has been suggested that semantic information processing is modularized according to the input form (e.g., visual, verbal, non-verbal sound). A great deal of research has concentrated on detecting a separate verbal module. Also, it has traditionally been assumed in linguistics that the meaning of a single clause is computed before integration into a wider context. Recent research has called these views into question. The present study explored whether it is reasonable to assume separate verbal and nonverbal semantic systems in the light of the evidence from event-related potentials (ERPs). The study also provided information on whether the context influences the processing of a single clause before its local meaning is computed. The focus was on an ERP called the N400. Its amplitude is assumed to reflect the effort required to integrate an item into the preceding context. For instance, if a word is anomalous in its context, it will elicit a larger N400. The N400 has been observed in experiments using both verbal and nonverbal stimuli. The contents of a single sentence were not hypothesized to influence the N400 amplitude; only the combined contents of the sentence and the picture were hypothesized to influence the N400. The subjects (n = 17) viewed pictures on a computer screen while hearing sentences through headphones. Their task was to judge the congruency of the picture and the sentence. There were four conditions: 1) the picture and the sentence were congruent and sensible, 2) the sentence and the picture were congruent, but the sentence ended anomalously, 3) the picture and the sentence were incongruent but sensible, and 4) the picture and the sentence were incongruent and anomalous. Stimuli from the four conditions were presented in a semi-randomized sequence, and the subjects' electroencephalogram (EEG) was recorded simultaneously. ERPs were computed for the four conditions. The amplitude of the N400 effect was largest for the incongruent sentence–picture pairs. The anomalously ending sentences did not elicit a larger N400 than the sensible sentences. The results suggest that there is no separate verbal semantic system, and that the meaning of a single clause is not processed independently of the context.