1000 results for environmental sounds


Relevance:

100.00%

Publisher:

Abstract:

Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes of living and manmade objects relative to a high-level auditory perceptual baseline, in order to investigate the differential patterns of response to these contrasting types of object-related attributes whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we investigate the problem of classifying a subset of environmental sounds in movie audio tracks that indicate specific indexical semiotic use. These environmental sounds are used to signify and enhance events occurring in film scenes. We propose a classification system for detecting the presence of violence and car chase scenes in film by classifying ten different environmental sounds that form the constituent audio events of these scenes, using a number of old and new audio features. Experiments with our classification system on pure test sounds resulted in a correct event classification rate of 88.9%. We also present the results of the classifier on the mixed audio tracks of several scenes taken from The Mummy and Lethal Weapon 2. The classification of sound events is the first step towards determining the presence of complex sound scenes within film audio and describing the thematic content of the scenes.
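A minimal sketch of the kind of event-level classification pipeline described above, assuming MFCC summary features and a support-vector classifier stand in for the paper's unspecified "old and new audio features"; the file paths, labels, and feature choices are illustrative, not taken from the study.

```python
# Hedged sketch: event-level classification of environmental film sounds.
# MFCC features and an SVM are assumptions; the study's own feature set and
# classifier are not specified here.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_summary(path, sr=22050, n_mfcc=13):
    """Summarise one sound file as the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_event_classifier(files, labels):
    """files: paths to pure event sounds; labels: e.g. 'gunshot', 'tyre_squeal'."""
    X = np.stack([mfcc_summary(f) for f in files])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf

def classify_event(clf, path):
    """Predict the event label of a single test sound."""
    return clf.predict(mfcc_summary(path)[np.newaxis, :])[0]
```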

Relevance:

70.00%

Publisher:

Abstract:

In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an online questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available so that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies.
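A minimal sketch of the length-normalisation step mentioned above (fixing every sound to 1 second), assuming a simple trim-or-zero-pad approach; the sample rate and padding strategy are illustrative assumptions, not details from the norming study.

```python
# Hedged sketch: force each stimulus to exactly 1 second, as in the norming
# procedure described above. Trim-or-zero-pad and the 44.1 kHz rate are
# assumptions; the authors' exact editing method is not specified here.
import numpy as np
import librosa
import soundfile as sf

def normalise_to_one_second(in_path, out_path, sr=44100):
    y, _ = librosa.load(in_path, sr=sr, mono=True)
    target = sr                                 # 1 second worth of samples
    if len(y) >= target:
        y = y[:target]                          # trim to the first second
    else:
        y = np.pad(y, (0, target - len(y)))     # zero-pad up to one second
    sf.write(out_path, y, sr)
```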

Relevance:

70.00%

Publisher:

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects, selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).

Relevance:

70.00%

Publisher:

Abstract:

Adults diagnosed with autism spectrum disorder (ASD) show a reduced sensitivity (degree of selective response) to social stimuli such as human voices. In order to determine whether this reduced sensitivity is a consequence of years of poor social interaction and communication or is present prior to significant experience, we used functional MRI to examine cortical sensitivity to auditory stimuli in infants at high familial risk for later-emerging ASD (HR group, N = 15), and compared this to infants with no family history of ASD (LR group, N = 18). The infants (aged between 4 and 7 months) were presented with voice and environmental sounds while asleep in the scanner, and their behaviour was also examined in the context of observed parent-infant interaction. Whereas LR infants showed early specialisation for human voice processing in right temporal and medial frontal regions, the HR infants did not. Similarly, LR infants showed stronger sensitivity than HR infants to sad vocalisations in the right fusiform gyrus and left hippocampus. Also, in the HR group only, there was an association between each infant's degree of engagement during social interaction and the degree of voice sensitivity in key cortical regions. These results suggest that at least some infants at high risk for ASD have atypical neural responses to the human voice, with and without emotional valence. Further exploration of the relationship between behaviour during social interaction and voice processing may help us better understand the mechanisms that lead to different outcomes in at-risk populations.

Relevance:

60.00%

Publisher:

Abstract:

In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

Relevance:

60.00%

Publisher:

Abstract:

Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obscuring distinctions in individual performance and brain mechanisms that are better characterised by inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. So far, single-trial analysis has typically been based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at the average event-related potential (ERP) level.

Methods: Nine healthy subjects participated in the experiment. Auditory meaningful sounds of common objects were used in a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level.

Results: The occurrence of each map is structured in time and consistent across trials at both the single-subject and the group level. By conducting separate analyses of ERPs at the single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analyses at the average ERP level.

Conclusions: This novel approach to single-trial analysis promises to have an impact on several domains. In clinical research, it offers the possibility of statistically evaluating single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding the interdependencies of behaviour and brain activity at both the single-subject and group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and on inter-individual variability.
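A minimal sketch of the two analysis steps outlined above, assuming the single-trial data are arranged as (trials × time × channels) and using scikit-learn's GaussianMixture as a stand-in for the mixture-of-Gaussians method of [3]; the array layout and the number of template maps are illustrative assumptions.

```python
# Hedged sketch: (1) fit a mixture of Gaussians to single-trial EEG
# topographies to obtain representative maps, (2) use the per-sample
# posterior (conditional) probabilities to examine how those maps are
# structured across trials, time, and conditions. The component count and
# data layout are assumptions, not values from the study.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_topographic_maps(eeg, n_maps=4):
    """eeg: array of shape (n_trials, n_times, n_channels) for one condition."""
    n_trials, n_times, n_channels = eeg.shape
    samples = eeg.reshape(-1, n_channels)          # pool trials and timepoints
    gmm = GaussianMixture(n_components=n_maps, covariance_type="diag")
    gmm.fit(samples)
    templates = gmm.means_                         # representative topographies
    # Posterior probability of each map at every (trial, timepoint)
    posteriors = gmm.predict_proba(samples).reshape(n_trials, n_times, n_maps)
    return templates, posteriors
```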

Relevance:

60.00%

Publisher:

Abstract:

Research on the sound phenomenon has evolved considerably since Pierre Schaeffer's theorisations around the concept of the "sound object", allowing us to grasp its full complexity. Pursuing this same aim, we propose a comprehensive approach to the sound phenomenon in urban public space, focusing more specifically on how users interpret the sounds of the city's major commercial streets, in this case those of Montreal. In everyday life, city dwellers stroll and move through public space, becoming aware of their surroundings through their senses. Apart from the visual, the other senses are, most of the time, neglected by designers of urban space. The result is urban design that is relatively poor in sonic terms. This thesis addresses sound from the angle of subjective experience as it is lived by users. The objective of our work is therefore to deepen the understanding of users' sound experience in urban public space so as to integrate its principles upstream of the design process. Theories and methods from the field of the sound environment have their scope of investigation broadened by the anthropology of the senses; the richness of this approach makes it possible to better grasp the multiple dimensions that shape users' lived experience of sound. The frame of reference also draws on artistic practices, whose analysis brings out dimensions useful for understanding the sound experience. The fieldwork was carried out using several data collection methods in order to gather as much qualitative material as possible: observations, qualified listening walks, commented walks and, finally, in-depth interviews. This research has provided a better understanding of the dialogue between sound, space and user by revealing the different dimensions of the sound experience of the major commercial street, notably those surrounding the culture of the senses.

Relevance:

60.00%

Publisher:

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.

Relevance:

60.00%

Publisher:

Abstract:

The full version of this thesis is available for individual consultation only at the Music Library of the Université de Montréal (www.bib.umontreal.ca/MU).

Relevance:

60.00%

Publisher:

Abstract:

Vocal mimicry is one of the more striking aspects of avian vocalization and is widespread across songbirds. However, little is known about how mimics acquire heterospecific and environmental sounds. We investigated geographical and individual variation in the mimetic repertoires of males of a proficient mimic, the spotted bowerbird Ptilonorhynchus maculatus. Male bower owners shared more of their mimetic repertoires with neighbouring bower owners than with more distant males. However, interbower distance did not explain variation in the highly repeatable renditions given by bower owners of two commonly mimicked species. From the similarity between model and mimic vocalizations and the patterns of repertoire sharing among males, we suggest that the bowerbirds are learning their mimetic repertoire from heterospecifics and not from each other.
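A minimal sketch of the repertoire-sharing comparison described above, assuming each male's mimetic repertoire is recorded as a set of model species, sharing is scored with a Jaccard-style overlap, and the relationship with inter-bower distance is tested with a rank correlation; these choices are illustrative, not the statistics reported in the study.

```python
# Hedged sketch: relate pairwise mimetic-repertoire sharing to inter-bower
# distance. The Jaccard index and Spearman correlation are illustrative
# choices, not the analysis reported in the study.
from itertools import combinations
from scipy.stats import spearmanr

def repertoire_sharing(rep_a, rep_b):
    """Jaccard overlap between two sets of mimicked model species."""
    union = rep_a | rep_b
    return len(rep_a & rep_b) / len(union) if union else 0.0

def sharing_vs_distance(repertoires, distances):
    """repertoires: {male_id: set of species}; distances: {(a, b): metres}."""
    pairs = list(combinations(sorted(repertoires), 2))
    sharing = [repertoire_sharing(repertoires[a], repertoires[b]) for a, b in pairs]
    dist = [distances[(a, b)] for a, b in pairs]
    # A negative correlation would indicate that neighbours share more
    return spearmanr(sharing, dist)
```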

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we study the sound tracks in films and their indexical semiotic usage by developing a classification system that detects complex sound scenes and their constituent sound events in cinema. We investigate two main issues: determining what constitutes the presence of a high-level sound scene and what inferences about the thematic content of the scene can be drawn from this presence, and classifying environmental sounds in the audio track of the scene to assist in the automatic detection of the high-level scene. Experiments with our classification system on pure sounds resulted in a correct event classification rate of 88.9%. When the audio content of a number of film scenes was examined, sound event detection was less accurate due to the presence of mixed sounds, but the film audio samples were generally assigned the correct high-level sound scene label, enabling correct inferences about the story content of the scenes.
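A minimal sketch of how the event-level labels might feed the high-level scene decision discussed above, assuming windowed classification followed by a simple count threshold; the event inventories, scene names, and threshold are illustrative assumptions rather than the paper's detection rule.

```python
# Hedged sketch: infer a high-level sound scene (e.g. a car chase) from the
# constituent sound events detected in a stretch of the audio track. The
# event inventories and the count threshold are assumptions, not the
# paper's rule.
from collections import Counter

SCENE_EVENTS = {
    "car_chase": {"engine", "tyre_squeal", "horn", "crash"},
    "violence": {"gunshot", "explosion", "scream", "glass_break"},
}

def detect_scene(event_labels, min_events=3):
    """event_labels: event classifier output for consecutive analysis windows."""
    counts = Counter(event_labels)
    scores = {scene: sum(counts[e] for e in events)
              for scene, events in SCENE_EVENTS.items()}
    best_scene, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_scene if best_score >= min_events else None
```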

Relevance:

60.00%

Publisher:

Abstract:

This article discusses the issue of noise pollution from an unusual point of view: noise pollution is not only the result of the worldwide increase in sound, but also, and particularly, of the poor quality of our listening habits in modern life. In contemporary society we are subject to a considerable amount of stimulation of all our senses: vision, smell, taste and hearing are becoming less and less sensitive due to overexposure in our environment. This increased stimulation makes us look for ways to reduce our ability to perceive it and to protect ourselves from harm; however, our sensitivity also decreases. In the specific case of environmental noise, overexposure has made us forget the enchantment of certain sounds that used to give us pleasure or evoke good feelings in many ways: recalling good things, bringing particular moments of our lives back to memory, or even filling us with strong emotion. The Canadian composer and music educator R. Murray Schafer believes that noise pollution is the result of a society that has become deaf. Closing our ears to noise protects us from noise pollution but also prevents us from grasping the subtleties of listening. The contemporary world does not help us to be aware of the sound in the space around us; acquiring this listening ability is a matter of focus, interest and practice. Sound education exercises are aimed at children, teenagers and adults who want to improve their ability to listen to environmental sounds, perceive their properties and learn how sound affects us and touches our feelings. The results are easy to achieve and contribute to our awareness of the sound environment around us and to a conception of environmental sound as a composition made by everybody and everything, through positive action, strong will and heightened sensitivity. Copyright © (2011) by the International Institute of Acoustics & Vibration.

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Music - IA

Relevance:

30.00%

Publisher:

Abstract:

This research investigates techniques for analysing long-duration acoustic recordings to help ecologists monitor birdcall activity. It designs a generalized algorithm to identify a broad range of bird species, allowing ecologists to search for arbitrary birdcalls of interest rather than restricting them to the very limited number of species on which a recogniser is trained. The algorithm helps ecologists find sounds of interest more efficiently by filtering out large volumes of unwanted sound and focusing only on birdcalls.
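A minimal sketch of the kind of filtering step the abstract alludes to, assuming a band-limited energy detector over a spectrogram is used to flag candidate birdcall segments for an ecologist to review; the frequency band, window sizes, and threshold rule are illustrative assumptions, not the thesis's generalized algorithm.

```python
# Hedged sketch: flag candidate birdcall frames in a long recording by
# thresholding band-limited spectrogram energy against a crude noise floor.
# The frequency band and threshold are assumptions, not the thesis's method.
import numpy as np
import librosa

def candidate_call_times(path, fmin=1500, fmax=8000, sr=22050, threshold_db=12.0):
    y, sr = librosa.load(path, sr=sr, mono=True)
    n_fft, hop = 1024, 512
    power = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    band = (freqs >= fmin) & (freqs <= fmax)
    energy_db = 10 * np.log10(power[band].sum(axis=0) + 1e-12)
    floor = np.median(energy_db)                   # crude noise-floor estimate
    active = energy_db > floor + threshold_db      # frames well above the floor
    # Start times (s) of frames worth reviewing for possible birdcalls
    return librosa.frames_to_time(np.flatnonzero(active), sr=sr, hop_length=hop)
```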