966 results for Computer sound processing
Abstract:
Human electrophysiological studies support a model whereby sensitivity to so-called illusory contour stimuli is first seen within the lateral occipital complex. A challenge to this model posits that the lateral occipital complex is a general site for crude region-based segmentation, based on findings of equivalent hemodynamic activations in the lateral occipital complex to illusory contour and so-called salient region stimuli, a stimulus class that lacks the classic bounding contours of illusory contours. Using high-density electrical mapping of visual evoked potentials, we show that early lateral occipital cortex activity is substantially stronger to illusory contour than to salient region stimuli, whereas later lateral occipital complex activity is stronger to salient region than to illusory contour stimuli. Our results suggest that equivalent hemodynamic activity to illusory contour and salient region stimuli probably reflects temporally integrated responses, a result of the poor temporal resolution of hemodynamic imaging. The temporal precision of visual evoked potentials is critical for establishing viable models of completion processes and visual scene analysis. We propose that crude spatial segmentation analyses, which are insensitive to illusory contours, occur first within dorsal visual regions, not the lateral occipital complex, and that initial illusory contour sensitivity is a function of the lateral occipital complex.
Abstract:
Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that the explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By assessing systematically patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) impaired explicit vs. preserved implicit use dissociations occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between the explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessments of auditory spatial functions in clinical settings, especially when adaptation to auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.
Abstract:
Cobalt-labelled motoneuron dendrites of the frog spinal cord at the level of the second spinal nerve were photographed in the electron microscope from long series of ultrathin sections. Three-dimensional computer reconstructions of 120 dendrite segments were analysed. The samples were taken from two locations: proximal to the cell body and distal, as defined in a transverse plane of the spinal cord. The dendrites showed highly irregular outlines with many 1-2 µm-long 'thorns' (on average 8.5 thorns per 100 µm² of dendritic area). Taken together, the reconstructed dendrite segments from the proximal sites had a total length of about 250 µm; those from the distal locations, 180 µm. On all segments together there were 699 synapses. Nine percent of the synapses were on thorns, and many more were close to their base on the dendritic shaft. The synapses were classified into four groups. One third of the synapses were asymmetric with spherical vesicles; one half were symmetric with spherical vesicles; and one tenth were symmetric with flattened vesicles. A fourth, small class of asymmetric synapses had dense-core vesicles. The area of the active zones was large for the asymmetric synapses (median value 0.20 µm²) and small for the symmetric ones (median value 0.10 µm²), and the difference was significant. On average, the areas of the active zones of the synapses on thin dendrites were larger than those of synapses on large-calibre dendrites. About every 4 µm² of dendritic area received one contact. There was a significant difference between the areas of the active zones of the synapses at the two locations. Moreover, the number of synapses per unit dendritic length was correlated with dendrite calibre. On average, the active zones covered more than 4% of the dendritic area; this value for thin dendrites was about twice as large as that for large-calibre dendrites. We suggest that the larger active zones and the larger synaptic coverage of the thin dendrites compensate for the longer electrotonic distance of these synapses from the soma.
Abstract:
Repetition of environmental sounds, as with their visual counterparts, can facilitate behavior and modulate neural responses, exemplifying plasticity in how auditory objects are represented or accessed. It remains controversial whether such repetition priming/suppression involves solely plasticity based on acoustic features and/or also access to semantic features. To evaluate the contributions of physical and semantic features in eliciting repetition-induced plasticity, the present functional magnetic resonance imaging (fMRI) study repeated either identical or different exemplars of the initially presented object, reasoning that identical exemplars share both physical and semantic features, whereas different exemplars share only semantic features. Participants performed a living/man-made categorization task while being scanned at 3T. Repeated stimuli of both types significantly facilitated reaction times versus initial presentations, demonstrating perceptual and semantic repetition priming. There was also repetition suppression of fMRI activity within overlapping temporal, premotor, and prefrontal regions of the auditory "what" pathway. Importantly, the magnitude of suppression effects was equivalent for physically identical and semantically related exemplars. That the degree of repetition suppression did not depend on whether perceptual as well as semantic information was repeated suggests a degree of acoustically independent semantic analysis in how object representations are maintained and retrieved.
Abstract:
Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in processing of either auditory feature alone or of both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contra-laterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex AI is modulated by the position of the sound, while that within anterior areas is not.
Abstract:
Introduction. Development of the fetal brain surface, with concomitant gyrification, is one of the major maturational processes of the human brain. First delineated by postmortem studies or by ultrasound, MRI has recently become a powerful tool for studying in vivo the structural correlates of brain maturation. However, the quantitative measurement of fetal brain development is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution, the partial volume effect and the changing appearance of the developing brain. Today extensive efforts are made to deal with the "post-acquisition" reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution (Rousseau, F., 2006; Jiang, S., 2007). We here propose a framework devoted to the segmentation of the basal ganglia, the gray-white tissue segmentation, and in turn the 3D cortical reconstruction of the fetal brain.
Method. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single shot fast spin echo (ssFSE) sequences in fetuses aged from 29 to 32 gestational weeks (slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired (about 1 min per volume). First, each volume is manually segmented to extract the fetal brain from surrounding fetal and maternal tissues. Intensity inhomogeneity correction and linear intensity normalization are then performed. A high spatial resolution image with an isotropic voxel size of 1.09 mm is created for each fetus as previously published by others (Rousseau, F., 2006). B-splines are used for the scattered data interpolation (Lee, 1997). Then, basal ganglia segmentation is performed on this super-reconstructed volume using an active contour framework with a Level Set implementation (Bach Cuadra, M., 2010). Once the basal ganglia are removed from the image, brain tissue segmentation is performed (Bach Cuadra, M., 2009). The resulting white matter image is then binarized and given as input to the Freesurfer software (http://surfer.nmr.mgh.harvard.edu/) to provide accurate three-dimensional reconstructions of the fetal brain.
Results. High-resolution images of the fetal cerebral brain, as obtained from the low-resolution acquired MRI, are presented for 4 subjects of ages ranging from 29 to 32 GA. An example is depicted in Figure 1. Accuracy of the automated basal ganglia segmentation is compared with manual segmentation using the Dice similarity index (DSI), with values above 0.7 considered to indicate very good agreement. In our sample we observed DSI values between 0.785 and 0.856. We further show the results of gray-white matter segmentation overlaid on the high-resolution gray-scale images. The results are visually checked for accuracy using the same principles as commonly accepted in adult neuroimaging. Preliminary 3D cortical reconstructions of the fetal brain are shown in Figure 2.
Conclusion. We hereby present a complete pipeline for the automated extraction of an accurate three-dimensional cortical surface of the fetal brain. These results are preliminary but promising, with the ultimate goal of providing a "movie" of normal gyral development. In turn, a precise knowledge of normal fetal brain development will allow the quantification of subtle and early but clinically relevant deviations. Moreover, a precise understanding of the gyral development process may help to build hypotheses to understand the pathogenesis of several neurodevelopmental conditions in which gyrification has been shown to be altered (e.g. schizophrenia, autism, ...).
References. Rousseau, F. (2006), 'Registration-Based Approach for Reconstruction of High-Resolution In Utero Fetal MR Brain Images', IEEE Transactions on Medical Imaging, vol. 13, no. 9, pp. 1072-1081. Jiang, S. (2007), 'MRI of Moving Subjects Using Multislice Snapshot Images With Volume Reconstruction (SVR): Application to Fetal, Neonatal, and Adult Brain Studies', IEEE Transactions on Medical Imaging, vol. 26, no. 7, pp. 967-980. Lee, S. (1997), 'Scattered data interpolation with multilevel B-splines', IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228-244. Bach Cuadra, M. (2010), 'Central and Cortical Gray Matter Segmentation of Magnetic Resonance Images of the Fetal Brain', ISMRM Conference. Bach Cuadra, M. (2009), 'Brain tissue segmentation of fetal MR images', MICCAI.
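As an aside on the evaluation metric above: the Dice similarity index can be computed directly from two binary masks. The following is a minimal illustrative sketch (not the authors' implementation), assuming the automated and manual segmentations are available as boolean NumPy arrays.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity index between two binary segmentation masks.

    DSI = 2 |A ∩ B| / (|A| + |B|); values above ~0.7 are usually
    taken to indicate good agreement, as in the abstract above.
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical usage with two toy 3-D masks
auto = np.zeros((4, 4, 4), dtype=bool); auto[1:3, 1:3, 1:3] = True
manual = np.zeros((4, 4, 4), dtype=bool); manual[1:3, 1:3, 1:4] = True
print(round(dice_similarity(auto, manual), 3))  # -> 0.8
```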
Abstract:
The Office of Special Investigations at the Iowa Department of Transportation (DOT) collects FWD data on a regular basis to evaluate pavement structural conditions. The primary objective of this study was to develop a fully automated software system for rapid processing of the FWD data, along with a user manual. The software system automatically reads the FWD raw data collected by the JILS-20 type FWD machine that Iowa DOT owns, then processes and analyzes the collected data with the rapid prediction algorithms developed during the phase I study. This system smoothly integrates the FWD data analysis algorithms with the computer program used to collect the pavement deflection data. The system can be used to assess pavement condition, estimate remaining pavement life, and eventually help the Iowa DOT pavement management team assess pavement rehabilitation strategies. This report describes the developed software in detail and can also be used as a user manual for conducting simulation studies and detailed analyses.
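Purely as a generic illustration of FWD post-processing (the report's own prediction algorithms are not reproduced here), a deflection basin is often summarized by the conventional AREA parameter computed from sensor readings at offsets of 0, 12, 24 and 36 inches; the sensor values below are assumptions for the example, not Iowa DOT data.

```python
def fwd_area_parameter(d0: float, d12: float, d24: float, d36: float) -> float:
    """Conventional AREA parameter (inches) of an FWD deflection basin.

    Trapezoidal-rule summary of deflections measured at offsets of
    0, 12, 24 and 36 inches, normalized by the centre deflection d0.
    This is a generic textbook quantity, not the Iowa DOT algorithm.
    """
    return 6.0 * (1.0 + 2.0 * d12 / d0 + 2.0 * d24 / d0 + d36 / d0)

# Hypothetical deflections in mils (thousandths of an inch)
print(fwd_area_parameter(d0=10.0, d12=8.0, d24=6.0, d36=4.0))  # -> 25.2
```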
Abstract:
Multisensory interactions have been documented within low-level, even primary, cortices and at early post-stimulus latencies. These effects are in turn linked to behavioral and perceptual modulations. In humans, visual cortex excitability, as measured by transcranial magnetic stimulation (TMS) induced phosphenes, can be reliably enhanced by the co-presentation of sounds. This enhancement occurs at pre-perceptual stages and is selective for different types of complex sounds. However, the source(s) of auditory inputs effectuating these excitability changes in primary visual cortex remain disputed. The present study sought to determine if direct connections between low-level auditory cortices and primary visual cortex are mediating these kinds of effects by varying the pitch and bandwidth of the sounds co-presented with single-pulse TMS over the occipital pole. Our results from 10 healthy young adults indicate that both the central frequency and bandwidth of a sound independently affect the excitability of visual cortex during processing stages as early as 30 msec post-sound onset. Such findings are consistent with direct connections mediating early-latency, low-level multisensory interactions within visual cortices.
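As a hedged illustration of the stimulus manipulation described above (central frequency and bandwidth varied independently), band-limited noise can be generated by band-pass filtering white noise; the parameter values below are assumptions, not the study's stimuli.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandlimited_noise(center_hz: float, bandwidth_hz: float,
                      duration_s: float = 0.1, fs: int = 44100) -> np.ndarray:
    """White noise band-pass filtered around a chosen centre frequency."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(int(duration_s * fs))
    low = center_hz - bandwidth_hz / 2.0
    high = center_hz + bandwidth_hz / 2.0
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, noise)
    return filtered / np.max(np.abs(filtered))  # normalise peak amplitude

# e.g. a narrow-band vs. a broad-band sound with the same centre frequency
narrow = bandlimited_noise(center_hz=1000, bandwidth_hz=100)
broad = bandlimited_noise(center_hz=1000, bandwidth_hz=1000)
```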
Abstract:
Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.
Abstract:
This paper presents a framework in which samples of bowing gesture parameters are retrieved and concatenated from a database of violin performances by attending to an annotated input score. The resulting bowing parameter signals are then used to synthesize sound by means of both a digital waveguide violin physical model and a spectral-domain additive synthesizer.
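As a much simpler relative of the digital waveguide violin model mentioned above, the sketch below implements a basic Karplus-Strong plucked-string waveguide (a noise-excited delay line with a low-pass loop filter). It only illustrates the waveguide principle; it is not the paper's violin model or concatenative framework, and all parameter values are assumptions.

```python
import numpy as np

def karplus_strong(freq_hz: float, duration_s: float = 1.0,
                   fs: int = 44100, damping: float = 0.996) -> np.ndarray:
    """Minimal plucked-string waveguide: a noise-filled delay line whose
    output is fed back through a two-point averaging (low-pass) filter."""
    delay = int(fs / freq_hz)              # delay-line length sets the pitch
    rng = np.random.default_rng(0)
    line = rng.uniform(-1.0, 1.0, delay)   # initial excitation ("pluck")
    out = np.empty(int(duration_s * fs))
    for n in range(out.size):
        out[n] = line[n % delay]
        nxt = line[(n + 1) % delay]
        line[n % delay] = damping * 0.5 * (out[n] + nxt)
    return out

tone = karplus_strong(440.0)  # roughly an A4 string tone
```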
Abstract:
The Learning Affect Monitor (LAM) is a new computer-based assessment system integrating basic dimensional evaluation and discrete description of affective states in daily life, based on an autonomous adapting system. Subjects evaluate their affective states according to a tridimensional space (valence and activation circumplex as well as global intensity) and then qualify it using up to 30 adjective descriptors chosen from a list. The system gradually adapts to the user, enabling the affect descriptors it presents to be increasingly relevant. An initial study with 51 subjects, using a 1 week time-sampling with 8 to 10 randomized signals per day, produced n = 2,813 records with good reliability measures (e.g., response rate of 88.8%, mean split-half reliability of .86), user acceptance, and usability. Multilevel analyses show circadian and hebdomadal patterns, and significant individual and situational variance components of the basic dimension evaluations. Validity analyses indicate sound assignment of qualitative affect descriptors in the bidimensional semantic space according to the circumplex model of basic affect dimensions. The LAM assessment module can be implemented on different platforms (palm, desk, mobile phone) and provides very rapid and meaningful data collection, preserving complex and interindividually comparable information in the domain of emotion and well-being.
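The split-half reliability reported above is a standard index; as a generic sketch on toy data (not the LAM records), it can be estimated by correlating odd- and even-indexed halves of each subject's records and applying the Spearman-Brown correction.

```python
import numpy as np

def split_half_reliability(scores: np.ndarray) -> float:
    """Odd-even split-half reliability with Spearman-Brown correction.

    `scores` is a (subjects x records) matrix; the two halves are the
    sums of odd- and even-indexed columns.
    """
    odd = scores[:, 0::2].sum(axis=1)
    even = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]       # Pearson correlation of the halves
    return 2.0 * r / (1.0 + r)             # Spearman-Brown step-up

# Toy data: 51 subjects, 20 noisy records each (values are assumptions)
rng = np.random.default_rng(0)
true_score = rng.normal(size=(51, 1))
records = true_score + 0.5 * rng.normal(size=(51, 20))
print(round(split_half_reliability(records), 2))
```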
Abstract:
For the recognition of sounds to benefit perception and action, their neural representations should also encode their current spatial position and their changes in position over time. The dual-stream model of auditory processing postulates separate (albeit interacting) processing streams for sound meaning and for sound location. Using a repetition priming paradigm in conjunction with distributed source modeling of auditory evoked potentials, we determined how individual sound objects are represented within these streams. Changes in perceived location were induced by interaural intensity differences, and sound location was either held constant or shifted across initial and repeated presentations (from one hemispace to the other in the main experiment or between locations within the right hemispace in a follow-up experiment). Location-linked representations were characterized by differences in priming effects between pairs presented to the same vs. different simulated lateralizations. These effects were significant at 20-39 ms post-stimulus onset within a cluster on the posterior part of the left superior and middle temporal gyri; and at 143-162 ms within a cluster on the left inferior and middle frontal gyri. Location-independent representations were characterized by a difference between initial and repeated presentations, independently of whether or not their simulated lateralization was held constant across repetitions. This effect was significant at 42-63 ms within three clusters on the right temporo-frontal region; and at 165-215 ms in a large cluster on the left temporo-parietal convexity. Our results reveal two varieties of representations of sound objects within the ventral/What stream: one location-independent, as initially postulated in the dual-stream model, and the other location-linked.
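As an illustration of the interaural intensity difference manipulation used above to shift perceived location, a mono sound can be lateralized by attenuating one channel relative to the other; the level difference and test signal below are assumptions, not the study's stimuli.

```python
import numpy as np

def apply_ild(mono: np.ndarray, ild_db: float) -> np.ndarray:
    """Return a stereo signal lateralised by an interaural intensity difference.

    Positive ild_db makes the left channel louder than the right,
    shifting the perceived position toward the left.
    """
    gain = 10.0 ** (abs(ild_db) / 20.0)   # linear amplitude ratio
    left, right = mono.copy(), mono.copy()
    if ild_db >= 0:
        right /= gain
    else:
        left /= gain
    return np.stack([left, right], axis=-1)

tone = np.sin(2 * np.pi * 500 * np.arange(0, 0.2, 1 / 44100))  # 500 Hz, 200 ms
left_lateralised = apply_ild(tone, ild_db=+10.0)
right_lateralised = apply_ild(tone, ild_db=-10.0)
```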
Abstract:
Working memory, commonly defined as the ability to hold mental representations online transiently and to manipulate these representations, is known to be a core deficit in schizophrenia. The aim of the present study was to investigate the visuo-spatial component of working memory in schizophrenia, and more precisely to what extent dynamic visuo-spatial information processing is impaired in schizophrenia patients. For this purpose we used a computerized paradigm in which 29 patients with schizophrenia (DSM-IV, Diagnostic Interview for Genetic Studies) and 29 age- and sex-matched control subjects (DIGS) had to memorize the path of a plane moving across the computer screen and to identify the observed trajectory among 9 plots presented together. Each trajectory could be seen a maximum of 3 times if needed. The results showed no difference between schizophrenia patients and controls in the number of correct trajectories identified after the first presentation. However, when we determined the mean number of correct trajectories over the 3 trials, we observed that schizophrenia patients performed significantly worse than controls (Mann-Whitney, p = 0.002). These findings suggest that, although schizophrenia patients are able to memorize some dynamic trajectories as well as controls, they do not profit from the repetition of the trajectory presentation. These findings are congruent with the hypothesis that schizophrenia could induce an imbalance between local and global information processing: the patients may be able to focus on details of the trajectory, which could allow them to find the right target (bottom-up processes), but may have difficulty referring to previous experience in order to filter incoming information (top-down processes) and enhance their visuo-spatial working memory abilities.
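The group comparison above used the Mann-Whitney test; the sketch below shows how such a nonparametric comparison of two independent samples could be run, using toy scores rather than the patient data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Toy scores: mean number of correct trajectories over 3 trials per subject
patients = rng.normal(loc=5.0, scale=1.5, size=29)
controls = rng.normal(loc=6.5, scale=1.5, size=29)

stat, p = mannwhitneyu(patients, controls, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```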
Abstract:
ABSTRACT (English): An accurate processing of the order of sensory events at the millisecond time scale is crucial for both sensori-motor and cognitive functions. Temporal order judgment (TOJ) refers to the ability to discriminate the order of presentation of several stimuli presented in rapid succession. The aim of the present thesis is to further investigate the spatio-temporal brain mechanisms supporting TOJ. In three studies we focus on the dependency of TOJ accuracy on the brain states preceding the presentation of TOJ stimuli, on the neural correlates of accurate vs. inaccurate TOJ, and on whether and how TOJ performance can be improved with training. In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we investigated whether the brain activity immediately preceding the presentation of the stimuli modulates TOJ performance. By contrasting the electrophysiological activity before stimulus presentation as a function of TOJ accuracy, we observed stronger pre-stimulus beta (20 Hz) oscillatory activity within the left posterior sylvian region (PSR) before accurate than before inaccurate TOJ trials. In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a) and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we investigated the spatio-temporal brain dynamics underlying auditory TOJ. In both studies we observed a topographic modulation as a function of TOJ performance at ~40 ms after the onset of the first sound, indicating the engagement of distinct configurations of intracranial generators. Source estimations in the first study revealed bilateral PSR activity for both accurate and inaccurate TOJ trials. Moreover, activity within the left, but not the right, PSR correlated with TOJ performance. Source estimations in the second study revealed a training-induced left lateralization of the initially bilateral (i.e. PSR) brain response. Moreover, the activity within the left PSR region correlated with TOJ performance. Based on these results, we suggest that a "temporal stamp" is established within the left PSR on the first sound of the pair at early stages (i.e. ~40 ms) of cortical processing, but is critically modulated by inputs from the right PSR (Bernasconi et al., 2010a; b). The "temporal stamp" on the first sound may be established via a sensory gating or prior entry mechanism. Behavioral and brain responses to identical stimuli can vary due to modulations of attention, to experimental and task parameters, or to "internal noise". In a fourth experiment (Bernasconi et al., 2011b) we investigated where and when "neural noise" manifests during stimulus processing. Contrasting the AEPs of identical sounds perceived as high vs. low in pitch, a topographic modulation occurred at ca. 100 ms after the onset of the sound. Source estimation revealed activity within regions compatible with pitch discrimination. Thus, we provided neurophysiological evidence for the variation in perception induced by "neural noise".
ABSTRACT (French, translated): A precise processing of the order of sensory events at the millisecond time scale is crucial for sensori-motor and cognitive functions.
Temporal order judgment (TOJ) tasks, which consist of presenting several stimuli in rapid succession, are traditionally used to study the neural mechanisms supporting the processing of rapidly varying sensory information. The goal of this thesis is to study the brain mechanisms supporting TOJ. In the three studies presented we focused on the brain states preceding the presentation of the TOJ stimuli, on the neural bases of accurate vs. inaccurate TOJ, and on whether and how TOJ performance can be improved through training. In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we asked whether pre-stimulus oscillatory brain activity modulates TOJ performance. Contrasting electrophysiological activity as a function of TOJ performance, we measured stronger pre-stimulus beta oscillatory activity within the left posterior sylvian region (PSR) linked to accurate TOJ. In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a) and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we studied the spatio-temporal brain dynamics involved in auditory TOJ. In both studies we observed a topographic modulation as a function of TOJ performance at ~40 ms after the onset of the first sound, indicating the engagement of distinct configurations of intracranial generators. Source localisation in the first study indicated bilateral PSR activity for accurate vs. inaccurate TOJ. Moreover, activity in the left, but not the right, PSR correlated with TOJ performance. Source localisation in the second study indicated a training-induced left lateralisation of an initially bilateral brain response. Furthermore, activity within the left PSR region correlated with TOJ performance. Based on these results, we propose that a "temporal stamp" is established very early (i.e. at ~40 ms) on the first sound by the left PSR, but is modulated by activity of the right PSR (Bernasconi et al., 2010a; b). The "temporal stamp" on the first sound may be established by a sensory gating or prior entry type of neural mechanism. Behavioural and brain responses to identical stimuli can vary due to modulations of attention, to variations in task parameters, or to the brain's internal noise. In a fourth experiment (Bernasconi et al., 2011b), we studied where and when "neural noise" manifests during stimulus processing. By contrasting the AEPs of identical sounds perceived as high vs. low in pitch, we measured a topographic modulation at about 100 ms after sound onset. Source estimation revealed activity within regions compatible with pitch discrimination. Thus, we provided neurophysiological evidence of the variation in perception induced by "neural noise".
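The first study above relates pre-stimulus beta (about 20 Hz) power to subsequent TOJ accuracy; as a generic single-channel sketch (assumed sampling rate and toy data, not the thesis' electrical source imaging analysis), beta-band power in a pre-stimulus window can be estimated with Welch's method.

```python
import numpy as np
from scipy.signal import welch

def beta_power(epoch: np.ndarray, fs: float = 512.0,
               band: tuple = (15.0, 25.0)) -> float:
    """Mean spectral power of a single-channel epoch in the beta band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Toy pre-stimulus epoch: 500 ms of noise plus a 20 Hz component
fs = 512.0
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)
epoch = 0.5 * np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)
print(beta_power(epoch, fs=fs))
```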
Abstract:
Recent evidence suggests the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two different pathways, but until now little is known about the temporal dynamics of these regions. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences and real recorded environmental sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375-550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods.
References: 1. Maeder et al. Neuroimage 2001. 2. Michel et al. Brain Research Reviews 2001. 3. Ducommun et al. Neuroimage 2002.
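As an illustration of the interaural time difference manipulation used above to simulate stationary sound positions, a mono sound can be lateralized by delaying one channel by a few hundred microseconds; the ITD value and test signal below are assumptions, not the study's stimuli.

```python
import numpy as np

def apply_itd(mono: np.ndarray, itd_s: float, fs: int = 44100) -> np.ndarray:
    """Return a stereo signal with an interaural time difference.

    Positive itd_s delays the right channel, so the sound leads in the
    left ear and is perceived toward the left.
    """
    shift = int(round(abs(itd_s) * fs))                    # delay in samples
    delayed = np.concatenate([np.zeros(shift), mono])[:mono.size]
    if itd_s >= 0:
        left, right = mono, delayed
    else:
        left, right = delayed, mono
    return np.stack([left, right], axis=-1)

noise = np.random.default_rng(0).standard_normal(4410)    # 100 ms of noise
left_leading = apply_itd(noise, itd_s=500e-6)              # 500 µs ITD
```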