983 results for auditory scene analysis
Abstract:
Among other auditory operations, the analysis of the different sound levels received at the two ears is fundamental for localizing a sound source. In animals, these so-called interaural level differences are coded by excitatory-inhibitory neurons, yielding asymmetric hemispheric activity patterns for acoustic stimuli with maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory inputs, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological response patterns in humans and animals. However, their specific functional roles remain unclear. Animal studies suggest that these temporal components are based on different neural networks and play specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response constituents are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation, and BOLD responses were measured using high signal-to-noise-ratio fMRI. In the anatomically segmented Heschl's gyrus, the transient response was bilaterally balanced, independent of the side of stimulation, whereas the sustained response was contralateralized. This dissociation suggests a differential role of these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.
Abstract:
Spatial independent component analysis (sICA) of functional magnetic resonance imaging (fMRI) time series can generate meaningful activation maps and associated descriptive signals, which are useful to evaluate datasets of the entire brain or selected portions of it. Besides computational implications, variations in the input dataset combined with the multivariate nature of ICA may lead to different spatial or temporal readouts of brain activation phenomena. By reducing and increasing a volume of interest (VOI), we applied sICA to different datasets from real activation experiments with multislice acquisition and single or multiple sensory-motor task-induced blood oxygenation level-dependent (BOLD) signal sources with different spatial and temporal structure. Using receiver operating characteristic (ROC) methodology for accuracy evaluation and multiple regression analysis as a benchmark, we compared sICA decompositions of reduced- and increased-VOI fMRI time series containing auditory, motor, and hemifield visual activation occurring separately or simultaneously in time. Both approaches yielded valid results; however, the increased-VOI approach was spatially more accurate than the reduced-VOI approach. This is consistent with the capability of sICA to take advantage of extended samples of statistical observations and suggests that sICA is more powerful with extended rather than reduced VOI datasets for delineating brain activity.
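The combination described above, an sICA decomposition scored against a known activation map with ROC methodology, can be sketched on simulated data. This is a minimal illustration, not the study's pipeline: the simulated sources, VOI size, and thresholds are all assumptions.

```python
# Minimal sketch: spatial ICA on simulated fMRI-like data, scored with ROC
# against a known "ground truth" activation map. Illustrative assumptions
# throughout; not the original study's data or parameters.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_time, n_vox = 120, 2000                 # time points x voxels in the VOI

# Ground-truth spatial map: a block of "active" voxels
truth = np.zeros(n_vox)
truth[:200] = 1.0
task = (np.arange(n_time) % 40 < 20).astype(float)   # boxcar task time course

data = np.outer(task, truth) + 0.5 * rng.standard_normal((n_time, n_vox))

# Spatial ICA: voxels are the statistical samples, time points the channels
ica = FastICA(n_components=10, random_state=0)
maps = ica.fit_transform(data.T).T                   # components x voxels

# Score each component map against the truth; the sign of an ICA map is
# arbitrary, so test both polarities and keep the better AUC
aucs = [max(roc_auc_score(truth, m), roc_auc_score(truth, -m)) for m in maps]
print(f"best component ROC AUC: {max(aucs):.2f}")
```

With more voxels (a larger VOI) the ICA estimate of the spatial map rests on more statistical observations, which is the intuition behind the abstract's finding.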
Abstract:
Functional magnetic resonance imaging (fMRI) studies can provide insight into the neural correlates of hallucinations. Commonly, such studies require self-reports about the timing of the hallucination events. While many studies have found activity in higher-order sensory cortical areas, only a few have demonstrated activity of the primary auditory cortex during auditory verbal hallucinations. In that case, using self-reports as a model of brain activity may not be sensitive enough to capture all neurophysiological signals related to hallucinations. We used spatial independent component analysis (sICA) to extract the activity patterns associated with auditory verbal hallucinations in six schizophrenia patients. sICA decomposes the functional data set into a set of spatial maps without the use of any input function. The resulting activity patterns from auditory and sensorimotor components were further analyzed in a single-subject fashion using a visualization tool that allows for easy inspection of the variability of regional brain responses. We found bilateral auditory cortex activity, including Heschl's gyrus, during hallucinations of one patient, and unilateral auditory cortex activity in two more patients. The associated time courses showed large variability in shape, amplitude, and time of onset relative to the self-reports. However, the average of the time courses during hallucinations showed a clear association with this clinical phenomenon. We suggest that detection of this activity may be facilitated by examining hallucination epochs of sufficient length, in combination with a data-driven approach.
Abstract:
We investigated the effect of image size on saccade amplitudes. First, in a meta-analysis, relevant results from previous scene-perception studies are summarised, suggesting the possibility of a linear relationship between mean saccade amplitude and image size. Second, in an eye-tracking experiment, forty-eight observers viewed 96 colour scene images scaled to four different sizes while their eye movements were recorded. Mean and median saccade amplitudes were found to be directly proportional to image size, while the mode of the distribution lay in the range of very short saccades. However, saccade amplitudes expressed as percentages of image size were not constant over the different image sizes: on smaller stimulus images, relative saccades were larger, and vice versa. In sum, as far as mean and median saccade amplitudes are concerned, the size of the stimulus images is the dominant factor. Other factors, such as image properties, viewing task, or measurement equipment, are only of subordinate importance. Thus, the role of stimulus size has to be reconsidered, in theoretical as well as methodological terms.
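The reported pattern, mean amplitude proportional to image size but relative amplitude shrinking with size, can be shown with a toy calculation. The numbers below are invented for illustration; they are not the study's data.

```python
# Hypothetical illustration of the abstract's two findings: (1) mean saccade
# amplitude grows roughly linearly with image size, and (2) amplitude as a
# percentage of image size is larger for smaller images. Invented numbers.
import numpy as np

image_size_deg = np.array([7.5, 15.0, 30.0, 45.0])    # display widths (deg)
mean_saccade_deg = np.array([2.1, 3.4, 6.0, 8.7])     # hypothetical means

# Least-squares line: amplitude ~ slope * size + intercept
slope, intercept = np.polyfit(image_size_deg, mean_saccade_deg, 1)
print(f"linear fit: amplitude ~ {slope:.2f} * size + {intercept:.2f}")

# Relative amplitude, in percent of image size, decreases with size
relative = 100 * mean_saccade_deg / image_size_deg
print("relative amplitude (% of size):", np.round(relative, 1))
```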
Abstract:
BACKGROUND: Sedation protocols, including the use of sedation scales and regular sedation stops, help to reduce the length of mechanical ventilation and intensive care unit stay. Because clinical assessment of depth of sedation is labor-intensive, performed only intermittently, and interferes with sedation and sleep, processed electrophysiological signals from the brain have gained interest as surrogates. We hypothesized that auditory event-related potentials (ERPs), Bispectral Index (BIS), and Entropy can discriminate among clinically relevant sedation levels. METHODS: We studied 10 patients after elective thoracic or abdominal surgery with general anesthesia. Electroencephalogram, BIS, state entropy (SE), response entropy (RE), and ERPs were recorded immediately after surgery in the intensive care unit at Richmond Agitation-Sedation Scale (RASS) scores of -5 (very deep sedation), -4 (deep sedation), -3 to -1 (moderate sedation), and 0 (awake) during decreasing target-controlled sedation with propofol and remifentanil. Reference measurements for baseline levels were performed before or several days after the operation. RESULTS: At baseline, RASS -5, RASS -4, RASS -3 to -1, and RASS 0, BIS was 94 [4] (median, IQR), 47 [15], 68 [9], 75 [10], and 88 [6]; SE was 87 [3], 46 [10], 60 [22], 74 [21], and 87 [5]; and RE was 97 [4], 48 [9], 71 [25], 81 [18], and 96 [3], respectively (all P < 0.05, Friedman test). Both BIS and Entropy showed high variability. When ERP N100 amplitudes were considered alone, ERPs did not differ significantly among sedation levels. Nevertheless, discriminant ERP analysis including two parameters of principal component analysis revealed a prediction probability PK value of 0.89 for differentiating deep sedation, moderate sedation, and the awake state. The corresponding PK for RE, SE, and BIS was 0.88, 0.89, and 0.85, respectively. CONCLUSIONS: Neither ERPs nor BIS or Entropy can replace clinical sedation assessment with standard scoring systems. Discrimination among very deep, deep to moderate, and no sedation after general anesthesia can be provided by ERPs and processed electroencephalograms, with similar PK values. The high inter- and intraindividual variability of Entropy and BIS precludes defining a target range of values to predict the sedation level in critically ill patients using these parameters. The variability of ERPs is unknown.
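The prediction probability PK quoted above is, in essence, a concordance measure: over all pairs of cases at distinct sedation levels, it is the fraction of pairs that the index (e.g. BIS) orders correctly, with ties credited one half. A hedged sketch of that definition, with invented sample values:

```python
# Sketch of the prediction-probability (PK) idea as a pairwise concordance
# measure. The BIS/RASS sample values below are invented for illustration.
from itertools import combinations

def prediction_probability(index_values, states):
    """Fraction of distinct-state pairs ordered correctly by the index,
    counting ties in the index as one half."""
    conc = ties = total = 0
    for (x1, y1), (x2, y2) in combinations(zip(index_values, states), 2):
        if y1 == y2:
            continue                      # only pairs with distinct states count
        total += 1
        if x1 == x2:
            ties += 1
        elif (x1 < x2) == (y1 < y2):
            conc += 1                     # index orders the pair like the states
    return (conc + 0.5 * ties) / total

# Hypothetical BIS readings at increasing RASS levels (-5 .. 0)
bis  = [47, 52, 70, 68, 75, 88]
rass = [-5, -5, -4, -3, -1, 0]
print(f"PK = {prediction_probability(bis, rass):.2f}")   # -> PK = 0.93
```

A PK of 1.0 would mean the index always ranks a more deeply sedated patient lower; 0.5 would mean chance-level ordering.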
Abstract:
Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over both space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
Abstract:
The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the visual spatial cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones under static conditions. The auditory localization performance observed in the present study is in line with that reported under real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
Abstract:
BACKGROUND: The origin of auditory hallucinations, which are one of the core symptoms of schizophrenia, is still a matter of debate. It has been hypothesized that alterations in connectivity between frontal and parietotemporal speech-related areas might contribute to the pathogenesis of auditory hallucinations. These networks are assumed to become dysfunctional during the generation and monitoring of inner speech. Magnetic resonance diffusion tensor imaging is a relatively new in vivo method to investigate the directionality of cortical white matter tracts. OBJECTIVE: To investigate, using diffusion tensor imaging, whether previously described abnormal activation patterns observed during auditory hallucinations relate to changes in structural interconnections between the frontal and parietotemporal speech-related areas. METHODS: A 1.5 T magnetic resonance scanner was used to acquire twelve 5-mm slices covering the Sylvian fissure. Fractional anisotropy was assessed in 13 patients prone to auditory hallucinations, in 13 patients without auditory hallucinations, and in 13 healthy control subjects. Structural magnetic resonance imaging was conducted in the same session. Based on an analysis of variance, areas with significantly different fractional anisotropy values between groups were selected for a confirmatory region of interest analysis. Additionally, descriptive voxel-based t tests between the groups were computed. RESULTS: In patients with hallucinations, we found significantly higher white matter directionality in the lateral parts of the temporoparietal section of the arcuate fasciculus and in parts of the anterior corpus callosum compared with control subjects and patients without hallucinations. Comparing patients with hallucinations with patients without hallucinations, we found significant differences most pronounced in the left hemispheric fiber tracts, including the cingulate bundle. 
CONCLUSION: Our findings suggest that during inner speech, the alterations of white matter fiber tracts in patients with frequent hallucinations lead to abnormal coactivation in regions related to the acoustical processing of external stimuli. This abnormal activation may account for the patients' inability to distinguish self-generated thoughts from external stimulation.
Abstract:
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults, and homicides, are based on the external and internal morphological findings of the injured or deceased person. For this purpose, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis, in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver the internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case, a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or, as he declared, accidentally. In the second case, a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, the latter of which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging, and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data.
Abstract:
The vestibular system contributes to the control of posture and eye movements and is also involved in various cognitive functions including spatial navigation and memory. These functions are supported by projections to a vestibular cortex, whose exact location in the human brain is still a matter of debate (Lopez and Blanke, 2011). The vestibular cortex can be defined as the network of all cortical areas receiving inputs from the vestibular system, including areas where vestibular signals influence the processing of other sensory (e.g. somatosensory and visual) and motor signals. Previous neuroimaging studies used caloric vestibular stimulation (CVS), galvanic vestibular stimulation (GVS), and auditory stimulation (clicks and short-tone bursts) to activate the vestibular receptors and localize the vestibular cortex. However, these three methods differ regarding the receptors stimulated (otoliths, semicircular canals) and the concurrent activation of the tactile, thermal, nociceptive, and auditory systems. To evaluate the convergence between these methods and provide a statistical analysis of the localization of the human vestibular cortex, we performed an activation likelihood estimation (ALE) meta-analysis of neuroimaging studies using CVS, GVS, and auditory stimuli. We analyzed a total of 352 activation foci reported in 16 studies carried out in a total of 192 healthy participants. The results reveal that the main regions activated by CVS, GVS, or auditory stimuli were located in the Sylvian fissure, insula, retroinsular cortex, fronto-parietal operculum, superior temporal gyrus, and cingulate cortex. Conjunction analysis indicated that regions showing convergence between two stimulation methods were located in the median (short gyrus III) and posterior (long gyrus IV) insula, parietal operculum, and retroinsular cortex (Ri). The only area of convergence between all three methods of stimulation was located in Ri.
The data indicate that Ri, parietal operculum and posterior insula are vestibular regions where afferents converge from otoliths and semicircular canals, and may thus be involved in the processing of signals informing about body rotations, translations and tilts. Results from the meta-analysis are in agreement with electrophysiological recordings in monkeys showing main vestibular projections in the transitional zone between Ri, the insular granular field (Ig), and SII.
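The core of an ALE meta-analysis like the one above is simple to state: each reported activation focus is blurred into a Gaussian "modeled activation" map, and the ALE value at a voxel is the probabilistic union of those maps, 1 - prod(1 - MA_i). A minimal sketch, with an illustrative grid, smoothing width, and foci that are assumptions rather than the study's coordinates:

```python
# Minimal sketch of the ALE idea: Gaussian-blur each focus, then take the
# voxelwise probabilistic union. Grid size, sigma, and foci are invented.
import numpy as np

def gaussian_ma(shape, focus, sigma):
    """Modeled-activation map: peak-normalized Gaussian around one focus."""
    grid = np.indices(shape)                       # (3, x, y, z) voxel coords
    d2 = sum((g - c) ** 2 for g, c in zip(grid, focus))
    return np.exp(-d2 / (2 * sigma ** 2))

shape = (20, 20, 20)
foci = [(10, 10, 10), (11, 9, 10), (3, 3, 3)]      # hypothetical foci (voxels)
mas = [gaussian_ma(shape, f, sigma=2.0) for f in foci]

# ALE at each voxel: probability that at least one focus "activates" it
ale = 1.0 - np.prod([1.0 - m for m in mas], axis=0)
print("ALE near the two overlapping foci:", round(float(ale[10, 10, 10]), 3))
print("ALE far from all foci:", round(float(ale[0, 0, 0]), 3))
```

In a real ALE analysis the resulting map is then tested against a null distribution of randomly placed foci; that inference step is omitted here.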
Abstract:
The present study was designed to elucidate sex-related differences in two basic auditory and one basic visual aspect of sensory functioning, namely sensory discrimination of pitch, loudness, and brightness. Although these three aspects of sensory functioning are of vital importance in everyday life, little is known about whether men and women differ from each other in these sensory functions. Participants were 100 male and 100 female volunteers ranging in age from 18 to 30 years. Since sensory sensitivity may be positively related to individual levels of intelligence and musical experience, measures of psychometric intelligence and musical background were also obtained. Reliably better performance by men than by women was found for pitch and loudness, but not for brightness discrimination. Furthermore, performance on loudness discrimination was positively related to psychometric intelligence, while pitch discrimination was positively related to both psychometric intelligence and level of musical training. Additional regression analyses revealed that each of the three predictor variables (sex, psychometric intelligence, and musical training) accounted for a statistically significant portion of unique variance in pitch discrimination. With regard to loudness discrimination, regression analysis yielded a statistically significant portion of unique variance for sex as a predictor variable, whereas psychometric intelligence just failed to reach statistical significance. The potential influence of sex hormones on sex-related differences in sensory functions is discussed.
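The "unique variance" logic used in those regression analyses can be sketched directly: a predictor's unique contribution is the drop in R-squared when it is removed from the full model (the squared semipartial correlation). The data below are simulated with made-up effect sizes, not the study's measurements.

```python
# Sketch of unique-variance estimation via Delta-R^2: fit the full model,
# then refit with one predictor dropped. Simulated data, invented effects.
import numpy as np

rng = np.random.default_rng(1)
n = 200
sex = rng.integers(0, 2, n).astype(float)           # 0/1 coded
iq = rng.standard_normal(n)                         # standardized "intelligence"
music = rng.standard_normal(n)                      # standardized "training"
pitch = 0.5 * sex + 0.4 * iq + 0.3 * music + rng.standard_normal(n)

def r2(predictors, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = r2([sex, iq, music], pitch)
for name, reduced in [("sex", [iq, music]),
                      ("iq", [sex, music]),
                      ("music", [sex, iq])]:
    print(f"unique variance of {name}: {full - r2(reduced, pitch):.3f}")
```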
Abstract:
In practical forensic casework, backspatter recovered from shooters' hands can be an indicator of self-inflicted gunshot wounds to the head. In such cases, backspatter retrieved from inside the barrel indicates that the weapon found at the death scene was involved in causing the injury to the head. However, systematic research on the factors conditioning the presence, amount, and specific patterns of backspatter has been lacking so far. Herein, a new concept of backspatter investigation is presented, comprising staining technique, weapon, and target medium: the 'triple contrast method' was developed and tested, and is introduced here for experimental backspatter analysis. First, mixtures of various proportions of acrylic paint for optical detection, barium sulphate for radiocontrast imaging in computed tomography, and fresh human blood for PCR-based DNA profiling were generated (triple mixture) and tested for DNA quantification and short tandem repeat (STR) typing success. All tested mixtures yielded sufficient DNA and produced full STR profiles suitable for forensic identification. Then, for backspatter analysis, sealed foil bags containing the triple mixture were attached to plastic bottles filled with 10 % ballistic gelatine and covered by a 2-3-mm layer of silicone. To simulate backspatter, close contact shots were fired at these models. Endoscopy of the barrel interior revealed coloured backspatter containing typable DNA, and radiographic imaging showed a contrasted bullet path in the gelatine. Cross sections of the gelatine core exhibited cracks and fissures stained by the acrylic paint, facilitating wound ballistic analysis.
Abstract:
Schizophrenia patients show abnormalities across a broad range of task demands. Therefore, an explanation common to all these abnormalities has to be sought independently of any particular task, ideally in the brain dynamics before a task takes place or during the resting state. For the neurobiological investigation of such baseline states, EEG microstate analysis is particularly well suited, because it identifies subsecond global states of stable connectivity patterns directly related to the recruitment of different types of information-processing modes (e.g., integration of top-down and bottom-up information). Meanwhile, evidence is accumulating that particular microstate networks are selectively affected in schizophrenia. To obtain an overall estimate of the effect size of these microstate abnormalities, we present a systematic meta-analysis of all studies available to date relating EEG microstates to schizophrenia. Results showed medium-sized effects for two classes of microstates, namely, a class labeled C that was found to be more frequent in schizophrenia and a class labeled D that was found to be shortened. These abnormalities may correspond to core symptoms of schizophrenia, e.g., insufficient reality testing and self-monitoring as during auditory verbal hallucinations. As interventional studies have shown that these microstate features may be systematically affected using antipsychotic drugs or neurofeedback interventions, these findings may help introduce novel diagnostic and treatment options.
Abstract:
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
Abstract:
Ocean acidification is predicted to affect marine ecosystems in many ways, including modification of fish behaviour. Previous studies have identified effects of CO2-enriched conditions on the sensory behaviour of fishes, including the loss of natural responses to odours resulting in ecologically deleterious decisions. Many fishes also rely on hearing for orientation, habitat selection, predator avoidance and communication. We used an auditory choice chamber to study the influence of CO2-enriched conditions on directional responses of juvenile clownfish (Amphiprion percula) to daytime reef noise. Rearing and test conditions were based on Intergovernmental Panel on Climate Change predictions for the twenty-first century: current-day ambient, 600, 700 and 900 µatm pCO2. Juveniles from ambient CO2-conditions significantly avoided the reef noise, as expected, but this behaviour was absent in juveniles from CO2-enriched conditions. This study provides, to our knowledge, the first evidence that ocean acidification affects the auditory response of fishes, with potentially detrimental impacts on early survival.