967 results for Sound laboratories
Abstract:
BACKGROUND: Human speech is greatly influenced by the speaker's affective state, such as sadness, happiness, grief, guilt, fear, anger, aggression, faintheartedness, shame, sexual arousal, and love, amongst others. Attentive listeners discover a lot about the affective state of their dialog partners with no great effort, and without having to talk about it explicitly during a conversation or on the phone. On the other hand, speech dysfunctions, such as slow, delayed or monotonous speech, are prominent features of affective disorders. METHODS: This project comprised four studies with healthy volunteers from Bristol (English: n = 117), Lausanne (French: n = 128), Zurich (German: n = 208), and Valencia (Spanish: n = 124). All samples were stratified according to gender, age, and education. The specific study design, with different types of spoken text along with repeated assessments at 14-day intervals, allowed us to estimate the 'natural' variation of speech parameters over time, and to analyze the sensitivity of speech parameters with respect to the form and content of spoken text. Additionally, our project included a longitudinal self-assessment study with university students from Zurich (n = 18) and unemployed adults from Valencia (n = 18) in order to test the feasibility of the speech analysis method in home environments. RESULTS: The normative data showed that speaking behavior and voice sound characteristics can be quantified in a reproducible and language-independent way. The high resolution of the method was verified by a computerized assignment of speech parameter patterns to languages at a success rate of 90%, while the rate of correct assignment to texts was 70%. In the longitudinal self-assessment study we calculated individual 'baselines' for each test person along with deviations thereof. The significance of such deviations was assessed through the normative reference data.
CONCLUSIONS: Our data provided gender-, age-, and language-specific thresholds that allow one to reliably distinguish between 'natural fluctuations' and 'significant changes'. The longitudinal self-assessment study with repeated assessments at 1-day intervals over 14 days demonstrated the feasibility and efficiency of the speech analysis method in home environments, thus clearing the way to a broader range of applications in psychiatry. © 2014 S. Karger AG, Basel.
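The logic of comparing a speaker's current speech parameters against an individual baseline can be sketched as follows. This is a minimal z-score version with a hypothetical fixed cut-off; the study itself derives gender-, age-, and language-specific thresholds from its normative data, and the function names here are purely illustrative.

```python
from statistics import mean, stdev

def baseline_deviation(history, today):
    """z-score of today's speech parameter against the speaker's own
    baseline (the mean and spread of their previous assessments)."""
    return (today - mean(history)) / stdev(history)

def is_significant_change(history, today, threshold=2.0):
    """Flag a 'significant change' rather than a 'natural fluctuation'
    when the deviation exceeds a cut-off. The 2.0 used here is a
    hypothetical placeholder, not a threshold from the study."""
    return abs(baseline_deviation(history, today)) > threshold
```

With a baseline of `[10, 10, 12, 12]`, a value of 11 stays within natural fluctuation while a value of 14 is flagged as a significant change.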
Abstract:
BACKGROUND: While the assessment of analytical precision within medical laboratories has received much attention in scientific enquiry, the degree of variation between laboratories, as well as its sources, remains incompletely understood. In this study, we quantified the variance components when performing coagulation tests with identical analytical platforms in different laboratories and computed intraclass correlation coefficients (ICC) for each coagulation test. METHODS: Data from eight laboratories measuring fibrinogen twice in twenty healthy subjects on one of three different platforms, together with single measurements of prothrombin time (PT) and coagulation factors II, V, VII, VIII, IX, X, XI and XIII, were analysed. By platform, the variance components of (i) the subjects, (ii) the laboratory and the technician, and (iii) the total variance were obtained for fibrinogen, as well as (i) and (iii) for the remaining factors, using ANOVA. RESULTS: The variability for fibrinogen measurements within a laboratory ranged from 0.02 to 0.04; the variability between laboratories ranged from 0.006 to 0.097. Across the platforms, the ICC ranged from 0.37 to 0.66 for fibrinogen and from 0.19 to 0.80 for PT. For the remaining factors the ICCs ranged from 0.04 (FII) to 0.93 (FVIII). CONCLUSIONS: Variance components that could be attributed to technicians or laboratory procedures were substantial, led to disappointingly low intraclass correlation coefficients for several factors, and were pronounced for some of the platforms. Our findings call for sustained efforts to raise the level of standardization of the structures and procedures involved in the quantification of coagulation factors.
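The ICC idea used above can be sketched with a one-way, method-of-moments estimate (subjects as rows, laboratories as "raters"). This is a simplified ICC(1), not the exact mixed-model ANOVA decomposition used in the study; the function name and data layout are illustrative assumptions.

```python
from statistics import mean, variance

def icc_oneway(table):
    """One-way ICC(1) from a subjects x laboratories table of single
    measurements: the share of total variance attributable to true
    between-subject differences rather than laboratory/technician noise."""
    k = len(table[0])                                  # laboratories per subject
    subject_means = [mean(row) for row in table]
    msb = k * variance(subject_means)                  # between-subject mean square
    msw = mean(variance(row) for row in table)         # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)
```

When all laboratories agree perfectly on each subject, the within-subject mean square is zero and the ICC is 1; disagreement between laboratories pulls it toward (or below) zero.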
Abstract:
The Summer Olympic Games constitute the biggest concentration of human sports and activities in a particular place and time since 776 BCE, when the written history of the Olympic Games in Olympia began. Summer and Winter Olympic anti-doping laboratories, accredited by the International Olympic Committee in the past and by the World Anti-Doping Agency in the present, attract worldwide interest as they apply all new analytical advances in the fight against doping in sports, in the hope that this major human event will not be tainted by this negative phenomenon. This article summarizes the new analytical advances, technologies, and knowledge used by the Olympic laboratories, most of which are eventually incorporated into routine anti-doping analysis.
Abstract:
The primary purpose of this project was to assess the potential of a nondestructive remote sensing system, specifically, ground penetrating subsurface interface radar, for identification and evaluation of D-cracking pavement failures. A secondary purpose was to evaluate the effectiveness of this technique for locating voids under pavements and determining the location of steel reinforcement. From the data collected and the analysis performed to date, the following conclusions can be made regarding the ground penetrating radar system used for this study: (1) steel reinforcement can be accurately located; (2) pavement thickness can be determined; (3) distressed areas in pavements can be located and broadly classified as to severity of deterioration; (4) voids under pavements can be located; and (5) higher resolution recording equipment is required to accurately determine both the thickness of sound pavement remaining over distressed areas and the depth of void areas under pavements.
Abstract:
Multisensory interactions have been documented within low-level, even primary, cortices and at early post-stimulus latencies. These effects are in turn linked to behavioral and perceptual modulations. In humans, visual cortex excitability, as measured by transcranial magnetic stimulation (TMS) induced phosphenes, can be reliably enhanced by the co-presentation of sounds. This enhancement occurs at pre-perceptual stages and is selective for different types of complex sounds. However, the source(s) of auditory inputs effectuating these excitability changes in primary visual cortex remain disputed. The present study sought to determine if direct connections between low-level auditory cortices and primary visual cortex are mediating these kinds of effects by varying the pitch and bandwidth of the sounds co-presented with single-pulse TMS over the occipital pole. Our results from 10 healthy young adults indicate that both the central frequency and bandwidth of a sound independently affect the excitability of visual cortex during processing stages as early as 30 msec post-sound onset. Such findings are consistent with direct connections mediating early-latency, low-level multisensory interactions within visual cortices.
Abstract:
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand.
Abstract:
Excitation-continuous music instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling the improvement of naturalness and realism. We present a framework for modeling bowing control parameters in violin performance. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing control parameter signals. We model the temporal contour of bow velocity, bow pressing force, and bow-bridge distance as sequences of short Bézier cubic curve segments. Considering different articulations, dynamics, and performance contexts, a number of note classes are defined. Contours of bowing parameters in a performance database are analyzed at note level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes in consideration. As a result, contour analysis of the bowing parameters of each note yields an optimal representation vector that is sufficient for reconstructing original contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures suitable for both the analysis and synthesis of bowing parameter contours. By using the estimated models, synthetic contours can be generated through a bow planning algorithm able to reproduce possible constraints caused by the finite length of the bow. Rendered contours are successfully used in two preliminary synthesis frameworks: digital waveguide-based bowed-string physical modeling and sample-based spectral-domain synthesis.
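The contour representation described above can be sketched as follows: each bowing parameter (e.g. bow velocity) is a sequence of short cubic Bézier segments, each defined by four control points. This is a minimal evaluator under that assumption, not the paper's actual grammar or fitting procedure; names and the scalar control-point layout are illustrative.

```python
def bezier3(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at t in [0, 1] (scalar control points)."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def render_contour(segments, samples_per_segment=50):
    """Concatenate short cubic segments into one bowing-parameter contour,
    e.g. the bow-velocity profile of a single note."""
    contour = []
    for p0, p1, p2, p3 in segments:
        for i in range(samples_per_segment):
            contour.append(bezier3(p0, p1, p2, p3, i / samples_per_segment))
    return contour
```

Fitting then runs in the opposite direction: given a measured contour, one searches for the few control points per segment that reconstruct it most closely, which is what makes the representation vector compact.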
Abstract:
This paper presents a framework in which samples of bowing gesture parameters are retrieved and concatenated from a database of violin performances by attending to an annotated input score. The resulting bowing parameter signals are then used to synthesize sound by means of both a digital waveguide violin physical model and a spectral-domain additive synthesizer.
Abstract:
For the recognition of sounds to benefit perception and action, their neural representations should also encode their current spatial position and their changes in position over time. The dual-stream model of auditory processing postulates separate (albeit interacting) processing streams for sound meaning and for sound location. Using a repetition priming paradigm in conjunction with distributed source modeling of auditory evoked potentials, we determined how individual sound objects are represented within these streams. Changes in perceived location were induced by interaural intensity differences, and sound location was either held constant or shifted across initial and repeated presentations (from one hemispace to the other in the main experiment or between locations within the right hemispace in a follow-up experiment). Location-linked representations were characterized by differences in priming effects between pairs presented to the same vs. different simulated lateralizations. These effects were significant at 20-39 ms post-stimulus onset within a cluster on the posterior part of the left superior and middle temporal gyri; and at 143-162 ms within a cluster on the left inferior and middle frontal gyri. Location-independent representations were characterized by a difference between initial and repeated presentations, independently of whether or not their simulated lateralization was held constant across repetitions. This effect was significant at 42-63 ms within three clusters on the right temporo-frontal region; and at 165-215 ms in a large cluster on the left temporo-parietal convexity. Our results reveal two varieties of representations of sound objects within the ventral/What stream: one location-independent, as initially postulated in the dual-stream model, and the other location-linked.
Abstract:
Recent ink dating methods have focused mainly on changes in solvent amounts occurring over time. A promising method was developed at the Landeskriminalamt of Munich using thermal desorption (TD) followed by gas chromatography / mass spectrometry (GC/MS) analysis. Sequential extractions of the phenoxyethanol present in ballpoint pen ink entries were carried out at two different temperatures. This method is applied in forensic practice and is currently implemented in several laboratories participating in the InCID group (International Collaboration on Ink Dating). However, harmonization of the method between the laboratories proved to be a particularly sensitive and time-consuming task. The main aim of this work was therefore to implement the TD-GC/MS method at the Bundeskriminalamt (Wiesbaden, Germany) in order to evaluate whether results were comparable to those obtained in Munich. First, validation criteria such as limits of reliable measurement, linearity, and repeatability were determined. Samples were prepared in three different laboratories using the same inks and analyzed using two TDS-GC/MS instruments (one in Munich and one in Wiesbaden). The inter- and intra-laboratory variability of the ageing parameter was determined and ageing curves were compared. While inks stored in similar conditions yielded comparable ageing curves, it was observed that significantly different storage conditions had an influence on the resulting ageing curves. Finally, interpretation models, such as thresholds and trend tests, were evaluated and discussed in view of the obtained results. Trend tests were considered more suitable than threshold models. As both approaches showed limitations, an alternative model, based on the slopes of the ageing curves, was also proposed.
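The slope-based interpretation model mentioned above can be sketched as an ordinary least-squares slope of the ageing parameter over sampling time, followed by a decision against a cut-off. The cut-off value and function names here are hypothetical placeholders; in practice they would have to come from validated reference data, not from this sketch.

```python
def slope(times, values):
    """Ordinary least-squares slope of the ageing parameter over time."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def shows_ageing_trend(times, values, min_slope=-0.01):
    """Decide 'still ageing' when the parameter decreases faster than a
    hypothetical cut-off; -0.01 is an illustrative value only."""
    return slope(times, values) < min_slope
```

A steadily decreasing phenoxyethanol-based ageing parameter would yield a clearly negative slope and be classified as a recent, still-ageing entry.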
Abstract:
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e. a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as in two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl's gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds.
Abstract:
The neural response to a violation of sequences of identical sounds is a typical example of the brain's sensitivity to auditory regularities. Previous literature interprets this effect as pre-attentive and unconscious processing of sensory stimuli. By contrast, a violation of auditory global regularities, i.e. regularities based on repeating groups of sounds, is typically detectable only when subjects can consciously perceive it. Here, we challenge the notion that global detection implies consciousness by testing the neural response to global violations in a group of 24 patients with post-anoxic coma (three females, age range 45-87 years), treated with mild therapeutic hypothermia and sedation. By applying a decoding analysis to electroencephalographic responses to standard versus deviant sound sequences, we found above-chance decoding performance in 10 of 24 patients (Wilcoxon signed-rank test, P < 0.001), despite five of them being mildly hypothermic, sedated and unarousable. Furthermore, consistent with previous findings based on the mismatch negativity, the progression of this decoding performance was informative of patients' chances of awakening (78% predictive of awakening). Our results show for the first time that detection of global regularities at the neural level exists despite a deeply unconscious state.
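The decoding step described above can be sketched as a cross-validated classifier separating EEG epochs evoked by standard versus deviant sequences, whose accuracy is then compared against chance. This toy version uses leave-one-out validation with a nearest-class-mean classifier as a stand-in for the study's actual decoder; all names and the feature layout are illustrative assumptions.

```python
from statistics import mean

def loo_nearest_mean_accuracy(epochs, labels):
    """Leave-one-out decoding of standard vs deviant epochs: each held-out
    epoch is assigned to the class whose training-set mean it is closest to
    (squared Euclidean distance). Above-chance accuracy suggests the brain
    distinguished the two sequence types."""
    correct = 0
    for i in range(len(epochs)):
        train = [(e, l) for j, (e, l) in enumerate(zip(epochs, labels)) if j != i]
        class_means = {}
        for cls in set(labels):
            members = [e for e, l in train if l == cls]
            class_means[cls] = [mean(col) for col in zip(*members)]
        pred = min(class_means,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(epochs[i], class_means[c])))
        correct += pred == labels[i]
    return correct / len(epochs)
```

In the study, per-patient accuracies like this are then tested against the 0.5 chance level with a Wilcoxon signed-rank test.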