20 results for Auditory Threshold
Abstract:
Gene filtering is a useful preprocessing technique often applied to microarray datasets. However, it is not common practice because clear guidelines are lacking and it bears the risk of excluding some potentially relevant genes. In this work, we propose to model microarray data as a mixture of two Gaussian distributions, which allows us to obtain an optimal filter threshold in terms of the gene expression level.
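To illustrate the approach described in this abstract, the sketch below fits a two-component Gaussian mixture to (log-scale) expression levels and reads off a filter threshold where the two components are equally likely. The use of scikit-learn's GaussianMixture and the synthetic expression values are assumptions for illustration, not the authors' implementation.

```python
# Sketch: fit a two-component Gaussian mixture to (log) expression levels and
# take the point where the two components are equally likely as the filter
# threshold. scikit-learn and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical per-gene mean log-expression: a "noise" mode and a "signal" mode.
expr = np.concatenate([rng.normal(4.0, 0.7, 6000), rng.normal(8.0, 1.2, 4000)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(expr.reshape(-1, 1))

# Search a fine grid between the two component means for the point where the
# posterior responsibilities of the two components cross (0.5 each).
lo, hi = np.sort(gmm.means_.ravel())
grid = np.linspace(lo, hi, 2000).reshape(-1, 1)
resp = gmm.predict_proba(grid)
threshold = grid[np.argmin(np.abs(resp[:, 0] - resp[:, 1]))][0]

print(f"filter threshold ~ {threshold:.2f}; genes below it would be filtered out")
```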
Abstract:
We develop an analytical approach to the susceptible-infected-susceptible epidemic model that allows us to unravel the true origin of the absence of an epidemic threshold in heterogeneous networks. We find that a delicate balance between the number of high degree nodes in the network and the topological distance between them dictates the existence or absence of such a threshold. In particular, small-world random networks with a degree distribution decaying slower than an exponential have a vanishing epidemic threshold in the thermodynamic limit.
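The abstract describes an analytical treatment; as a purely numerical counterpart, the sketch below runs a discrete-time SIS process on a power-law configuration-model network and reports the surviving infected fraction at several infection rates, which is one way to probe whether prevalence persists down to very small rates. The network size, degree exponent, recovery probability, and infection rates are illustrative assumptions, and networkx is used only for convenience.

```python
# Sketch (not the paper's analytical derivation): a synchronous discrete-time
# SIS simulation on a power-law configuration-model network, used to probe the
# infected fraction surviving at small infection rates.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n, gamma = 2000, 2.5
degrees = np.clip(nx.utils.powerlaw_sequence(n, gamma), 2, None).astype(int)
if degrees.sum() % 2:                       # configuration model needs an even sum
    degrees[0] += 1
G = nx.Graph(nx.configuration_model(degrees, seed=1))   # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

def sis_prevalence(beta, mu=0.2, steps=100, seed_frac=0.05):
    """Final infected fraction of a synchronous SIS process with rate beta."""
    infected = rng.random(n) < seed_frac
    for _ in range(steps):
        new = infected.copy()
        for u in G.nodes():
            if infected[u]:
                if rng.random() < mu:        # recovery
                    new[u] = False
            else:                            # infection by infected neighbours
                k_inf = sum(infected[v] for v in G[u])
                if rng.random() < 1 - (1 - beta) ** k_inf:
                    new[u] = True
        infected = new
    return infected.mean()

for beta in (0.02, 0.05, 0.1, 0.2):
    print(beta, round(float(sis_prevalence(beta)), 3))
```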
Abstract:
The mismatch negativity (MMN) is an electrophysiological marker of auditory change detection in the event-related brain potential and has been proposed to reflect an automatic comparison process between an incoming stimulus and the representation of prior items in a sequence. There is evidence for two main functional subcomponents comprising the MMN, generated by temporal and frontal brain areas, respectively. Using data obtained in an MMN paradigm, we performed time-frequency analysis to reveal the changes in oscillatory neural activity in the theta band. The results suggest that the frontal component of the MMN is brought about by an increase in theta power for the deviant trials and, possibly, by an additional contribution of theta phase alignment. By contrast, the temporal component of the MMN, best seen in recordings from mastoid electrodes, is generated by phase resetting of the theta rhythm with no concomitant power modulation. Thus, the frontal and temporal MMN components not only differ in their functional significance but also appear to be generated by distinct neurophysiological mechanisms.
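The two mechanisms contrasted above, a theta power increase versus phase alignment without a power change, correspond to standard time-frequency measures. The sketch below computes theta-band power and inter-trial phase coherence with a complex Morlet wavelet on synthetic trials; the 6 Hz centre frequency, sampling rate, and synthetic data are assumptions, not the study's actual pipeline.

```python
# Sketch of the two measures discussed above on synthetic single-trial data:
# theta-band power and inter-trial phase coherence (a measure of phase alignment).
import numpy as np

fs, f0, n_trials, n_samp = 250, 6.0, 100, 250     # 1-s epochs, theta at 6 Hz
t = np.arange(n_samp) / fs
rng = np.random.default_rng(2)

# Synthetic "deviant" trials: a 6 Hz component with partly consistent phase plus noise.
trials = np.array([np.sin(2 * np.pi * f0 * t + rng.normal(0, 0.8))
                   + rng.normal(0, 1.0, n_samp) for _ in range(n_trials)])

# Complex Morlet wavelet centred on the theta band.
cycles = 5
sd_t = cycles / (2 * np.pi * f0)
wt = np.arange(-3 * sd_t, 3 * sd_t, 1 / fs)
wavelet = np.exp(2j * np.pi * f0 * wt) * np.exp(-wt**2 / (2 * sd_t**2))

analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])

power = np.abs(analytic) ** 2                               # per-trial theta power
itc = np.abs(np.mean(analytic / np.abs(analytic), axis=0))  # inter-trial coherence

print("mean theta power:", power.mean().round(2))
print("peak inter-trial phase coherence:", itc.max().round(2))
```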
Abstract:
BACKGROUND AND PURPOSE: The high variability of CSF volumes partly explains the inconsistency of anesthetic effects, but may also be due to the image analysis itself. In this study, criteria for threshold selection are anatomically defined. METHODS: T2 MR images (n = 7 cases) were analyzed using 3-dimensional software. Maximal-minimal thresholds were selected in standardized blocks of 50 slices of the dural sac ending caudally at the L5-S1 intervertebral space (caudal blocks) and at mid-L3 (rostral blocks). Maximal CSF thresholds: the threshold value was increased until at least one voxel in a CSF area appeared unlabeled and then decreased until that voxel was labeled again; this final threshold was selected. Minimal root thresholds: threshold values that selected the cauda equina root area but not adjacent gray voxels at the CSF-root interface were chosen. RESULTS: Significant differences were found between caudal and rostral thresholds. No significant differences were found between expert and nonexpert observers. Average max/min thresholds were around 1.30, but max/min CSF volumes were around 1.15. Great interindividual CSF volume variability was detected (max/min volumes 1.6-2.7). CONCLUSIONS: The estimation of a close range of CSF volumes, which probably contains the real CSF volume, can be standardized and calculated prior to certain intrathecal procedures.
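The "maximal CSF threshold" rule in the METHODS section can be read as a simple search over intensity thresholds. The sketch below applies that rule to a synthetic intensity block with a known CSF region; the intensity values and the mask are illustrative assumptions, not real T2 data.

```python
# Sketch of the "maximal CSF threshold" rule described above: raise the
# intensity threshold until at least one voxel inside a known CSF region drops
# out of the labelled mask, then step back one unit.
import numpy as np

rng = np.random.default_rng(3)
block = rng.normal(300, 40, size=(50, 64, 64))          # hypothetical block of 50 T2 slices
csf_mask = np.zeros_like(block, dtype=bool)
csf_mask[:, 20:40, 20:40] = True                        # voxels known to be CSF
block[csf_mask] = rng.normal(700, 60, csf_mask.sum())   # CSF is bright on T2

def maximal_csf_threshold(volume, mask, start=0, step=1):
    """Highest threshold at which every voxel in `mask` is still labelled."""
    thr = start
    while (volume[mask] >= thr).all():
        thr += step
    return thr - step                                    # last fully-labelled value

thr = maximal_csf_threshold(block, csf_mask)
labelled = block >= thr
print("maximal CSF threshold:", thr)
print("labelled CSF volume (voxels):", int(labelled[csf_mask].sum()))
```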
Abstract:
This project addresses methodological and technological challenges in the development of multi-modal data acquisition and analysis methods for the representation of instrumental playing technique in music performance through auditory-motor patterning models. The case study is violin playing: a multi-modal database of violin performances has been constructed by recording different musicians while they played short exercises on different violins. The exercise set and recording protocol were designed to sample the space defined by dynamics (from piano to forte) and tone (from sul tasto to sul ponticello), for each bow stroke type played on each of the four strings (three different pitches per string) at two different tempi. The data, containing audio, video, and motion capture streams, have been processed and segmented to facilitate upcoming analyses. From the acquired motion data, the positions of the instrument string ends and the bow hair ribbon ends are tracked and processed to obtain a number of bowing descriptors suited to a detailed description and analysis of the bow motion patterns taking place during performance. Likewise, a number of perceptual sound attributes are computed from the audio streams. In addition to the methodology and the implementation of a number of data acquisition tools, this project presents preliminary results from analyzing bowing technique on a multi-modal violin performance database that is unique in its class. A further contribution of this project is the data itself, which will be made available to the scientific community through the repovizz platform.
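As an illustration of the kind of bowing descriptors that can be derived from the tracked string and bow hair ribbon positions, the sketch below computes a bow-bridge distance and a bow transversal velocity from synthetic 3-D marker trajectories. The marker layout, frame rate, and trajectory are assumptions for illustration, not the project's actual descriptor set.

```python
# Sketch of two common bowing descriptors derivable from tracked 3-D positions:
# bow-bridge distance and bow transversal velocity.
import numpy as np

fps = 240                                    # hypothetical mocap frame rate
n = 480                                      # 2 s of frames
t = np.arange(n) / fps

# Bridge end of one string (metres); the string is taken as the origin reference.
bridge_end = np.tile([0.0, 0.0, 0.0], (n, 1))

# Bow hair ribbon endpoints: the bow slides back and forth across the string.
stroke = 0.15 * np.sin(2 * np.pi * 1.5 * t)            # detache-like motion
frog = np.stack([stroke - 0.35, np.full(n, 0.04), np.zeros(n)], axis=1)
tip = np.stack([stroke + 0.35, np.full(n, 0.04), np.zeros(n)], axis=1)

def point_to_segment_distance(p, a, b):
    """Frame-wise distance from point p to segment ab (all (n, 3) arrays)."""
    ab, ap = b - a, p - a
    s = np.clip((ap * ab).sum(1) / (ab * ab).sum(1), 0.0, 1.0)
    closest = a + s[:, None] * ab
    return np.linalg.norm(p - closest, axis=1)

# Bow-bridge distance: distance from the bridge end of the string to the hair ribbon.
bow_bridge_dist = point_to_segment_distance(bridge_end, frog, tip)

# Bow transversal velocity: speed of the hair ribbon midpoint along the stroke axis.
mid = 0.5 * (frog + tip)
bow_velocity = np.gradient(mid[:, 0], 1 / fps)

print("mean bow-bridge distance (m):", bow_bridge_dist.mean().round(3))
print("peak bow velocity (m/s):", np.abs(bow_velocity).max().round(2))
```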