966 results for Computer sound processing
Abstract:
"Contract W-7405-ENG.36 with the U.S. Atomic Energy Commission."
Abstract:
Both the transmission medium and sound recording or playback equipment introduce high-frequency noise components into signals. In this final-year project (TFC), an audio filtering system was designed and implemented to remove these high-frequency components. Since the human ear cannot perceive sounds above 20 kHz, this limit was taken as the maximum frequency to preserve in the signal. The work began by studying the problem signal through its frequency spectrum, simulated by means of the discrete Fourier transform (DFT). Once the high-frequency components to be attenuated were identified, the different low-pass filter options were studied. Initially, the design of analog Butterworth or Chebyshev filters, or of IIR (Infinite Impulse Response) digital filters based on them, was considered. However, although these filters meet the magnitude specifications, they do not provide linear phase in the passband. For this reason, a FIR (Finite Impulse Response) digital filter was designed that strictly meets the specifications and exhibits linear phase in the passband. The behaviour of this filter was simulated with the problem signal to verify its correct operation. This final design was then implemented in C and compiled for a Microchip microcontroller, and simulation tests were carried out using the Stimulus feature of MPLAB. In summary, a FIR low-pass filter was designed to condition an audio signal and was subsequently implemented on a Microchip microcontroller.
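The design constraint described here (keep everything up to 20 kHz, attenuate above it, with linear phase) can be illustrated with a windowed-sinc FIR design. The sketch below is a minimal Python example, not the project's actual code (which was written in C for a Microchip microcontroller); the 96 kHz sampling rate, 101-tap length and white-noise stand-in signal are assumptions.

```python
# Minimal FIR low-pass design sketch (values are assumptions, not the thesis specs).
import numpy as np
from scipy import signal

fs = 96_000          # assumed sampling rate of the noisy recording, Hz
cutoff = 20_000      # keep the audible band, attenuate everything above 20 kHz
numtaps = 101        # odd tap count -> type-I FIR with exactly linear phase

# Inspect the problem signal's spectrum with the DFT (white-noise stand-in here).
x = np.random.randn(fs)
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

# Windowed-sinc FIR low-pass; linear phase in the passband by construction.
taps = signal.firwin(numtaps, cutoff, fs=fs)
y = signal.lfilter(taps, 1.0, x)

# Verify the magnitude response against the 20 kHz specification.
w, h = signal.freqz(taps, worN=2048, fs=fs)
print("worst-case stopband level above 24 kHz (dB):",
      20 * np.log10(np.abs(h[w > 24_000]).max()))
```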
Abstract:
Current hearing-assistive technology performs poorly in noisy multi-talker conditions. The goal of this thesis was to establish the feasibility of using EEG to guide acoustic processing in such conditions. To attain this goal, this research developed a model via the constructive research method, relying on a literature review. Several approaches have been shown to improve the performance of hearing-assistive devices under multi-talker conditions, namely beamforming spatial filtering, model-based sparse coding shrinkage, and onset enhancement of the speech signal. Prior research has shown that electroencephalography (EEG) signals contain information about whether the person is actively listening, what the listener is listening to, and where the attended sound source is. This thesis constructed a model for using EEG information to control beamforming, model-based sparse coding shrinkage, and onset enhancement of the speech signal. The purpose of this model is to propose a framework for using EEG signals to control sound processing so as to select a single talker in a noisy environment containing multiple talkers speaking simultaneously. On a theoretical level, the model showed that EEG can control acoustic processing. An analysis of the model identified a requirement for real-time processing and showed that the model inherits the computationally intensive properties of acoustic processing, although the model itself is of low complexity and places a relatively small load on computational resources. A research priority is to develop a prototype that controls hearing-assistive devices with EEG. The thesis concludes by highlighting challenges for future research.
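One link in the proposed chain, an EEG-derived estimate of the attended direction steering a beamformer, could be sketched roughly as follows. Everything here (microphone spacing, sampling rate, the trivial EEG "decoder") is an illustrative assumption, not the thesis model itself.

```python
# Schematic sketch: an EEG-decoded attended direction steering a two-microphone
# delay-and-sum beamformer. All names, sizes and the decoding step are assumptions.
import numpy as np

fs = 16_000            # assumed audio sampling rate, Hz
mic_spacing = 0.15     # assumed microphone spacing, metres
c = 343.0              # speed of sound, m/s

def attended_direction_from_eeg(eeg_features):
    """Placeholder for an EEG decoder returning the attended azimuth in radians."""
    return np.pi / 6 if eeg_features.mean() > 0 else -np.pi / 6

def delay_and_sum(left, right, azimuth):
    """Steer toward `azimuth` by delaying one channel and averaging the pair."""
    delay = int(round(mic_spacing * np.sin(azimuth) / c * fs))
    if delay >= 0:
        right = np.roll(right, delay)
    else:
        left = np.roll(left, -delay)
    return 0.5 * (left + right)

# Usage with stand-in data: 1 s of two-microphone audio and 1 s of 64-channel EEG.
left = np.random.randn(fs)
right = np.random.randn(fs)
eeg = np.random.randn(64, 250)            # assumed 250 Hz EEG sampling rate
out = delay_and_sum(left, right, attended_direction_from_eeg(eeg))
```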
Abstract:
Report for the scientific sojourn carried out at the Music Technology Area (Sound Processing and Control Lab), Faculty of Music, McGill University, Montreal, Canada, from October to December 2005.The aim of this research is to study the singing voice for controlling virtual musical instrument synthesis. It includes analysis and synthesis algorithms based on spectral audio processing. After digitalising the acoustic voice signal in the computer, a number of expressive descriptors of the singer are extracted. This process is achieved synchronously, thus all the nuance of the singer performance have been tracked. In a second stage, the extracted parameters are mapped to a sound synthesizer, the so-called digital musical instruments. In order achieve it, several tests with music students of the Faculty of Music, McGill University have been developed. These experiments have contributed to evaluate the system and to derive new control strategies to integrate: clarinet synthesis, bass guitar, visual representation of voice signals.
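As a rough illustration of the analysis-and-mapping stage described above, the sketch below extracts two expressive descriptors (frame energy and an autocorrelation-based pitch estimate) and maps them to hypothetical synthesizer controls; the frame size, pitch search range and mapping are assumptions, not the system developed at McGill.

```python
# Minimal descriptor-extraction and mapping sketch; all parameters are assumptions.
import numpy as np

fs = 44_100

def descriptors(frame):
    """Return (energy, f0 estimate in Hz) for one analysis frame."""
    energy = float(np.mean(frame ** 2))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / 800), int(fs / 80)          # search pitch between 80 and 800 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))
    return energy, fs / lag

def map_to_synth(energy, f0):
    """Hypothetical mapping: energy -> breath pressure, f0 -> oscillator frequency."""
    return {"pressure": min(1.0, 50 * energy), "osc_freq": f0}

frame = np.sin(2 * np.pi * 220 * np.arange(2048) / fs)   # stand-in voiced frame
print(map_to_synth(*descriptors(frame)))
```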
Abstract:
SOUND OBJECTS IN TIME, SPACE AND ACTION

The term "sound object" describes an auditory experience that is associated with an acoustic event produced by a sound source. At the cortical level, sound objects are represented by temporo-spatial activity patterns within distributed neural networks. This investigation concerns temporal, spatial and action-related aspects, assessed in normal subjects using electrical imaging or measurement of motor activity induced by transcranial magnetic stimulation (TMS).

Hearing the same sound again has been shown to facilitate behavioral responses (repetition priming) and to modulate neural activity (repetition suppression). In natural settings the same source is often heard again and again, with variations in its spectro-temporal and spatial characteristics. I have investigated how such repetitions influence response times in a living vs. non-living categorization task, and the associated spatio-temporal patterns of brain activity in humans. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory cortex as a function of the temporal history of exposure to these objects. Frequently heard sounds are coded by a modulation of a bilateral network. Recently heard sounds, independently of the number of previous exposures, are coded by a modulation of a left-sided network.

With sound objects that carry spatial information, I have investigated how spatial aspects of the repetitions influence neural representations. Dynamic analyses of distributed source estimations revealed an ultra-rapid discrimination of sound objects characterized by spatial cues. This discrimination involved two temporo-spatially distinct cortical representations, one associated with position-independent and the other with position-linked representations within the auditory ventral/"what" stream.

Action-related sounds have been shown to increase the excitability of motoneurons within the primary motor cortex, possibly via an input from the mirror neuron system. The role of these motor representations remains unclear. I have investigated repetition priming-induced plasticity of the motor representations of action sounds by measuring motor activity induced by TMS pulses applied over the hand motor cortex. TMS delivered to the hand area within the primary motor cortex yielded larger motor evoked potentials (MEPs) while the subject was listening to sounds associated with manual rather than non-manual actions. Repetition suppression was observed at the motoneuron level, since the MEPs were smaller during repeated exposure to the same manual action sound. I discuss these results in terms of a specialized neural network involved in sound processing that is characterized by repetition-induced plasticity.

Thus, the neural networks that underlie sound object representations are characterized by modulations which keep track of the temporal and spatial history of the sound and, in the case of action-related sounds, also of the way in which the sound is produced.
Abstract:
The overall system is designed to permit automatic collection of delamination field data for bridge decks. In addition to measuring and recording the data in the field, the system provides for transferring the recorded data to a personal computer for processing and plotting. This permits rapid turnaround from data collection to a finished plot of the results in a fraction of the time previously required for manual analysis of the analog data captured on a strip chart recorder. In normal operation the Delamtect provides an analog voltage for each of two channels which is proportional to the extent of any delamination. These voltages are recorded on a strip chart for later visual analysis. An event marker voltage, produced by a momentary push button on the handle, is also provided by the Delamtect and recorded on a third channel of the analog recorder.
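The PC-side processing and plotting step described above might look roughly like the following sketch, which plots the two channel voltages against position and overlays event-marker presses; the synthetic data, units and 2.5 V marker threshold are assumptions, not the actual system's data format.

```python
# Illustrative plotting sketch for the transferred Delamtect data; all values are
# synthetic stand-ins, not real field measurements.
import numpy as np
import matplotlib.pyplot as plt

n = 2000
position = np.linspace(0, 20, n)                       # metres travelled (assumed)
ch1 = 0.2 + 0.8 * (np.abs(position - 8) < 1.0)         # stand-in delamination signal
ch2 = 0.2 + 0.6 * (np.abs(position - 14) < 0.5)
event = np.where(np.isclose(position, 10, atol=0.01), 5.0, 0.0)   # one marker press

plt.plot(position, ch1, label="channel 1 (V)")
plt.plot(position, ch2, label="channel 2 (V)")
for i, xpos in enumerate(position[event > 2.5]):       # assumed 2.5 V marker threshold
    plt.axvline(xpos, color="r", linestyle="--",
                label="event marker" if i == 0 else None)
plt.xlabel("position (m)"); plt.ylabel("Delamtect output (V)"); plt.legend()
plt.show()
```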
Abstract:
Middle ear infections (acute otitis media, AOM) are among the most common infectious diseases in childhood, their incidence being greatest at the age of 6–12 months. Approximately 10–30% of children undergo repetitive periods of AOM, referred to as recurrent acute otitis media (RAOM). Middle ear fluid during an AOM episode causes, on average, 20–30 dB of hearing loss lasting from a few days to as much as a couple of months. It is well known that even a mild permanent hearing loss has an effect on language development but so far there is no consensus regarding the consequences of RAOM on childhood language acquisition. The results of studies on middle ear infections and language development have been partly discrepant and the exact effects of RAOM on the developing central auditory nervous system are as yet unknown. This thesis aims to examine central auditory processing and speech production among 2-year-old children with RAOM. Event-related potentials (ERPs) extracted from electroencephalography can be used to objectively investigate the functioning of the central auditory nervous system. For the first time this thesis has utilized auditory ERPs to study sound encoding and preattentive auditory discrimination of speech stimuli, and neural mechanisms of involuntary auditory attention in children with RAOM. Furthermore, the level of phonological development was studied by investigating the number and the quality of consonants produced by these children. Acquisition of consonant phonemes, which are harder to hear than vowels, is a good indicator of the ability to form accurate memory representations of ambient language and has not been studied previously in Finnish-speaking children with RAOM. The results showed that the cortical sound encoding was intact but the preattentive auditory discrimination of multiple speech sound features was atypical in those children with RAOM. Furthermore, their neural mechanisms of auditory attention differed from those of their peers, thus indicating that children with RAOM are atypically sensitive to novel but meaningless sounds. The children with RAOM also produced fewer consonants than their controls. Noticeably, they had a delay in the acquisition of word-medial consonants and the Finnish phoneme /s/, which is acoustically challenging to perceive compared to the other Finnish phonemes. The findings indicate the immaturity of central auditory processing in the children with RAOM, and this might also emerge in speech production. This thesis also showed that the effects of RAOM on central auditory processing are long-lasting because the children had healthy ears at the time of the study. An effective neural network for speech sound processing is a basic requisite of language acquisition, and RAOM in early childhood should be considered as a risk factor for language development.
Abstract:
Background: Voice processing in real time is challenging. A drawback of previous work on Hypokinetic Dysarthria (HKD) recognition is the requirement of controlled settings in a laboratory environment. A personal digital assistant (PDA) has been developed for home assessment of PD patients. The PDA offers sound processing capabilities, which allow a module for the recognition and quantification of HKD to be developed. Objective: To compose an algorithm for assessment of PD speech severity in the home environment based on a review synthesis. Methods: A two-tier review methodology is utilized. The first tier focuses on real-time problems in speech detection. In the second tier, acoustic features that are robust to medication changes in Levodopa-responsive patients are investigated for HKD recognition. Keywords such as "Hypokinetic Dysarthria" and "Speech recognition in real time" were used in the search engines. IEEE Xplore produced the most useful search hits compared with Google Scholar, ELIN, EBRARY, PubMed and LIBRIS. Results: Vowel and consonant formants are the most relevant acoustic parameters for reflecting PD medication changes. Since the relevant speech segments (consonants and vowels) contain a minority of the speech energy, intelligibility can be improved by amplifying the voice signal using amplitude compression. Pause detection and peak-to-average power ratio calculations for voice segmentation produce rich voice features in real time. Voice segmentation can be further enhanced by incorporating the zero-crossing rate (ZCR): consonants have a high ZCR whereas vowels have a low ZCR. The wavelet transform is found promising for voice analysis since it represents non-stationary voice signals over time using scale and translation parameters, so that voice intelligibility can be analyzed in each time frame of the waveform. Conclusions: This review evaluated HKD recognition algorithms in order to develop a tool for PD speech home assessment using modern mobile technology. An algorithm that tackles real-time constraints in HKD recognition, based on the review synthesis, is proposed. We suggest that speech features may be further processed using wavelet transforms and used with a neural network for detection and quantification of speech anomalies related to PD. Based on this model, patients' speech can be automatically categorized according to UPDRS speech ratings.
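A minimal sketch of the segmentation features named in the Results (pause detection, peak-to-average power, and zero-crossing rate) is given below; the frame length, hop size and thresholds are assumptions for illustration, not values taken from the reviewed literature.

```python
# Frame-wise voice segmentation sketch: ZCR is high for consonant-like frames,
# low for vowel-like frames; low power marks pauses. Thresholds are assumptions.
import numpy as np

def frame_features(x, frame_len=400, hop=200):
    feats = []
    for start in range(0, len(x) - frame_len, hop):
        frame = x[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2   # fraction of sign flips
        power = np.mean(frame ** 2)
        papr = np.max(frame ** 2) / (power + 1e-12)           # peak-to-average power
        feats.append((zcr, papr, power))                      # papr kept for later use
    return np.array(feats)

def label_frames(feats, zcr_thresh=0.3, power_thresh=1e-4):
    labels = []
    for zcr, papr, power in feats:
        if power < power_thresh:
            labels.append("pause")
        elif zcr > zcr_thresh:
            labels.append("consonant-like")
        else:
            labels.append("vowel-like")
    return labels

x = np.random.randn(16_000) * 0.01    # stand-in for one second of speech at 16 kHz
print(label_frames(frame_features(x))[:10])
```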
Abstract:
In this article, we present JavaVis, a new framework oriented toward teaching Computer Vision related subjects. It is a computer vision library divided into three main areas: the 2D package is intended for classical computer vision processing; the 3D package, which includes a complete 3D geometric toolset, is used for 3D vision computing; and the Desktop package comprises a tool for graphically designing and testing new algorithms. JavaVis is designed to be easy to use, both for launching and testing existing algorithms and for developing new ones.
Abstract:
Into the Bends of Time is a 40-minute work in seven movements for a large chamber orchestra with electronics, utilizing real-time computer-assisted processing of music performed by live musicians. The piece explores various combinations of interactive relationships between players and electronics, ranging from relatively basic processing effects to musical gestures achieved through stages of computer analysis, in which resulting sounds are crafted according to parameters of the incoming musical material. Additionally, some elements of interaction are multi-dimensional, in that they rely on the participation of two or more performers fulfilling distinct roles in the interactive process with the computer in order to generate musical material. Through processes of controlled randomness, several electronic effects induce elements of chance into their realization so that no two performances of this work are exactly alike. The piece gets its name from the notion that real-time computer-assisted processing, in which sound pressure waves are transduced into electrical energy, converted to digital data, artfully modified, converted back into electrical energy and transduced into sound waves, represents a “bending” of time.
The Bill Evans Trio featuring bassist Scott LaFaro and drummer Paul Motian is widely regarded as one of the most important and influential piano trios in the history of jazz, lauded for its unparalleled level of group interaction. Most analyses of Bill Evans’ recordings, however, focus on his playing alone and fail to take group interaction into account. This paper examines one performance in particular, of Victor Young’s “My Foolish Heart” as recorded in a live performance by the Bill Evans Trio in 1961. In Part One, I discuss Steve Larson’s theory of musical forces (expanded by Robert S. Hatten) and its applicability to jazz performance. I examine other recordings of ballads by this same trio in order to draw observations about normative ballad performance practice. I discuss meter and phrase structure and show how the relationship between the two is fixed in a formal structure of repeated choruses. I then develop a model of perpetual motion based on the musical forces inherent in this structure. In Part Two, I offer a full transcription and close analysis of “My Foolish Heart,” showing how elements of group interaction work with and against the musical forces inherent in the model of perpetual motion to achieve an unconventional, dynamic use of double-time. I explore the concept of a unified agential persona and discuss its role in imparting the song’s inherent rhetorical tension to the instrumental musical discourse.
Abstract:
OBJECTIVES: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. DESIGN: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. RESULTS: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. CONCLUSIONS: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
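A highly simplified, single-channel sketch of the contralateral control principle described above is shown below: each processor's back-end compression is relaxed when the contralateral processor's output energy rises, so its own output level drops, and vice versa. The power-law compressor, gains and limits are illustrative assumptions, not the published MOC implementation.

```python
# Single-channel sketch of mutually controlled back-end compression (MOC-style);
# all parameter values are assumptions for illustration only.
import numpy as np

def backend_compress(env, exponent):
    """Static power-law back-end compression of a channel envelope (0..1 range)."""
    return env ** exponent

def moc_pair(env_left, env_right, base_exponent=0.3, gain=0.5):
    """Process left/right envelopes of one frequency channel with mutual control."""
    out_left = np.zeros_like(env_left)
    out_right = np.zeros_like(env_right)
    exp_left = exp_right = base_exponent
    for t in range(len(env_left)):
        out_left[t] = backend_compress(env_left[t], exp_left)
        out_right[t] = backend_compress(env_right[t], exp_right)
        # Higher contralateral output -> less compression on this side (larger
        # exponent), hence lower output for sub-unity envelopes, mirroring the
        # behaviour described in the abstract.
        exp_left = np.clip(base_exponent + gain * out_right[t], base_exponent, 1.0)
        exp_right = np.clip(base_exponent + gain * out_left[t], base_exponent, 1.0)
    return out_left, out_right

left_env = np.abs(np.random.randn(1000)) * 0.1    # stand-in channel envelopes
right_env = np.abs(np.random.randn(1000)) * 0.05
out_l, out_r = moc_pair(left_env, right_env)
```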
Abstract:
Background: Schizophrenia is likely to be a consequence of DNA alterations that, together with environmental factors, lead to protein expression differences and the ultimate establishment of the illness. The superior temporal gyrus is implicated in schizophrenia and executes functions such as the processing of speech, language skills and sound processing. Methods: We performed an individual comparative proteome analysis, using two-dimensional gel electrophoresis, of the left posterior superior temporal gyrus (Wernicke's area - BA22p) from 9 schizophrenia patients and 6 healthy controls, identifying by mass spectrometry several protein expression alterations that could be related to the disease. Results: Our analysis revealed 11 downregulated and 14 upregulated proteins, most of them related to energy metabolism. Whereas many of the identified proteins have been previously implicated in schizophrenia, such as fructose-bisphosphate aldolase C, creatine kinase and neuron-specific enolase, new putative disease markers were also identified, such as dihydrolipoyl dehydrogenase, tropomyosin 3, breast cancer metastasis-suppressor 1, heterogeneous nuclear ribonucleoproteins C1/C2 and phosphate carrier protein, mitochondrial precursor. In addition, the differential expression of peroxiredoxin 6 (PRDX6) and glial fibrillary acidic protein (GFAP) was confirmed by western blot in schizophrenia prefrontal cortex. Conclusion: Our data support a dysregulation of energy metabolism in schizophrenia and suggest new markers that may contribute to a better understanding of this complex disease.
Abstract:
Eye tracking as an interface to operate a computer has been under research for a while, and new systems are still being developed that offer encouragement to people with illnesses that prevent them from using any other form of interaction with a computer. Although they use computer vision processing and a camera, these systems are usually based on head-mounted technology and are therefore considered contact-type systems. This paper describes the implementation of a human-computer interface based on a fully non-contact eye tracking vision system that allows people with tetraplegia to interface with a computer. As an assistive technology, a graphical user interface with special features was developed, including a virtual keyboard to allow user communication, fast access to pre-stored phrases and multimedia, and even internet browsing. The system was developed with a focus on low cost, user-friendly functionality, and user independence and autonomy.
Abstract:
SUMMARY

Methodological improvements realised over the last decades have permitted a better understanding of gastrointestinal motility. Nevertheless, a method allowing continuous following of luminal contents is still lacking. In order to study the motility of the whole human digestive tract, a new, minimally invasive technique was developed at the Department of Physiology in collaboration with the Swiss Federal Institute of Technology (EPFL). The method, called Magnet Tracking, is based on the detection of the magnetic field generated by swallowed ferromagnetic material. The aim of this work was to demonstrate the feasibility of this new approach for studying human gastrointestinal motility. The magnet used was a cylinder (ø 6x7 mm, 0.2 cm3) coated with silicone. The magnet tracking system consisted of a 4x4 matrix of Hall-effect sensors; signals from the sensors were digitised and sent to a laptop computer for processing and storage. Specific software was developed to analyse the progression of the magnet through the gastrointestinal tract in real time. Ten young, healthy volunteers were enrolled in the study. After a fasting period of 12 hours, they swallowed the magnet, which was then tracked for two consecutive days, 34 hours on average. Each subject was studied once, except one who was studied seven times. Subjects lay on their backs for the entire experiment but could interrupt it at any time. Evacuation of the magnet was verified in all subjects. The examination was well tolerated. The pill could be followed from the esophagus to the rectum. The trajectory of the magnet represented a "mould" of the anatomy of the digestive tract: a good superimposition with radiological anatomy (gastrografin contrast and CT) was obtained. Movements of the magnet were characterised by the periodicity, velocity and amplitude of displacements for every segment of the digestive tract. The physiological information corresponded well to data from current methods of studying gastrointestinal motility. This work demonstrates the feasibility of the new approach for studies of human gastrointestinal motility. The technique makes it possible to correlate the dynamics of digestive movements with anatomical data in real time. This minimally invasive method is ready for studies of human gastrointestinal motility under physiological as well as pathological conditions. Studies aiming to validate this new approach as a clinically relevant tool are under way in several centres in Switzerland and abroad.

Abstract: A new minimally invasive technique allowing for anatomical mapping and motility studies along the entire human digestive system is presented. The technique is based on continuous tracking of a small magnet progressing through the digestive tract. The coordinates of the magnet are calculated from signals recorded by 16 magnetic field sensors located over the abdomen. The magnet position, orientation and trajectory are displayed in real time. Ten young, healthy volunteers were followed for 34 h. The technique was well tolerated and no complications were encountered. The information obtained comprised the 3-D configuration of the digestive tract and the dynamics of the magnet's displacement (velocity, transit time, length estimation, rhythms). In the same individual, repeated examinations gave very reproducible results. The anatomical and physiological information obtained corresponded well to data from current methods and imaging. This simple, minimally invasive technique permits examination of the entire digestive tract and is suitable for both research and clinical studies. In combination with other methods, it may represent a useful tool for studies of GI motility under normal and pathological conditions.
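The coordinate-calculation step described in the abstract (estimating the magnet position from the 16 sensor readings) could be sketched as a nonlinear least-squares fit of a point-dipole model; the grid geometry, dipole moment and noise level below are assumptions, not the parameters of the actual Magnet Tracking system.

```python
# Dipole-fitting sketch: recover a magnet position from a 4x4 grid of field readings.
# Geometry, moment and noise are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[i * 0.05, j * 0.05, 0.0] for i in range(4) for j in range(4)])

def dipole_field(pos, moment):
    """Magnetic flux density of a point dipole at each sensor position (SI units)."""
    mu0 = 4e-7 * np.pi
    r = sensors - pos                              # vectors from dipole to sensors
    d = np.linalg.norm(r, axis=1, keepdims=True)
    return mu0 / (4 * np.pi) * (3 * r * (r @ moment)[:, None] / d**5 - moment / d**3)

true_pos = np.array([0.08, 0.07, 0.12])            # stand-in magnet position (metres)
moment = np.array([0.0, 0.0, 0.1])                 # assumed fixed dipole moment (A*m^2)
readings = dipole_field(true_pos, moment) + 1e-9 * np.random.randn(16, 3)

def residuals(pos):
    return (dipole_field(pos, moment) - readings).ravel()

fit = least_squares(residuals, x0=np.array([0.1, 0.1, 0.1]))
print("estimated position:", fit.x)
```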
Abstract:
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception postsound as a function of individual auditory versus visual preference/dominance using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound-offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration possibly to support prompt identification and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that unlike early effects this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.