15 results for Time code (Audio-visual technology)

in Universit


Relevance: 100.00%

Abstract:

Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory, and that this involvement is independent of low-level visual processing. Most importantly, we demonstrate that V1 and V5/MT are functionally linked and temporally synchronized during time encoding, whereas they are functionally independent and operate serially (V1 followed by V5/MT) while maintaining temporal information in working memory. These data challenge the traditional view of V1 and V5/MT as visuo-spatial feature detectors and highlight the functional contribution and the temporal dynamics of these brain regions in the processing of time in the millisecond range. The present project resulted in the paper 'How the visual brain encodes and keeps track of time' by Paolo Salvioni, Lysiann Kalmbach, Micah Murray and Domenica Bueti, now submitted for publication to the Journal of Neuroscience.

Relevance: 100.00%

Abstract:

The tools of visualisation occupy a central place in medicine. Far from being simple accessories of the gaze, they literally constitute the objects of medicine. This empirical acknowledgement and epistemological position open a vast field of investigation: the visual technologies of medical knowledge. This article studies the development and transformation of medical objects that have made it possible to assess the role of temporality in the epistemology of medicine. It first examines the general problem of the relationships between cinema, the animated image and medicine, and second, the contribution of the German doctor Martin Weiser to medical cinematography as a method. Finally, a typology is sketched out, organising the variety of visual technologies of movement from the perspective of the development of specific visual techniques in medicine.

Relevance: 100.00%

Abstract:

It has been demonstrated in earlier studies that patients with a cochlear implant have increased abilities for audio-visual integration, because the crude information transmitted by the cochlear implant requires persistent use of the complementary speech information from the visual channel. The brain network underlying these abilities needs to be clarified. We used an independent component analysis (ICA) of H2(15)O activation positron emission tomography (PET) data to explore occipito-temporal brain activity in post-lingually deaf patients with unilateral cochlear implants, shortly after implantation (T0) and several months post-implantation (T1), and in normal-hearing controls. In between-group analysis, patients at T1 had greater blood flow in the left middle temporal cortex compared with T0 and with normal-hearing controls. In within-group analysis, patients at T0 had a task-related ICA component in the visual cortex, whereas patients at T1 had one task-related ICA component in the left middle temporal cortex and another in the visual cortex. The time courses of the temporal and visual activities during the PET examination at T1 were highly correlated, indicating synchronized integrative activity. The greater involvement of the visual cortex and its close coupling with the temporal cortex at T1 confirm the importance of audio-visual integration in more experienced cochlear implant subjects at the cortical level.
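The reported coupling between the temporal and visual component time courses can be illustrated with a plain Pearson correlation. This is a minimal stdlib-only sketch of that one step, not the study's actual ICA/PET pipeline; the two component time courses below are hypothetical values, not data from the paper.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-scan activity of two ICA components (arbitrary units)
temporal_component = [0.2, 0.8, 1.1, 0.9, 1.4, 1.6]
visual_component   = [0.1, 0.7, 1.0, 1.1, 1.3, 1.7]

r = pearson(temporal_component, visual_component)
print(f"r = {r:.2f}")  # a high r suggests synchronized activity
```

A correlation near 1 between the two time courses is what the abstract interprets as synchronized integrative activity.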

Relevance: 100.00%

Abstract:

Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is wide variability in both the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior temporal cortex known for audio-visual integration. A further area that correlated positively with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuo-auditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

Relevance: 100.00%

Abstract:

Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated which factors during the initial multisensory experience are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of the initial presentations were multisensory, and all repetitions were visual only. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations; we could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual, multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.

Relevance: 100.00%

Abstract:

ABSTRACT: This thesis is composed of two main parts. The first addressed the question of whether the auditory and somatosensory systems, like their visual counterpart, comprise parallel functional pathways for processing identity and spatial attributes (so-called 'what' and 'where' pathways, respectively). The second part examined the independence of control processes mediating task switching across 'what' and 'where' pathways in the auditory and visual modalities. Concerning the first part, electrical neuroimaging of event-related potentials identified the spatio-temporal mechanisms subserving auditory (see Appendix, Study n°1) and vibrotactile (see Appendix, Study n°2) processing during two types of blocks of trials: 'what' blocks varied stimuli in their frequency independently of their location, whereas 'where' blocks varied the same stimuli in their location independently of their frequency. Concerning the second part (see Appendix, Study n°3), a psychophysical task-switching paradigm was used to investigate the hypothesis that the efficacy of control processes depends on the extent of overlap between the neural circuitry mediating the different tasks at hand, such that more effective task preparation (and, by extension, smaller switch costs) is achieved when the anatomical/functional overlap of this circuitry is small. Performance costs associated with switching tasks and/or switching sensory modalities were measured. Tasks required the analysis of either the identity or the spatial location of environmental objects ('what' and 'where' tasks, respectively) that were presented either visually or acoustically on any given trial. Pretrial cues informed participants of the upcoming task, but not of the sensory modality. In the audio-visual domain, the results showed that switch costs between tasks were significantly smaller when the sensory modality of the task switched versus when it repeated.
In addition, switch costs between the senses were correlated only when the sensory modality of the task repeated across trials and not when it switched. The collective evidence supports not only the independence of control processes mediating task switching and modality switching, but also the hypothesis that switch costs reflect competitive interference between neural circuits, which in turn can be diminished when these circuits are distinct. In the auditory and somatosensory domains, the findings show that a segregation of location versus recognition information is observed across sensory systems and that it emerges at around 100 ms in both sensory modalities. Our results also show that the functionally specialized pathways for audition and somatosensation involve largely overlapping brain regions, i.e. posterior superior and middle temporal cortices and inferior parietal areas. Both these properties (synchrony of differential processing and overlapping brain regions) probably optimize the relationships across sensory modalities. These results may therefore be indicative of a computationally advantageous organization for processing spatial and identity information.
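The switch costs discussed above are simply differences in mean performance between task-switch and task-repeat trials. A minimal sketch, assuming reaction times in milliseconds; the values are hypothetical, not data from the thesis:

```python
def mean(xs):
    """Arithmetic mean of a non-empty sequence."""
    return sum(xs) / len(xs)

def switch_cost(rt_switch, rt_repeat):
    """Switch cost = mean RT on task-switch trials minus mean RT on task-repeat trials."""
    return mean(rt_switch) - mean(rt_repeat)

# Hypothetical reaction times (ms) when the sensory modality repeats across trials
rt_task_switch = [620, 640, 655, 630]
rt_task_repeat = [540, 555, 548, 560]
print(switch_cost(rt_task_switch, rt_task_repeat))  # positive cost: switching is slower
```

Comparing this quantity across modality-repeat and modality-switch trials is what reveals whether the two kinds of switching draw on shared control processes.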

Relevance: 100.00%

Abstract:

The contribution of visual and nonvisual mechanisms to the spatial behavior of rats in the Morris water maze was studied with a computerized infrared tracking system, which switched off the room lights whenever the subject entered the inner circular area of the pool containing the escape platform. Naive rats trained under light-dark conditions (L-D) found the escape platform more slowly than rats trained in permanent light (L). After group members were swapped, the L-pretrained rats found the same target faster under L-D conditions and eventually approached the latencies attained during L navigation. Performance of L-D-trained rats deteriorated in permanent darkness (D) but improved with continued D training. Thus L-D navigation improves gradually by procedural learning (extrapolation of the start-target azimuth into the zero-visibility zone) but remains impaired by the lack of immediate visual feedback rather than by the absence of a snapshot memory of the target view.
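The light-switching logic of such a tracking system reduces to a point-in-circle test on each tracked position. A minimal sketch under assumed pool geometry (the centre coordinates and inner-zone radius below are illustrative, not taken from the study):

```python
import math

def in_inner_zone(x, y, cx, cy, inner_radius):
    """True if the tracked subject is inside the inner circular area of the pool."""
    return math.hypot(x - cx, y - cy) <= inner_radius

# Hypothetical geometry: pool centred at (0, 0), inner zone radius 50 cm
subject_x, subject_y = 30.0, 20.0
lights_on = not in_inner_zone(subject_x, subject_y, 0.0, 0.0, 50.0)
print(lights_on)  # lights switch off once the subject crosses into the inner zone
```

A real system would run this test on every frame delivered by the infrared tracker and toggle the room lights on each zone transition.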

Relevance: 100.00%

Abstract:

Understanding the basis on which recruiters form hirability impressions for a job applicant is a key issue in organizational psychology and can be addressed as a social computing problem. We approach the problem from a face-to-face, nonverbal perspective where behavioral feature extraction and inference are automated. This paper presents a computational framework for the automatic prediction of hirability. To this end, we collected an audio-visual dataset of real job interviews where candidates were applying for a marketing job. We automatically extracted audio and visual behavioral cues related to both the applicant and the interviewer. We then evaluated several regression methods for the prediction of hirability scores and showed the feasibility of conducting such a task, with ridge regression explaining 36.2% of the variance. Feature groups were analyzed, and two main groups of behavioral cues were predictive of hirability: applicant audio features and interviewer visual cues, showing the predictive validity of cues related not only to the applicant, but also to the interviewer. As a last step, we analyzed the predictive validity of psychometric questionnaires often used in the personnel selection process, and found that these questionnaires were unable to predict hirability, suggesting that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data.
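Ridge regression with an explained-variance (R²) readout, as used for the hirability scores, can be sketched in its simplest form. This is a one-feature, stdlib-only illustration of the idea (closed-form ridge for a single centred predictor, no intercept), not the paper's actual feature set or pipeline; the data points are hypothetical:

```python
def ridge_1d(x, y, alpha):
    """Closed-form ridge fit for one centred feature: w = sum(x*y) / (sum(x^2) + alpha)."""
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + alpha)

def r_squared(y, y_hat):
    """Fraction of variance in y explained by the predictions y_hat."""
    my = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - my) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# Hypothetical centred behavioural cue (x) vs. centred hirability score (y)
x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-1.8, -1.1, 0.2, 0.9, 2.1]
w = ridge_1d(x, y, alpha=1.0)          # alpha shrinks the weight towards zero
print(r_squared(y, [w * a for a in x]))
```

The penalty term alpha plays the same role as in the multivariate case: it trades a little bias for stability, which matters when many correlated behavioural cues are extracted from the same interview.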

Relevance: 40.00%

Abstract:

Cloud computing has recently become very popular, and several bioinformatics applications already exist in that domain. The aim of this article is to analyse a current cloud system with respect to usability, benchmark its performance and compare its user-friendliness with that of a conventional cluster job-submission system. Given the current hype around the topic, user expectations are rather high, but current results show that neither the price/performance ratio nor the usage model is very satisfactory for large-scale embarrassingly parallel applications. However, for small- to medium-scale applications that require CPU time at certain peak times, the cloud is a suitable alternative.
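The price/performance comparison described above boils down to cost per completed job. A minimal sketch; the instance cost and throughput figures below are hypothetical, not the article's benchmark results:

```python
def price_performance(cost_per_hour, jobs_per_hour):
    """Cost per completed job: lower means a better price/performance ratio."""
    return cost_per_hour / jobs_per_hour

# Hypothetical figures for one cloud instance vs. one local cluster node
cloud   = price_performance(cost_per_hour=0.40, jobs_per_hour=8)
cluster = price_performance(cost_per_hour=0.25, jobs_per_hour=10)
print(cloud > cluster)  # True here: the cloud costs more per job under these assumptions
```

For embarrassingly parallel workloads the comparison scales linearly with node count, which is why a poor per-node ratio dominates at large scale, while short peak-time bursts can still favour the cloud's on-demand pricing.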

Relevance: 40.00%

Abstract:

The acquisition duration of most three-dimensional (3D) coronary magnetic resonance angiography (MRA) techniques is considerably prolonged, thereby precluding breathholding as a mechanism to suppress respiratory motion artifacts. Splitting the acquired 3D volume into multiple subvolumes or slabs serves to shorten the individual breathhold duration. Still, problems associated with misregistration due to inconsistent depths of expiration and diaphragmatic drift during sustained respiration remain to be resolved. We propose the combination of an ultrafast 3D coronary MRA imaging sequence with prospective real-time navigator technology, which allows correction of the measured volume position. 3D volume splitting using prospective real-time navigator technology was successfully applied for 3D coronary MRA in five healthy individuals. An ultrafast 3D interleaved hybrid gradient-echo/echo-planar imaging sequence, including a T2Prep for contrast enhancement, was used with the navigator localized at the basal anterior wall of the left ventricle. A 9-cm-thick volume, with an in-plane spatial resolution of 1.1 × 2.2 mm, was acquired during five breathholds of 15-s duration each. Consistently, no evidence of misregistration was observed in the images. Extensive contiguous segments of the left anterior descending coronary artery (48 ± 18 mm) and the right coronary artery (75 ± 5 mm) could be visualized. This technique has the potential for screening for anomalous coronary arteries, making it well suited as part of a larger clinical MR examination. In addition, this technique may also be applied as a scout scan, allowing an accurate definition of imaging planes for subsequent high-resolution coronary MRA.

Relevance: 40.00%

Abstract:

Geoelectrical techniques are widely used to monitor groundwater processes, yet surprisingly few studies have considered audio (AMT) and radio (RMT) magnetotellurics for such purposes. In this numerical investigation, we analyze to what extent inversion results based on AMT and RMT monitoring data can be improved by (1) time-lapse difference inversion; (2) incorporation of statistical information about the expected model update (i.e., model regularization based on a geostatistical model); (3) using alternative model norms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms using iteratively reweighted least squares); and (4) constraining model updates to predefined ranges (i.e., using Lagrange multipliers to allow only either increases or decreases of electrical resistivity with respect to background conditions). To do so, we consider a simple illustrative model and a more realistic test case related to seawater intrusion. The results are encouraging and show significant improvements when using time-lapse difference inversion with non-l2 model norms. Artifacts that may arise when imposing compactness of regions with temporal changes can be suppressed through inequality constraints, yielding models without oscillations outside the true region of temporal changes. Based on these results, we recommend approximate l1-norm solutions, as they can resolve both sharp and smooth interfaces within the same model. (C) 2012 Elsevier B.V. All rights reserved.
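The iteratively reweighted least-squares (IRLS) approximation of the l1 norm mentioned in point (3) can be illustrated on the simplest possible inverse problem: estimating a single location parameter. This stdlib-only sketch is an illustration of the IRLS idea, not the authors' inversion code; note how the l1-style estimate resists the outlier that would drag the least-squares (mean) solution:

```python
def irls_l1_location(y, iters=50, eps=1e-6):
    """IRLS approximation of the l1-norm estimate of a location parameter.

    Starts from the l2 solution (the mean), then repeatedly reweights each
    sample inversely to its current residual magnitude, steering the estimate
    towards the l1 (median-like) solution. eps guards against division by zero.
    """
    m = sum(y) / len(y)
    for _ in range(iters):
        w = [1.0 / max(abs(v - m), eps) for v in y]  # small residual -> large weight
        m = sum(wi * v for wi, v in zip(w, y)) / sum(w)
    return m

# The l1-style estimate stays near the bulk of the data (1..3);
# the plain mean of this sample would be 26.5 because of the outlier.
print(irls_l1_location([1.0, 2.0, 3.0, 100.0]))
```

In the inversion setting the same reweighting is applied to the regularization term on the model update, which is what allows both sharp and smooth resistivity changes to be recovered in one model.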

Relevance: 40.00%

Abstract:

Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory, and that this involvement is independent of low-level visual processing. Most importantly, we demonstrate that V1 and V5/MT come into play simultaneously and appear to be functionally linked during interval encoding, whereas they operate serially (V1 followed by V5/MT) and appear to be independent while maintaining temporal information in working memory. These data help to refine our knowledge of the functional properties of the human visual cortex, highlighting the contribution and the temporal dynamics of V1 and V5/MT in the processing of the temporal aspects of visual information.

Relevance: 40.00%

Abstract:

A medical and scientific multidisciplinary consensus meeting on Anti-Doping in Sport was held from 29 to 30 November 2013 at the Home of FIFA in Zurich, Switzerland, to create a roadmap for the implementation of the 2015 World Anti-Doping Code. The consensus statement and accompanying papers set out the priorities for the anti-doping community in research, science and medicine. The participants achieved consensus on a strategy for the implementation of the 2015 World Anti-Doping Code. Key components of this strategy include: (1) sport-specific risk assessment; (2) prevalence measurement; (3) sport-specific test distribution plans; (4) storage and reanalysis; (5) analytical challenges; (6) forensic intelligence; (7) a psychological approach to maximise the deterrent effect; (8) the Athlete Biological Passport (ABP) and confounding factors; (9) the data management system (Anti-Doping Administration & Management System, ADAMS); (10) education; (11) research needs and necessary advances; (12) inadvertent doping; and (13) management and ethics of biological data. True implementation of the 2015 World Anti-Doping Code will depend largely on the ability to align thinking around these core concepts and strategies. FIFA, jointly with all other engaged International Federations of sport (IFs), the International Olympic Committee (IOC) and the World Anti-Doping Agency (WADA), is ideally placed to lead transformational change with the unwavering support of the wider anti-doping community. The outcome of the consensus meeting was the creation of an ad hoc Working Group charged with the responsibility of moving this agenda forward.