992 results for Visual signals
Abstract:
We analyzed the coherence of electroencephalographic (EEG) signals recorded symmetrically from the two hemispheres while subjects (n = 9) were viewing visual stimuli. Considering the many common features of the callosal connectivity in mammals, we expected that, as in our animal studies, interhemispheric coherence (ICoh) would increase only with bilateral iso-oriented gratings located close to the vertical meridian of the visual field, or extending across it. Indeed, a single grating that extended across the vertical meridian significantly increased the EEG ICoh in normal adult subjects. These ICoh responses were obtained from occipital and parietal derivations and were restricted to the gamma frequency band. They were detectable with different EEG references and were robust across and within subjects. Other unilateral and bilateral stimuli, including identical gratings that were effective in anesthetized animals, did not affect ICoh in humans. This suggests the existence of regulatory influences, possibly of a top-down kind, on the pattern of callosal activation in conscious human subjects. In addition to establishing the validity of EEG coherence analysis for assaying cortico-cortical connectivity, this study extends to the human brain the finding that visual stimuli cause interhemispheric synchronization, particularly in frequencies of the gamma band. It also indicates that the synchronization is carried out by cortico-cortical connections and suggests similarities in the organization of visual callosal connections in animals and in man.
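As a minimal sketch of this kind of coherence analysis (assuming scipy; the channel data, sampling rate, and gamma-band limits here are illustrative, not the study's):

```python
import numpy as np
from scipy.signal import coherence

fs = 500  # Hz, assumed sampling rate
# o1, o2: EEG traces from homologous left/right occipital channels
rng = np.random.default_rng(0)
o1, o2 = rng.standard_normal((2, 10 * fs))  # placeholder data

# Welch-based magnitude-squared coherence spectrum
f, Cxy = coherence(o1, o2, fs=fs, nperseg=fs)

# Average coherence within an illustrative gamma band (30-80 Hz)
gamma = (f >= 30) & (f <= 80)
icoh_gamma = Cxy[gamma].mean()
print(f"Mean gamma-band interhemispheric coherence: {icoh_gamma:.3f}")
```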
Abstract:
An object's motion relative to an observer can convey ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and, more recently, multisensory looming stimuli, none has investigated whether looming signals are integrated between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex, ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perception and action is therefore supported.
Abstract:
The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and evidence from neurophysiological and neuroimaging studies we discuss the neural mechanisms likely to underlie this effect.
Abstract:
Evidence of multisensory interactions within low-level cortices and at early post-stimulus latencies has prompted a paradigm shift in conceptualizations of sensory organization. However, the mechanisms of these interactions and their link to behavior remain largely unknown. One behaviorally salient stimulus is a rapidly approaching (looming) object, which can indicate potential threats. Based on findings from humans and nonhuman primates suggesting there to be selective multisensory (auditory-visual) integration of looming signals, we tested whether looming sounds would selectively modulate the excitability of visual cortex. We combined transcranial magnetic stimulation (TMS) over the occipital pole and psychophysics for "neurometric" and psychometric assays of changes in low-level visual cortex excitability (i.e., phosphene induction) and perception, respectively. Across three experiments we show that structured looming sounds considerably enhance visual cortex excitability relative to other sound categories and white-noise controls. The time course of this effect showed that modulation of visual cortex excitability started to differ between looming and stationary sounds for sound portions of very short duration (80 ms) that were significantly below (by 35 ms) perceptual discrimination threshold. Visual perceptions are thus rapidly and efficiently boosted by sounds through early, preperceptual and stimulus-selective modulation of neuronal excitability within low-level visual cortex.
Abstract:
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception postsound as a function of individual auditory versus visual preference/dominance using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound-offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration possibly to support prompt identification and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that unlike early effects this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
Abstract:
Background: Oscillatory activity, which can be separated into background and oscillatory burst pattern activities, is thought to reflect local synchronies of neural assemblies. Oscillatory burst events should consequently play a specific functional role, distinct from background EEG activity, especially in cognitive tasks (e.g. working memory tasks), binding mechanisms and perceptual dynamics (e.g. visual binding), or in clinical contexts (e.g. effects of brain disorders). However, extracting oscillatory events from single trials with a reliable and consistent method is not a simple task. Results: In this work we propose a user-friendly stand-alone toolbox which fits, in a reasonable time, a bump time-frequency model to the wavelet representations of a set of signals. The software is provided with a Matlab toolbox which can compute wavelet representations before automatically calling the stand-alone application. Conclusion: The tool is publicly available as freeware at http://www.bsp.brain.riken.jp/bumptoolbox/toolbox_home.html
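The toolbox itself is Matlab-based; as a rough illustration of the kind of wavelet time-frequency map the bump model is fitted to (assuming the PyWavelets package; the wavelet, scales, and sampling rate are arbitrary choices, not the toolbox's defaults):

```python
import numpy as np
import pywt

fs = 256  # Hz, assumed EEG sampling rate
t = np.arange(0, 2, 1 / fs)
# Placeholder single-trial signal: a transient 10 Hz burst in noise
sig = np.random.randn(t.size) + np.sin(2 * np.pi * 10 * t) * (np.abs(t - 1) < 0.2)

# Continuous wavelet transform with a Morlet wavelet
scales = np.arange(2, 64)
coefs, freqs = pywt.cwt(sig, scales, 'morl', sampling_period=1 / fs)

# |coefs|**2 is the time-frequency energy map in which oscillatory
# "bumps" (transient bursts) would then be modeled
tf_map = np.abs(coefs) ** 2
```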
Abstract:
The orchestration of collaborative learning processes in face-to-face physical settings, such as classrooms, requires teachers to coordinate students, indicating who belongs to each group, which collaboration areas are assigned to each group, and how resources or roles should be distributed within the group. In this paper we present an Orchestration Signal system, composed of wearable Personal Signal devices and an Orchestration Signal manager. Teachers can configure color signals in the manager so that they are transmitted to the wearable devices to indicate different orchestration aspects. In particular, the paper describes how the system has been used to carry out a Jigsaw collaborative learning flow in a classroom where students received signals indicating which documents they should read, which group they were in, and in which area of the classroom they were expected to collaborate. The evaluation results show that the proposed system facilitates dynamic, visual and flexible orchestration.
Abstract:
In the present study, using noise-free simulated signals, we performed a comparative examination of several preprocessing techniques used to transform the cardiac event series into a regularly sampled time series appropriate for spectral analysis of heart rhythm variability (HRV). First, a group of noise-free simulated point event series, representing a time series of heartbeats, was generated by an integral pulse frequency modulation model. To evaluate the performance of the preprocessing methods, the differences between the spectra of the preprocessed simulated signals and the true spectrum (the spectrum of the model's input modulating signals) were assessed by visual analysis and by contrasting merit indices. Ideally, the estimated spectra should match the true spectrum as closely as possible, showing a minimum of harmonic components and other artifacts. The merit indices proposed to quantify these mismatches were the leakage rate, defined as a measure of leakage components (located outside narrow windows centered at the frequencies of the model input modulating signals) relative to the whole spectral content, and the numbers of leakage components with amplitudes greater than 1%, 5% and 10% of the total spectral components. Our data, obtained from a noise-free simulation, indicate that using heart rate values instead of heart period values when deriving signals representative of heart rhythm results in more accurate spectra. Furthermore, our data support the efficiency of the widely used preprocessing technique based on the convolution of inverse interval function values with a rectangular window, and suggest the technique based on cubic polynomial interpolation of inverse interval function values followed by spectral analysis as another efficient and fast method for the analysis of HRV signals.
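A minimal sketch of the cubic-interpolation preprocessing named above, assuming numpy/scipy; the synthetic beat times and the 4 Hz resampling rate are illustrative choices:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

# Synthetic beat (R-wave) times in seconds, ~1 Hz mean heart rate
beat_times = np.cumsum(1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)))

# Inverse interval function: instantaneous heart rate at each beat
rr = np.diff(beat_times)       # inter-beat intervals (s)
hr = 1.0 / rr                  # beats per second
t_beats = beat_times[1:]

# Cubic interpolation onto a regular 4 Hz grid, then Welch PSD
fs = 4.0
t_reg = np.arange(t_beats[0], t_beats[-1], 1 / fs)
hr_reg = CubicSpline(t_beats, hr)(t_reg)
f, psd = welch(hr_reg - hr_reg.mean(), fs=fs, nperseg=256)
```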
Abstract:
N-acetyl-aspartyl-glutamate (NAAG) and its hydrolysis product N-acetyl-L-aspartate (NAA) are among the most important brain metabolites. NAA is a marker of neuron integrity and viability, while NAAG modulates glutamate release and may play a role in neuroprotection and synaptic plasticity. Investigating the role of these metabolites in brain metabolism in vivo on a quantitative basis by magnetic resonance spectroscopy (MRS) is a major challenge, since the main signals of NAA and NAAG largely overlap. This is a preliminary study in which we evaluated NAA and NAAG changes during a visual stimulation experiment using functional MRS. The paradigm consisted of a rest period (5 min 20 s), followed by a stimulation period (10 min 40 s) and another rest period (10 min 40 s). MRS data from 17 healthy subjects were acquired at 3T with TR/TE = 2000/288 ms. Spectra were averaged over subjects and quantified with LCModel. The main outcomes were that NAA concentration decreased by about 20% with the stimulus, while the concentration of NAAG concomitantly increased by about 200%. Such variations are consistent with models of the energy metabolism underlying neuronal activation that point to NAAG as responsible for the hyperemic vascular response that causes the BOLD signal. They also agree with the fact that NAAG and NAA are present in the brain at a ratio of about 1:10, and with the fact that the only known metabolic pathway for NAAG synthesis is from NAA and glutamate.
Abstract:
This thesis is an outcome of investigations carried out on the development of an Artificial Neural Network (ANN) model to implement the 2-D DFT at high speed. A new definition of the 2-D DFT relation is presented. This new definition enables the DFT computation to be organized in stages involving only real additions, except at the final stage of computation. The number of stages is always fixed at 4. Two different strategies are proposed: 1) a visual representation of 2-D DFT coefficients, and 2) a neural network approach. The visual representation scheme can be used to compute, analyze and manipulate 2-D signals such as images in the frequency domain in terms of symbols derived from the 2x2 DFT, which in turn can be represented in terms of real data. This approach can help analyze signals in the frequency domain even without computing the DFT coefficients. A hierarchical neural network model is developed to implement the 2-D DFT. Presently, this model is capable of implementing the 2-D DFT for a particular order N such that ((N))_4 = 2, i.e., N mod 4 = 2. The model can be extended to implement the 2-D DFT for any order N up to a set maximum limited by hardware constraints. The reported method shows potential for implementing the 2-D DFT in hardware as a VLSI/ASIC.
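For reference, the standard 2-D DFT relation that the thesis reorganizes is given below; the staged, real-addition-only reformulation is the thesis's own contribution and is not reproduced here.

```latex
% Standard 2-D DFT of an N x N signal x(m, n)
X(k, l) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} x(m, n)\,
          e^{-j\frac{2\pi}{N}(km + ln)},
\qquad k, l = 0, 1, \ldots, N-1
```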
Abstract:
In order to estimate the motion of an object, the visual system needs to combine multiple local measurements, each of which carries some degree of ambiguity. We present a model of motion perception whereby measurements from different image regions are combined according to a Bayesian estimator: the estimated motion maximizes the posterior probability under a prior favoring slow and smooth velocities. In reviewing a large number of previously published phenomena, we find that the Bayesian estimator predicts a wide range of psychophysical results. This suggests that the seemingly complex set of illusions arises from a single computational strategy that is optimal under reasonable assumptions.
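A minimal sketch of such a Bayesian estimator for a single translational velocity, assuming a Gaussian likelihood on gradient-constraint residuals and a zero-mean Gaussian prior favoring slow speeds (the function and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def map_velocity(Ix, Iy, It, sigma_n=1.0, sigma_p=1.0):
    """MAP estimate of one translational velocity (u, v).

    Likelihood: residuals Ix*u + Iy*v + It of the gradient constraint
    are i.i.d. Gaussian with std sigma_n. Prior: zero-mean isotropic
    Gaussian with std sigma_p, favoring slow velocities. Both are
    Gaussian, so the MAP solution is ridge-regularized least squares.
    """
    A = np.column_stack([Ix.ravel(), Iy.ravel()])  # spatial gradients
    b = -It.ravel()                                # temporal gradients
    lam = (sigma_n / sigma_p) ** 2                 # prior strength
    return np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)

# With weak or noisy measurements the prior pulls the estimate toward
# zero velocity, reproducing the qualitative "slow" biases discussed.
```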
Abstract:
Introduction: One cause of poor visual gain after successful, uncomplicated retinal detachment repair is photoreceptor damage, reflected in disruption of the ellipsoid zone layer and the external limiting membrane (ELM). In other pathologies, foveal hyperautofluorescence has been shown to correlate with the integrity of the ellipsoid zone and ELM and with better visual recovery. Objectives: To evaluate the association between foveal hyperautofluorescence, the integrity of the ellipsoid zone layer, and visual recovery after successfully treated rhegmatogenous retinal detachment (RRD), and to assess the inter-rater agreement of these examinations. Methods: Cross-sectional study of foveal autofluorescence and macular spectral-domain optical coherence tomography in 65 patients with RRD, assessed by 3 independent raters. Inter-rater agreement was studied using Cohen's kappa, and the association between variables using the chi-square test and Z-tests for comparison of proportions. Results: Agreement was fair for autofluorescence and good to very good for macular optical coherence tomography. Subjects with foveal hyperautofluorescence together with an intact ellipsoid zone layer were 20% more likely to recover a final visual acuity better than 20/50 than those without these characteristics. Conclusion: There is a clinically important association between foveal hyperautofluorescence, the integrity of the ellipsoid zone layer, and better final visual acuity; however, it was not statistically significant (p = 0.39).
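As a small sketch of the inter-rater agreement statistic used above (assuming scikit-learn; the grader labels are made up):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary readings ("ellipsoid zone intact?") from two raters
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement
```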
Abstract:
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
Abstract:
The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information to locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. The task was then translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
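A generic schematic of the weighted additive cue combination these two steering studies point to; the weights (and how they are estimated) are the papers' subject, and this form is only an illustration:

```latex
% Weighted additive combination of redundant steering cues
\hat{\theta} = w_{RF}\,\theta_{RF} + w_{ER}\,\theta_{ER} + w_{VD}\,\theta_{VD},
\qquad w_{RF} + w_{ER} + w_{VD} = 1
```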
Abstract:
Visual control of locomotion is essential for most mammals and requires coordination between perceptual processes and action systems. Previous research on the neural systems engaged by self-motion has focused on heading perception, which is only one perceptual subcomponent. For effective steering, it is necessary to perceive an appropriate future path and then bring about the required change to heading. Using functional magnetic resonance imaging in humans, we reveal a role for the parietal eye fields (PEFs) in directing spatially selective processes relating to future path information. A parietal area close to PEFs appears to be specialized for processing the future path information itself. Furthermore, a separate parietal area responds to visual position error signals, which occur when steering adjustments are imprecise. A network of three areas, the cerebellum, the supplementary eye fields, and dorsal premotor cortex, was found to be involved in generating appropriate motor responses for steering adjustments. This may reflect the demands of integrating visual inputs with the output response for the control device.