983 results for Visual cue integration


Relevance: 30.00%

Abstract:

Gamma zero-lag phase synchronization has been measured in the animal brain during visual binding. Human scalp EEG studies have used a phase-locking factor (trial-to-trial phase-shift consistency) or gamma amplitude to measure binding, but have so far not analyzed common-phase signals. This study introduces a method to identify networks oscillating with near zero-lag phase synchronization in human subjects.
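
For readers unfamiliar with the measure, a phase-locking factor of this kind is the length of the mean resultant vector of instantaneous phases across trials. A minimal sketch, assuming gamma-bandpass-filtered epochs and using the Hilbert transform to extract phase (the function names are illustrative, not the study's code):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_factor(trials):
    """Trial-to-trial phase consistency at each time point.

    trials: array (n_trials, n_samples), each row a gamma-bandpass-
    filtered EEG epoch. Returns values in [0, 1]; 1 means the phase
    is identical across trials at that sample.
    """
    phases = np.angle(hilbert(trials, axis=1))        # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

def zero_lag_phase_sync(x, y):
    """Phase locking between two channels at lag zero: the mean
    resultant length of their instantaneous phase difference."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))
```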

Relevance: 30.00%

Abstract:

Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech, and that learners can extract this information from talking faces. We therefore created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.

Relevance: 30.00%

Abstract:

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
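
Segmentation in such statistical-learning streams is usually carried by transitional probabilities between adjacent syllables, which the McGurk manipulation effectively rewrites at the perceptual level. A minimal sketch of the computation, with invented syllables and stream (not the study's materials):

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """TP(A -> B) = count(A followed by B) / count(A): within-word
    transitions come out higher than across-boundary ones."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Hypothetical three-syllable "words", concatenated in random order.
words = [("pa", "bi", "ku"), ("ti", "bu", "do"), ("go", "la", "tu")]
stream = [syl for _ in range(200) for syl in random.choice(words)]
tps = transitional_probabilities(stream)
print(tps[("pa", "bi")])   # within-word TP: 1.0
print(tps[("ku", "ti")])   # across-boundary TP: roughly 0.33
```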

Relevance: 30.00%

Abstract:

Human subjects overestimate the intensity change of rising-intensity sounds compared with falling-intensity sounds. Rising sound intensity has therefore been proposed to be an intrinsic warning cue. To test this hypothesis, we presented rising-, falling-, and constant-intensity sounds to healthy humans and gathered psychophysiological and behavioral responses. Brain activity was measured using event-related functional magnetic resonance imaging. We found that rising compared with falling sound intensity facilitates the autonomic orienting reflex and phasic alertness to auditory targets. Rising-intensity sounds produced neural activity in the amygdala, accompanied by activity in the intraparietal sulcus, superior temporal sulcus, and planum temporale. Our results indicate that rising sound intensity is an elementary warning cue that elicits adaptive responses by recruiting attentional and physiological resources. Regions involved in cross-modal integration were activated by rising sound intensity, whereas this study found no support for a right-hemisphere phasic-alertness network.
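
As an aside on stimulus construction, rising-, falling-, and constant-intensity sounds of this kind can be generated by ramping a tone's level in decibels; the parameters below are illustrative, not those of the study:

```python
import numpy as np

def intensity_ramp(direction, dur=1.0, fs=44100, f=1000.0,
                   low_db=-20.0, high_db=0.0):
    """A pure tone whose level ramps linearly in dB.

    direction: 'rising', 'falling', or 'constant' (constant sits at
    the mean level so overall energy is comparable across conditions).
    """
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * f * t)
    if direction == "rising":
        db = np.linspace(low_db, high_db, t.size)
    elif direction == "falling":
        db = np.linspace(high_db, low_db, t.size)
    else:
        db = np.full(t.size, (low_db + high_db) / 2)
    return tone * 10 ** (db / 20)   # dB -> linear amplitude
```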

Relevance: 30.00%

Abstract:

Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence.

Relevance: 30.00%

Abstract:

The integration of the auditory modality into virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysical studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
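
Localization performance in such experiments is commonly summarized as the great-circle angle between the reported and true source directions. A minimal sketch, assuming an azimuth/elevation convention (this metric is a standard choice, not necessarily the one used here):

```python
import numpy as np

def angular_error(az1, el1, az2, el2):
    """Great-circle angle (degrees) between a reported and a true
    source direction, given azimuth/elevation in degrees."""
    a1, e1, a2, e2 = np.radians([az1, el1, az2, el2])
    cos_d = (np.sin(e1) * np.sin(e2)
             + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2))
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

print(angular_error(30, 0, 25, 5))  # ~7 degrees of localization error
```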

Relevance: 30.00%

Abstract:

Out-of-body experiences (OBEs) are illusory perceptions of one's body from an elevated disembodied perspective. Recent theories postulate a double disintegration process in the personal (visual, proprioceptive and tactile disintegration) and extrapersonal (visual and vestibular disintegration) space as the basis of OBEs. Here we describe a case which corroborates and extends this hypothesis. The patient suffered from peripheral vestibular damage and presented with OBEs and lucid dreams. Analysis of the patient's behaviour revealed a failure of visuo-vestibular integration and abnormal sensitivity to visuo-tactile conflicts that have previously been shown to experimentally induce out-of-body illusions (in healthy subjects). In light of these experimental findings and the patient's symptomatology we extend an earlier model of the role of vestibular signals in OBEs. Our results advocate the involvement of subcortical bodily mechanisms in the occurrence of OBEs.

Relevance: 30.00%

Abstract:

The integration of correlation processes into design systems aims to enable direct 3D measurement, according to user-defined criteria, in order to generate the database required for developing the project. In the photogrammetric phase, interior and exterior orientation parameters are calculated and stereo models are created from standard images. These are integrated into the system, where the selected items are measured by applying purpose-developed correlation algorithms. The processing stage provides tools to carry out the calculations easily and automatically, as well as image-measurement techniques for acquiring the most accurate information. The proposed software is developed on Visual Studio platforms for PC, applying the codes and conventions best suited to the terms of reference required for the design. Generating the database interactively, together with the geometric study of the structures, facilitates and improves the quality of the project work.
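
The correlation algorithms are not specified in the abstract; a common choice for this kind of stereo image measurement is normalized cross-correlation (NCC) between a template patch and candidate windows along the matching row of a rectified pair. A minimal sketch under that assumption:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches, in
    [-1, 1]; insensitive to linear brightness/contrast changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_disparity(left, right, row, col, half=7, max_disp=64):
    """Disparity that maximizes NCC for the patch centred at (row, col)
    of the left image, searched along the same row of the right image
    (rectified stereo assumed)."""
    tpl = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = [ncc(tpl, right[row - half:row + half + 1,
                             col - d - half:col - d + half + 1])
              for d in range(max_disp) if col - d - half >= 0]
    return int(np.argmax(scores))
```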

Relevance: 30.00%

Abstract:

Autonomous landing is a challenging and important technology for both military and civilian applications of Unmanned Aerial Vehicles (UAVs). In this paper, we present a novel online adaptive visual tracking algorithm that allows UAVs to land autonomously on an arbitrary field (usable as a helipad) at real-time frame rates of more than twenty frames per second. The integration of a low-dimensional subspace representation, an online incremental learning approach, and a hierarchical tracking strategy allows the autolanding task to overcome challenging conditions such as significant appearance change, varying ambient illumination, partial helipad occlusion, rapid pose variation, onboard mechanical vibration (no video stabilization), low computational capacity, and delayed communication between the UAV and the Ground Control Station (GCS). The tracking performance of the algorithm is evaluated on aerial images from real autolanding flights using a manually labelled ground-truth database. The evaluation results show that the new algorithm tracks the helipad robustly and is accurate enough to close the vision-based control loop.
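
The abstract does not detail the subspace method; a common reading of "low-dimensional subspace representation with online incremental learning" is an incrementally updated PCA appearance model scored by reconstruction error. A generic sketch of that idea using scikit-learn's IncrementalPCA, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

class SubspaceAppearanceModel:
    """Appearance model for tracking: candidate patches are scored by
    reconstruction error under a PCA subspace updated online."""

    def __init__(self, n_components=16):
        self.pca = IncrementalPCA(n_components=n_components)

    def update(self, patches):
        # patches: (n, h*w) flattened, intensity-normalized target
        # observations; the first call needs n >= n_components.
        self.pca.partial_fit(patches)

    def score(self, candidates):
        # Lower reconstruction error = more target-like candidate.
        recon = self.pca.inverse_transform(self.pca.transform(candidates))
        return np.linalg.norm(candidates - recon, axis=1)
```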

Relevance: 30.00%

Abstract:

The human visual system is able to effortlessly integrate local features to form our rich perception of patterns, despite the fact that visual information is discretely sampled by the retina and cortex. Using a novel perturbation technique, we show that the mechanisms by which features are integrated into coherent percepts are scale-invariant and nonlinear (independent of phase and contrast polarity). They appear to operate by assigning position labels, or “place tags”, to each feature. Specifically, in the first series of experiments, we show that the positional tolerance of these place tags in foveal and peripheral vision is about half the separation of the features, suggesting that the neural mechanisms that bind features into forms are quite robust to topographical jitter. In the second series of experiments, we asked how many stimulus samples are required for pattern identification by human and ideal observers. In human foveal vision, only about half the features are needed for reliable pattern interpolation. In this regard, human vision is quite efficient (ratio of ideal to real ≈ 0.75). Peripheral vision, on the other hand, is rather inefficient, requiring more features, suggesting that the stimulus may be relatively underrepresented at the stage of feature integration.
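
The efficiency figure quoted above is simply the ratio of the number of samples an ideal observer needs to the number a human needs at matched performance; the numbers below are illustrative, not the study's data:

```python
def sampling_efficiency(n_ideal, n_human):
    """Efficiency as the ratio of samples an ideal observer needs to
    the samples a human needs for the same identification accuracy."""
    return n_ideal / n_human

# Hypothetical counts: if the ideal observer needs 9 features and a
# human needs 12 for the same accuracy, efficiency is 0.75, matching
# the foveal value reported above.
print(sampling_efficiency(9, 12))  # -> 0.75
```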

Relevance: 30.00%

Abstract:

The superficial gray layer of the superior colliculus contains a map that represents the visual field, whereas the underlying intermediate gray layer contains a vector map of the saccades that shift the direction of gaze. These two maps are aligned so that a particular region of the visual field is represented directly above the neurons that orient the highest acuity area of the retina toward that region. Although it has been proposed that the transmission of information from the visuosensory to the motor map plays an important role in the generation of visually guided saccades, experiments have failed to demonstrate any functional linkage between the two layers. We examined synaptic transmission between these layers in vitro by stimulating the superficial layer while using whole-cell patch-clamp methods to measure the responses of intermediate layer neurons. Stimulation of superficial layer neurons evoked excitatory postsynaptic currents in premotor cells. This synaptic input was columnar in organization, indicating that the connections between the layers link corresponding regions of the visuosensory and motor maps. Excitatory postsynaptic currents were large enough to evoke action potentials and often occurred in clusters similar in duration to the bursts of action potentials that premotor cells use to command saccades. Our results indicate the presence of functional connections between the superficial and intermediate layers and show that such connections could play a significant role in the generation of visually guided saccades.

Relevance: 30.00%

Abstract:

Whenever we open our eyes, we are confronted with an overwhelming amount of visual information. Covert attention allows us to select visual information at a cued location, without eye movements, and to grant such information priority in processing. Covert attention can be voluntarily allocated, to a given location according to goals, or involuntarily allocated, in a reflexive manner, to a cue that appears suddenly in the visual field. Covert attention improves discriminability in a wide variety of visual tasks. An important unresolved issue is whether covert attention can also speed the rate at which information is processed. To address this issue, it is necessary to obtain conjoint measures of the effects of covert attention on discriminability and rate of information processing. We used the response-signal speed-accuracy tradeoff (SAT) procedure to derive measures of how cueing a target location affects speed and accuracy in a visual search task. Here, we show that covert attention not only improves discriminability but also accelerates the rate of information processing.
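
SAT data of this kind are conventionally fit with a shifted exponential whose asymptote indexes discriminability and whose rate and intercept index processing speed. A minimal sketch with invented data points (the model form is standard for the SAT procedure; the numbers are not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, lam, beta, delta):
    """Shifted exponential fit to SAT data: asymptotic accuracy lam,
    processing rate beta, intercept delta; zero before delta."""
    return lam * (1.0 - np.exp(-beta * (t - delta))) * (t > delta)

# Hypothetical processing times (s) and d' values for one cue condition.
t = np.array([0.1, 0.2, 0.35, 0.5, 0.8, 1.2, 2.0])
d = np.array([0.0, 0.4, 1.1, 1.6, 2.1, 2.4, 2.5])
(lam, beta, delta), _ = curve_fit(sat_curve, t, d, p0=[2.5, 3.0, 0.1])
# Attention effects on discriminability show up in lam; effects on
# the rate of information processing show up in beta (and delta).
print(lam, beta, delta)
```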

Relevance: 30.00%

Abstract:

Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells within each cortical area over distances of 6-8 mm. The relationship between horizontal connections and cortical functional architecture suggests a role in visual segmentation and spatial integration. The distribution of lateral interactions within striate cortex was visualized with optical recording, and their functional consequences were explored by using comparable stimuli in human psychophysical experiments and in recordings from alert monkeys. They may represent the substrate for perceptual phenomena such as illusory contours, surface fill-in, and contour saliency. The dynamic nature of receptive field properties and cortical architecture has been seen over time scales ranging from seconds to months. One can induce a remapping of the topography of visual cortex by making focal binocular retinal lesions. Shorter-term plasticity of cortical receptive fields was observed following brief periods of visual stimulation. The mechanisms involved entailed, for the short-term changes, altering the effectiveness of existing cortical connections, and for the long-term changes, sprouting of axon collaterals and synaptogenesis. The mutability of cortical function implies a continual process of calibration and normalization of the perception of visual attributes that is dependent on sensory experience throughout adulthood and might further represent the mechanism of perceptual learning.

Relevance: 30.00%

Abstract:

Functional roles of the cortical backward signal in long-term memory formation were studied in monkeys performing a visual pair-association task. Before the monkeys learned the task, the anterior commissure was transected, disconnecting the anterior temporal cortex of each hemisphere. After training with 12 pairs of pictures, single units were recorded from the inferotemporal cortex of the monkeys as the control. By injecting a grid of ibotenic acid, we unilaterally lesioned the entorhinal and perirhinal cortex, which provides massive direct and indirect backward projections ipsilaterally to the inferotemporal cortex. After the lesion, the monkeys fixated the cue stimulus normally, relearned the preoperatively learned set (set A), and learned a new set (set B) of paired associates. Then, single units were recorded from the same area as for the prelesion control. We found that (i) in spite of the lesion, the sampled neurons responded strongly and selectively to both the set A and set B patterns and (ii) the paired associates elicited significantly correlated responses in the control neurons before the lesion but not in the cells tested after the lesion, either for set A or set B stimuli. We conclude that the ability of inferotemporal neurons to represent association between picture pairs was lost after the lesion of entorhinal and perirhinal cortex, most likely through disruption of backward neural signals to the inferotemporal neurons, while the ability of the neurons to respond to a particular visual stimulus was left intact.