894 results for Motion perception (Vision)


Relevance:

100.00%

Publisher:

Abstract:

This study explored transient changes in EEG microstates and spatial Omega complexity associated with changes in multistable perception. 21-channel EEG was recorded from 13 healthy subjects viewing an alternating dot pattern that induced illusory motion with ambiguous direction. Baseline epochs with stable motion direction were compared with epochs immediately preceding stimuli that were perceived with changed motion direction ('reference stimuli'). About 750 ms before reference stimuli, Omega complexity decreased relative to baseline, and two of the four classes of EEG microstates changed their probability of occurrence. About 300 ms before reference stimuli, Omega complexity increased and the earlier deviations of the EEG microstates were reversed. Given earlier results on Omega complexity and microstates, these sub-second EEG changes might parallel longer-lasting fluctuations in vigilance. Presumably, the discontinuities of illusory motion thus occur during sub-second dips in arousal, and the subsequent reconstruction of the illusion coincides with a state of relative over-arousal.
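Spatial Omega complexity, the measure tracked in this study, is computed from the eigenvalue spectrum of the channel covariance matrix: with normalized eigenvalues p_i, Omega = exp(-sum p_i ln p_i), ranging from 1 (all channels perfectly synchronized) up to the number of channels (full independence). A minimal sketch, with synthetic signals standing in for real EEG:

```python
import numpy as np

def omega_complexity(eeg):
    """Spatial Omega complexity of one epoch.
    eeg: array of shape (n_channels, n_samples)."""
    data = eeg - eeg.mean(axis=1, keepdims=True)   # remove per-channel mean
    cov = data @ data.T / data.shape[1]            # channel covariance matrix
    lam = np.clip(np.linalg.eigvalsh(cov), 0, None)
    p = lam / lam.sum()                            # normalized eigenvalue spectrum
    p = p[p > 0]
    # exp of the spectral entropy: 1 = full synchrony, n_channels = independence
    return float(np.exp(-(p * np.log(p)).sum()))

# A perfectly synchronous 4-channel field collapses to Omega ~ 1:
sync = np.tile(np.sin(np.linspace(0, 10, 500)), (4, 1))
print(round(omega_complexity(sync), 2))  # → 1.0
```

Lower Omega thus indexes stronger global synchronization of the field, consistent with the interpretation of the pre-stimulus complexity dip above.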

Abstract:

We investigated the role of horizontal body motion in the processing of numbers. We hypothesized that leftward self-motion leads to shifts in spatial attention and therefore facilitates the processing of small numbers, and, conversely, that rightward self-motion facilitates the processing of large numbers. Participants were displaced by means of a motion platform during a parity judgment task. We found a systematic influence of self-motion direction on number processing, suggesting that the processing of numbers is intertwined with the processing of self-motion perception. The results differed from known spatial-numerical compatibility effects in that self-motion exerted a differential influence on the inner and outer numbers of the given interval. The results highlight the involvement of sensory body-motion information in higher-order spatial cognition.

Abstract:

We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3,360 practice trials, distributed over twelve days (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness.

Abstract:

BACKGROUND: The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and the bodies of others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory, and tactile information, it has not yet been associated with the perception of self-motion. METHODOLOGY/PRINCIPAL FINDINGS: We investigated whether viewing one's own body, the body of another person, or an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body in motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms. CONCLUSIONS/SIGNIFICANCE: The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and others. Finally, our results hint at the presence of a "vestibular mirror neuron system".

Abstract:

Brain lesions in the visual associative cortex are known to impair visual perception, i.e., the capacity to correctly perceive different aspects of the visual world, such as motion, color, or shape. Visual perception can also be influenced by non-invasive brain stimulation such as transcranial direct current stimulation (tDCS). In a recently developed variant called high-definition (HD) tDCS, small HD electrodes replace the sponge electrodes of the conventional approach, which is believed to achieve higher focality and precision over the target area. In this paper we tested the effects of cathodal and anodal HD-tDCS over the right V5 on motion and shape perception in a single-blind, within-subject, sham-controlled, cross-over trial. The purpose of the study was to test whether the stimulation is indeed focal to the target area. Twenty-one healthy volunteers received 20 min of 2 mA cathodal, anodal, and sham stimulation over the right V5, and their performance on a visual test was recorded. The results showed a significant improvement in motion perception in the left hemifield after cathodal HD-tDCS, but no change in shape perception. Sham and anodal HD-tDCS did not affect performance. This task-specific effect of modulating the excitability of neurons in the visual cortex might be explained by the complexity of the perceptual information the tasks require, which provokes a "noisy" activation state of the encoding neuronal patterns. We speculate that in this case cathodal HD-tDCS may sharpen the correct percept by decreasing global excitation and thus pushing the "noise" below threshold.

Abstract:

The primate visual system offers unprecedented opportunities for investigating the neural basis of cognition. Even the simplest visual discrimination task requires processing of sensory signals, formation of a decision, and orchestration of a motor response. With our extensive knowledge of the primate visual and oculomotor systems as a base, it is now possible to investigate the neural basis of simple visual decisions that link sensation to action. Here we describe an initial study of neural responses in the lateral intraparietal area (LIP) of the cerebral cortex while alert monkeys discriminated the direction of motion in a visual display. A subset of LIP neurons carried high-level signals that may comprise a neural correlate of the decision process in our task. These signals are neither sensory nor motor in the strictest sense; rather they appear to reflect integration of sensory signals toward a decision appropriate for guiding movement. If this ultimately proves to be the case, several fascinating issues in cognitive neuroscience will be brought under rigorous physiological scrutiny.
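The "integration of sensory signals toward a decision" described above is commonly modeled as bounded evidence accumulation: noisy momentary motion evidence is summed until it reaches a decision bound, which fixes both the choice and the response time. A toy sketch of this idea (all parameter values here are illustrative, not fitted to LIP data):

```python
import numpy as np

def race_to_bound(coherence, drift=0.8, noise=1.0, bound=30.0, dt=1.0, seed=0):
    """Toy bounded accumulator for a two-choice motion-direction decision.
    coherence scales the mean momentary evidence (hypothetical parameters)."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0
    while abs(x) < bound:
        # momentary evidence = signal proportional to coherence + noise
        x += drift * coherence * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += 1
    return ("right" if x > 0 else "left"), t

choice, rt = race_to_bound(coherence=0.5)
```

In such models, higher motion coherence produces both faster and more accurate choices, which is the qualitative pattern the putative LIP decision signals would need to explain.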

Abstract:

The primate visual motion system performs numerous functions essential for survival in a dynamic visual world. Prominent among these functions is the ability to recover and represent the trajectories of objects in a form that facilitates behavioral responses to those movements. The first step toward this goal, detecting the displacement of retinal image features, has been studied for many years in both psychophysical and neurobiological experiments. Evidence indicates that this step is computationally straightforward and occurs at the earliest cortical stage. The second step involves the selective integration of retinal motion signals according to the object of origin. This step is computationally demanding, as the solution is formally underconstrained: it must, by definition, rely upon retinal cues indicative of the spatial relationships within and between objects in the visual scene. Psychophysical experiments have documented this dependence and suggested mechanisms by which it may be achieved. Neurophysiological experiments have provided evidence for a neural substrate that may underlie this selective motion-signal integration. Together, they paint a coherent portrait of the means by which retinal image motion gives rise to our perceptual experience of moving objects.
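The "computationally straightforward" first step, detecting local displacement of image features, is classically captured by correlation-type (Reichardt) detectors: the signal at one location is delayed and multiplied with the signal at a neighboring location, and the mirror-symmetric product is subtracted. A stripped-down sketch on a synthetic 1-D drifting grating (plain frame delays stand in for the temporal filters of real models):

```python
import numpy as np

def reichardt_output(frames, dx=1, dt=1):
    """Minimal correlation-type motion detector on a 1-D image sequence.
    frames: array of shape (n_frames, n_pixels).
    Positive mean output signals rightward motion, negative leftward."""
    a = frames[:-dt, :-dx]   # pixel i, earlier frame (delayed arm)
    b = frames[dt:, dx:]     # pixel i+dx, later frame
    c = frames[:-dt, dx:]    # mirror-symmetric arm
    d = frames[dt:, :-dx]
    return float((a * b - c * d).mean())

# A rightward-drifting sinusoidal grating yields a positive response:
x = np.arange(64)
frames = np.array([np.sin(2 * np.pi * (x - t) / 16) for t in range(32)])
print(reichardt_output(frames) > 0)  # → True
```

The subtraction of the two mirror arms is what makes the detector direction-selective: for a leftward-drifting grating the same expression comes out negative.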

Abstract:

Perceptual accuracy is known to be influenced by stimulus location within the visual field. In particular, it appears to be enhanced in the lower visual hemifield (VH) for motion and space processing, and in the upper VH for object and face processing. The origins of such asymmetries are attributed to attentional biases across the visual field and to the functional organization of the visual system. In this article, we tested content-dependent perceptual asymmetries in different regions of the visual field. Twenty-five healthy volunteers participated in this study. They performed three visual tests involving the perception of shape, orientation, and motion in the four quadrants of the visual field. The results showed that perceptual accuracy was better in the lower than in the upper visual field for motion perception, and better in the upper than in the lower visual field for shape perception. Orientation perception did not show any vertical bias, and no difference was found when comparing the right and left VHs. These findings suggest that the dorsal and ventral visual streams, responsible for motion and shape perception, respectively, are biased toward the lower and upper VHs, respectively, and that this bias depends on the content of the visual information.

Abstract:

Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. 
Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
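The core operation studied in this thesis, transforming retinal input according to current eye and head orientation, reduces in the planar case to applying a rotation matrix to a retinal vector. A deliberately simplified 2-D sketch (the thesis deals with the full three-dimensional, noisy version of this transformation):

```python
import numpy as np

def retinal_to_spatial(v_retinal, head_roll_deg):
    """Rotate a 2-D retinal motion vector into space-fixed coordinates,
    compensating for head roll (sign convention chosen for illustration)."""
    th = np.deg2rad(head_roll_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R @ np.asarray(v_retinal, dtype=float)

# With the head rolled 90 degrees, a "rightward" retinal vector is
# actually upward in space-fixed coordinates:
print(np.round(retinal_to_spatial([1.0, 0.0], 90.0), 3))  # → [0. 1.]
```

The "stochastic reference frame transformation" idea in the third study amounts to this rotation being applied with added noise, so that larger required rotations degrade the transformed signal more.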


Abstract:

Here, we describe a motion stimulus in which the quality of rotation is fractal. This makes its motion unavailable to the translation-based motion analysis known to underlie much of our motion perception. In contrast, normal rotation can be extracted through the aggregation of the outputs of translational mechanisms. Neural adaptation of these translation-based motion mechanisms is thought to drive the motion after-effect, a phenomenon in which prolonged viewing of motion in one direction leads to a percept of motion in the opposite direction. We measured the motion after-effects induced in static and moving stimuli by fractal rotation. The after-effects found were an order of magnitude smaller than those elicited by normal rotation. Our findings suggest that the analysis of fractal rotation involves different neural processes than those for standard translational motion. Given that the percept of motion elicited by fractal rotation is a clear example of motion derived from form analysis, we propose that the extraction of fractal rotation may reflect the operation of a general mechanism for inferring motion from changes in form.