37 results for human visual masking

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual “homing” experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
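As a concrete illustration of the kind of simple view-based cue the abstract refers to, the sketch below implements an average-landmark-vector homing rule in Python. The rule, the landmark bearings, and the function names are assumptions made for exposition, not the authors' model.

```python
import numpy as np

def average_landmark_vector(bearings):
    """Sum of unit vectors pointing towards each currently visible landmark."""
    return np.array([np.cos(bearings), np.sin(bearings)]).sum(axis=1)

def homing_direction(current_bearings, goal_bearings):
    """Head so that the current view's landmark vector approaches the stored goal view's."""
    v = average_landmark_vector(current_bearings) - average_landmark_vector(goal_bearings)
    return v / np.linalg.norm(v)

# Three landmarks seen at different bearings (radians) from the current
# position and from the remembered goal position.
current = np.array([0.2, 1.9, 4.0])
goal = np.array([0.5, 2.2, 3.6])
print(homing_direction(current, goal))   # unit vector on the ground plane
```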

Relevance:

100.00%

Publisher:

Abstract:

A prediction mechanism is necessary in human visuomotor control to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as an example of human predictive function, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visible tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period accelerates the hand motion as soon as the visual information is cut off and causes the hand motion to precede the target motion. Although the hand's precedence in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
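The interplay described here between a faster internal rhythm and positional-error correction can be sketched as a toy simulation; the gains, the 10% period shortening, and the visible arc below are assumed values, not parameters from the study.

```python
import numpy as np

# Minimal sketch (not the authors' model): hand angle on a circular orbit is
# driven by an internal predictive rhythm whose period is ~10% shorter than
# the target's, plus positional-error correction that is only available when
# the target is inside a "visible" arc.
dt, T_target = 0.01, 2.0                        # s
omega_target = 2 * np.pi / T_target
omega_internal = 2 * np.pi / (0.9 * T_target)   # assumed 10% shorter period
k_err = 4.0                                     # error-correction gain (assumed)

theta_t, theta_h = 0.0, 0.0
lead = []
for _ in range(2000):
    theta_t += omega_target * dt
    visible = (theta_t % (2 * np.pi)) < np.pi          # half the orbit visible
    error = (theta_t - theta_h) if visible else 0.0
    theta_h += (omega_internal + k_err * error) * dt   # prediction + correction
    lead.append(theta_h - theta_t)

print(f"mean hand lead over the last cycle: {np.mean(lead[-200:]):.3f} rad")
```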

Relevance:

100.00%

Publisher:

Abstract:

It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
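A minimal sketch of the likelihood-map idea, assuming bearing-only landmark measurements and Gaussian noise (both simplifications relative to the photogrammetric models described): score every candidate ground-plane position by how well its predicted landmark directions match a remembered view.

```python
import numpy as np

# Illustrative sketch only: the layout, noise level and bearing-only
# measurement model are assumptions, not the paper's reconstruction pipeline.
landmarks = np.array([[0.0, 5.0], [4.0, 3.0], [-3.0, 4.0]])   # assumed layout (m)
true_goal = np.array([1.0, 0.0])
sigma = 0.15                                                   # bearing noise (rad), assumed
rng = np.random.default_rng(1)

def bearings(pos):
    d = landmarks - pos
    return np.arctan2(d[:, 1], d[:, 0])

remembered = bearings(true_goal) + rng.normal(0, sigma, len(landmarks))

xs = ys = np.linspace(-4, 4, 81)
loglik = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        diff = np.angle(np.exp(1j * (bearings(np.array([x, y])) - remembered)))
        loglik[i, j] = -0.5 * np.sum(diff ** 2) / sigma ** 2

likelihood = np.exp(loglik - loglik.max())
likelihood /= likelihood.sum()
peak = np.unravel_index(likelihood.argmax(), likelihood.shape)
print("likelihood-map peak at ~", xs[peak[1]], ys[peak[0]])
```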

Relevance:

80.00%

Publisher:

Abstract:

As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3, 4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
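The distance-dependent cue weighting invoked here can be illustrated with a toy reliability-weighted (Bayesian) combination; the noise model and numbers below are assumptions for the example, not values fitted to the reported data.

```python
# Hedged sketch of reliability-weighted (Bayesian) cue combination: scene scale
# is estimated from (a) stereo/motion cues, whose reliability falls with viewing
# distance, and (b) a prior assumption that the scene has not changed size.
def combine(scale_from_stereo_motion, distance_m, stable_scale_prior=1.0):
    sigma_cue = 0.05 * distance_m ** 2     # assumed: cue noise grows with distance
    sigma_prior = 0.2                      # assumed prior uncertainty on scale
    w_cue = (1 / sigma_cue ** 2) / (1 / sigma_cue ** 2 + 1 / sigma_prior ** 2)
    return w_cue * scale_from_stereo_motion + (1 - w_cue) * stable_scale_prior

# The room has actually expanded by a factor of 4; at near distances the
# stereo/motion cue dominates, at far distances the "stable scene" prior wins.
for d in [0.5, 1.5, 3.0, 6.0]:
    print(f"viewing distance {d:3.1f} m -> perceived scale {combine(4.0, d):.2f}")
```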

Relevance:

80.00%

Publisher:

Abstract:

Threat-relevant stimuli such as fear faces are prioritized by the human visual system. Recent research suggests that this prioritization begins during unconscious processing: A specialized (possibly subcortical) pathway evaluates the threat relevance of visual input, resulting in preferential access to awareness for threat stimuli. Our data challenge this claim. We used a continuous flash suppression (CFS) paradigm to present emotional face stimuli outside of awareness. It has been shown using CFS that salient (e.g., high contrast) and recognizable stimuli (faces, words) become visible more quickly than less salient or less recognizable stimuli. We found that although fearful faces emerge from suppression faster than other faces, this was wholly explained by their low-level visual properties, rather than their emotional content. We conclude that, in the competition for visual awareness, the visual system prefers and promotes unconscious stimuli that are more “face-like,” but the emotional content of a face has no effect on stimulus salience.
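The logic of the control analysis, that emergence times are explained by low-level properties rather than emotional content, can be sketched as a nested regression; the simulated data and the choice of RMS contrast as the low-level property are illustrative assumptions, not the study's actual measures.

```python
import numpy as np

# Illustrative analysis sketch (not the authors' code): regress CFS
# breakthrough time on a low-level image property and an emotion dummy, and
# ask whether emotion explains variance beyond the low-level property.
rng = np.random.default_rng(0)
n = 200
contrast = rng.uniform(0.1, 0.6, n)
fearful = rng.integers(0, 2, n)             # 1 = fearful face, 0 = neutral
# Simulated data in which only contrast matters, as the abstract concludes.
rt = 3.0 - 2.5 * contrast + rng.normal(0, 0.2, n)

def r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

X0 = np.column_stack([np.ones(n), contrast])
X1 = np.column_stack([np.ones(n), contrast, fearful])
print(f"R^2 contrast only: {r2(X0, rt):.3f}, adding emotion: {r2(X1, rt):.3f}")
```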

Relevance:

40.00%

Publisher:

Abstract:

Previous studies have shown that the human posterior cingulate contains a visual processing area selective for optic flow (CSv). However, other studies performed in both humans and monkeys have identified a somatotopic motor region at the same location (CMA). Taken together, these findings suggested the possibility that the posterior cingulate contains a single visuomotor integration region. To test this idea we used fMRI to identify both visual and motor areas of the posterior cingulate in the same brains and to test the activity of those regions during a visuomotor task. Results indicated that, rather than a single visuomotor region, the posterior cingulate contains adjacent but separate motor and visual regions. CSv lies in the fundus of the cingulate sulcus, while CMA lies in the dorsal bank of the sulcus, slightly superior in terms of stereotaxic coordinates. A surprising and novel finding was that activity in CSv was suppressed during the visuomotor task, despite the visual stimulus being identical to that used to localize the region. This may provide an important clue to the specific role played by this region in the utilization of optic flow to control self-motion.

Relevance:

30.00%

Publisher:

Abstract:

In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.

Relevance:

30.00%

Publisher:

Abstract:

The amygdala was more responsive to fearful (larger) eye whites than to happy (smaller) eye whites presented in a masking paradigm that mitigated subjects' awareness of their presence and aberrant nature. These data demonstrate that the amygdala is responsive to elements of.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE. To identify the role of Notch signaling in the human corneal epithelium. METHODS. Localization of Notch1, Notch2, Delta1, and Jagged1 in the human corneal epithelium was observed with the use of indirect immunofluorescence microscopy. Gene and protein expression of Notch receptors and ligands in human corneal epithelial cells was determined by RT-PCR and Western blot analysis, respectively. The effects of Notch inhibition (by γ-secretase inhibition) and activation (by recombinant Jagged1) on epithelial cell proliferation (Ki67) and differentiation (CK3) were analyzed after Western blotting and immunocytochemistry. RESULTS. Immunofluorescent labeling localized Notch1 and Notch2 to suprabasal epithelial cell layers, whereas Delta1 and Jagged1 were observed throughout the corneal epithelium. Notch1, Notch2, Delta1, and Jagged1 genes and proteins were expressed in human corneal epithelial cells. γ-Secretase inhibition resulted in decreased Notch1 and Notch2 expression, with an accompanying decrease in Ki67 and increased CK3 expression. The activation of Notch by Jagged1 resulted in the upregulation of active forms of Notch1 and 2 proteins (P < 0.05), with a concurrent increase in Ki67 (P < 0.05) and a decrease in CK3 (P < 0.05) expression. Interestingly, γ-secretase inhibition in a three-dimensional, stratified corneal epithelium equivalent had no effect on Ki67 or CK3 expression. In contrast, Jagged1 activation resulted in decreased CK3 expression (P < 0.05), though neither Notch activation nor inhibition affected cell proliferation in the 3D tissue equivalent. CONCLUSIONS. Notch family members and ligands are expressed in the human corneal epithelium and appear to play pivotal roles in corneal epithelial cell differentiation.

Relevance:

30.00%

Publisher:

Abstract:

During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
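The additive model of combination can be written down in a few lines; the weights below are illustrative, not the values estimated in the study.

```python
# Minimal sketch of the additive-combination idea: the perceived locomotor
# direction is modelled as a weighted sum of the heading specified by retinal
# flow and the direction specified by gaze angle. Weights are assumed.
def perceived_heading(flow_heading_deg, gaze_angle_deg, w_flow=0.6, w_gaze=0.4):
    return w_flow * flow_heading_deg + w_gaze * gaze_angle_deg

# Biasing either cue by 10 deg shifts the percept by its weight times 10 deg,
# the signature of additive combination reported in the abstract.
print(perceived_heading(0.0, 0.0))        # unbiased: 0.0
print(perceived_heading(10.0, 0.0))       # flow biased: 6.0
print(perceived_heading(0.0, 10.0))       # gaze biased: 4.0
```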

Relevance:

30.00%

Publisher:

Abstract:

The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information in locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. Then the task was translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
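A hedged sketch of what a point-attractor steering law combining weighted RF, ER and VD signals might look like is given below; the gains, damping and time step are assumptions for illustration rather than the model fitted in the paper.

```python
import numpy as np

# Sketch of a point-attractor steering law: yaw rate is driven by a weighted
# combination of retinal-flow (RF), extra-retinal (ER) and visual-direction
# (VD) error signals, with damping so the errors settle to zero.
def steer(rf_err, er_err, vd_err, yaw_rate, w=(0.4, 0.2, 0.4), damping=1.5, dt=0.05):
    drive = w[0] * rf_err + w[1] * er_err + w[2] * vd_err
    return yaw_rate + (drive - damping * yaw_rate) * dt

# Start with a 20 deg visual-direction error and watch it decay.
vd_err, yaw_rate = np.radians(20.0), 0.0
for _ in range(100):
    yaw_rate = steer(0.0, 0.0, vd_err, yaw_rate)
    vd_err -= yaw_rate * 0.05
print(f"residual VD error after 5 s: {np.degrees(vd_err):.2f} deg")
```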

Relevance:

30.00%

Publisher:

Abstract:

The efficacy of explicit and implicit learning paradigms was examined during the very early stages of learning the perceptual-motor anticipation task of predicting ball direction from temporally occluded footage of soccer penalty kicks. In addition, the effect of instructional condition on point-of-gaze during learning was examined. A significant improvement in horizontal prediction accuracy was observed in the explicit learning group; however, similar improvement was evident in a placebo group who watched footage of soccer matches. Only the explicit learning intervention resulted in changes in eye movement behaviour and increased awareness of relevant postural cues. Results are discussed in terms of methodological and practical issues regarding the employment of implicit perceptual training interventions.

Relevance:

30.00%

Publisher:

Abstract:

Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the distractor ball appearance, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in their responding hand. MEP amplitude was reduced when the ball approached near the responding hand, both when the hand was on the left and the right of the midline. Strikingly, this suppression occurred very early, at 70-80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.

Relevance:

30.00%

Publisher:

Abstract:

Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the “far road” and “near road” mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel where participants were required to gauge their current direction of travel (rather than directly control it). During forward egomotion, the distant road edges provided future path information, which participants used to improve their heading judgments. During backward egomotion, the road edges did not enhance performance because they no longer provided prospective information. This behavioral dissociation was reflected at the neural level, where only simulated forward travel increased activation in a region of the superior parietal lobe and the medial intraparietal sulcus. Providing only near road information during a forward heading judgment task resulted in activation in the motion complex. We propose a complementary role for the posterior parietal cortex and motion complex in detecting future path information and maintaining current lane positioning, respectively.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes experiments relating to the perception of the roughness of simulated surfaces via the haptic and visual senses. Subjects used a magnitude estimation technique to judge the roughness of “virtual gratings” presented via a PHANToM haptic interface device, and a standard visual display unit. It was shown that under haptic perception, subjects tended to perceive roughness as decreasing with increased grating period, though this relationship was not always statistically significant. Under visual exploration, the exact relationship between spatial period and perceived roughness was less well defined, though linear regressions provided a reliable approximation to individual subjects’ estimates.
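The magnitude-estimation analysis lends itself to a short worked example: a linear regression of roughness estimates on grating spatial period, in the spirit of the individual-subject regressions reported. All data values below are invented for illustration.

```python
import numpy as np

# Illustrative sketch of the analysis described: fit a linear regression of
# magnitude-estimated roughness against grating spatial period for one
# simulated subject.
period_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
roughness = np.array([8.2, 7.1, 6.0, 5.4, 4.6, 3.9])   # haptic estimates (assumed)

slope, intercept = np.polyfit(period_mm, roughness, 1)
pred = slope * period_mm + intercept
r2 = 1 - ((roughness - pred) ** 2).sum() / ((roughness - roughness.mean()) ** 2).sum()
print(f"roughness ≈ {slope:.2f} * period + {intercept:.2f}  (R^2 = {r2:.2f})")
```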