64 results for Human Visual System
in CentAUR: Central Archive, University of Reading - UK
Abstract:
A prediction mechanism is necessary in human visual motion processing to compensate for delays in the sensory-motor system. A previous study discussed "proactive control" as one example of human predictive function, in which hand motion preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the rhythm's period accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although this precedence of the hand in the blind region is reset by environmental information when the target re-enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Abstract:
It is now possible to directly link the human nervous system to a computer and thence onto the Internet. From an electronic and mental viewpoint, this means that the Internet becomes an extension of the human nervous system (and vice versa). Such a connection on a regular or mass basis will have far-reaching effects on society. In this article the authors discuss their own practical implant self-experimentation, especially insofar as it relates to extending the human nervous system. Trials involving an intercontinental link-up are described. As well as the technical aspects of the work, social, moral and ethical issues, as perceived by the authors, are weighed against potential technical gains. The authors also look at technical limitations inherent in the co-evolution of Internet-implanted individuals, as well as the future distribution of intelligence between human and machine.
Abstract:
View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual "homing" experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
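The specific view-based models are not detailed in this abstract; the classic "snapshot" idea such models build on can be sketched as below. All function names, the pixel-difference metric, and the candidate-move representation are illustrative assumptions, not the authors' model.

```python
def image_distance(view_a, view_b):
    """Sum of squared pixel differences between two views
    (each view is a flat sequence of intensity values)."""
    return sum((a - b) ** 2 for a, b in zip(view_a, view_b))

def homing_step(goal_view, candidate_moves):
    """Snapshot-style homing: pick the neighbouring position whose
    predicted view best matches the stored goal ('snapshot') view.
    candidate_moves: list of (position, predicted_view) pairs."""
    best_position, _ = min(candidate_moves,
                           key=lambda pv: image_distance(pv[1], goal_view))
    return best_position
```

Repeating `homing_step` from each new position descends the image-difference surface toward the goal, with no Cartesian reconstruction of the scene.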
Abstract:
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
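The likelihood-map idea described above (each landmark contributing an independent term) can be illustrated with a minimal sketch. The Gaussian bearing-error model, the parameter `sigma`, and all names are illustrative assumptions, not the paper's photogrammetric method.

```python
import math

def likelihood_map(landmarks, observed_bearings, grid, sigma=0.2):
    """For each candidate position in `grid`, multiply the per-landmark
    likelihoods of the observed bearings (each scene point treated
    independently, as in the first of the two models)."""
    lmap = []
    for (gx, gy) in grid:
        likelihood = 1.0
        for (lx, ly), obs in zip(landmarks, observed_bearings):
            predicted = math.atan2(ly - gy, lx - gx)
            # wrapped angular difference between observed and predicted bearing
            d = math.atan2(math.sin(obs - predicted), math.cos(obs - predicted))
            likelihood *= math.exp(-d * d / (2 * sigma * sigma))
        lmap.append(likelihood)
    return lmap
```

Normalising `lmap` over the grid gives a probabilistic prediction of where navigation responses should cluster, which can then be compared against the measured error distributions.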
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3, 4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
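The Bayesian cue-integration account mentioned above can be illustrated with a minimal reliability-weighted averaging sketch. The inverse-variance weighting form and all names are illustrative assumptions standard in cue-combination modelling, not the paper's specific model.

```python
def combine_cues(est_motion, var_motion, est_disparity, var_disparity):
    """Combine two depth estimates by inverse-variance (reliability)
    weighting; a cue with smaller variance dominates the combined
    estimate, as motion and disparity do at near viewing distances."""
    w_m = 1.0 / var_motion
    w_d = 1.0 / var_disparity
    combined = (w_m * est_motion + w_d * est_disparity) / (w_m + w_d)
    combined_var = 1.0 / (w_m + w_d)  # combined estimate is more reliable than either cue
    return combined, combined_var
```

On this account, when motion and disparity are highly reliable (near viewing) they pull the percept toward their own estimate, so a conflicting cue such as distance walked is effectively overridden rather than the scene being seen to change size.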
Abstract:
Threat-relevant stimuli such as fear faces are prioritized by the human visual system. Recent research suggests that this prioritization begins during unconscious processing: A specialized (possibly subcortical) pathway evaluates the threat relevance of visual input, resulting in preferential access to awareness for threat stimuli. Our data challenge this claim. We used a continuous flash suppression (CFS) paradigm to present emotional face stimuli outside of awareness. It has been shown using CFS that salient (e.g., high contrast) and recognizable stimuli (faces, words) become visible more quickly than less salient or less recognizable stimuli. We found that although fearful faces emerge from suppression faster than other faces, this was wholly explained by their low-level visual properties, rather than their emotional content. We conclude that, in the competition for visual awareness, the visual system prefers and promotes unconscious stimuli that are more “face-like,” but the emotional content of a face has no effect on stimulus salience.
Abstract:
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
Abstract:
To steer a course through the world, people are almost entirely dependent on visual information, of which a key component is optic flow. In many models of locomotion, heading is described as the fundamental control variable; however, it has also been shown that fixating points along or near one's future path could be the basis of an efficient control solution. Here, the authors aim to establish how well observers can pinpoint instantaneous heading and path, by measuring their accuracy when looking at these features while traveling along straight and curved paths. The results showed that observers could identify both heading and path accurately (~3°) when traveling along straight paths, but on curved paths they were more accurate at identifying a point on their future path (~5°) than indicating their instantaneous heading (~13°). Furthermore, whereas participants could track changes in the tightness of their path, they were unable to accurately track the rate of change of heading. In light of these results, the authors suggest it is unlikely that heading is primarily used by the visual system to support active steering.
Abstract:
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time-consuming and depend on human judgements, making them error prone. The methods are also limited to optical see-through HMDs. Building on our existing HMD calibration method [1], we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in various positions. The locations of image features on the calibration object are then re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner in both see-through and non-see-through modes and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors and involves no error-prone human measurements.
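The reprojection-error metric used to evaluate the calibration can be sketched with a simple pinhole model. The function names and the reduction of the intrinsics to focal length plus principal point are illustrative assumptions; the paper's full parameterisation (display width, height, principal-ray orientation) is richer than this.

```python
import math

def project(point3d, f, cx, cy):
    """Pinhole projection: map a camera-frame 3D point to pixel
    coordinates using focal length f and principal point (cx, cy)."""
    X, Y, Z = point3d
    return (f * X / Z + cx, f * Y / Z + cy)

def mean_reprojection_error(points3d, observed2d, f, cx, cy):
    """Mean Euclidean distance (in pixels) between observed image
    features and the features reprojected through the calibrated model."""
    errors = []
    for P, (u, v) in zip(points3d, observed2d):
        pu, pv = project(P, f, cx, cy)
        errors.append(math.hypot(pu - u, pv - v))
    return sum(errors) / len(errors)
```

A low mean reprojection error over many tracked calibration-object positions is what indicates that the recovered intrinsic and extrinsic estimates describe the display well.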
Abstract:
This paper presents results to indicate the potential applications of a direct connection between the human nervous system and a computer network. Actual experimental results obtained from a human subject study are given, with emphasis placed on the direct interaction between the human nervous system and possible extra-sensory input. A brief overview of the general state of neural implants is given, and a range of application areas is considered. An overall view is also taken as to what may be possible with implant technology as a general-purpose human-computer interface for the future.
Abstract:
In this paper an attempt is described to increase the range of human sensory capabilities by means of implant technology. The key aim is to create an additional sense by feeding signals directly to the human brain, via the nervous system rather than via a presently operable human sense. Neural implant technology was used to directly interface a human nervous system with a computer in a one-off trial. The output from active ultrasonic sensors was then employed to directly stimulate the human nervous system. An experimental laboratory set-up was used as a test bed to assess the usefulness of this sensory addition.
Abstract:
A look is taken here at how the use of implant technology is rapidly diminishing the effects of certain neural illnesses and distinctly increasing the range of abilities of those affected. An indication is given of a number of problem areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking the human brain directly with a computer. In order to assess the possible opportunities, both human and animal studies are reported on. The main thrust of the paper is, however, a discussion of neural implant experimentation linking the human nervous system bi-directionally with the internet. With this in place, neural signals were transmitted to various technological devices to directly control them, in some cases via the internet, and feedback to the brain was obtained from, for example, the fingertips of a robot hand, ultrasonic (extra-)sensory input, and neural signals directly from another human's nervous system. Consideration is given to the prospects for neural implant technology in the future, both in the short term as a therapeutic device and in the long term as a form of enhancement, including the realistic potential for thought communication, potentially opening up commercial opportunities. Clearly though, an individual whose brain is part human, part machine can have abilities that far surpass those with a human brain alone. Will such an individual exhibit different moral and ethical values from those of a human? If so, what effects might this have on society?
Abstract:
A look is taken here at how the use of implant technology is rapidly diminishing the effects of certain neural illnesses and distinctly increasing the range of abilities of those affected. An indication is given of a number of problem areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking the human brain directly with a computer. In order to assess the possible opportunities, both human and animal studies are reported on. The main thrust of the paper is, however, a discussion of neural implant experimentation linking the human nervous system bi-directionally with the internet. With this in place, neural signals were transmitted to various technological devices to directly control them, in some cases via the internet, and feedback to the brain was obtained from, for example, the fingertips of a robot hand, ultrasonic (extra-)sensory input, and neural signals directly from another human's nervous system. Consideration is given to the prospects for neural implant technology in the future, both in the short term as a therapeutic device and in the long term as a form of enhancement, including the realistic potential for thought communication, potentially opening up commercial opportunities. Clearly though, an individual whose brain is part human, part machine can have abilities that far surpass those with a human brain alone. Will such an individual exhibit different moral and ethical values from those of a human? If so, what effects might this have on society? (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Previous studies have demonstrated that when we observe somebody else executing an action many areas of our own motor systems are active. It has been argued that these motor activations are evidence that we motorically simulate observed actions; this motoric simulation may support various functions such as imitation and action understanding. However, whether motoric simulation is indeed the function of motor activations during action observation is controversial, due to inconsistency in findings. Previous studies have demonstrated dynamic modulations in motor activity when we execute actions. Therefore, if we do motorically simulate observed actions, our motor systems should also be modulated dynamically, and in a corresponding fashion, during action observation. Using magnetoencephalography (MEG), we recorded the cortical activity of human participants while they observed actions performed by another person. Here, we show that activity in the human motor system is indeed modulated dynamically during action observation. The finding that activity in the motor system is modulated dynamically when observing actions can explain why studies of action observation using functional magnetic resonance imaging (fMRI) have reported conflicting results, and is consistent with the hypothesis that we motorically simulate observed actions.