67 results for Human vision system
Abstract:
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
Abstract:
Automatically extracting interesting objects from videos is a very challenging task and is applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique in attempts to extract objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences include not only interesting objects: environmental conditions such as wind, cloud cover, rain and snow may be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections. This list provides only a sample of the challenges present. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, using current neuro-biological, physiological and psychological research in primate vision as inspiration.
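The temporal-change idea behind motion segmentation can be illustrated with a minimal sketch: each frame is differenced against a slowly updated running-average background, so abrupt changes are flagged as motion while gradual ones (e.g. cloud cover) are absorbed. The threshold and update rate here are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def segment_motion(frames, alpha=0.05, threshold=25.0):
    """Label moving pixels by differencing each grayscale frame
    against a running-average background model."""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        # Pixels far from the background estimate are flagged as motion.
        masks.append(np.abs(frame - background) > threshold)
        # Slowly absorb the current frame so gradual changes
        # (lighting drift, cloud cover) do not trigger detections.
        background = (1 - alpha) * background + alpha * frame
    return masks

# Example: a static 4x4 scene with one bright "object" moving across it.
frames = [np.zeros((4, 4)) for _ in range(3)]
frames[1][1, 1] = 200.0
frames[2][2, 2] = 200.0
masks = segment_motion(frames)
```

Real surveillance systems replace the single threshold with per-pixel statistical models precisely because of the environmental effects the abstract lists.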
Abstract:
The paper describes a novel integrated vision system in which two autonomous visual modules are combined to interpret a dynamic scene. The first module employs a 3D model-based scheme to track rigid objects such as vehicles. The second module uses a 2D deformable model to track non-rigid objects such as people. The principal contribution is a novel method for handling occlusion between objects within the context of this hybrid tracking system. The practical aim of the work is to derive a scene description that is sufficiently rich to be used in a range of surveillance tasks. The paper describes each of the modules in outline before detailing the method of integration and the handling of occlusion in particular. Experimental results are presented to illustrate the performance of the system in a dynamic outdoor scene involving cars and people.
Abstract:
It is now possible to directly link the human nervous system to a computer and thence onto the Internet. From an electronic and mental viewpoint this means that the Internet becomes an extension of the human nervous system (and vice versa). Such a connection on a regular or mass basis will have far reaching effects for society. In this article the authors discuss their own practical implant self-experimentation, especially insofar as it relates to extending the human nervous system. Trials involving an intercontinental link up are described. As well as technical aspects of the work, social, moral and ethical issues, as perceived by the authors, are weighed against potential technical gains. The authors also look at technical limitations inherent in the co-evolution of Internet implanted individuals as well as the future distribution of intelligence between human and machine.
Abstract:
Computer vision applications generally split their problem into multiple simpler tasks. Likewise, research often combines algorithms into systems for evaluation purposes. Frameworks for modular vision provide interfaces and mechanisms for algorithm combination and network transparency. However, these frameworks do not provide interfaces that make efficient use of the slow memory in modern PCs. We investigate quantitatively how system performance varies with different patterns of memory usage by the framework for an example vision system.
Abstract:
A vision system for recognizing rigid and articulated three-dimensional objects in two-dimensional images is described. Geometrical models are extracted from a commercial computer aided design package. The models are then augmented with appearance and functional information which improves the system's hypothesis generation, hypothesis verification, and pose refinement. Significant advantages over existing CAD-based vision systems, which utilize only information available in the CAD system, are realized. Examples show the system recognizing, locating, and tracking a variety of objects in a robot work-cell and in natural scenes.
Abstract:
Model-based vision allows prior knowledge of the shape and appearance of specific objects to be used in the interpretation of a visual scene; it provides a powerful and natural way to enforce the view-consistency constraint. A model-based vision system has been developed within ESPRIT VIEWS: P2152 which is able to classify and track moving objects (cars and other vehicles) in complex, cluttered traffic scenes. The fundamental basis of the method has been previously reported. This paper presents recent developments which have extended the scope of the system to include (i) multiple cameras, (ii) variable camera geometry, and (iii) articulated objects. All three enhancements have been easily accommodated within the original model-based approach.
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3,4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
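The Bayesian cue-integration account mentioned in the abstract is commonly modelled as inverse-variance (reliability) weighting of independent Gaussian cue estimates. The sketch below is a generic illustration of that principle; the specific numbers (a "scene unchanged" scale estimate of 1.0 from disparity and walking, a conflicting 1.4 from other cues, and variances that grow with viewing distance) are illustrative assumptions, not data from the study:

```python
def combine_cues(estimates, variances):
    """Bayesian fusion of independent Gaussian cue estimates:
    each cue is weighted by its inverse variance (its reliability).
    Returns the fused mean and fused variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Near viewing: disparity/motion cues are reliable, so the fused
# scale estimate stays close to their value (1.0, scene unchanged).
near = combine_cues([1.0, 1.4], [0.01, 0.04])
# Far viewing: the same cues are now noisy, so the conflicting
# cue (1.4, scene expanded) pulls the fused estimate towards itself.
far = combine_cues([1.0, 1.4], [0.25, 0.04])
```

Under this scheme, down-weighting disparity and walked-distance cues at far distances reproduces the abstract's observation that observers tolerate a rescaled scene rather than revise those cues.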
Abstract:
The perceived displacement of motion-defined contours in peripheral vision was examined in four experiments. In Experiment 1, in line with Ramachandran and Anstis' finding [Ramachandran, V. S., & Anstis, S. M. (1990). Illusory displacement of equiluminous kinetic edges. Perception, 19, 611-616], the border between a field of drifting dots and a static dot pattern was apparently displaced in the same direction as the movement of the dots. When a uniform dark area was substituted for the static dots, a similar displacement was found, but this was smaller and statistically insignificant. In Experiment 2, the border between two fields of dots moving in opposite directions was displaced in the direction of motion of the dots in the more eccentric field, so that the location of a boundary defined by a diverging pattern is perceived as more eccentric, and that defined by a converging pattern as less eccentric. Two explanations for this effect (that the displacement reflects a greater weight given to the more eccentric motion, or that the region containing stronger centripetal motion components expands perceptually into that containing centrifugal motion) were tested in Experiment 3, by varying the velocity of the more eccentric region. The results favoured the explanation based on the expansion of an area in centripetal motion. Experiment 4 showed that the difference in perceived location was unlikely to be due to differences in the discriminability of contours in diverging and converging patterns, and confirmed that this effect is due to a difference between centripetal and centrifugal motion rather than motion components in other directions. Our result provides new evidence for a bias towards centripetal motion in human vision, and suggests that the direction of motion-induced displacement of edges is not always in the direction of an adjacent moving pattern.
Abstract:
This paper presents results to indicate the potential applications of a direct connection between the human nervous system and a computer network. Actual experimental results obtained from a human subject study are given, with emphasis placed on the direct interaction between the human nervous system and possible extra-sensory input. A brief overview of the general state of neural implants is given, and a range of application areas is considered. An overall view is also taken of what may be possible with implant technology as a general-purpose human-computer interface for the future.
Abstract:
In this paper an attempt is described to increase the range of human sensory capabilities by means of implant technology. The key aim is to create an additional sense by feeding signals directly to the human brain, via the nervous system rather than via a presently operable human sense. Neural implant technology was used to directly interface a human nervous system with a computer in a one-off trial. The output from active ultrasonic sensors was then employed to directly stimulate the human nervous system. An experimental laboratory set-up was used as a test bed to assess the usefulness of this sensory addition.
Abstract:
A look is taken here at how the use of implant technology is rapidly diminishing the effects of certain neural illnesses and distinctly increasing the range of abilities of those affected. An indication is given of a number of problem areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking the human brain directly with a computer. In order to assess the possible opportunities, both human and animal studies are reported on. The main thrust of the paper is, however, a discussion of neural implant experimentation linking the human nervous system bi-directionally with the internet. With this in place, neural signals were transmitted to various technological devices to directly control them, in some cases via the internet, and feedback to the brain was obtained from sources such as the fingertips of a robot hand, ultrasonic (extra) sensory input and neural signals taken directly from another human's nervous system. Consideration is given to the prospects for neural implant technology in the future, both in the short term as a therapeutic device and in the long term as a form of enhancement, including the realistic potential for thought communication, potentially opening up commercial opportunities. Clearly, though, an individual whose brain is part human, part machine can have abilities that far surpass those of a human brain alone. Will such an individual exhibit different moral and ethical values from those of a human? If so, what effects might this have on society?