48 results for eye-tracker
Abstract:
Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations not only to see what each other is doing but also to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze, as well as activity and gesture, to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough, and often enough, to interpret their focus during a multi-way interaction, alongside other verbal and non-verbal communication. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical, and especially the temporal, characteristics of the system.
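As an illustration of the spatial-alignment problem this abstract describes, the sketch below (not EyeCVE code; the transform values are invented) re-expresses a remote participant's gaze ray in the local room's coordinate frame, assuming a known 4x4 registration transform between the two spaces.

```python
# Illustration only (not EyeCVE code): mapping a remote participant's gaze
# ray into the local room's frame via an assumed 4x4 alignment transform.
import numpy as np

T_remote_to_local = np.eye(4)                  # assumed known from registration
T_remote_to_local[:3, 3] = [2.0, 0.0, -1.0]    # e.g. rooms offset by 2 m and 1 m

def to_local(gaze_origin, gaze_dir):
    """Transform a gaze ray (origin, direction) into the local frame."""
    origin_h = T_remote_to_local @ np.append(gaze_origin, 1.0)
    dir_local = T_remote_to_local[:3, :3] @ gaze_dir   # rotate, don't translate
    return origin_h[:3], dir_local / np.linalg.norm(dir_local)

# a participant standing at the remote origin, eye height 1.6 m, looking ahead
o, d = to_local(np.array([0.0, 1.6, 0.0]), np.array([0.0, 0.0, -1.0]))
print(o, d)
```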
Abstract:
Visually impaired people have a very different view of the world, such that seemingly simple environments, as seen by ‘normally’ sighted people, can be difficult for people with visual impairments to access and move around. This problem can be hard to fully comprehend for people with ‘normal’ vision, even when guidelines for inclusive design are available. This paper investigates ways in which image processing techniques can be used to simulate the characteristics of a number of common visual impairments, in order to provide planners, designers and architects with a visual representation of how people with visual impairments view their environment. The aim is to promote greater understanding of the issues, the creation of more accessible buildings and public spaces, and increased accessibility for visually impaired people in everyday situations.
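The abstract does not reproduce the paper's filters, but the general idea can be sketched with ordinary image processing operations. The sketch below assumes OpenCV and NumPy; the two impairments, the parameters and the filenames are illustrative, not the paper's.

```python
# Hypothetical sketch: approximating two common impairments with simple
# image filters (the paper's actual pipeline is not reproduced here).
import cv2
import numpy as np

def simulate_cataract(img, blur_px=21):
    """Crudely approximate the overall blur and contrast loss of cataracts."""
    blurred = cv2.GaussianBlur(img, (blur_px, blur_px), 0)
    return cv2.addWeighted(blurred, 0.8, np.full_like(img, 128), 0.2, 0)

def simulate_tunnel_vision(img, radius_frac=0.25):
    """Crudely approximate peripheral field loss (e.g. advanced glaucoma)."""
    h, w = img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    cv2.circle(mask, (w // 2, h // 2), int(min(h, w) * radius_frac), 255, -1)
    mask = cv2.GaussianBlur(mask, (101, 101), 0)   # soften the field edge
    return (img * (mask[..., None] / 255.0)).astype(np.uint8)

view = cv2.imread("street_scene.jpg")              # illustrative filename
cv2.imwrite("cataract.jpg", simulate_cataract(view))
cv2.imwrite("tunnel.jpg", simulate_tunnel_vision(view))
```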
Abstract:
Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, and they constantly dart to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals, and can indicate emotional responses before the viewer becomes aware of them. In this paper we discuss a method of tracking a user's eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
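One straightforward way to identify a viewed virtual object is to cast the computed gaze ray through the scene and test it against object bounding spheres. The sketch below is an illustrative reconstruction of that idea, not the authors' tool; the scene data are invented.

```python
# Illustrative sketch (not the authors' tool): finding which virtual object
# a gaze ray hits, using bounding-sphere intersection tests.
import numpy as np

def gaze_hit(origin, direction, objects):
    """Return the nearest object whose bounding sphere the gaze ray hits.

    objects: list of (name, centre (3,), radius) tuples -- assumed scene data.
    """
    direction = direction / np.linalg.norm(direction)
    best = (None, np.inf)
    for name, centre, radius in objects:
        oc = centre - origin
        t = np.dot(oc, direction)             # closest approach along the ray
        if t < 0:
            continue                          # object is behind the viewer
        miss2 = np.dot(oc, oc) - t * t        # squared ray-to-centre distance
        if miss2 <= radius * radius and t < best[1]:
            best = (name, t)
    return best[0]

scene = [("teapot", np.array([0.0, 1.0, -3.0]), 0.4),
         ("lamp",   np.array([1.0, 1.5, -5.0]), 0.6)]
print(gaze_hit(np.zeros(3), np.array([0.0, 0.25, -1.0]), scene))  # "teapot"
```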
Abstract:
This paper describes the design, implementation and testing of a high-speed, controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general-purpose parallel processors. The servo-controller is described in detail and higher-level gaze and vision constructs are outlined. The paper gives performance figures gained both from mechanical tests on the platform alone, and from closed-loop tests on the entire system using visual feedback from a feature detector.
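For readers unfamiliar with such platforms, a per-axis servo loop of this kind typically reduces to a PID controller running at a fixed rate. The sketch below is a generic illustration; the gains, sampling rate and unit-inertia motor model are assumptions, not values from the paper.

```python
# A minimal sketch of a per-axis position servo loop; gains and the motor
# model are illustrative assumptions, not the paper's controller.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. commanding a 0.3 rad gaze shift on one axis at a 1 kHz servo rate
pid = PID(kp=40.0, ki=5.0, kd=1.2, dt=0.001)
angle, velocity = 0.0, 0.0
for _ in range(1000):                    # 1 s of simulated control
    torque = pid.step(0.3, angle)
    velocity += torque * 0.001           # unit-inertia motor model
    angle += velocity * 0.001
print(round(angle, 3))
```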
Abstract:
A robot-mounted camera is useful in many machine vision tasks, as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on pre-calibrated robots, pre-calibrated cameras, special calibration objects, or combinations of these. Our method avoids large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
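The full single-point procedure has several stages, but one representative sub-step can be sketched: recovering the fixed rotation between robot and camera frames from paired direction measurements via a least-squares (Kabsch) fit. This is an illustration under that assumption, not the paper's algorithm.

```python
# Illustrative sub-step only: fitting the fixed rotation between robot and
# camera frames from direction pairs gathered during planned motions.
import numpy as np

def fit_rotation(robot_dirs, camera_dirs):
    """Least-squares rotation R such that camera_dirs ~ R @ robot_dirs.

    Both arguments are (3, N) arrays of unit direction vectors.
    """
    H = camera_dirs @ robot_dirs.T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep det = +1
    return U @ D @ Vt

# synthetic check: directions rotated by a known 10-degree pan are recovered
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(1).normal(size=(3, 8))
P /= np.linalg.norm(P, axis=0)
print(np.allclose(fit_rotation(P, R_true @ P), R_true))  # True
```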
Abstract:
Automatically extracting interesting objects from videos is a very challenging task and is applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique in the attempt to extract objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences include not only interesting objects: environmental conditions such as wind, cloud cover, rain and snow may be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections. This list provides only a sample of the challenges present. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, using current neuro-biological, physiological and psychological research in primate vision as inspiration.
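A classical baseline for the motion segmentation discussed here is running-average background subtraction, which also exhibits exactly the weaknesses listed (lighting drift, shadows, weather). The sketch below is that baseline, not the thesis's biologically inspired system; the filename and thresholds are illustrative.

```python
# A minimal running-average background subtractor -- one classical form of
# motion segmentation (filename and thresholds are illustrative).
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")
background = None
ALPHA, THRESH = 0.02, 25            # adaptation rate, foreground threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        background = gray.copy()
    # slow adaptation absorbs gradual lighting drift but not moving objects
    background = (1 - ALPHA) * background + ALPHA * gray
    mask = (np.abs(gray - background) > THRESH).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:        # Esc quits
        break
cap.release()
```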
Abstract:
Perceptual multimedia quality is of paramount importance to the continued take-up and proliferation of multimedia applications: users will not use and pay for applications if they are perceived to be of low quality. Whilst distributed multimedia quality has traditionally been characterised by Quality of Service (QoS) parameters, these neglect the user's perspective on quality. In order to redress this shortcoming, we characterise the user perspective using the Quality of Perception (QoP) metric, which encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also his/her ability to analyse, synthesise and assimilate its informational content. In recognition of the fact that monitoring eye movements offers insights into visual perception, as well as the associated attention mechanisms and cognitive processes, this paper reports the results of a study investigating the impact of differing multimedia presentation frame rates on user QoP and eye path data. Our results show that provision of higher frame rates, usually assumed to provide better multimedia presentation quality, does not significantly impact the median coordinate value of eye path data. Moreover, higher frame rates do not significantly increase the level of participant information assimilation, although they do significantly improve overall user enjoyment and perceived quality of the multimedia content shown.
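The eye-path statistic the study compares, the median coordinate of each participant's gaze samples, is simple to compute. The sketch below uses synthetic gaze data purely to show the calculation; all numbers are invented, not the study's.

```python
# Sketch of the compared statistic: the median (x, y) of gaze samples per
# condition. The gaze data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
gaze_low_fps  = rng.normal([512, 384], 60, size=(900, 2))  # hypothetical samples
gaze_high_fps = rng.normal([512, 384], 60, size=(900, 2))

median_low  = np.median(gaze_low_fps, axis=0)
median_high = np.median(gaze_high_fps, axis=0)
print(median_low, median_high)   # similar medians despite differing frame rates
```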
Abstract:
Visual telepresence seeks to extend existing teleoperative capability by supplying the operator with a 3D interactive view of the remote environment. This is achieved through the use of a stereo camera platform which, through appropriate 3D display devices, provides a distinct image to each eye of the operator, and which is slaved directly from the operator's head and eye movements. However, the resolution within current head-mounted displays remains poor, thereby reducing the operator's visual acuity. This paper reports on the feasibility of incorporating eye tracking to increase resolution, and investigates the stability and control issues for such a system. Continuous-domain and discrete simulations are presented which indicate that eye tracking provides a stable feedback loop for tracking applications, though some empirical testing (currently being initiated) of such a system will be required to overcome indicated stability problems associated with the microsaccades of the human operator.
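The flavour of such discrete simulations can be conveyed with a minimal model: a proportional camera-positioning loop that only ever sees delayed measurements. The gain, sampling rate and delay below are assumptions for illustration, not the paper's parameters.

```python
# Illustrative discrete simulation of a camera-follows-gaze loop with a
# transport delay; all parameters are assumptions, not the paper's.
GAIN, DELAY = 0.3, 2            # loop gain, measurement lag in samples
target = 10.0                   # gaze angle to acquire (deg)
camera = 0.0
history = [0.0] * DELAY         # delayed camera-position measurements

for _ in range(50):
    measured = history.pop(0)   # the controller only sees stale data
    history.append(camera)
    camera += GAIN * (target - measured)
    # raising GAIN makes this delayed loop ring and eventually go unstable,
    # the kind of behaviour operator microsaccades can excite.
print(round(camera, 2))          # converges close to 10.0 at this gain
```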
Abstract:
The authors demonstrate four real-time reactive responses to movement in everyday scenes using an active head/eye platform. They first describe the design and realization of a high-bandwidth four-degree-of-freedom head/eye platform and visual feedback loop for the exploration of motion processing within active vision. The vision system divides processing into two scales and two broad functions. At a coarse, quasi-peripheral scale, detection and segmentation of new motion occurs across the whole image, and at fine scale, tracking of already detected motion takes place within a foveal region. Several simple coarse-scale motion sensors which run concurrently at 25 Hz with latencies around 100 ms are detailed. The use of these sensors to drive the following real-time responses is then discussed: (1) head/eye saccades to moving regions of interest; (2) a panic response to looming motion; (3) an opto-kinetic response to continuous motion across the image; and (4) smooth pursuit of a moving target using motion alone.
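A coarse-scale motion sensor of the kind described can be as simple as whole-image frame differencing that returns the centroid of changed pixels as a saccade target. The sketch below is an illustrative reduction, not the authors' implementation.

```python
# Illustrative coarse motion sensor: frame differencing over the whole
# image, yielding a centroid to saccade toward (not the authors' code).
import cv2
import numpy as np

def motion_centroid(prev_gray, gray, thresh=20):
    """Return the (x, y) centroid of changed pixels, or None if still."""
    moving = cv2.absdiff(gray, prev_gray) > thresh
    ys, xs = np.nonzero(moving)
    if xs.size < 50:                 # ignore isolated sensor noise
        return None
    return float(xs.mean()), float(ys.mean())
```

The platform would then convert this centroid's offset from the image centre into a pan/tilt demand for the saccade.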
Abstract:
Consistent with a negativity bias account, neuroscientific and behavioral evidence demonstrates modulation of even early sensory processes by unpleasant, potentially threat-relevant information. The aim of this research is to assess the extent to which pleasant and unpleasant visual stimuli presented extrafoveally capture attention and impact eye movement control. We report an experiment examining deviations in saccade metrics in the presence of emotional image distractors close to a nonemotional target. We additionally manipulate saccade latency to test when the emotional distractor has its greatest impact on oculomotor control. The results demonstrate that saccade landing position was pulled toward unpleasant distractors, and that this pull was driven by quick saccade responses. Overall, these findings support a negativity bias account of early attentional control and call for the need to consider the time course of motivated attention when affect is implicit.
Abstract:
Jean-François Lyotard's 1973 essay ‘Acinema’ is explicitly concerned with the cinematic medium, but has received scant critical attention. Lyotard's acinema conceives of an experimental, excessive form of film-making that uses stillness and movement to shift away from the orderly process of meaning-making within mainstream cinema. What motivates this present paper is a striking link between Lyotard's writing and contemporary Hollywood production; both are concerned with a sense of excess, especially within moments of motion. Using Charlie's Angels (McG, 2000) as a case study – a film that has been critically dismissed as ‘eye candy for the blind’ – my methodology brings together two different discourses, high culture theory and mainstream film-making, to test out and propose the value of Lyotard's ideas for the study of contemporary film. Combining close textual analysis and engagement with key scholarship on film spectacle, I reflexively engage with the process of film analysis and re-direct attention to a neglected essay by a major theorist, in order to stimulate further engagement with his work.