123 results for Eye location.
Abstract:
In collaborative situations, eye gaze is a critical element of behavior that supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar, which had previously been recorded from a user wearing a head-mounted eye tracker. The first experiment assessed the difference between users' abilities to judge which objects an avatar is looking at when only head gaze is displayed and when both eye- and head-gaze data are displayed. The results show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with the subjects again asked to identify where the avatar was looking; it showed no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
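The binocular condition in the second experiment turns two tracked eye directions into a single point of regard. As an illustrative sketch (not the authors' implementation), the fixated point can be estimated as the point of closest approach of the two gaze rays; the positions and directions here are hypothetical inputs from an eye tracker:

```python
import numpy as np

def gaze_point(p1, d1, p2, d2):
    """Estimate the point of regard from two eye rays.

    p1, p2 : 3D eye positions; d1, d2 : gaze direction vectors.
    Returns the midpoint of the common perpendicular of the two rays,
    or None if the rays are (near-)parallel, i.e. no usable convergence.
    """
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    r = p1 - p2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # parallel gaze: no depth information
        return None
    t1 = (b * e - c * d) / denom    # ray parameters at closest approach
    t2 = (a * e - b * d) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0
```

With eyes 6 cm apart fixating a point 1 m ahead, the two rays meet at the target; with parallel directions (the monocular case) the depth of fixation is unrecoverable, consistent with convergence carrying the distance cue found in the experiment.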
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. Access Grid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge which objects an avatar is looking at with only head movements displayed, while the eyes remained static, and with both eye-gaze and head-movement information displayed. The results show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques developed to turn the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
Saccadic eye-movements to a visual target are less accurate if there are distracters close to its location (local distracters). The addition of more distracters, remote from the target location (remote distracters), invokes an involuntary increase in the response latency of the saccade and attenuates the effect of local distracters on accuracy. This may be due to the target and distracters directly competing (direct route) or to the remote distracters acting to impair the ability to disengage from fixation (indirect route). To distinguish between these we examined the development of saccade competition by recording saccade latency and accuracy responses made to a target and local distracter compared with those made with an addition of a remote distracter. The direct route would predict that the remote distracter impacts on the developing competition between target and local distracter, while the indirect route would predict no change as the accuracy benefit here derives from accessing the same competitive process but at a later stage. We found that the presence of the remote distracter did not change the pattern of accuracy improvement. This suggests that the remote distracter was acting along an indirect route that inhibits disengagement from fixation, slows saccade initiation, and enables more accurate saccades to be made.
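Saccade latency in studies of this kind is commonly measured by detecting the first eye-velocity sample that exceeds a threshold after target onset. A minimal sketch of that standard velocity-criterion method (the threshold, sampling rate, and function name are illustrative, not taken from the paper):

```python
def saccade_latency(times, positions, onset_time, vel_threshold=30.0):
    """Return saccade latency in seconds, or None if no saccade is found.

    times         : sample timestamps in seconds (monotonic)
    positions     : eye position in degrees at each timestamp
    onset_time    : time at which the target appeared
    vel_threshold : onset criterion in deg/s (30 deg/s is a common choice)
    """
    for i in range(len(times) - 1):
        if times[i] < onset_time:
            continue
        dt = times[i + 1] - times[i]
        velocity = (positions[i + 1] - positions[i]) / dt
        if abs(velocity) > vel_threshold:
            return times[i] - onset_time   # first suprathreshold sample
    return None
```

The remote-distracter effect described above would show up as an increase in the value this function returns, with accuracy measured separately from the saccade's landing position.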
Abstract:
This paper argues that transatlantic hybridity connects space, visual style and ideological point of view in British television action-adventure fiction of the 1960s–1970s. It analyses the relationship between the physical location of TV series production at Elstree Studios, UK, the representation of place in programmes, and the international trade in television fiction between the UK and USA. The TV series made at Elstree by the ITC and ABC companies and their affiliates linked Britishness with an international modernity associated with the USA, while also promoting national specificity. To do this, they drew on film production techniques that were already common for TV series production in Hollywood. The British series made at Elstree adapted versions of US industrial organization and television formats, and made programmes expected to be saleable to US networks, on the basis of British experiences in TV co-production with US companies and of the international cinema and TV market.
Abstract:
To ensure minimum loss of system security and revenue it is essential that faults on underground cable systems be located and repaired rapidly. Currently in the UK, the impulse current method is used to prelocate faults, prior to using acoustic methods to pinpoint the fault location. The impulse current method is heavily dependent on the engineer's knowledge and experience in recognising/interpreting the transient waveforms produced by the fault. The development of a prototype real-time expert system aid for the prelocation of cable faults is described. Results from the prototype demonstrate the feasibility and benefits of the expert system as an aid for the diagnosis and location of faults on underground cable systems.
Abstract:
We present a study of the geographic location of lightning affecting the ionospheric sporadic-E (Es) layer over the ionospheric monitoring station at Chilton, UK. Data from the UK Met Office's Arrival Time Difference (ATD) lightning detection system were used to locate lightning strokes in the vicinity of the ionospheric monitoring station. A superposed epoch study of these data has previously revealed an enhancement in the Es layer caused by lightning within 200 km of Chilton. In the current paper, we use the same data to investigate the location of the lightning strokes which have the largest effect on the Es layer above Chilton. We find that there are several locations where the effect of lightning on the ionosphere is statistically most significant, each producing a different ionospheric response. We interpret this as evidence that more than one mechanism combines to produce the previously observed enhancement in the ionosphere.
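The superposed-epoch technique mentioned above aligns a regularly sampled ionospheric time series on each lightning event and averages across events, so that responses locked to the events reinforce while unrelated variability averages out. A minimal sketch assuming a fixed sampling interval (the function name and window size are illustrative):

```python
def superposed_epoch(sample_times, values, event_times, half_window):
    """Average `values` in windows of +/- half_window samples around each event.

    Assumes regular sampling. Returns (lags, mean_values), where lags are
    sample offsets relative to the event and mean_values is the average
    across all events with a complete window (None if there are none).
    """
    dt = sample_times[1] - sample_times[0]
    lags = list(range(-half_window, half_window + 1))
    sums = [0.0] * len(lags)
    n_events = 0
    for event in event_times:
        idx = round((event - sample_times[0]) / dt)    # nearest sample index
        if idx - half_window < 0 or idx + half_window >= len(values):
            continue                                   # incomplete window: skip
        n_events += 1
        for k, lag in enumerate(lags):
            sums[k] += values[idx + lag]
    if n_events == 0:
        return lags, None
    return lags, [s / n_events for s in sums]
```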
Abstract:
This paper describes the design, implementation and testing of a high speed controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general purpose parallel processors. The servo-controller is described in detail and higher level gaze and vision constructs outlined. The paper gives performance figures gained both from mechanical tests on the platform alone, and from closed loop tests on the entire system using visual feedback from a feature detector.
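The abstract does not reproduce the servo-controller's details; as a generic illustration of the kind of per-axis position loop such a head/eye platform uses, here is a minimal discrete PID controller driving a simple velocity-commanded joint model (the gains and plant are assumptions, not the paper's values):

```python
class PID:
    """Minimal discrete PID controller for one platform axis."""

    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate_axis(setpoint_deg, seconds=2.0, dt=0.01):
    """Drive a joint whose position integrates the velocity command."""
    pid = PID(kp=5.0)
    position = 0.0
    for _ in range(int(seconds / dt)):
        command = pid.update(setpoint_deg - position, dt)   # velocity command
        position += command * dt
    return position
```

In a real saccade-like redirection the loop would run at the camera frame rate with visual feedback closing the outer loop, as the closed-loop tests in the paper describe.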
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on pre-calibrated robots, pre-calibrated cameras, special calibration objects, or combinations of these. Our method avoids large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
Abstract:
A visual telepresence system has been developed at the University of Reading which utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For some applications, which critically depend upon the veridical perception of the object's location and dimensions, it is imperative that the contribution of binocular cues to these judgements be ascertained, because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of the virtual object under reduced cue conditions where the orientation of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
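The link between ocular convergence and perceived distance in this abstract follows simple vergence geometry: for symmetric fixation straight ahead, the fixation distance is half the interocular separation divided by the tangent of half the convergence angle. A small sketch (the 65 mm interocular distance below is an assumed example value, and the function names are illustrative):

```python
import math

def vergence_angle(ipd, distance):
    """Convergence angle (radians) between the two visual axes when
    fixating a point `distance` metres straight ahead, eyes `ipd` apart."""
    return 2.0 * math.atan((ipd / 2.0) / distance)

def fixation_distance(ipd, angle):
    """Invert the geometry: recover fixation distance from vergence angle."""
    return (ipd / 2.0) / math.tan(angle / 2.0)
```

Mismatching camera and display geometry changes this angle for a given scene point, which is one way the unmatched conditions above can distort perceived distance and size.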
Abstract:
This paper investigates the extent to which office activity contributes to travel-related CO2 emissions. Using ‘end-user’ figures[1], travel accounts for 32% of UK CO2 emissions (Commission for Integrated Transport, 2007), and commuting and business travel account for a fifth of transport-related CO2 emissions, equating to 6.4% of total UK emissions (Building Research Establishment, 2000). Figures from the Department for Transport (2006) show that 70% of commuting trips were made by car, accounting for 73% of all commuting miles travelled. In assessing the environmental performance of an office building, the paper questions whether commuting and business travel-related CO2 emissions are being properly assessed. For example, are office buildings in locations that are easily accessible by public transport being sufficiently rewarded? The de facto method for assessing the environmental performance of office buildings in the UK is the Building Research Establishment’s Environmental Assessment Method (BREEAM). Using data for Bristol, this paper examines firstly whether BREEAM places sufficient weight on travel-related CO2 emissions in comparison with building operation-related CO2 emissions, and secondly whether the methodology for assigning credits for travel-related CO2 efficiency is capable of discerning intra-urban differences in location, such as city centre and out-of-town. The results show that, despite CO2 emissions per worker from building operation and travel being comparable, there is a substantial difference in the credit weighting allocated to each. Under the current version of BREEAM for offices, only a maximum of 4% of the available credits can be awarded for ensuring the office location is environmentally sustainable. The results also show that all locations within the established city centre of Bristol will receive maximum BREEAM credits.
Given the parameters of the test, there is little to distinguish one city centre location from another, and out of town only one office location receives any credits. It would appear from these results that the assessment method is not able to discern subtle differences in the sustainability of office locations.
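The headline percentages in the abstract above are internally consistent: commuting and business travel are stated to be a fifth of transport's 32% share of UK CO2 emissions, which gives the quoted 6.4% of the total. A one-line check:

```python
transport_share = 0.32            # travel's share of UK CO2 emissions ('end-user' figures)
commuting_fraction = 1.0 / 5.0    # commuting + business travel within transport
total_share = transport_share * commuting_fraction
print(round(total_share, 3))      # 0.064, i.e. 6.4% of total UK emissions
```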