932 results for Gaze depth


Relevance:

20.00%

Abstract:

The main biogeochemical nutrient distributions, along with ambient ocean temperature and the light field, control ocean biological productivity. Observations of nutrients are much sparser than physical observations of temperature and salinity, yet it is critical to validate biogeochemical models against these sparse observations if we are to successfully model biological variability and trends. Here we use data from the Bermuda Atlantic Time-series Study and the World Ocean Database 2005 to demonstrate quantitatively that over the entire globe a significant fraction of the temporal variability of phosphate, silicate and nitrate within the oceans is correlated with water density. The temporal variability of these nutrients as a function of depth is almost always greater than as a function of potential density, with the largest reductions in variability found within the main pycnocline. The greater nutrient variability as a function of depth occurs when dynamical processes vertically displace nutrient and density fields together on shorter timescales than biological adjustments. These results show that dynamical processes can have a significant impact on the instantaneous nutrient distributions. These processes must therefore be considered when modeling biogeochemical systems, when comparing such models with observations, or when assimilating data into such models.
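
As a rough illustration of the comparison the abstract describes, the sketch below computes the standard deviation of a synthetic nutrient time series first on depth surfaces and then on potential-density surfaces. All data, parameter values, and variable names are hypothetical, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: 100 profiles on a fixed depth grid. Internal waves
# heave the density and nutrient fields up and down together, so the nutrient
# varies at a fixed depth even though it is tightly tied to density.
depth = np.linspace(0, 1000, 101)                  # m
heave = rng.normal(0, 30, size=100)                # vertical displacement, m

sigma = np.empty((heave.size, depth.size))         # potential density anomaly
po4 = np.empty_like(sigma)                         # "phosphate", arbitrary units
for i, dz in enumerate(heave):
    z = np.clip(depth + dz, 0, None)
    sigma[i] = 23 + 4 * (1 - np.exp(-z / 300))     # idealized pycnocline
    po4[i] = 2.5 * (1 - np.exp(-z / 300))          # nutrient locked to density

# Variability as a function of depth: std across profiles at each depth level.
std_vs_depth = po4.std(axis=0)

# Variability as a function of density: pool values falling in each density bin.
bins = np.linspace(23.5, 26.8, 30)
std_vs_density = np.array([
    po4[(sigma >= lo) & (sigma < hi)].std()
    for lo, hi in zip(bins[:-1], bins[1:])
])

print(f"max std on depth surfaces:   {std_vs_depth.max():.3f}")
print(f"max std on density surfaces: {np.nanmax(std_vs_density):.3f}")
```

In this toy setup the depth-binned standard deviation is several times the density-binned one, mirroring the paper's finding that heaving of isopycnals inflates apparent nutrient variability at fixed depth.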

Relevance:

20.00%

Abstract:

Amphicoma (Glaphyridae) beetles are important pollinators of red bowl-shaped flowers in the Mediterranean. The role of color and shape in flower choice is well studied, but the roles of inclination, depth, and height have seldom been investigated. Under field conditions, models were used to experimentally manipulate these three characters and visitation rates of beetles were recorded. Models with red horizontal surfaces were visited significantly more often than models with red vertical surfaces. Shallow flower models were visited significantly more than deeper equivalents. Models below or at the height of natural flower populations elicited significantly more landings than models above the height of flowers. Inclination, depth, and height characteristics are all likely to be important components in the flower preferences exhibited by pollinating beetles.

Relevance:

20.00%

Abstract:

Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop), shifting gaze generates ocular proprioception which can be used to update a movement (feedback-feedforward), or an efference copy of the eye movement could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback from eye to head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial movements and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.
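
Eye-hand lead times like the ~200 ms reported here are typically computed as the interval between gaze and hand movement onsets, each detected with a velocity threshold. A minimal sketch of that computation, with synthetic traces and thresholds chosen purely for illustration:

```python
import numpy as np

def movement_onset(t, pos, v_thresh):
    """First time at which the speed of a 1-D position trace exceeds v_thresh."""
    v = np.abs(np.gradient(pos, t))
    idx = int(np.argmax(v > v_thresh))
    return t[idx] if v[idx] > v_thresh else None

# Hypothetical 1 kHz traces: a saccade at ~150 ms, hand movement later.
t = np.arange(0.0, 1.0, 0.001)
gaze = np.where(t > 0.15, 20.0, 0.0)                  # eye position, deg
hand = 30.0 / (1.0 + np.exp(-(t - 0.40) / 0.01))      # hand position, cm

t_eye = movement_onset(t, gaze, v_thresh=30.0)        # deg/s saccade criterion
t_hand = movement_onset(t, hand, v_thresh=5.0)        # cm/s movement criterion
print(f"eye-hand lead time: {(t_hand - t_eye) * 1000:.0f} ms")
```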

Relevance:

20.00%

Abstract:

Earlier studies showed that the disparity with respect to other visible points could not explain stereoacuity performance, nor could various spatial derivatives of disparity [Glennerster, A., McKee, S. P., & Birch, M. D. (2002). Evidence of surface-based processing of binocular disparity. Current Biology, 12, 825-828; Petrov, Y., & Glennerster, A. (2004). The role of the local reference in stereoscopic detection of depth relief. Vision Research, 44, 367-376.] Two possible cues remain: (i) local changes in disparity gradient or (ii) disparity with respect to an interpolated line drawn through the reference points. Here, we aimed to distinguish between these two cues. Subjects judged, in a two-alternative forced-choice (2AFC) paradigm, whether a target dot was in front of a plane defined by three reference dots or, in other experiments, in front of a line defined by two reference dots. We tested different slants of the reference line or plane and different locations of the target relative to the reference points. For slanted reference lines or planes, stereoacuity changed little as the target position was varied. For judgments relative to a frontoparallel reference line, stereoacuity did vary with target position, but less than would be predicted by disparity gradient change. This provides evidence that disparity with respect to the reference plane is an important cue. We discuss the potential advantages of this measure in generating a representation of surface relief that is invariant to viewpoint transformations. (c) 2006 Elsevier Ltd. All rights reserved.
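
The candidate cue "disparity with respect to the interpolated reference plane" reduces geometrically to a signed point-plane distance. A minimal sketch of that computation (all coordinates hypothetical):

```python
import numpy as np

def depth_relative_to_plane(target, p1, p2, p3):
    """Signed distance of `target` from the plane through p1, p2, p3."""
    n = np.cross(p2 - p1, p3 - p1)      # plane normal from two edge vectors
    n = n / np.linalg.norm(n)
    return np.dot(target - p1, n)

# Hypothetical reference dots defining a slanted plane (units arbitrary).
p1 = np.array([-1.0, -1.0, 0.0])
p2 = np.array([ 1.0, -1.0, 0.2])
p3 = np.array([ 0.0,  1.0, 0.1])
target = np.array([0.0, 0.0, 0.3])

print(f"target depth relative to reference plane: "
      f"{depth_relative_to_plane(target, p1, p2, p3):+.3f}")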

Relevance:

20.00%

Abstract:

A desktop tool for replay and analysis of gaze-enhanced multiparty virtual collaborative sessions is described. We linked three CAVE(TM)-like environments, creating a multiparty collaborative virtual space where avatars are animated with 3D gaze as well as head and hand motions in real time. Log files are recorded for subsequent playback and analysis using the proposed software tool. During replay the user can rotate the viewpoint and navigate in the simulated 3D scene. The playback mechanism relies on multiple distributed log files captured at every site. This structure enables an observer to experience latencies of movement and information transfer for every site, which is important for conversation analysis. Playback uses an event-replay algorithm, modified to allow fast traversal of the scene by selective rendering of nodes, and to simulate fast random access. The tool's analysis module can show each participant's 3D gaze points and areas where gaze has been concentrated.
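
Combining event replay with fast random access is commonly implemented by snapshotting the full state at periodic keyframes, so that a seek replays only the tail of the log rather than the whole recording. A minimal sketch of that idea, using a hypothetical (timestamp, key, value) delta format rather than the tool's actual log layout:

```python
import bisect

class EventReplay:
    """Minimal sketch of event replay with keyframe-based random access.

    Every `key_every` events the full state is snapshotted, so seeking to
    time t replays only the events after the nearest keyframe rather than
    the whole log.
    """

    def __init__(self, events, key_every=100):
        self.events = sorted(events)               # time-ordered state deltas
        self.times = [e[0] for e in self.events]
        self.key_idx, self.key_states = [], []
        state = {}
        for i, (t, k, v) in enumerate(self.events):
            state[k] = v
            if i % key_every == 0:
                self.key_idx.append(i)             # event index of snapshot
                self.key_states.append(dict(state))

    def seek(self, t):
        """Reconstruct the full scene state at time t."""
        last = bisect.bisect_right(self.times, t) - 1   # last event at/before t
        if last < 0:
            return {}
        ki = bisect.bisect_right(self.key_idx, last) - 1
        state = dict(self.key_states[ki])
        for i in range(self.key_idx[ki] + 1, last + 1):
            _, k, v = self.events[i]
            state[k] = v
        return state

# Hypothetical log entries: avatar head and gaze updates from one site.
log = [(0.0, "head", (0, 0)), (0.5, "gaze", (1, 2)), (1.0, "head", (0, 1))]
print(EventReplay(log, key_every=2).seek(0.7))   # {'head': (0, 0), 'gaze': (1, 2)}
```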

Relevance:

20.00%

Abstract:

Participants' eye-gaze is generally not captured or represented in immersive collaborative virtual environment (ICVE) systems. We present EyeCVE, which uses mobile eye-trackers to drive the gaze of each participant's virtual avatar, thus supporting remote mutual eye-contact and awareness of others' gaze in a perceptually unfragmented shared virtual workspace. We detail trials in which participants took part in three-way conferences between remote CAVE(TM) systems linked via EyeCVE. Eye-tracking data were recorded and used to evaluate interaction, confirming the system's support for the use of gaze as a communicational and management resource in multiparty conversational scenarios. We point toward subsequent investigation of eye-tracking in ICVEs for enhanced remote social interaction and analysis.

Relevance:

20.00%

Abstract:

Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations not only to see what each other is doing but to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants, which reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze as well as activity and gesture to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough and often enough to interpret their focus during a multi-way interaction, along with communicating other verbal and non-verbal language. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical and especially temporal characteristics of the system.

Relevance:

20.00%

Abstract:

Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, and they constantly dart to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals and can indicate emotional responses, prior to the viewer becoming aware of them. In this paper we discuss a method of tracking a user's eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
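
Identifying the viewed virtual object from a gaze sample is typically a ray-casting query: intersect the gaze ray with the scene and take the nearest hit. A minimal sketch using bounding spheres as stand-ins for scene objects (names and coordinates hypothetical; a production system would test against actual scene geometry):

```python
import numpy as np

def gazed_object(origin, direction, spheres):
    """Return the name of the nearest bounding sphere hit by the gaze ray."""
    d = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for name, center, r in spheres:
        oc = np.asarray(center, float) - origin
        t = np.dot(oc, d)                   # closest approach along the ray
        if t < 0:
            continue                        # object is behind the viewer
        miss2 = np.dot(oc, oc) - t * t      # squared distance ray-to-center
        if miss2 <= r * r and t < best_t:
            best, best_t = name, t
    return best

# Hypothetical gaze sample from the eye tracker, in world coordinates.
eye = np.array([0.0, 1.6, 0.0])             # eye position (m)
gaze_dir = np.array([0.0, -0.1, -1.0])      # gaze direction
scene = [("table", (0.0, 0.8, -2.0), 0.7), ("lamp", (1.5, 1.5, -3.0), 0.3)]
print(gazed_object(eye, gaze_dir, scene))   # -> 'table'
```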

Relevance:

20.00%

Abstract:

In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with the use of a head-mounted eye tracker. The first experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head gaze being viewed and also with eye- and head-gaze data being displayed. The results from the experiment show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye-tracker would be required. This was examined by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with the subjects being asked to identify where the avatar was looking. This experiment showed that there was no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
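
The binocular (convergence) condition implies a recoverable 3D gaze point: the near-intersection of the two eyes' gaze rays. A minimal sketch of that computation, with hypothetical eye geometry; the actual system's method is not specified in the abstract:

```python
import numpy as np

def vergence_gaze_point(o1, d1, o2, d2):
    """Midpoint of the closest approach of two gaze rays (origin, direction).

    A monocular tracker yields only a direction; with binocular tracking the
    two rays converge and their near-intersection gives a depth estimate.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = np.dot(d1, d1), np.dot(d1, d2), np.dot(d2, d2)
    d, e = np.dot(d1, w), np.dot(d2, w)
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None                  # rays (nearly) parallel: no vergence cue
    t1 = (b * e - c * d) / denom     # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom     # parameter of closest point on ray 2
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2

# Hypothetical eyes 6.4 cm apart, both fixating a point 1 m ahead.
left = np.array([-0.032, 0.0, 0.0])
right = np.array([0.032, 0.0, 0.0])
fix = np.array([0.1, 0.0, 1.0])
print(vergence_gaze_point(left, fix - left, right, fix - right))  # ~ [0.1 0 1]
```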

Relevance:

20.00%

Abstract:

For efficient collaboration between participants, eye gaze is seen as being critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g., AccessGrid) or only approximates it in round table conditions (e.g., life size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, as opposed to approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements being displayed, while the eyes remained static, and with eye-gaze and head movement information being displayed. The results from the experiment show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work that is now being undertaken following the positive results from the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
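
Calibrating raw tracker output into usable gaze coordinates, as described here, is often done by fitting a low-order polynomial map to fixations on a known target grid. A hypothetical least-squares sketch; the abstract does not specify the actual EyeCVE calibration procedure:

```python
import numpy as np

FEATURES = lambda x, y: np.column_stack(
    [np.ones_like(x), x, y, x * y, x**2, y**2])  # biquadratic basis

def fit_calibration(raw, target):
    """Least-squares map from raw eye coords to calibrated gaze coords.

    raw, target: (N, 2) arrays of tracker readings and known target positions.
    """
    A = FEATURES(raw[:, 0], raw[:, 1])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

def apply_calibration(coef, raw):
    return FEATURES(raw[:, 0], raw[:, 1]) @ coef

# Hypothetical 3x3 calibration grid: the subject fixates known points while
# raw pupil coordinates are recorded (distortion here is made up).
targets = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
raw = 0.8 * targets + 0.05 * targets**2 + 0.1
coef = fit_calibration(raw, targets)
print(apply_calibration(coef, raw).round(2))   # recovers the 3x3 grid
```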

Relevance:

20.00%

Abstract:

Cloud optical depth is one of the most poorly observed climate variables. The new “cloud mode” capability in the Aerosol Robotic Network (AERONET) will inexpensively yet dramatically increase cloud optical depth observations in both number and accuracy. Cloud mode optical depth retrievals from AERONET were evaluated at the Atmospheric Radiation Measurement program’s Oklahoma site in sky conditions ranging from broken clouds to overcast. For overcast cases, the 1.5 min average AERONET cloud mode optical depths agreed to within 15% of those from a standard ground‐based flux method. For broken cloud cases, AERONET retrievals also captured rapid variations detected by the microwave radiometer. For a 3 year climatology derived from all nonprecipitating clouds, AERONET monthly mean cloud optical depths are generally larger than cloud radar retrievals because the current cloud mode observation strategy is biased toward measurements of optically thick clouds. This study has demonstrated a new way to enhance the existing AERONET infrastructure to observe cloud optical properties on a global scale.
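
The "agreed to within 15%" statement is a fractional-difference check between co-located retrieval time series. A toy illustration with made-up numbers, only to make the metric concrete:

```python
import numpy as np

# Hypothetical co-located 1.5 min averages of cloud optical depth:
# AERONET cloud mode vs. a ground-based flux method (values illustrative).
aeronet = np.array([18.2, 25.4, 31.0, 40.5, 22.8])
flux = np.array([17.0, 27.1, 29.5, 44.0, 24.0])

frac_diff = np.abs(aeronet - flux) / flux
print(f"mean fractional difference: {frac_diff.mean():.1%}")
print(f"all within 15%? {bool((frac_diff <= 0.15).all())}")
```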