48 results for eye-tracker
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar, which had previously been recorded from a user wearing a head-mounted eye tracker. The first experiment assessed the difference between users' abilities to judge which objects an avatar is looking at when only head gaze is displayed and when combined eye- and head-gaze data are displayed. The results from the experiment show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye-tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with subjects again asked to identify where the avatar was looking; it showed no difference between the two viewing conditions in subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
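The second of these experiments turns on binocular convergence, i.e. recovering a fixation point from two eye directions rather than one. The paper does not give its implementation; as an illustrative Python sketch only (the function, variable names and simple ray geometry are assumptions, not the authors' method), a fixation point can be estimated as the midpoint of closest approach between the two gaze rays:

import numpy as np

def fixation_from_convergence(left_origin, left_dir, right_origin, right_dir):
    """Estimate a 3D fixation point as the midpoint of closest approach
    between two gaze rays (one per eye). Inputs are 3-vectors; the
    direction vectors need not be normalised."""
    o1, d1 = np.asarray(left_origin, float), np.asarray(left_dir, float)
    o2, d2 = np.asarray(right_origin, float), np.asarray(right_dir, float)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # rays (almost) parallel: no usable convergence
        return None
    t1 = (b * e - c * d) / denom   # parameter along the left-eye ray
    t2 = (a * e - b * d) / denom   # parameter along the right-eye ray
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return (p1 + p2) / 2.0

# Example: eyes 6.4 cm apart, both verging on a point roughly 1 m ahead.
target = np.array([0.0, 0.0, 1.0])
left_eye, right_eye = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
print(fixation_from_convergence(left_eye, target - left_eye,
                                right_eye, target - right_eye))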
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged when tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements displayed, while the eyes remained static, and with both eye-gaze and head-movement information displayed. The results from the experiment show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results from the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
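The abstract mentions software developed to turn raw eye-tracker output into calibrated gaze for the virtual world. As a hedged illustration only (not the authors' pipeline; the polynomial form, calibration grid and names below are assumptions), one common approach is to fit a low-order polynomial mapping from pupil coordinates to gaze angles using fixations on known calibration targets:

import numpy as np

def fit_gaze_calibration(pupil_xy, target_angles):
    """Fit a second-order polynomial mapping from raw pupil coordinates
    (N x 2) to known gaze angles (N x 2, e.g. azimuth/elevation in degrees)
    recorded while the user fixates calibration points."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    # Design matrix with constant, linear, interaction and quadratic terms.
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, target_angles, rcond=None)
    return coeffs

def predict_gaze(pupil_xy, coeffs):
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return A @ coeffs

# Example with synthetic calibration data (9-point grid plus a little noise).
rng = np.random.default_rng(0)
true_angles = np.array([[ax, ay] for ax in (-15, 0, 15) for ay in (-10, 0, 10)], float)
pupil = true_angles * [0.02, 0.025] + 0.5 + rng.normal(0, 0.001, true_angles.shape)
coeffs = fit_gaze_calibration(pupil, true_angles)
print(predict_gaze(pupil, coeffs).round(1))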
Abstract:
Objective: Early mother-infant interactions are impaired in the context of infant cleft lip, and are associated with adverse child psychological outcomes, but the nature of these interaction difficulties is not yet fully understood. The aim of this study was to explore adult gaze behaviour and cuteness perception, which are particularly important during early social exchanges, in response to infants with cleft lip, in order to investigate potential foundations for the interaction difficulties seen in this population. Methods: Using an eye-tracker, eye movements were recorded as adult participants viewed images of infant faces with and without cleft lip. Participants also rated each infant on a scale of cuteness. Results: Participants fixated significantly longer on the mouths of infants with cleft lip, which occurred at the expense of fixation on the eyes. Severity of cleft lip was associated with the strength of fixation bias, with participants looking even longer at the mouths of infants with the most severe clefts. Infants with cleft lip were rated as significantly less cute than unaffected infants. Men rated infants as less cute than women overall, but gave particularly low ratings to infants with cleft lip. Conclusions: Results demonstrate that the limited disturbance to infant facial configuration caused by cleft lip can significantly alter adult gaze patterns and cuteness perception. Our findings could have important implications for early interactions, and may help in the development of interventions to foster healthy development in infants with cleft lip.
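A minimal sketch of the kind of area-of-interest (AOI) analysis described above, summing fixation durations that land on the eye and mouth regions of a face image. The AOI rectangles, field names and numbers are illustrative assumptions, not the study's stimuli or analysis code:

# Illustrative only: dwell time per rectangular AOI for one face image.
AOIS = {                       # (x_min, y_min, x_max, y_max) in screen pixels, made-up values
    "eyes":  (300, 200, 700, 320),
    "mouth": (380, 480, 620, 600),
}

def dwell_times(fixations, aois=AOIS):
    """fixations: iterable of dicts with 'x', 'y' (pixels) and 'duration' (ms)."""
    totals = {name: 0.0 for name in aois}
    for f in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1:
                totals[name] += f["duration"]
                break
    return totals

fixations = [
    {"x": 500, "y": 250, "duration": 340},   # lands on the eyes
    {"x": 420, "y": 530, "duration": 610},   # lands on the mouth
    {"x": 100, "y": 100, "duration": 150},   # elsewhere on the image
]
print(dwell_times(fixations))   # {'eyes': 340.0, 'mouth': 610.0}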
Abstract:
The amygdala was more responsive to fearful (larger) eye whites than to happy (smaller) eye whites presented in a masking paradigm that mitigated subjects' awareness of their presence and aberrant nature. These data demonstrate that the amygdala is responsive to isolated elements of facial expressions, in this case the eye whites, even with limited awareness of their presence.
Abstract:
This paper concerns the prospective implementation of the proposed 'corporate killing' offence. These proposals suggested that the Health and Safety Executive (HSE), the body currently responsible for regulating work-related health and safety issues, should handle cases in which a 'corporate killing' charge is a possibility. Relatively little attention has been paid to this issue of implementation. An empirical investigation was undertaken to assess the compatibility of the HSE's methodology and enforcement philosophy with the new offence. It was found that inspectors categorize themselves as enforcers of criminal law, see enforcement action as valuable and support the new offence, but disagree over its use. They also broadly supported the HSE taking responsibility for the new offence. This suggests that 'corporate killing' may not necessarily be incompatible with the HSE's modus operandi, and there may be positive reasons for giving the HSE this responsibility.
Abstract:
Growing pot poinsettia and similar crops involves careful crop monitoring and management to ensure that height specifications are met. Graphical tracking represents a target-driven approach to decision support with simple interpretation. HDC (Horticultural Development Council) Poinsettia Tracker implements a graphical track based on the Generalised Logistic Curve, similar to that of other tracking packages. Any set of curve parameters can be used to track crop progress. However, graphical tracks must be expected to be site- and cultivar-specific. By providing a simple curve-fitting function, growers can easily develop their own site- and variety-specific ideal tracks based on past records, with increasing quality as more seasons' data are added. (C) 2009 Elsevier B.V. All rights reserved.
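As a sketch of the curve-fitting idea described above (assuming a standard Richards/generalised logistic parameterisation and SciPy's curve_fit; the parameter names and height records are illustrative, not HDC data or the package's code), a past season's height records could be fitted like this:

import numpy as np
from scipy.optimize import curve_fit

def generalised_logistic(t, A, K, B, M, nu):
    """Richards / generalised logistic curve for height against days after potting.
    A = lower asymptote, K = upper asymptote, B = growth rate,
    M = time of maximum growth, nu = asymmetry parameter."""
    return A + (K - A) / (1.0 + np.exp(-B * (t - M))) ** (1.0 / nu)

# Past-season height records (days, cm) -- made-up numbers for illustration.
days    = np.array([ 0,  7, 14, 21, 28, 35, 42, 49, 56, 63, 70], float)
heights = np.array([10, 11, 13, 16, 21, 27, 33, 38, 41, 43, 44], float)

p0 = [10, 45, 0.1, 35, 1.0]                      # rough starting values
params, _ = curve_fit(generalised_logistic, days, heights, p0=p0, maxfev=20000)
A, K, B, M, nu = params
print(f"fitted upper asymptote (final height) ~ {K:.1f} cm")

# The fitted curve can then be drawn as the season's graphical track, with the
# measured crop height plotted against it week by week.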
Abstract:
This paper presents the design evolution process of a composite leaf spring for freight rail applications. Three designs of eye-end attachment for composite leaf springs are described. The material used is glass fibre reinforced polyester. Static testing and finite element analysis have been carried out to obtain the characteristics of the spring. Load-deflection curves and strain measurement as a function of load for the three designs tested have been plotted for comparison with FEA-predicted values. The main concern associated with the first design is delamination failure at the interface between the fibres that pass around the eye and the spring body, even though the design can withstand a 150 kN static proof load and one million cycles of fatigue load. FEA results confirmed that there is a high interlaminar shear stress concentration in that region. The second design features an additional transverse bandage around the region prone to delamination. Delamination was contained but not completely prevented. The third design overcomes the problem by ending the fibres at the end of the eye section.
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity for the DCD children to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of DCD children in this study, there was no evidence of a problem in speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
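As a rough illustration of the eye-hand lead measure discussed above (the study's exact definition and data format are not given here, so the event times, units and function below are assumptions):

# Illustrative only: one way to quantify eye-hand lead for a double-step trial,
# given the time gaze lands on each target and the time the corresponding hand
# movement starts (both in ms from trial onset).
def eye_hand_lead(eye_landing_times, hand_onset_times):
    """Return per-target lead times (hand onset minus eye landing).
    Positive values mean the eye reached the target before the hand set off."""
    return [hand - eye for eye, hand in zip(eye_landing_times, hand_onset_times)]

# Example trial with two sequential targets.
eye_landings = [250, 620]     # gaze arrives on target 1, then target 2
hand_onsets  = [310, 840]     # hand movement towards target 1, then target 2
print(eye_hand_lead(eye_landings, hand_onsets))   # [60, 220] ms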
Abstract:
Eye-movements have long been considered a problem when trying to understand the visual control of locomotion. They transform the retinal image from a simple expanding pattern of moving texture elements (pure optic flow) into a complex combination of translation and rotation components (retinal flow). In this article we investigate whether there are measurable advantages to having an active free gaze, over a static gaze or tracking gaze, when steering along a winding path. We also examine patterns of free gaze behavior to determine preferred gaze strategies during active locomotion. Participants were asked to steer along a computer-simulated textured roadway with free gaze, fixed gaze, or gaze tracking the center of the roadway. Deviation of position from the center of the road was recorded along with their point of gaze. Visually tracking the middle of the road produced smaller steering errors than fixed gaze did. Participants performed best at the steering task when allowed to sample naturally from the road ahead with free gaze. There was some variation in the gaze strategies used, but sampling was predominantly of areas proximal to the center of the road. These results diverge from traditional models of flow analysis.
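The translation/rotation decomposition referred to above can be made concrete with the standard pinhole motion-field equations (in the style of Longuet-Higgins and Prazdny; sign conventions vary between texts, and this sketch is illustrative rather than the authors' model):

import numpy as np

def retinal_flow(x, y, Z, T, Omega):
    """Image-plane flow (u, v) at normalised image coordinates (x, y) for a
    point at depth Z, observer translation T = (Tx, Ty, Tz) and eye rotation
    Omega = (Ox, Oy, Oz). Focal length taken as 1."""
    Tx, Ty, Tz = T
    Ox, Oy, Oz = Omega
    # Translational part: depends on depth Z (this is the "pure optic flow").
    u_t = (-Tx + x * Tz) / Z
    v_t = (-Ty + y * Tz) / Z
    # Rotational part added by the eye movement: independent of depth.
    u_r = x * y * Ox - (1 + x ** 2) * Oy + y * Oz
    v_r = (1 + y ** 2) * Ox - x * y * Oy - x * Oz
    return u_t + u_r, v_t + v_r

# Forward locomotion at 1 m/s while the eye makes a small tracking (yaw) rotation:
# the depth-independent rotation term reshapes the expanding flow pattern.
print(retinal_flow(x=0.2, y=0.1, Z=5.0, T=(0.0, 0.0, 1.0), Omega=(0.0, 0.05, 0.0)))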
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
When two people discuss something they can see in front of them, what is the relationship between their eye movements? We recorded the gaze of pairs of subjects engaged in live, spontaneous dialogue. Cross-recurrence analysis revealed a coupling between the eye movements of the two conversants. In the first study, we found their eye movements were coupled across several seconds. In the second, we found that this coupling increased if they both heard the same background information prior to their conversation. These results provide a direct quantification of joint attention during unscripted conversation and show that it is influenced by knowledge in the common ground.
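A minimal sketch of categorical cross-recurrence between two gaze streams, in the spirit of the analysis named above (the sampling, labelling scheme and function below are assumptions, not the study's pipeline): for each time lag, count how often one conversant's gaze falls on the same scene region the other looked at lag samples later.

import numpy as np

def cross_recurrence_by_lag(gaze_a, gaze_b, max_lag):
    """gaze_a, gaze_b: equal-length sequences of categorical gaze labels
    (e.g. which scene region each person is fixating, sampled at a fixed rate).
    Returns, for each lag in [-max_lag, max_lag], the proportion of samples at
    which A's gaze at time t matched B's gaze at time t + lag."""
    a, b = np.asarray(gaze_a), np.asarray(gaze_b)
    n = len(a)
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            matches = a[: n - lag] == b[lag:]
        else:
            matches = a[-lag:] == b[: n + lag]
        profile[lag] = float(np.mean(matches))
    return profile

# Toy example: B tends to look where A looked two samples earlier.
a = np.array([1, 1, 2, 2, 3, 3, 1, 1, 2, 2, 3, 3])
b = np.roll(a, 2)
profile = cross_recurrence_by_lag(a, b, max_lag=3)
print(max(profile, key=profile.get))   # peak recurrence at lag = 2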
Abstract:
One of the most common decisions we make is where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence supporting saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task became easier as the evidence supporting target choice increased, reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target, reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation increased with the amount of sensory evidence supporting target choice. This shows that decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade. This would seem to extend the notion of the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options.
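One common way to quantify the trajectory deviation reported above is the maximum signed perpendicular deviation of the saccade path from the straight line joining its start and end points; the sketch below assumes that measure and synthetic data, and is not the paper's analysis code:

import numpy as np

def max_signed_deviation(xy):
    """xy: (N, 2) array of gaze samples for one saccade, from start to end.
    Returns the largest signed perpendicular distance of the trajectory from
    the straight start-to-end line; the sign indicates which side of the line
    the saccade curved towards."""
    xy = np.asarray(xy, dtype=float)
    start, end = xy[0], xy[-1]
    line = end - start
    length = np.linalg.norm(line)
    d = xy - start
    # 2D cross product of the line direction with each sample gives the signed
    # perpendicular distance once divided by the line length.
    deviations = (line[0] * d[:, 1] - line[1] * d[:, 0]) / length
    return deviations[np.argmax(np.abs(deviations))]

# Toy saccade that bows slightly to one side on its way to the target.
t = np.linspace(0, 1, 50)
trajectory = np.column_stack([t * 10.0, 0.6 * np.sin(np.pi * t)])
print(round(float(max_signed_deviation(trajectory)), 2))   # ~0.6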
Abstract:
Participants' eye-gaze is generally not captured or represented in immersive collaborative virtual environment (ICVE) systems. We present EyeCVE, which uses mobile eye-trackers to drive the gaze of each participant's virtual avatar, thus supporting remote mutual eye-contact and awareness of others' gaze in a perceptually unfragmented shared virtual workspace. We detail trials in which participants took part in three-way conferences between remote CAVE(TM) systems linked via EyeCVE. Eye-tracking data was recorded and used to evaluate interaction, confirming the system's support for the use of gaze as a communicational and management resource in multiparty conversational scenarios. We point toward subsequent investigation of eye-tracking in ICVEs for enhanced remote social interaction and analysis.