123 results for Eye location.
Abstract:
The antioxidant activity of hydroxytyrosol, hydroxytyrosol acetate, oleuropein, 3,4-dihydroxyphenylethanol-elenolic acid (3,4-DHPEA-EA) and 3,4-dihydroxyphenylethanol-elenolic acid dialdehyde (3,4-DHPEA-EDA) towards oxidation initiated by 2,2'-azobis(2-amidinopropane) hydrochloride in a soybean phospholipid liposome system was studied. The antioxidant activity of these olive oil phenols was similar, and the duration of the lag phase was almost twice that of alpha-tocopherol. Trolox®, a water-soluble analogue of alpha-tocopherol, showed the worst antioxidant activity. However, oxidation before the end of the lag phase was inhibited less effectively by the olive oil phenols than by alpha-tocopherol and Trolox®. Synergistic effects (11-20% increase in lag phase) were observed in the antioxidant activity of combinations of alpha-tocopherol with olive oil phenols, both with and without ascorbic acid. Fluorescence anisotropy of probes and fluorescence quenching studies showed that the olive oil phenols did not penetrate into the membrane, but their effectiveness as antioxidants showed they were associated with the surface of the phospholipid bilayer. (C) 2003 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
An important step in liposome characterization is to determine the location of a drug within the liposome. This work therefore investigated the interaction of dipalmitoylphosphatidylcholine liposomes with drugs of varied water solubility, polar surface area (PSA) and partition coefficient using high-sensitivity differential scanning calorimetry. Lipophilic estradiol (ES) interacted most strongly with the acyl chains of the lipid membrane, followed by the somewhat polar 5-fluorouracil (5-FU). Strongly hydrophilic mannitol (MAN) showed no evidence of interaction, but the water-soluble polymers inulin (IN) and an antisense oligonucleotide (OLG), which have very high PSAs, interacted with the lipid head groups. Accordingly, the drugs could be classified as: hydrophilic drugs situated in the aqueous core, which may interact with the head groups; drugs located at the water-bilayer interface with some degree of penetration into the lipid bilayer; and lipophilic drugs constrained within the bilayer. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between the typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity for the DCD children to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of DCD children in this study, there was no evidence of a problem in the speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
Abstract:
Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback to guide the hand to the target (feedback loop); shifting gaze generates ocular proprioception, which can be used to update a movement (feedback-feedforward); or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback about eye-in-head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye-movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.
Abstract:
Eye movements have long been considered a problem when trying to understand the visual control of locomotion. They transform the retinal image from a simple expanding pattern of moving texture elements (pure optic flow) into a complex combination of translation and rotation components (retinal flow). In this article we investigate whether there are measurable advantages to having an active free gaze, over a static gaze or a tracking gaze, when steering along a winding path. We also examine patterns of free-gaze behavior to determine preferred gaze strategies during active locomotion. Participants were asked to steer along a computer-simulated textured roadway with free gaze, fixed gaze, or gaze tracking the center of the roadway. Deviation of position from the center of the road was recorded along with their point of gaze. It was found that visually tracking the middle of the road produced smaller steering errors than fixed gaze. Participants performed best at the steering task when allowed to sample naturally from the road ahead with free gaze. There was some variation in the gaze strategies used, but sampling was predominantly of areas proximal to the center of the road. These results diverge from traditional models of flow analysis.
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without it (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
When two people discuss something they can see in front of them, what is the relationship between their eye movements? We recorded the gaze of pairs of subjects engaged in live, spontaneous dialogue. Cross-recurrence analysis revealed a coupling between the eye movements of the two conversants. In the first study, we found their eye movements were coupled across several seconds. In the second, we found that this coupling increased if they both heard the same background information prior to their conversation. These results provide a direct quantification of joint attention during unscripted conversation and show that it is influenced by knowledge in the common ground.
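For readers unfamiliar with cross-recurrence analysis, the following is a minimal sketch of the idea as applied to gaze: two simultaneously recorded gaze streams are compared at a range of temporal lags, and the lag with the highest match rate indicates how far one conversant's gaze trails the other's. The region coding, sampling rate, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of cross-recurrence between two gaze streams, each
# coded as the screen region currently fixated, sampled at a fixed rate.
import numpy as np

def cross_recurrence(gaze_a, gaze_b, max_lag):
    """Match rate at each lag; positive lag means B's gaze trails A's."""
    a, b = np.asarray(gaze_a), np.asarray(gaze_b)
    rates = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]      # compare A(t) with B(t+lag)
        else:
            x, y = a[-lag:], b[:len(b) + lag]     # compare A(t+|lag|) with B(t)
        rates.append(float(np.mean(x == y)))
    return rates

# Example: 6-region gaze streams at an assumed 30 Hz sampling rate.
rng = np.random.default_rng(0)
a = rng.integers(0, 6, size=300)        # speaker A's fixated region
b = np.roll(a, 15)                      # B re-fixates A's region 0.5 s later
lags = cross_recurrence(a, b, max_lag=60)
print(np.argmax(lags) - 60)             # peak at +15 samples = 0.5 s
```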
Abstract:
One of the most common decisions we make is where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence supporting saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task was found to become easier as the evidence supporting target choice increased; this was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target, reflecting the choice of the target and the inhibition of the non-target. The extent of this deviation increased with the amount of sensory evidence supporting target choice. This shows that the decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade, and would seem to extend the processes controlling saccade metrics beyond a competition between visual stimuli to one that also reflects a competition between options.
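The evidence manipulation described here is a motion-coherence stimulus: on each frame a fixed proportion of dots steps towards the chosen target while the remainder move in random directions. The sketch below shows one standard way to implement such a frame update; all parameter values and names are assumptions for illustration, not the study's actual stimulus code.

```python
# Illustrative frame update for a motion-coherence dot stimulus.
import numpy as np

def step_dots(dots, coherence, target_angle, speed=2.0, rng=None):
    """Advance an (n, 2) array of dot positions by one frame: a `coherence`
    fraction of dots moves toward the target, the rest move randomly."""
    rng = rng or np.random.default_rng()
    n = len(dots)
    angles = rng.uniform(0.0, 2.0 * np.pi, n)         # noise dots: random headings
    angles[rng.random(n) < coherence] = target_angle  # signal dots: toward target
    step = np.column_stack([np.cos(angles), np.sin(angles)])
    return dots + speed * step

rng = np.random.default_rng(1)
dots = rng.uniform(0, 200, size=(100, 2))             # initial dot field
dots = step_dots(dots, coherence=0.5, target_angle=0.0, rng=rng)  # drift right
```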
Abstract:
Individuals with Williams syndrome (WS) display poor visuo-spatial cognition relative to verbal abilities. Furthermore, whilst perceptual abilities are delayed, visuo-spatial construction abilities are comparatively even weaker and are characterised by a local bias. We investigated whether this differentiation in visuo-spatial abilities can be explained by a deficit in coding spatial location in WS. This can be measured by assessing participants' understanding of the spatial relations between objects within a visual scene. Coordinate and categorical spatial relations were investigated independently in four participant groups: 21 individuals with WS; 21 typically developing (TD) children matched for non-verbal ability; 20 typically developing controls of lower non-verbal ability; and 21 adults. A third task measured understanding of visual colour relations. Results indicated, first, that the comprehension of categorical and coordinate spatial relations is equally poor in WS and, second, that the comprehension of visual relations is at an equivalent level to spatial relational understanding in this population. These results can explain the difference in performance on visuo-spatial perception and construction tasks in WS. In addition, both the WS and control groups displayed response biases in the spatial tasks; however, the direction of bias differed across the groups. This finding is explored in relation to current theories of spatial location coding. (c) 2005 Elsevier Inc. All rights reserved.
Abstract:
Participants' eye gaze is generally not captured or represented in immersive collaborative virtual environment (ICVE) systems. We present EyeCVE, which uses mobile eye-trackers to drive the gaze of each participant's virtual avatar, thus supporting remote mutual eye-contact and awareness of others' gaze in a perceptually unfragmented shared virtual workspace. We detail trials in which participants took part in three-way conferences between remote CAVE™ systems linked via EyeCVE. Eye-tracking data were recorded and used to evaluate interaction, confirming the system's support for the use of gaze as a communicational and management resource in multiparty conversational scenarios. We point toward subsequent investigation of eye-tracking in ICVEs for enhanced remote social interaction and analysis.
Abstract:
Garment information tracking is required for clean-room garment management. In this paper, we present a robust camera-based system that implements Optical Character Recognition (OCR) techniques for garment label recognition. In the system, a camera is used for image capture; an adaptive thresholding algorithm is employed to generate binary images; Connected Component Labelling (CCL) is then adopted for object detection in the binary image as part of finding the ROI (Region of Interest); Artificial Neural Networks (ANNs) with the BP (Back Propagation) learning algorithm are used for digit recognition; and finally the result is verified against a system database. Testing shows that the system is capable of coping with variations in lighting, digit twisting, background complexity, and font orientation, and that its digit recognition rate meets the design requirement. It achieved real-time and error-free garment information tracking during testing.
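As a rough illustration of the pipeline stages named above (adaptive thresholding, connected-component labelling, then ANN digit classification), here is a minimal sketch using OpenCV, with a pre-trained classifier standing in for the BP-trained ANN. All thresholds, size filters, and names are assumptions rather than the paper's implementation.

```python
# Sketch of the label-recognition pipeline: adaptive thresholding ->
# connected-component labelling -> digit classification. Parameter
# values and the classifier interface are illustrative assumptions.
import cv2
import numpy as np

def find_digit_regions(gray):
    """Binarise the image and return bounding boxes of digit-like blobs."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if 50 < area < 5000:                  # assumed size filter for digits
            boxes.append((x, y, w, h))
    return binary, sorted(boxes)              # left-to-right reading order

def recognise_label(gray, classifier):
    """Run each detected region through a pre-trained digit classifier
    (e.g. an MLP trained with backpropagation, as the paper describes)."""
    binary, boxes = find_digit_regions(gray)
    digits = []
    for x, y, w, h in boxes:
        patch = cv2.resize(binary[y:y + h, x:x + w], (16, 16)).ravel() / 255.0
        digits.append(int(classifier.predict([patch])[0]))
    return digits  # would then be checked against the garment database
```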
Abstract:
Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations not only to see what each other are doing but to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze, as well as activity and gesture, to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough and often enough to interpret their focus during a multi-way interaction, along with communicating other verbal and non-verbal language. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical and especially the temporal characteristics of the system.
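The abstract gives no implementation details, but the temporal constraint it describes amounts to streaming timestamped gaze samples and refusing to animate ones that arrive too late. Below is a minimal sketch of that bookkeeping; the field names, the set_eye_direction avatar interface, and the 100 ms budget are all hypothetical.

```python
# Sketch of temporal bookkeeping for remote gaze reproduction: samples
# are timestamped at the tracker and stale ones are dropped rather than
# animated late. All names and the budget below are assumptions.
import time
from dataclasses import dataclass

@dataclass
class GazeSample:
    participant: str
    direction: tuple        # unit gaze vector in the shared reference frame
    captured_at: float      # tracker-side timestamp, in seconds

MAX_AGE = 0.100             # assumed end-to-end budget for believable gaze

def apply_gaze(avatars, sample, now=None):
    """Rotate the remote avatar's eyes only while the sample is fresh.
    `avatars` maps participant ids to objects exposing a hypothetical
    set_eye_direction() method."""
    now = time.time() if now is None else now
    if now - sample.captured_at <= MAX_AGE:
        avatars[sample.participant].set_eye_direction(sample.direction)
        return True
    return False            # too old: late eye motion reads as unnatural
```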
Abstract:
Visually impaired people have a very different view of the world, such that environments that seem simple to 'normally' sighted people can be difficult for people with visual impairments to access and move around. This problem can be hard for people with 'normal' vision to fully comprehend, even when guidelines for inclusive design are available. This paper investigates ways in which image processing techniques can be used to simulate the characteristics of a number of common visual impairments in order to provide planners, designers, and architects with a visual representation of how people with visual impairments view their environment, thereby promoting greater understanding of the issues, the creation of more accessible buildings and public spaces, and increased accessibility for visually impaired people in everyday situations.
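The abstract does not name its specific operators, but two standard image-processing simulations of common impairments are easy to sketch: Gaussian blur for reduced acuity, and a radial mask for peripheral field loss. The functions below are illustrative assumptions along those lines, not the paper's implementation.

```python
# Illustrative simulations of two common visual impairments.
import cv2
import numpy as np

def reduced_acuity(img, sigma=8):
    """Blur the whole scene to mimic a loss of visual acuity."""
    return cv2.GaussianBlur(img, (0, 0), sigma)   # kernel size derived from sigma

def tunnel_vision(img, radius_frac=0.25):
    """Keep a central circular field and black out the periphery,
    approximating severe peripheral field loss (e.g. advanced glaucoma)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(xx - w / 2.0, yy - h / 2.0)
    mask = (dist < radius_frac * min(h, w)).astype(img.dtype)
    return img * (mask[..., None] if img.ndim == 3 else mask)
```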
Abstract:
Our eyes are input sensors that provide our brains with streams of visual data. They have evolved to be extremely efficient, constantly darting to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals and can indicate emotional responses before the viewer becomes aware of them. In this paper we discuss a method of tracking a user's eye movements and using these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
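Identifying the viewed virtual object from a gaze estimate typically reduces to casting a ray from the eye along the gaze direction and reporting the nearest object it intersects. A minimal sketch follows, using spherical bounding volumes as an assumed simplification; the scene contents are invented for the example.

```python
# Sketch of gaze-based object identification via ray casting against
# spherical bounding volumes (an assumed simplification).
import numpy as np

def gazed_object(eye, gaze_dir, objects):
    """objects: list of (name, centre, radius). Returns nearest hit or None."""
    d = np.asarray(gaze_dir, float)
    d /= np.linalg.norm(d)                       # unit gaze direction
    best = None
    for name, centre, radius in objects:
        oc = np.asarray(centre, float) - np.asarray(eye, float)
        t = oc.dot(d)                            # closest approach along the ray
        if t > 0 and np.linalg.norm(oc - t * d) <= radius:
            if best is None or t < best[0]:      # keep the nearest intersection
                best = (t, name)
    return best[1] if best else None

scene = [("teapot", (0, 0, 5), 0.5), ("lamp", (2, 1, 8), 0.7)]
print(gazed_object(eye=(0, 0, 0), gaze_dir=(0, 0, 1), objects=scene))  # teapot
```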