Abstract:
Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations not only to see what each other is doing but to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze, as well as activity and gesture, to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough, and often enough, to interpret their focus during a multi-way interaction, alongside communicating other verbal and non-verbal language. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical, and especially temporal, characteristics of the system.
Abstract:
Visually impaired people have a very different view of the world: seemingly simple environments, as seen by 'normally' sighted people, can be difficult for people with visual impairments to access and move around. This is a problem that can be hard for people with 'normal' vision to fully comprehend, even when guidelines for inclusive design are available. This paper investigates ways in which image processing techniques can be used to simulate the characteristics of a number of common visual impairments in order to provide planners, designers and architects with a visual representation of how people with visual impairments view their environment, thereby promoting greater understanding of the issues, the creation of more accessible buildings and public spaces, and increased accessibility for visually impaired people in everyday situations.
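As an illustrative sketch of the kind of image-processing simulation described above (not the paper's actual technique), reduced visual acuity can be approximated by blurring a scene image. A plain-Python box blur over a grayscale image held as a 2D list; the function name and blur model are assumptions for illustration only:

```python
def box_blur(gray, radius):
    """Approximate reduced visual acuity by box-blurring a grayscale
    image, given as a 2D list of luminance values (illustrative model,
    not the paper's published technique)."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # Average over the (2*radius+1)^2 window, clipped at borders.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += gray[yy][xx]
                        count += 1
            out[y][x] = total / count
    return out
```

A larger radius models a more severe acuity loss; other impairments (field loss, contrast sensitivity) would need different per-pixel transforms.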
Abstract:
Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, constantly darting to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals, and can indicate emotional responses before the viewer becomes aware of them. In this paper we discuss a method of tracking a user's eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
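The object-identification step described above can be sketched as a gaze-ray intersection test: cast a ray along the tracked gaze direction and report the nearest object it hits. The `gazed_object` helper and the bounding-sphere object model are illustrative assumptions (the ray direction is assumed to be unit length), not the authors' tool:

```python
import math

def gazed_object(origin, direction, objects):
    """Return the name of the nearest object whose bounding sphere the
    gaze ray hits, or None. objects: list of (name, center, radius).
    direction must be a unit vector."""
    best_name, best_t = None, math.inf
    for name, center, radius in objects:
        oc = [c - o for c, o in zip(center, origin)]
        t = sum(a * b for a, b in zip(oc, direction))  # projection onto ray
        if t < 0:
            continue  # object is behind the viewer
        closest = [o + t * d for o, d in zip(origin, direction)]
        dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
        if dist2 <= radius * radius and t < best_t:
            best_name, best_t = name, t
    return best_name
```

A production system would intersect against mesh geometry or use GPU picking, but the nearest-hit logic is the same.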
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar, which had previously been recorded from a user wearing a head-mounted eye tracker. The first experiment assessed the difference between users' abilities to judge what objects an avatar is looking at with only head gaze displayed versus with both eye- and head-gaze data displayed. The results show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment examined the effects of stereo and mono viewing of the scene, with the subjects again asked to identify where the avatar was looking.
This experiment showed no difference between the stereo and mono conditions in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment, and some preliminary results from the use of such a system.
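The binocular convergence examined in the second experiment relates to fixation depth by simple triangulation: the two gaze rays meet closer to the viewer as the vergence angle grows. A sketch under the assumption of symmetric vergence on the midline; the function and its parameters are illustrative, not the experimental apparatus:

```python
import math

def fixation_distance(ipd_m, vergence_rad):
    """Distance (m) to a fixation point on the midline, from the
    inter-pupillary distance and the convergence angle between the two
    eyes' gaze rays (symmetric-vergence geometry, for illustration)."""
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)
```

With this geometry, a near target produces a large vergence angle and a small fixation distance, which is the depth cue a binocular tracker can exploit and a monocular one cannot.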
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements displayed, while the eyes remained static, and with both eye-gaze and head-movement information displayed. The results show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze, and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
Accommodation is considered to be a symmetrical response and to be driven by the least ametropic and nonamblyopic eye in anisometropia. We report the case of a 4-year-old child with anisometropic amblyopia who accommodates asymmetrically, reliably demonstrating normal accommodation in the nonamblyopic eye and antiaccommodation of the amblyopic eye to near targets. The abnormal accommodation of the amblyopic eye remained largely unchanged during 7 subsequent testing sessions undertaken over the course of therapy. We suggest that a congenital dysinnervation syndrome may result in relaxation of accommodation in relation to near cues and might be a hitherto unconsidered additional etiological factor in anisometropic amblyopia.
Abstract:
A cause and effect relationship between glucagon-like peptide 1 (7, 36) amide (GLP-1) and cholecystokinin (CCK) and DMI regulation has not been established in ruminants. Three randomized complete block experiments were conducted to determine the effect of feeding fat or infusing GLP-1 or CCK intravenously on DMI, nutrient digestibility, and Cr rate of passage (using Cr₂O₃ as a marker) in wethers. A total of 18 Targhee × Hampshire wethers (36.5 ± 2.5 kg of BW) were used, and each experiment consisted of four 21-d periods (14 d for adaptation and 7 d for infusion and sampling). Wethers allotted to the control treatments served as the controls for all 3 experiments; experiments were performed simultaneously. The basal diet was 60% concentrate and 40% forage. In Exp. 1, treatments were the control (0% added fat) and addition of 4 or 6% Ca salts of palm oil fatty acids (DM basis). Treatments in Exp. 2 and 3 were the control and 3 jugular vein infusion dosages of GLP-1 (0.052, 0.103, or 0.155 µg·kg BW⁻¹·d⁻¹) or CCK (0.069, 0.138, or 0.207 µg·kg BW⁻¹·d⁻¹), respectively. Increases in plasma GLP-1 and CCK concentrations during hormone infusions were comparable with increases observed when increasing amounts of fat were fed. Feeding fat and infusion of GLP-1 tended (linear, P = 0.12; quadratic, P = 0.13) to decrease DMI. Infusion of CCK did not affect (P > 0.21) DMI. Retention time of Cr in the total gastrointestinal tract decreased (linear, P < 0.01) when fat was fed, but was not affected by GLP-1 or CCK infusion. In conclusion, jugular vein infusion produced similar plasma CCK and GLP-1 concentrations as observed when fat was fed. The effects of feeding fat on DMI may be partially regulated by plasma concentration of GLP-1, but are not likely due solely to changes in a single hormone concentration.
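The dosage units above (µg per kg of body weight per day) scale to an absolute daily amount per animal by simple multiplication; a hypothetical helper, purely to make the arithmetic explicit:

```python
def daily_dose_ug(dose_ug_per_kg_bw, bw_kg):
    """Scale a dosage expressed in ug per kg of body weight per day to
    the absolute daily amount (ug) for one animal."""
    return dose_ug_per_kg_bw * bw_kg

# The middle GLP-1 dosage at the study's mean BW:
# 0.103 ug/kg BW/d for a 36.5 kg wether is about 3.76 ug per day.
```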
Abstract:
This paper describes the design, implementation and testing of a high speed controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general purpose parallel processors. The servo-controller is described in detail and higher level gaze and vision constructs outlined. The paper gives performance figures gained both from mechanical tests on the platform alone, and from closed loop tests on the entire system using visual feedback from a feature detector.
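The servo-controller is only outlined in the abstract above, but platforms of this kind are commonly closed around a positional PID loop on each motor axis. A generic sketch with illustrative gains, not the paper's controller:

```python
class PID:
    """Minimal positional PID loop of the kind a gaze servo-controller
    might use per axis; gains are illustrative placeholders."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured, dt):
        """One control tick: return the drive command for this axis."""
        err = setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In a gaze platform, the setpoint would come from the vision system (a target direction) and the measurement from the motor encoders, with one such loop per joint.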
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on pre-calibrated robots, pre-calibrated cameras, special calibration objects, or combinations of these. Our method avoids using large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
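Keeping the calibration feature at or near the image center, as described above, can be sketched as a proportional visual-servoing step: the feature's pixel error from the center is mapped to small pan/tilt corrections. The `centering_step` helper and its pixel-to-radian gains are illustrative assumptions, not the paper's method:

```python
def centering_step(px, py, cx, cy, k_pan=0.001, k_tilt=0.001):
    """Proportional pan/tilt increments (radians) that drive an image
    feature at (px, py) toward the image center (cx, cy). The gains
    map pixels to radians and are illustrative placeholders."""
    return k_pan * (cx - px), k_tilt * (cy - py)
```

Iterating this step until the error vanishes centers the feature, at which point lens distortion (which is smallest near the optical axis) has minimal effect on the measurement.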
Abstract:
This study investigates the production of alginate microcapsules coated with the polysaccharide chitosan, and evaluates some of their properties with the intention of improving the gastrointestinal viability of a probiotic (Bifidobacterium breve) by encapsulation in this system. The microcapsules were dried by a variety of methods, and the most suitable was chosen. The work described in this article is the first report detailing the effects of drying on the properties of these microcapsules and on the viability of the bacteria within them, relative to wet microcapsules. The pH range over which chitosan and alginate form polyelectrolyte complexes was explored by spectrophotometry, and this extended into swelling studies on the microcapsules over a range of pHs associated with the gastrointestinal tract. It was shown that chitosan stabilizes the alginate microcapsules at pHs above 3, extending the stability of the capsules under these conditions. The effect of chitosan exposure time on the coating thickness was investigated for the first time by confocal laser scanning microscopy, and its penetration into the alginate matrix was shown to be particularly slow. Coating with chitosan was found to increase the survival of B. breve in simulated gastric fluid as well as prolong its release upon exposure to intestinal pH.
Abstract:
Automatically extracting interesting objects from videos is a very challenging task, applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique for extracting objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences include not only interesting objects: environmental conditions such as wind, cloud cover, rain and snow may be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections. This list provides only a sample of the challenges present. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, using current neuro-biological, physiological and psychological research in primate vision as inspiration.
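As a minimal illustration of motion segmentation by background subtraction (a common baseline, not the biologically inspired approach the thesis develops), a running-average background model with a thresholded foreground mask over grayscale frames held as 2D lists:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model; alpha is the
    learning rate that absorbs slow scene changes (lighting drift)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25.0):
    """Mark pixels whose deviation from the background exceeds thresh
    as foreground (candidate moving objects)."""
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

The weaknesses this simple model exhibits (shadows, sudden lighting changes, swaying foliage all trigger false foreground) are exactly the challenges the abstract enumerates.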
Abstract:
With the increasing frequency and magnitude of warmer days during the summer in the UK, bedding plants, which were a traditional part of the urban green landscape, are perceived as unsustainable and water-demanding. During recent summers when bans on irrigation have been imposed, use and sales of bedding plants have dropped dramatically, having a negative financial impact on the nursery industry. Retaining bedding species as a feature in public and even private spaces in future may be conditional on their being managed in a manner that minimises their water use. Using Petunia x hybrida ‘Hurrah White’, we aimed to discover which irrigation approach was the most efficient for maintaining plants’ ornamental quality (flower numbers, size and longevity) and shoot and root growth under water deficit and periods of complete water withdrawal. Plants were grown from plugs for 51 days in wooden rhizotrons (0.35 m (h) × 0.1 m (w) × 0.065 m (d)); the rhizotrons’ front comprised clear Perspex, which enabled us to monitor root growth closely. Irrigation treatments were: 1. watering with the amount which constitutes 50% of container capacity by conventional surface drip-irrigation (‘50% TOP’); 2. 50% as sub-irrigation at 10 cm depth (‘50% SUB’); 3. ‘split’ irrigation: 25% as surface drip- and 25% as sub-irrigation at 15 cm depth (‘25/25 SPLIT’); 4. 25% as conventional surface drip-irrigation (‘25% TOP’). Plants were irrigated daily at 18:00, apart from days 34-36 (inclusive) when water was withheld from all the treatments. Plants in ‘50% SUB’ had the most flowers, and their size was comparable to that of ‘50% TOP’. Differences between treatments in other ‘quality’ parameters (height, shoot number) were biologically small. There was less root growth at deeper soil levels for ‘50% TOP’, which indicates that irrigation methods like ‘50% SUB’ and ‘25/25 SPLIT’, and stronger water deficits, encouraged deeper root growth.
It is suggested that sub-irrigation at 10 cm depth with water amounts of 50% container capacity would result in the most root growth with the maximum flowering for Petunia. Leaf stomatal conductance appeared to be most sensitive to the changes in substrate moisture content in the deepest part of the soil profile, where most roots were situated.