967 results for Visual immersive environments


Relevance:

40.00%

Publisher:

Abstract:

Participants' eye-gaze is generally not captured or represented in immersive collaborative virtual environment (ICVE) systems. We present EyeCVE, which uses mobile eye-trackers to drive the gaze of each participant's virtual avatar, thus supporting remote mutual eye-contact and awareness of others' gaze in a perceptually unfragmented shared virtual workspace. We detail trials in which participants took part in three-way conferences between remote CAVE™ systems linked via EyeCVE. Eye-tracking data were recorded and used to evaluate interaction, confirming the system's support for the use of gaze as a communicational and management resource in multiparty conversational scenarios. We point toward subsequent investigation of eye-tracking in ICVEs for enhanced remote social interaction and analysis.
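As a rough illustration of the core mechanism, the sketch below maps a gaze direction reported by a mobile eye-tracker (expressed in head coordinates) into world space so it can drive an avatar's eye orientation. The function name and coordinate frames are assumptions for illustration, not the EyeCVE API.

```python
import numpy as np

def avatar_gaze_direction(gaze_dir_head, head_rotation):
    """Map a tracked gaze direction (head frame) into world coordinates.

    gaze_dir_head: gaze unit vector from the mobile eye-tracker, in the
                   wearer's head frame (hypothetical representation).
    head_rotation: 3x3 rotation matrix of the avatar's head in the world.
    """
    g = np.asarray(gaze_dir_head, dtype=float)
    g = g / np.linalg.norm(g)    # normalise defensively
    return head_rotation @ g     # rotate into the shared world frame

# Example: head facing straight ahead, eyes glancing 30 degrees to the right.
head_R = np.eye(3)
gaze = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
print(avatar_gaze_direction(gaze, head_R))
```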

Relevance:

40.00%

Publisher:

Abstract:

Virtual Reality (VR) has been used in a variety of forms to assist in the treatment of a wide range of psychological illnesses. VR can also fulfil the need that psychologists have for safe environments in which to conduct experiments. Currently, the main barrier to using this technology is the complexity of developing applications. This paper presents two different co-operative psychological applications that have been developed using a single framework. These applications require different levels of co-operation between the users and clients, ranging from full psychologist involvement to minimal intervention. This paper also discusses our approach to developing these different environments and our experiences to date in using them.

Relevance:

40.00%

Publisher:

Abstract:

In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the access grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with the use of a head-mounted eye tracker. The first experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head gaze being viewed and with eye- and head-gaze data being displayed. The results show that eye gaze is of vital importance to subjects' ability to correctly identify what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye-tracker would be required. This was examined by testing subjects' ability to identify where an avatar was looking from its eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with the subjects again asked to identify where the avatar was looking. This experiment showed no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
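The role of convergence in the second experiment can be made concrete with a small geometric sketch: given each eye's position and gaze direction, the fixation point can be estimated as the midpoint of the shortest segment between the two gaze rays. This is the generic binocular triangulation, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def fixation_point(p_left, d_left, p_right, d_right):
    """Midpoint of the shortest segment between two (generally skew) gaze rays."""
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # ~0 when the rays are parallel
    if abs(denom) < 1e-9:
        return None                # no convergence: depth is unrecoverable
    s = (b * e - c * d) / denom    # parameter along the left-eye ray
    t = (a * e - b * d) / denom    # parameter along the right-eye ray
    return (p_left + s * d1 + p_right + t * d2) / 2.0

# Eyes 6.4 cm apart, both verging on a point 1 m straight ahead.
left, right = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
print(fixation_point(left, target - left, right, target - right))
```

Without convergence (parallel gaze rays), only direction is available, which is consistent with the reported benefit of a binocular tracker.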

Relevance:

40.00%

Publisher:

Abstract:

This paper describes experiments relating to the perception of the roughness of simulated surfaces via the haptic and visual senses. Subjects used a magnitude estimation technique to judge the roughness of “virtual gratings” presented via a PHANToM haptic interface device, and a standard visual display unit. It was shown that under haptic perception, subjects tended to perceive roughness as decreasing with increased grating period, though this relationship was not always statistically significant. Under visual exploration, the exact relationship between spatial period and perceived roughness was less well defined, though linear regressions provided a reliable approximation to individual subjects’ estimates.
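A minimal sketch of the analysis step: fit a linear regression of perceived roughness against grating period, as was done for individual subjects' magnitude estimates. The data points below are invented purely for illustration.

```python
import numpy as np

# Hypothetical magnitude-estimation data: grating period (mm) against the
# mean roughness estimate a subject assigned at that period.
period = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
roughness = np.array([92.0, 81.0, 70.0, 66.0, 58.0, 51.0])

# A negative slope matches the haptic finding: perceived roughness
# decreases as the grating period increases.
slope, intercept = np.polyfit(period, roughness, 1)
pred = slope * period + intercept
ss_res = np.sum((roughness - pred) ** 2)
ss_tot = np.sum((roughness - roughness.mean()) ** 2)
print(f"slope={slope:.1f}/mm, intercept={intercept:.1f}, R^2={1 - ss_res/ss_tot:.3f}")
```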

Relevance:

40.00%

Publisher:

Abstract:

We present a novel way of interacting with an immersive virtual environment which involves inexpensive motion-capture using the Wii Remote®. A software framework is also presented to visualize and share this information across two remote CAVE™-like environments. The resulting application can be used to assist rehabilitation by sending motion information across remote sites. The application’s software and hardware components are scalable enough to be used on a desktop computer when home-based rehabilitation is preferred.
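One way the motion information might be shared between sites is a lightweight datagram stream of timestamped poses; the sketch below is a hypothetical protocol for illustration, not the framework's actual wire format.

```python
import json
import socket

# Placeholder address of the remote CAVE-like site (assumption).
REMOTE_SITE = ("127.0.0.1", 9000)

def send_pose(sock, timestamp, position, orientation):
    """Serialise one tracked pose sample (e.g. from the Wii Remote) and
    send it to the remote site as a JSON datagram."""
    msg = json.dumps({"t": timestamp, "pos": position, "ori": orientation})
    sock.sendto(msg.encode("utf-8"), REMOTE_SITE)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_pose(sock, 0.016, [0.10, 1.20, 0.40], [0.0, 0.0, 0.0, 1.0])
```

UDP suits this use because a late pose sample is better dropped than replayed, and the same small sender runs unchanged on a desktop machine for home-based rehabilitation.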

Relevance:

40.00%

Publisher:

Abstract:

This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in detecting the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system, while also providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, corrections over the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, onto 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the previously detected foreground objects. This combination of information results in a matrix that links colour and 3D information, making it possible to characterise an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
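The reprojection step can be sketched with the standard pinhole model: each 3D ToF point is transformed into the colour-camera frame using the extrinsic calibration and then projected with the intrinsics. The parameter names below follow the usual K, R, t convention, assumed rather than taken from the paper.

```python
import numpy as np

def reproject_tof_points(points_world, K, R, t):
    """Project Nx3 ToF points (world frame) into the colour image.

    K: 3x3 colour-camera intrinsic matrix.
    R, t: world-to-camera rotation (3x3) and translation (3,).
    Returns Nx2 pixel coordinates (points assumed in front of the camera).
    """
    pts_cam = R @ points_world.T + t.reshape(3, 1)  # world -> camera frame
    pts_img = K @ pts_cam                           # camera -> image plane
    return (pts_img[:2] / pts_img[2]).T             # perspective divide

# Toy example: one point 2 m in front of an ideal 500 px focal-length camera.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
print(reproject_tof_points(np.array([[0.0, 0.0, 2.0]]), K, np.eye(3), np.zeros(3)))
```

Each reprojected pixel can then be paired with its 3D coordinates, which is the colour/3D linkage matrix the abstract describes.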

Relevance:

40.00%

Publisher:

Abstract:

The aging population has become a pressing issue for modern societies around the world. Two important problems remain to be solved. One is how to continuously monitor the movements of people who have suffered a stroke in natural living environments, providing more valuable feedback to guide clinical interventions. The other is how to guide elderly people effectively when they are at home or inside other buildings, making their lives easier and more convenient. Human motion tracking and navigation have therefore been active research fields as the number of elderly people grows. However, it has been extremely challenging for motion capture to move beyond laboratory environments and obtain accurate measurements of human physical activity in free-living environments, and navigation in free-living environments poses problems of its own, such as GPS signal denial and the moving objects commonly present in such environments. This thesis seeks to develop new technologies to enable accurate motion tracking and positioning in free-living environments. It comprises three specific goals, using our developed IMU board and a camera from The Imaging Source: (1) to develop a robust, real-time orientation algorithm using only measurements from the IMU; (2) to develop robust distance estimation in static free-living environments, estimating people's position and navigating them, while the scale-ambiguity problem that usually appears in monocular camera tracking is solved by integrating data from the visual and inertial sensors; (3) where moving objects are in the camera's view, to first design a robust scene-segmentation algorithm and then estimate the motion of the vIMU system and of the moving objects separately. To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed in this thesis, based on the basic theory of the deterministic and frequency-based approaches, using only measurements from the newly developed MARG (Magnetic, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU (vIMU) system is proposed to develop and implement fast vIMU ego-motion estimation algorithms, where orientation is first estimated in real time from the MARG sensors and then used to estimate position from the visual and inertial data. Where moving objects appear in the camera's view, a robust scene-segmentation algorithm is first applied to obtain the position estimate and, simultaneously, the 3D motion of the moving objects. Finally, corresponding simulations and experiments have been carried out.
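The AGOF idea can be illustrated with a one-axis complementary filter whose gain adapts to how trustworthy the accelerometer currently looks; this is a simplified sketch of the general approach, not the thesis' actual (3D, MARG-based) algorithm.

```python
import numpy as np

def agof_step(angle_prev, gyro_rate, accel, dt, base_gain=0.02):
    """One pitch-only update of an adaptive-gain complementary filter.

    angle_prev: previous pitch estimate (rad); gyro_rate: pitch rate (rad/s);
    accel: 3-axis accelerometer sample (m/s^2); dt: time step (s).
    """
    g = 9.81
    # Trust the accelerometer less when its magnitude deviates from 1 g,
    # i.e. when linear acceleration is corrupting the gravity reference.
    error = abs(np.linalg.norm(accel) - g) / g
    gain = base_gain * max(0.0, 1.0 - 2.0 * error)
    # Tilt observed from the accelerometer alone (gravity direction).
    angle_acc = np.arctan2(accel[0], np.hypot(accel[1], accel[2]))
    # Blend the integrated gyro estimate with the accelerometer reference.
    angle_gyro = angle_prev + gyro_rate * dt
    return (1.0 - gain) * angle_gyro + gain * angle_acc

# Example: stationary sensor with a slightly drifting gyro.
angle = 0.0
for _ in range(100):
    angle = agof_step(angle, gyro_rate=0.01, accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
print(f"pitch after 1 s: {angle:.4f} rad")
```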

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND: Higher visual functions can be defined as cognitive processes responsible for object recognition, colour and shape perception, and motion detection. People with impaired higher visual functions after a unilateral brain lesion are often tested with paper-and-pencil tests, but such tests do not assess the degree of interaction between the healthy brain hemisphere and the impaired one. Hence, visual functions are not tested separately in the contralesional and ipsilesional visual hemifields. METHODS: A new measurement setup, involving real-time comparisons of the shape and size of objects, the orientation of lines, and the speed and direction of moving patterns in the right or left visual hemifield, has been developed. The setup was implemented in an immersive, hemispherical environment to take into account the effects of peripheral and central vision, and possible visual field losses. Because of the hemisphere's non-flat screen, a distortion algorithm was needed to adapt the projected images to the surface. Several approaches were studied and, based on a comparison between projected and original images, the best one was used for the implementation of the test. Fifty-seven healthy volunteers were then tested in a pilot study. A Satisfaction Questionnaire was used to assess the usability of the new measurement setup. RESULTS: The distortion algorithm achieved a structural similarity between the warped and original images of over 97%. The pilot study showed an accuracy in comparing images across the two visual hemifields of 0.18 and 0.19 visual degrees for size and shape discrimination, respectively, 2.56° for line orientation, 0.33 visual degrees/s for speed perception, and 7.41° for recognition of motion direction. The outcome of the Satisfaction Questionnaire showed high acceptance of the battery by the participants. CONCLUSIONS: A new method to measure higher visual functions in an immersive environment was presented. The study focused on the usability of the developed battery rather than performance on the visual tasks. A battery of five subtasks to study the perception of size, shape, orientation, speed, and motion direction was developed. The test setup is now ready to be tested with neurological patients.
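The RESULTS figure of merit can be reproduced with off-the-shelf tooling: the sketch below compares a warped image against the original using the structural similarity index (SSIM). The image arrays are placeholders; only the metric call reflects the evaluation described.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholders for the original image and its distortion-corrected
# projection; in the study these would be the real image pairs.
original = np.random.rand(480, 640)
warped = original.copy()

score = structural_similarity(original, warped, data_range=1.0)
print(f"structural similarity: {score:.3f}")  # the paper reports > 0.97
```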

Relevance:

30.00%

Publisher:

Abstract:

The compound eyes of mantis shrimps (stomatopod crustaceans) include an unparalleled diversity of visual pigments and spectral receptor classes in the retinas of each species. We compared the visual pigment and spectral receptor classes of 12 species of gonodactyloid stomatopods from a variety of photic environments, from intertidal to deep water (> 50 m), to learn how spectral tuning of the different photoreceptor types is modified in different photic environments. The results show that the peripheral photoreceptors, those outside the midband, which are responsible for standard visual tasks such as spatial vision and motion detection, exhibit the well-known pattern of decreasing λmax with increasing depth. Receptors of midband rows 5 and 6, which are specialized for polarization vision, are similar in all species, with λmax values near 500 nm, independent of depth. Finally, the spectral receptors of midband rows 1 to 4 are tuned for maximum coverage of the spectrum of irradiance available in the habitat of each species. The quality of the visual worlds experienced by each species we studied must vary considerably, but all appear to exploit the full capabilities offered by their complex visual systems.

Relevance:

30.00%

Publisher:

Abstract:

We conducted two psychophysical experiments to investigate the relationship between the processing mechanisms for exocentric distance and direction. In the first experiment, the task was to discriminate exocentric distances; in the second, to discriminate exocentric directions. The individual effects of distance and direction on each task were dissociated by analyzing their corresponding psychophysical functions. Under stereoscopic viewing conditions, distance judgments of exocentric intervals were not affected by exocentric direction. However, direction judgments were influenced by the distance between the pair of stimuli. Therefore, the mechanism processing exocentric direction depends on exocentric distance, but the mechanism processing exocentric distance does not require exocentric direction measures. We therefore suggest that exocentric distance and direction are hierarchically processed, with distance preceding direction. Alternatively, and more probably, a necessary condition for processing the exocentric direction between two stimuli may be knowing the location of each of them.
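The psychophysical functions referred to above are typically fitted as cumulative Gaussians; a minimal sketch with invented data points follows, showing how the point of subjective equality (PSE) and discrimination threshold would be extracted for one condition.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented discrimination data: stimulus difference (e.g. in visual degrees)
# against the proportion of 'test larger' responses.
delta = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
p_larger = np.array([0.05, 0.20, 0.35, 0.50, 0.68, 0.82, 0.96])

def psychometric(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, delta, p_larger, p0=(0.0, 2.0))
print(f"PSE = {mu:.2f}, threshold (sigma) = {sigma:.2f}")
```

Comparing how mu and sigma shift across distance and direction conditions is what dissociates the two mechanisms.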

Relevance:

30.00%

Publisher:

Abstract:

Speleologists perform their activity in demanding visual conditions of very low luminance; many visual tasks involve the resolution of detail under conditions of low contrast. Work-related conditions in a cave, such as exposure to heat, chemicals, dust, and poor lighting, could affect the integrity of the visual system and predispose the eye to diseases that eventually affect vision. Poor lighting conditions cause a variety of symptoms of visual discomfort and may increase the risk of accidents. Good visual acuity is crucial for several of these tasks and plays an important role in safety. The aim of this study was to evaluate the effects of lighting conditions and optical filters on visual performance in speleologists exposed to cave environments.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a collaborative virtual learning environment that includes technologies such as 3D virtual representations, learning and content management systems, remote experiments, and collaborative learning spaces, among others. It intends to facilitate the construction, management, and sharing of knowledge among teachers and students from a global perspective. The environment proposes the use of 3D social representations for accessing learning materials in a dynamic and interactive form, which is regarded as closer to the physical reality experienced by teachers and students in a learning context. A first implementation of the proposed extended immersive learning environment, in the area of solid mechanics, is also described, including access to theoretical contents and a remote experiment to determine the elastic modulus of a given object.
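The remote experiment's target quantity has a simple closed form: for a uniaxial test, the elastic (Young's) modulus is E = stress / strain = (F/A) / (ΔL/L₀). The numbers below are illustrative, not taken from the paper.

```python
# Illustrative uniaxial-test values (assumptions, not the paper's data).
force = 1_000.0      # applied load, N
area = 1.0e-4        # cross-sectional area, m^2 (10 mm x 10 mm bar)
length0 = 0.5        # original specimen length, m
delta_len = 2.5e-5   # measured elongation, m

stress = force / area          # Pa
strain = delta_len / length0   # dimensionless
E = stress / strain            # Young's modulus, Pa
print(f"E = {E / 1e9:.0f} GPa")  # -> 200 GPa, a steel-like value
```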