909 results for Panoramic projections. Virtual Environments. Navigation in 3D environments. Virtual Reality
Abstract:
Tracking the user’s visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communication with virtual and real agents benefit greatly from analysis of the user’s visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers and are hence restricted to interpreting two-dimensional fixations relative to a defined projection area. The study presented in this article compares the precision, accuracy, and application performance of two binocular eye tracking devices. Two algorithms are compared that derive the depth information required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used to disambiguate partly occluded objects.
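The depth-from-vergence idea behind such binocular interfaces can be illustrated with a small sketch: given one gaze ray per eye, the 3D fixation point can be estimated as the midpoint of the shortest segment connecting the two rays. This is a generic geometric illustration, not either of the article's compared algorithms; all names and numbers below are hypothetical.

```python
import numpy as np

def gaze_depth(origin_l, dir_l, origin_r, dir_r):
    """Estimate the 3D fixation as the midpoint of the shortest
    segment between the left and right gaze rays."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b          # ~0 when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_l = origin_l + s * d_l
    p_r = origin_r + t * d_r
    return (p_l + p_r) / 2

# Eyes 6 cm apart, both converging on a point 50 cm ahead.
fix = gaze_depth(np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.5]),
                 np.array([ 0.03, 0.0, 0.0]), np.array([-0.03, 0.0, 0.5]))
```

Noisy gaze data means the two rays rarely intersect exactly, which is why the midpoint of the connecting segment, rather than a true intersection, is the usual estimate.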
Abstract:
Haptic interfaces can provide highly realistic interaction with objects within their workspace, but interacting with objects over large areas or volumes is made difficult by the limits of interface travel. This paper details the development of a custom haptic interface for navigating a large virtual environment (a simulated supermarket), and an investigation of different control methods that allow haptic interaction over extremely large workspaces.
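One common way to extend a haptic device's small physical workspace to a large virtual one is hybrid position/rate control: inside an inner zone the virtual cursor tracks the probe directly, while pushing the probe toward the workspace boundary drives a drift of the viewpoint through the scene. The sketch below illustrates that general idea only; the paper's actual control methods are not specified here, and the radius and gain values are hypothetical.

```python
import numpy as np

def drift_velocity(probe, inner_radius=0.08, gain=5.0):
    """Zero drift while the probe stays within inner_radius of the
    device center; beyond it, the excess radial offset drives a
    scene-drift velocity, extending reach past the physical workspace."""
    r = np.linalg.norm(probe)
    if r <= inner_radius:
        return np.zeros_like(probe)
    return gain * (r - inner_radius) * (probe / r)

# Probe held 2 cm past the inner zone along x drifts the scene at 0.1 m/s.
v = drift_velocity(np.array([0.10, 0.0, 0.0]))
```

The dead zone in the middle keeps ordinary close-range manipulation 1:1, so the rate behaviour only engages when the user deliberately reaches outward.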
Abstract:
The aging population has recently become a pressing issue for modern societies around the world, and it raises two important problems. The first is how to continuously monitor the movements of stroke survivors in natural living environments, to provide more valuable feedback for guiding clinical interventions. The second is how to effectively guide older people when they are at home or inside other buildings, making their lives easier and more convenient. Human motion tracking and navigation have therefore become active research fields as the number of elderly people grows. However, motion capture beyond laboratory settings remains extremely challenging: obtaining accurate measurements of human physical activity in free-living environments is difficult, and navigation there poses additional problems such as denied GPS signals and the moving objects commonly present in such environments. This thesis seeks to develop new technologies that enable accurate motion tracking and positioning in free-living environments. It comprises three specific goals, pursued using our developed IMU board and a camera from The Imaging Source: (1) to develop a robust, real-time orientation algorithm using only IMU measurements; (2) to develop robust distance estimation in static free-living environments for positioning and navigation, while solving the scale ambiguity problem that usually appears in monocular camera tracking by integrating data from the visual and inertial sensors; and (3) when moving objects viewed by the camera are present in the environment, to first design a robust scene segmentation algorithm and then separately estimate the motion of the vIMU system and of the moving objects.
To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed in this thesis, based on a deterministic approach and a frequency-based approach, using only measurements from the newly developed MARG (Magnetic, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU (vIMU) system is proposed to develop and implement fast ego-motion estimation algorithms, in which the orientation is first estimated in real time from the MARG sensors and then used, together with the visual and inertial data, to estimate position. For the case of moving objects viewed by the camera in free-living environments, a robust scene segmentation algorithm is first proposed to obtain position estimates and, simultaneously, the 3D motion of the moving objects. Finally, the corresponding simulations and experiments have been carried out.
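As an illustration of the adaptive-gain idea (not the thesis's actual AGOF equations), a one-axis complementary filter can shrink the accelerometer correction gain whenever the measured acceleration deviates from gravity, trusting the gyroscope during fast motion. All thresholds and gains below are hypothetical.

```python
import numpy as np

def adaptive_gain(acc, g=9.81, base_gain=0.02):
    """Shrink the accelerometer correction gain when the measured
    acceleration magnitude deviates from gravity (fast motion)."""
    err = abs(np.linalg.norm(acc) - g) / g
    if err < 0.1:
        return base_gain
    if err < 0.2:
        return base_gain * (0.2 - err) / 0.1   # linear ramp down
    return 0.0                                  # pure gyro integration

def step(pitch, gyro_rate, acc, dt):
    """One complementary-filter update for pitch (rotation about x)."""
    pitch_gyro = pitch + gyro_rate * dt         # propagate with the gyro
    pitch_acc = np.arctan2(acc[1], acc[2])      # tilt from the gravity vector
    k = adaptive_gain(acc)
    return (1.0 - k) * pitch_gyro + k * pitch_acc
```

While the sensor is quasi-static the accelerometer slowly corrects gyro drift; during vigorous movement the gain drops to zero and the gyro carries the estimate alone.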
Abstract:
While navigation systems for cars are in widespread use, indoor navigation systems based on smartphone apps have only recently become technically feasible. Tools are therefore needed to plan and evaluate particular designs of information provision. Since tests in real infrastructures are costly and environmental conditions cannot be held constant, one must resort to virtual infrastructures. This paper presents the development of an environment to support the design of indoor navigation systems, whose centerpiece is a hands-free navigation method using the Microsoft Kinect in the four-sided Definitely Affordable Virtual Environment (DAVE). Navigation controls using the user's gestures and postures as input are designed and implemented. The installation of expensive and bulky hardware such as treadmills is avoided while still giving users a good impression of the distance they have traveled in virtual space. An advantage over approaches using a head-mounted display is that the DAVE allows users to interact with their smartphones. Thus, the effects of different indoor navigation systems can be evaluated with the resulting system already in the planning phase.
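A hands-free walking control of this kind can be sketched by mapping torso lean, as reported by the Kinect skeleton, to a virtual walking velocity, with a dead zone so that standing upright means standing still. The mapping below is a hypothetical illustration, not the DAVE system's actual control law; the gain and threshold values are invented.

```python
def walk_velocity(lean_x_m, lean_z_m, dead_zone=0.05, gain=2.0):
    """Map torso lean (meters from a calibrated rest pose) to sideways
    and forward velocity; leans inside the dead zone produce no motion."""
    def axis(lean):
        if abs(lean) < dead_zone:
            return 0.0
        # Subtract the dead zone so velocity ramps up smoothly from zero.
        return gain * (lean - dead_zone if lean > 0 else lean + dead_zone)
    return axis(lean_x_m), axis(lean_z_m)

# Standing upright -> stationary; a 15 cm forward lean -> ~0.2 m/s forward.
```

The dead zone absorbs skeleton-tracking jitter, which would otherwise make the viewpoint creep while the user stands still.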
Abstract:
Virtual Reality (VR) has grown to become state-of-the-art technology in many business- and consumer-oriented E-Commerce applications. One of the major design challenges of VR environments is the placement of the rendering process, which converts the abstract description of a scene contained in an object database into an image. This process is usually done on the client side, as in VRML [1], a technology that requires the client’s computational power for smooth rendering. The vision of VR is also strongly connected to the issue of Quality of Service (QoS), as the perceived realism depends on an interactive frame rate ranging from 10 to 30 frames per second (fps), real-time feedback mechanisms, and realistic image quality. These requirements push traditional home computers, and even sophisticated graphics workstations, beyond their limits. Our work therefore introduces a distributed rendering architecture that gracefully balances the workload between the client and a cluster-based server. We believe that the distributed rendering approach described in this paper has three major benefits: it reduces the client's workload, it decreases network traffic, and it allows already rendered scenes to be reused.
Abstract:
The aim of this article was to study the effect of virtual-reality exposure to situations that are emotionally significant for patients with eating disorders (ED) on the stability of body-image distortion and body-image dissatisfaction. A total of 85 ED patients and 108 non-ED students were randomly exposed to four experimental virtual environments: a kitchen with low-calorie food, a kitchen with high-calorie food, a restaurant with low-calorie food, and a restaurant with high-calorie food. In the interval between the presentation of each situation, body-image distortion and body-image dissatisfaction were assessed. Several 2 x 2 x 2 repeated measures analyses of variance (high-calorie vs. low-calorie food x presence vs. absence of people x ED group vs. control group) showed that ED participants had significantly higher levels of body-image distortion and body dissatisfaction after eating high-calorie food than after eating low-calorie food, while control participants reported a similar body image in all situations. The results suggest that body-image distortion and body-image dissatisfaction show both trait and state features. On the one hand, ED patients show a general predisposition to overestimate their body size and to feel more dissatisfied with their body image than controls. On the other hand, these body-image disturbances fluctuate when participants are exposed to virtual situations that are emotionally relevant for them.
Abstract:
In recent years, the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has increased significantly. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with attributes adequate for professional use is proposed. The system works in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D position and orientation. Included in this work is a comprehensive study of image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by linking several one-camera tracking modules through a sensor fusion algorithm, in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to the stand-alone configuration.
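The image pre-processing stage described above, isolating bright retro-reflective markers in the infrared image, can be sketched as thresholding followed by connected-component centroid extraction. This is a generic illustration, not the PTrack algorithm itself, and the threshold value is hypothetical.

```python
import numpy as np

def marker_centroids(img, threshold=200):
    """Centroids of bright retro-reflective blobs in a grayscale image,
    via simple thresholding and 4-connected flood fill."""
    mask = img >= threshold
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, pixels = [(y, x)], []
                seen[y, x] = True
                while stack:                      # flood-fill one blob
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx),
                                   (cy, cx+1), (cy, cx-1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

The daylight-blocking filter and infrared strobes make such a fixed threshold workable, since the markers are by far the brightest features in the image.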
Abstract:
Panoramic rendering is the visualization of three-dimensional objects in a virtual environment through a wide viewing angle. This work investigated whether the use of panoramas can promote faster searches in a virtual environment. Panoramas present space with less need to change the camera orientation, especially in the case of projections spanning 360° around the user, which can benefit searching. However, the larger the viewing angle, the more distorted the visualization of the environment becomes, causing confusion in navigation. The distortion is even greater when the user changes the pitch of the camera by looking up or down. In this work we developed a technique, called hemispheric projection, that specifically eliminates the distortions caused by changes in pitch. Experiments were conducted to evaluate search-navigation performance with perspective, cylindrical, and hemispheric projections. The results indicate that navigating with perspective projection is superior to navigating with panoramic projections, possibly due to factors such as (i) the participants' lack of experience in understanding scenes displayed as panoramas, (ii) the inherent distortion of panoramic projections, and (iii) a lower effective display resolution, since objects are presented at smaller sizes in panoramic projections, making details harder to perceive. However, the hemispheric projection performed better than the cylindrical one, indicating that the developed technique benefits navigation compared to current panoramic projection techniques. The hemispheric projection also required the fewest changes of camera orientation, an indication that hemispheric projections may be particularly useful in situations where changing the orientation is restricted.
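The article's exact hemispheric formulation is not reproduced here, but one standard way to project a hemisphere of directions without the pitch-dependent stretch of a cylindrical panorama is an azimuthal-equidistant (fisheye-style) mapping, where the image radius is proportional to the angle from the view axis. The sketch below shows that plausible variant under those assumptions.

```python
import numpy as np

def hemispheric_project(v):
    """Azimuthal-equidistant mapping of a unit view direction onto a
    disc: radius is proportional to the angle from the view axis (+z),
    so a 180-degree field of view exactly fills the unit circle."""
    v = v / np.linalg.norm(v)
    theta = np.arccos(np.clip(v[2], -1.0, 1.0))  # angle off the view axis
    r = theta / (np.pi / 2)                      # 90 deg off-axis -> rim
    phi = np.arctan2(v[1], v[0])                 # azimuth around the axis
    return np.array([r * np.cos(phi), r * np.sin(phi)])
```

Because the mapping depends only on the angle from the view axis, rotating the camera about that axis rotates the image rigidly instead of warping it.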
Future research will investigate the performance of camera interaction with slower input devices, such as keyboard-only control or brain-machine interfaces.
Abstract:
The aim of this work is to present a new methodology, based on vector and geometrical techniques, for determining the position of an intruder in a residence (a 3D problem). Initially, modifications in the electromagnetic response of the environment, caused by movements of the trespasser, are detected. It is worth mentioning that slight movements are detected by the high-frequency components of the pulse used. The differences between the signals (before and after any movement) are used to define a sphere and ellipsoids, from which the position of the invader is estimated. In this work, multiple radars are used cooperatively. The multiple estimates obtained are used to determine a mean position and its standard deviation, introducing the concept of the sphere of estimates. The electromagnetic simulations were performed using the FDTD method. Results were obtained for single- and double-floor residences.
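One plausible reading of the "sphere of estimates" is a sphere centered on the mean of the per-radar position estimates, with a radius derived from their spread. The sketch below illustrates that interpretation; the exact definition used in the work may differ, and the sample coordinates are invented.

```python
import numpy as np

def sphere_of_estimates(points):
    """Mean of the per-radar position estimates and the standard
    deviation of their distances from that mean (sphere radius)."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    radii = np.linalg.norm(pts - center, axis=1)
    return center, radii.std()

# Three cooperating radars, each producing one 3D estimate (meters).
center, spread = sphere_of_estimates(
    [[2.0, 3.0, 1.0], [2.2, 2.9, 1.1], [1.8, 3.1, 0.9]])
```

A small spread indicates that the cooperating radars agree, so the mean position can be trusted; a large spread flags an unreliable fix.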
Abstract:
Second Life (SL) is an ideal platform for language learning. It is called a Multi-User Virtual Environment, in which users can have a variety of learning experiences in life-like environments. Numerous attempts have been made to use SL as a platform for language teaching, and its potential as a means to promote conversational interactions has been reported. However, research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment. Conversely, not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on recent theories of second language acquisition (SLA), especially Task-Based Language Teaching and the Interaction Hypothesis, proposes to design and implement an automatized interactive task space (AITS) in which robotic agents work as interlocutors for learners. This paper presents a design that incorporates these SLA theories into SL, along with the implementation method used to construct the AITS, fulfilling the controllability of SL. It also presents the results of an evaluation experiment conducted on the constructed AITS.
Abstract:
We report the fabrication, functionalization and testing of microdevices for cell culture and cell traction force measurements in three dimensions (3D). The devices are composed of bent cantilevers patterned with cell-adhesive spots not lying on the same plane, and thus suspending cells in 3D. The cantilevers are soft enough to undergo micrometric deflections when cells pull on them, allowing cell forces to be measured by means of optical microscopy. Since individual cantilevers are mechanically independent of each other, cell traction forces are determined directly from cantilever deflections. This proves the potential of these new devices as a tool for the quantification of cell mechanics in a system with well-defined 3D geometry and mechanical properties.
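The deflection-to-force conversion rests on standard beam mechanics: for an end-loaded cantilever of known geometry and Young's modulus, the stiffness is k = 3EI/L³ and the force is F = k·δ. The sketch below uses hypothetical dimensions and modulus, since the paper's actual cantilever parameters are not given here.

```python
def traction_force(deflection_m, E_pa, width_m, thick_m, length_m):
    """End-loaded cantilever: F = k * delta, with k = 3 E I / L^3 and
    I = w t^3 / 12 for a rectangular cross-section."""
    I = width_m * thick_m**3 / 12.0       # second moment of area
    k = 3.0 * E_pa * I / length_m**3      # tip stiffness
    return k * deflection_m

# Hypothetical soft-polymer cantilever: E = 2.5 MPa, 10 x 3 um section,
# 50 um long; a 1 um tip deflection then corresponds to ~1.35 nN.
F = traction_force(1e-6, 2.5e6, 10e-6, 3e-6, 50e-6)
```

Because stiffness scales with the cube of both thickness and length, small fabrication variations change the calibration substantially, which is why per-device geometry measurements matter.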
Abstract:
This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.