5 results for stereoscopic

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

20.00%

Publisher:

Relevance:

10.00%

Publisher:

Abstract:

This paper reports on the results of a research project comparing two virtual collaborative environments as stress-coping environments for real-life situations: one with first-person visual immersion (first-perspective interaction) and one in which the user interacts through a sound-kinetic virtual representation of himself (an avatar). Recent developments in coping research propose a shift from a trait-oriented approach to coping toward a more situation-specific treatment. We define a real-life situation as a target-oriented situation that demands a complex coping skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly assigned to three groups of equal size with gender balance within groups. All three groups went through two phases. In Phase I (Solo), each participant was assessed individually using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: a coping skills measurement taken over the time course of various hypothetical stressful encounters, performed under two different conditions plus a control condition. In Condition A, the participant was given a virtual stress assessment scenario from a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress assessment scenario with a behaviorally realistic, motion-controlled avatar with sonic feedback (VRSA). In Condition C, the No Treatment Condition (NTC), the participant received only an interview. In Phase II, all three groups were mixed and performed the same tasks, but in pairs.
The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions, and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment is a consistent coping environment, tapping two classes of stress in relation to the stress inoculation theory: (a) aversive or ambiguous situations, and (b) loss or failure situations. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A major advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is its consideration of team coping intentions at different stages. Even if the aim is to tap transactional processes in real-life situations, it may be better to conduct research using a sound-kinetic, avatar-based collaborative environment than a virtual first-person perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera, and two stereoscopic CRT displays. The system was programmed in C++ with the VRScape Immersive Cluster from VRCO, creating an artificial environment that encodes the user's motion from a video camera targeted at the user's face and from physiological sensors attached to the body.

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: External beam radiation therapy is currently the most common treatment modality for intraocular tumors. Localization of the tumor and efficient compensation of tumor misalignment with respect to the radiation beam are crucial. In the state-of-the-art procedure, localization of the target volume is performed indirectly through the invasive surgical implantation of radiopaque clips, or is limited to positioning the head using stereoscopic radiographs. This work is a proof of concept for direct and noninvasive tumor referencing based on anterior eye topography acquired using optical coherence tomography (OCT).

METHODS: A prototype head-mounted device was developed for automatic monitoring of tumor position and orientation in the isocentric reference frame for LINAC-based treatment of intraocular tumors. Noninvasive tumor referencing is performed with six degrees of freedom, based on anterior eye topography acquired using OCT and registration of a statistical eye model. The prototype was tested on enucleated pig eyes, and registration accuracy was measured by comparing the resulting transformation with tilt and torsion angles manually induced using a custom-made test bench.

RESULTS: Validation on 12 enucleated pig eyes revealed an overall average registration error of 0.26 ± 0.08° in 87 ± 0.7 ms for tilt and 0.52 ± 0.03° in 94 ± 1.4 ms for torsion. Furthermore, the dependency of the mean registration error on sampling density was quantitatively assessed.

CONCLUSIONS: The presented tumor referencing method, combined with the previously introduced statistical eye model, has the potential to enable noninvasive treatment and may improve the quality, efficacy, and flexibility of external beam radiotherapy of intraocular tumors.
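The abstract does not disclose the registration algorithm behind the six-degree-of-freedom referencing, so the following is only a generic sketch of the underlying idea: rigidly registering measured topography points to a model point cloud with the Kabsch (SVD) algorithm and reading the torsion angle off the recovered rotation. The function names and the synthetic point data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kabsch_rotation(P, Q):
    """Least-squares rotation aligning point set P onto point set Q (Kabsch)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so the result is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def rotation_z(deg):
    """Rotation about the z axis (taken here as the optical axis)."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Synthetic "topography": random surface points in the model frame.
rng = np.random.default_rng(0)
model = rng.normal(size=(200, 3))

torsion_true = 0.5  # degrees of torsion about the optical axis
measured = model @ rotation_z(torsion_true).T  # simulated measurement

R = kabsch_rotation(model, measured)
# For a pure z-rotation, the angle can be read from the top-left 2x2 block.
torsion_est = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(round(torsion_est, 3))  # ≈ 0.5
```

With noise-free synthetic data the rotation is recovered essentially exactly; the paper's sub-degree errors on real OCT topographies additionally reflect measurement noise and the statistical eye model fit.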

Relevance:

10.00%

Publisher:

Abstract:

This thesis covers a broad part of the field of computational photography, including video stabilization and image warping techniques, an introduction to light field photography, and the conversion of monocular images and videos into stereoscopic 3D content.

We present a user-assisted technique for stereoscopic 3D conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow a user to indicate lines, planes, and vanishing points in the input image, and directly employ these as guides for an image warp that produces a stereo image pair. Our method is most suitable for scenes with large-scale structures such as buildings, and avoids the step of constructing an explicit depth map.

Further, we propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input, we take an image sequence from a camera translating along an approximately linear path with limited camera rotation. Users can acquire such data easily in a few seconds by moving a hand-held camera. We convert the input into a regularly sampled 3D light field by resampling and aligning the frames in the spatio-temporal domain. We also present a novel technique for high-quality disparity estimation from light fields. Finally, we show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
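As a rough illustration of the digital refocusing mentioned above, here is a minimal shift-and-add sketch for a 3D light field captured along a linear camera path: each view is shifted to cancel the parallax of the chosen focal plane and the views are averaged. The function name, the integer-shift simplification, and the synthetic stripe scene are assumptions for illustration, not the thesis's actual pipeline.

```python
import numpy as np

def refocus(light_field, baselines, disparity):
    """Shift-and-add refocusing of a 3D light field.

    light_field: (N, H, W) grayscale views from a camera translating along x.
    baselines:   (N,) camera offsets relative to the reference view.
    disparity:   pixel shift per unit baseline for the desired focal plane.
    """
    N, H, W = light_field.shape
    acc = np.zeros((H, W))
    for view, b in zip(light_field, baselines):
        # Points on the focal plane appear shifted by disparity * b in this
        # view; undo that shift so they align across views.
        s = int(round(-disparity * b))  # subpixel interpolation omitted
        acc += np.roll(view, s, axis=1)
    return acc / N

# Synthetic scene: a vertical stripe that moves 2 px per unit baseline,
# i.e. it lies on the disparity-2 focal plane.
views = np.zeros((5, 8, 64))
baselines = np.arange(5)
for b in baselines:
    views[b, :, 10 + 2 * b] = 1.0

focused = refocus(views, baselines, disparity=2)    # stripe aligns, stays sharp
defocused = refocus(views, baselines, disparity=0)  # stripe smears out
```

Refocusing at the stripe's disparity reconstructs it at full contrast, while any other disparity spreads it across columns, which is exactly the synthetic aperture blur effect the abstract mentions.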