43 results for swd: Virtual environments


Relevance: 30.00%

Abstract:

For decades, Distance Transforms have proven useful in many image processing applications, and more recently they have begun to be used in computer graphics. The goal of this paper is to propose a new technique, based on Distance Transforms, for detecting mesh elements which are close to the objects' external contour (from a given point of view), and for using this information to weight the approximation error tolerated during the mesh simplification process. The results are evaluated in two ways: visually, and with an objective metric that measures the geometric difference between two polygonal meshes.
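To illustrate the underlying idea (this is not the paper's actual algorithm), a distance transform labels every pixel with its distance to the nearest contour pixel, and that distance can then weight how much simplification error is tolerated. The Python sketch below uses a BFS-based (city-block) transform over a binary contour mask; the `error_weight` falloff is an invented example, not a value from the paper.

```python
from collections import deque

def distance_transform(mask):
    """City-block distance transform via BFS: distance (in pixels)
    from each cell to the nearest contour cell (where mask is True)."""
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] == INF:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

def error_weight(d, falloff=4.0):
    """Hypothetical weighting: elements near the contour (small d)
    get weight near 1 (tolerate little error); far elements approach 0."""
    return max(0.0, 1.0 - d / falloff)
```

A simplification pass could then multiply its error threshold per element by `error_weight(d)` so that silhouette-adjacent geometry is preserved more aggressively.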

Relevance: 30.00%

Abstract:

Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.

Relevance: 30.00%

Abstract:

Adding virtual objects to real environments plays an important role in today's computer graphics: typical examples are virtual furniture in a real room and virtual characters in real movies. For a believable appearance, consistent lighting of the virtual objects is required. We present an augmented reality system that displays virtual objects with consistent illumination and shadows in the image of a simple webcam. We use two high dynamic range video cameras with fisheye lenses continuously recording the environment illumination. A sampling algorithm selects a few bright regions in one of the wide-angle images and the corresponding points in the second camera image. The 3D position can then be calculated using epipolar geometry. Finally, the selected point lights are used in a multi-pass algorithm to draw the virtual object with shadows. To validate our approach, we compare the appearance and shadows of the synthetic objects with real objects.
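The 3D light positions come from standard two-view geometry: once a bright spot is matched across the two fisheye images, each camera yields a viewing ray, and the light is recovered where the two rays (nearly) intersect. A minimal midpoint-triangulation sketch in Python, assuming the rays are already expressed in a common world frame (the paper's calibration and fisheye-unprojection details are not reproduced here):

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the point halfway between the closest
    points of two viewing rays. c1, c2 are camera centres and d1, d2
    ray directions (3-tuples); directions need not be unit length."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    r = sub(c2, c1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero iff the rays are parallel
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    t1 = (c * d - b * e) / denom   # parameter along ray 1
    t2 = (b * d - a * e) / denom   # parameter along ray 2
    p1 = add(c1, scale(d1, t1))
    p2 = add(c2, scale(d2, t2))
    return scale(add(p1, p2), 0.5)
```

With noisy correspondences the two rays rarely meet exactly, which is why the midpoint (rather than a true intersection) is returned.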

Relevance: 30.00%

Abstract:

Image-based Relighting (IBRL) has recently attracted considerable research interest for its ability to relight real objects or scenes from novel illuminations captured in natural or synthetic environments. Complex lighting effects such as subsurface scattering, interreflection, shadowing, mesostructural self-occlusion, and refraction can be generated using IBRL. The main advantage of image-based graphics is that the rendering time is independent of scene complexity, since rendering is actually a process of manipulating image pixels instead of simulating light transport. The goal of this paper is to provide a complete and systematic overview of the research in Image-based Relighting. We observe that essentially all IBRL techniques can be broadly classified into three categories (Fig. 9), based on how the scene/illumination information is captured: reflectance function-based, basis function-based, and plenoptic function-based. We discuss the characteristics of each of these categories and their representative methods. We also discuss sampling density and the types of light sources, both relevant issues in IBRL.

Relevance: 30.00%

Abstract:

This paper presents two studies pertaining to the use of virtual characters in the clinical forensic rehabilitation of sex offenders. The first study concerns the validation of the perceived age of virtual characters designed to simulate the primary and secondary sexual characteristics of typical adult and child individuals. The second study uses these virtual characters to compare a group of sex offenders and a group of non-deviant individuals on their sexual arousal responses as recorded in virtual immersion. Finally, two clinical vignettes illustrating the use of made-to-measure virtual characters to more closely fit sexual preferences are presented in the Discussion.

Relevance: 30.00%

Abstract:

The article presents the design process of intelligent virtual human patients that are used for the enhancement of clinical skills. The description covers the development from conceptualization and character creation to technical components and the application in clinical research and training. The aim is to create believable social interactions with virtual agents that help the clinician develop skills in symptom and ability assessment, diagnosis, interview techniques, and interpersonal communication. The virtual patient fulfills the requirements of a standardized patient, producing consistent, reliable, and valid interactions in portraying symptoms and behaviour related to a specific clinical condition.

Relevance: 30.00%

Abstract:

Second Life (SL) is an ideal platform for language learning. It is a Multi-User Virtual Environment, in which users can have a variety of learning experiences in life-like settings. Numerous attempts have been made to use SL as a platform for language teaching, and its potential as a means to promote conversational interactions has been reported. However, research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment. Conversely, not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on recent theories of second language acquisition, especially Task-Based Language Teaching and the Interaction Hypothesis, proposes to design and implement an automatized interactive task space (AITS) in which robotic agents act as interlocutors for learners. This paper presents a design that incorporates these SLA theories into SL, along with a method for implementing the design to construct the AITS, exploiting the controllability of SL. It also presents the results of an evaluation experiment conducted on the constructed AITS.

Relevance: 30.00%

Abstract:

When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing from virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth, user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed with physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering the spectral properties of location, orientation, and acceleration, our solution learns camera motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package, the camera's base path is typically defined by the user via function curves; another possibility is to obtain the camera path using a mixed reality camera in a motion-capture studio. As shown in our experiments, the resulting shots remain fully artist-controlled, but appear richer and more physically plausible.
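The paper learns spectral detail from real footage; as a much-simplified illustration of the idea of enriching a smooth base track with band-limited detail, here is a Python sketch in which `style_spectrum` (frequency-to-amplitude pairs) stands in for a learned style model. Its values, the function name, and the random-phase scheme are all invented for illustration, not taken from the paper.

```python
import math
import random

def stylize_path(base, style_spectrum, fps=30.0, seed=0):
    """Add synthetic hand-held-style detail to a smooth 1-D camera
    coordinate track. `base` is one sampled coordinate (e.g. x position
    per frame); `style_spectrum` maps frequency in Hz to amplitude in
    the track's units, standing in for spectra learned from real video."""
    rng = random.Random(seed)
    # a random phase per frequency keeps repeated components decorrelated
    phases = {f: rng.uniform(0.0, 2.0 * math.pi) for f in style_spectrum}
    out = []
    for i, value in enumerate(base):
        t = i / fps
        detail = sum(amp * math.sin(2.0 * math.pi * f * t + phases[f])
                     for f, amp in style_spectrum.items())
        out.append(value + detail)
    return out
```

A full system would apply this independently to each position and orientation channel, with per-channel spectra estimated from structure-from-motion tracks of labeled footage.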

Relevance: 30.00%

Abstract:

This paper reports on a Virtual Reality theater experiment named Il était Xn fois, conducted by artists and computer scientists working in cognitive science. It offered an opportunity for the exchange of knowledge and ideas between these groups, highlighting the benefits of this kind of collaboration. Section 1 explains the link between enaction in cognitive science and virtual reality, and specifically the need to develop an autonomous entity which enhances presence in an artificial world. Section 2 argues that enactive artificial intelligence is able to produce such autonomy. This was demonstrated by the theatrical experiment Il était Xn fois (in English: Once upon Xn time), explained in Section 3. Its first public performance was in 2009, by the company Dérézo. The last section offers the view that enaction can form a common ground between the artistic and computer science communities.

Relevance: 30.00%

Abstract:

Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results deal only with the integration of carefully arranged setups, as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the several sensors. However, the tracking sensors typically encountered in Mixed Reality applications are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using Time Delay Estimation, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.

Relevance: 30.00%

Abstract:

Wind and warmth sensations have proven able to enhance users' sense of presence in Virtual Reality applications. Still, only a few projects deal with their detailed effect on the user or with general ways of implementing such stimuli. This work tries to fill this gap: after analyzing the hardware and software requirements of wind and warmth simulation, we propose a hardware and software setup for use in a CAVE environment. The setup is evaluated with regard to technical details and requirements, but also, in the form of a pilot study, with respect to user experience and presence. Our setup proved to comply with the requirements and leads to satisfactory results. To our knowledge, the low-cost simulation system (approx. 2200 Euro) presented here is one of the most extensive, most flexible, and best-evaluated systems for creating wind and warmth stimuli in CAVE-based VR applications.

Relevance: 30.00%

Abstract:

Three-dimensional (3D) immersive virtual worlds have been touted as being capable of facilitating highly interactive, engaging, multimodal learning experiences. Much of the evidence gathered to support these claims has been anecdotal but the potential that these environments hold to solve traditional problems in online and technology-mediated education—primarily learner isolation and student disengagement—has resulted in considerable investments in virtual world platforms like Second Life, OpenSimulator, and Open Wonderland by both professors and institutions. To justify this ongoing and sustained investment, institutions and proponents of simulated learning environments must assemble a robust body of evidence that illustrates the most effective use of this powerful learning tool. In this authoritative collection, a team of international experts outline the emerging trends and developments in the use of 3D virtual worlds for teaching and learning. They explore aspects of learner interaction with virtual worlds, such as user wayfinding in Second Life, communication modes and perceived presence, and accessibility issues for elderly or disabled learners. They also examine advanced technologies that hold potential for the enhancement of learner immersion and discuss best practices in the design and implementation of virtual world-based learning interventions and tasks. By evaluating and documenting different methods, approaches, and strategies, the contributors to Learning in Virtual Worlds offer important information and insight to both scholars and practitioners in the field. AU Press is an open access publisher and the book is available for free in PDF format as well as for purchase on our website: http://bit.ly/1W4yTRA

Relevance: 30.00%

Abstract:

In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors are not accumulated into the model, they are tolerant to inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP which efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that our approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
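At its core, each ICP iteration matches points and then solves a closed-form rigid alignment. The paper's version is a GPGPU 3-D implementation operating on rendered-CAD and captured depth clouds; the sketch below shows one iteration of the same scheme reduced to 2-D pure Python (brute-force nearest-neighbour matching plus a 2-D Kabsch solve), purely to illustrate the update step, not the paper's implementation.

```python
import math

def icp_step_2d(src, dst):
    """One ICP iteration in 2-D: match each source point to its nearest
    destination point, then solve the best rigid (rotation, translation)
    alignment of the matched pairs in closed form (2-D Kabsch)."""
    pairs = []
    for p in src:  # brute-force nearest neighbour; real systems use k-d trees
        q = min(dst, key=lambda d: (d[0] - p[0])**2 + (d[1] - p[1])**2)
        pairs.append((p, q))
    n = len(pairs)
    mx = sum(p[0] for p, _ in pairs) / n
    my = sum(p[1] for p, _ in pairs) / n
    nx = sum(q[0] for _, q in pairs) / n
    ny = sum(q[1] for _, q in pairs) / n
    # rotation maximizing alignment of centred point sets
    sxx = sum((p[0]-mx)*(q[0]-nx) + (p[1]-my)*(q[1]-ny) for p, q in pairs)
    sxy = sum((p[0]-mx)*(q[1]-ny) - (p[1]-my)*(q[0]-nx) for p, q in pairs)
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = nx - (c * mx - s * my)
    ty = ny - (s * mx + c * my)
    return theta, (tx, ty)

def apply_2d(points, theta, t):
    """Apply a 2-D rigid transform (rotation theta, translation t)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c*x - s*y + t[0], s*x + c*y + t[1]) for x, y in points]
```

Iterating `icp_step_2d` and composing the increments converges when the initial pose is close, which is exactly why the paper re-renders the CAD model at the latest pose estimate before each alignment.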