892 results for Dimensional Modeling and Virtual Reality
Abstract:
When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing from virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth, user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed by physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral properties of location, orientation, and acceleration, our solution learns camera-motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package, the camera's base path is typically defined by the user via function curves; alternatively, the path can be obtained with a mixed-reality camera in a motion-capture studio. As shown in our experiments, the resulting shots remain fully artist-controlled, but appear richer and more physically plausible.
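To make the spectral idea concrete, here is a minimal numpy sketch of the core operation as the abstract describes it: take the high-frequency band of a tracked physical camera trajectory and superimpose it on a smooth virtual base path. The function names, the sampling rate, and the cutoff frequency are illustrative assumptions, not the authors' actual pipeline (which also learns per-style models and handles orientation and acceleration).

```python
import numpy as np

def highpass(signal, fs, cutoff_hz):
    """Return the high-frequency band of a 1-D signal via FFT masking."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs < cutoff_hz] = 0.0   # discard the smooth, low-frequency part
    return np.fft.irfft(spectrum, n=len(signal))

def stylize(base_path, reference_path, fs=30.0, cutoff_hz=1.0):
    """Add high-frequency detail from a tracked physical camera trajectory
    (e.g., a structure-from-motion estimate) to a smooth base path.
    base_path, reference_path: arrays of shape (n_frames, 3) for (x, y, z)."""
    detail = np.stack([highpass(reference_path[:, k], fs, cutoff_hz)
                       for k in range(reference_path.shape[1])], axis=1)
    n = min(len(base_path), len(detail))
    return base_path[:n] + detail[:n]
```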
Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is strongly neglected. This paper presents a user-test study demonstrating the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we compared impassive and emotional facial expression simulation for their impact on chatting; second, we examined whether people like chatting within a 3D graphical environment. Our model only proposes facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions are used as nonverbal cues, i.e. those related to the emotional model. Motion-capture animations related to hand gestures, such as cleaning glasses, were used at random to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions that are induced from text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from text and depicts it on a virtual character [GAS11]. Second, as our goal was not to address automatic text generation, we compared the impact of nonverbal cues in conversation with a chatbot and with a human operator using a Wizard-of-Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions significantly affect not only the quality of experience but also dialog understanding.
Abstract:
Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has been mostly restricted to single-agent settings, where observed motions are adapted to new environmental conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
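As a rough illustration of the described pipeline (not the authors' implementation), the sketch below extracts low-dimensional motion models with PCA and learns a linear mapping between them with ridge regression; the paper's actual motion models and mapping may well differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def fit_interaction_model(poses_a, poses_b, n_dims=8):
    """poses_a, poses_b: synchronized motion-capture frames of the two
    partners, each of shape (n_frames, n_joint_dofs)."""
    pca_a, pca_b = PCA(n_dims), PCA(n_dims)
    z_a = pca_a.fit_transform(poses_a)         # low-dimensional motion models
    z_b = pca_b.fit_transform(poses_b)
    mapping = Ridge(alpha=1.0).fit(z_a, z_b)   # map partner A's state to B's
    return pca_a, pca_b, mapping

def respond(pose_a, pca_a, pca_b, mapping):
    """Generate the agent's pose in response to the observed partner pose."""
    z_a = pca_a.transform(pose_a.reshape(1, -1))
    z_b = mapping.predict(z_a)
    return pca_b.inverse_transform(z_b)[0]
```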
Abstract:
Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the individual sensors. Tracking sensors, as typically encountered in Mixed Reality applications, are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors by Time Delay Estimation, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time-synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
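The core of such a calibration can be illustrated with classic cross-correlation-based Time Delay Estimation. The sketch below is a generic version assuming two comparable 1-D signals (e.g., resampled position magnitudes from the two trackers), not the UBITRACK implementation.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, rate_hz):
    """Estimate the temporal offset (seconds) between two signals sampled
    at the same rate, via normalized cross-correlation. A positive result
    means sig_a is delayed relative to sig_b (np.correlate convention)."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)   # best-aligning lag in samples
    return lag / rate_hz
```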
Abstract:
This manuscript details a technique for estimating gesture accuracy within the context of motion-based health video games using the Microsoft Kinect. We created a physical therapy game that requires players to imitate clinically significant reference gestures. Player performance is represented by the degree of similarity between the performed and reference gestures, quantified by collecting the Euler angles of the player's gestures, converting them to a three-dimensional vector, and computing the magnitude of the difference between the performed and reference vectors. Lower difference values represent greater gestural correspondence and therefore greater player performance. A group of thirty-one subjects was tested. Subjects achieved gestural correspondence sufficient to complete the game's objectives while also improving their ability to perform reference gestures accurately.
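Read literally, the described measure can be sketched as follows; the array shapes and the use of per-frame magnitudes are assumptions layered on top of the abstract's description.

```python
import numpy as np

def gesture_error(player_euler, reference_euler):
    """Per-frame difference between a performed and a reference gesture.
    Both inputs: arrays of shape (n_frames, 3) of Euler angles (degrees)
    for one joint, as reported by the skeletal tracker.
    Lower values mean greater gestural correspondence."""
    diff = np.asarray(player_euler) - np.asarray(reference_euler)
    return np.linalg.norm(diff, axis=1)   # magnitude of the 3-D difference

def gesture_score(player_euler, reference_euler):
    """Overall score for the gesture: mean difference magnitude across frames."""
    return gesture_error(player_euler, reference_euler).mean()
```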
Abstract:
Immersive virtual environments (IVEs) have the potential to afford natural interaction in the three-dimensional (3D) space around a user. However, interaction performance in 3D mid-air is often reduced and depends on a variety of ergonomic factors: the user's endurance, muscular strength, and fitness. In particular, in contrast to traditional desktop-based setups, users often cannot rest their arms in a comfortable pose during the interaction. In this article we analyze the impact of comfort on 3D selection tasks in an immersive desktop setup. First, in a pre-study, we identified how comfortable or uncomfortable specific interaction positions and poses are for users who are standing upright. Then, we investigated differences in 3D selection task performance when users interact with their hands in a comfortable or uncomfortable body pose while sitting on a chair in front of a table, with the VE displayed on a head-mounted display (HMD). We conducted a Fitts' Law experiment to evaluate selection performance in the different poses. The results suggest that users achieve significantly higher performance in a comfortable pose, when they can rest their elbow on the table.
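For reference, a Fitts' Law analysis in the standard Shannon formulation looks like the sketch below; the abstract does not specify the paper's exact regression procedure, so this is only the conventional form.

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return np.log2(distance / width + 1.0)

def fit_fitts(distances, widths, movement_times):
    """Fit the Fitts' Law model MT = a + b * ID by least squares.
    Returns (a, b): intercept and slope in the units of movement_times."""
    ids = index_of_difficulty(np.asarray(distances), np.asarray(widths))
    b, a = np.polyfit(ids, np.asarray(movement_times), 1)
    return a, b
```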
Abstract:
In order to display a homogeneous image using multiple projectors, differences in the projected intensities must be compensated. In this paper, we present novel approaches that combine and extend existing techniques for edge blending and luminance harmonization to achieve detailed luminance control. Furthermore, we also apply techniques for improving the contrast ratio of multi-segmented displays to black-offset correction. We also present a simple scheme that involves the displayed content in the correction process to dynamically improve the contrast of brighter images. In addition, we present a metric to evaluate the different methods and their influence on visual quality.
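A common form of an edge-blending ramp with gamma correction, given here as a generic sketch rather than the paper's specific formulation: each projector's intensity is attenuated across the overlap so that the two (complementary) linear-light weights sum to one, and the exponent 1/gamma converts the weight into a pixel-value multiplier for a projector with a nonlinear response.

```python
import numpy as np

def blend_weight(x, gamma=2.2):
    """Intensity multiplier across a projector overlap region.
    x in [0, 1]: normalized position inside the overlap (0 = this
    projector's full-contribution edge, 1 = its fade-out edge).
    The S-curve ramp of one projector mirrors that of its neighbor,
    so the linear-light weights sum to 1 everywhere in the overlap."""
    x = np.asarray(x, dtype=float)
    ramp = np.where(x < 0.5, 2.0 * x**2, 1.0 - 2.0 * (1.0 - x)**2)
    return (1.0 - ramp) ** (1.0 / gamma)   # gamma-corrected pixel weight
```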
Abstract:
27-channel EEG potential map series were recorded from 12 normal subjects with eyes closed and eyes open. Intracerebral dipole model source locations in the frequency domain were computed. Eye opening (visual input) caused centralization (convergence and elevation) of the source locations of the seven frequency bands, indicative of generalized activity; in particular, there was clear anteriorization of the α-2 (10.5–12 Hz) and β-2 (18.5–21 Hz) sources (α-2 also shifted to the left). The complexity of the map series' trajectories in state space (assessed by Global Dimensional Complexity and Global OMEGA Complexity) increased significantly with eye opening, indicative of more independent, parallel, active processes. Contrary to PET and fMRI results, these findings suggest that brain activity is more distributed and independent during visual input than after eye closing (when it is more localized and more posterior).
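Global OMEGA Complexity is commonly computed as the entropy of the normalized eigenvalue spectrum of the spatial covariance matrix of the multichannel recording; the sketch below follows that standard definition and is not taken from the paper itself.

```python
import numpy as np

def omega_complexity(eeg):
    """OMEGA complexity of a multichannel EEG epoch.
    eeg: array (n_samples, n_channels), average-referenced.
    Returns a value between 1 (a single global process) and
    n_channels (fully independent channels)."""
    centered = eeg - eeg.mean(axis=0)
    cov = centered.T @ centered / len(eeg)     # spatial covariance matrix
    eigvals = np.linalg.eigvalsh(cov)
    lam = eigvals[eigvals > 0]
    lam = lam / lam.sum()                      # normalized eigenvalue spectrum
    return float(np.exp(-(lam * np.log(lam)).sum()))
```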
Abstract:
OBJECTIVES To evaluate prosthetic parameters in the edentulous anterior maxilla for decision making between fixed and removable implant prostheses using virtual planning software. MATERIAL AND METHODS CT or DVT scans of 43 patients (mean age 62 ± 8 years) with an edentulous maxilla were analyzed with the NobelGuide software. Implants (≥3.5 mm diameter, ≥10 mm length) were virtually placed in the optimal three-dimensional prosthetic position of all maxillary front teeth. Anatomical and prosthetic landmarks, including the cervical crown point (C-Point), the acrylic flange border (F-Point), and the buccal end of the implant platform (I-Point), were defined in each middle section to determine four measuring parameters: (1) acrylic flange height (FLHeight), (2) mucosal coverage (MucCov), (3) crown-implant distance (CID), and (4) buccal prosthesis profile (ProsthProfile). Based on these parameters, all patients were assigned to one of three classes: (A) MucCov ≤ 0 mm and ProsthProfile ≥ 45°, allowing for a fixed prosthesis; (B) MucCov 0–5 mm and/or ProsthProfile 30°–45°, probably allowing for a fixed prosthesis; and (C) MucCov ≥ 5 mm and/or ProsthProfile ≤ 30°, where a removable prosthesis is favorable. Statistical analyses included descriptive methods and non-parametric tests. RESULTS Mean values were 10.0 mm for FLHeight, 5.6 mm for MucCov, 7.4 mm for CID, and 39.1° for ProsthProfile. Seventy percent of patients fulfilled class C criteria (removable), 21% class B (probably fixed), and 2% class A (fixed), while in 7% (three patients) bone volume was insufficient for implant planning. CONCLUSIONS The proposed classification and virtual planning procedure simplify the decision-making process regarding the type of prosthesis and increase the predictability of esthetic treatment outcomes. It was demonstrated that in the majority of cases, the space between the prosthetic crown and the implant platform had to be filled with prosthetic materials.
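The stated class criteria translate directly into a small decision rule. How ties at the class boundaries are resolved (e.g., MucCov exactly 5 mm) is an assumption here, since the abstract's ranges overlap at the edges; class C is checked before B so that the less favorable criteria dominate.

```python
def classify_prosthesis(muc_cov_mm, prosth_profile_deg):
    """Assign the study's class from the two key parameters.
    A: fixed prosthesis possible; B: probably fixed; C: removable favorable."""
    if muc_cov_mm <= 0.0 and prosth_profile_deg >= 45.0:
        return "A"
    if muc_cov_mm >= 5.0 or prosth_profile_deg <= 30.0:
        return "C"
    return "B"   # MucCov 0-5 mm and/or ProsthProfile 30-45 degrees
```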
Abstract:
When observers are presented with two visual targets appearing in the same position in close temporal proximity, a marked reduction in detection performance for the second target has often been reported, the so-called attentional blink (AB) phenomenon. Several studies found a decrement of P300 amplitudes during the attentional blink period similar to that observed in detection performance for the second target. However, whether the parallel courses of second-target performance and corresponding P300 amplitudes result from the same underlying mechanisms has remained unclear. The aim of our study was therefore to investigate whether the mechanisms underlying the AB can be assessed by fixed-links modeling, and whether this kind of assessment would reveal the same, or at least related, processes in the behavioral and electrophysiological data. On both levels of observation, three highly similar processes could be identified: an increasing, a decreasing, and a u-shaped trend. Corresponding processes from the behavioral and electrophysiological data were substantially correlated, with the two u-shaped trends showing the strongest association with each other. Our results provide evidence for the assumption that the same mechanisms underlie attentional blink task performance at the electrophysiological and behavioral levels, as assessed by fixed-links models.
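Fixed-links models are structural equation models in which the factor loadings are fixed a priori to prescribed trend shapes rather than estimated freely. Purely as an illustration of that idea (the paper's actual loading vectors are not given in the abstract), the three reported trends over the lag positions could be encoded like this:

```python
import numpy as np

def fixed_links_loadings(n_lags):
    """Fixed factor loadings for three latent processes over the lag
    positions of an attentional blink task (n_lags >= 2): an increasing,
    a decreasing, and a u-shaped trend. Columns are the loading vectors
    that are held fixed when fitting the model."""
    t = np.linspace(-1.0, 1.0, n_lags)
    increasing = (t - t.min()) / (t.max() - t.min())   # 0 -> 1
    decreasing = increasing[::-1]                      # 1 -> 0
    u_shaped = t**2                                    # high-low-high
    return np.column_stack([increasing, decreasing, u_shaped])
```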
Abstract:
BACKGROUND "The feeling of being there" is one possible way to describe the phenomenon of feeling present in a virtual environment and acting as if this environment were real. One brain area hypothesized to be critically involved in modulating this feeling (also called presence) is the dorsolateral prefrontal cortex (dlPFC), an area also associated with the control of impulsive behavior. METHODS In our experiment we applied transcranial direct current stimulation (tDCS) to the right dlPFC in order to modulate the experience of presence while watching a virtual roller coaster ride. During the ride we also recorded electrodermal activity. Subjects also performed a test measuring impulsiveness and answered a questionnaire about their feeling of presence while they were exposed to the virtual roller coaster scenario. RESULTS Application of cathodal tDCS to the right dlPFC while subjects were exposed to the virtual roller coaster scenario modulated the electrodermal response to the virtual reality stimulus. In addition, measures reflecting impulsiveness were also modulated by the application of cathodal tDCS to the right dlPFC. CONCLUSION Modulating the activation of the right dlPFC results in substantial changes in the responses of the vegetative nervous system and in changed impulsiveness. These effects can be explained by theories discussing the top-down influence of the right dlPFC on the "impulsive system".
Abstract:
The aim of this study was to validate oxygen-sensitive 3He-MRI for the noninvasive determination of the regional two- and three-dimensional distribution of oxygen partial pressure. In a gas-filled elastic silicone ventilation bag used as a lung phantom, oxygen-sensitive two- and three-dimensional 3He-MRI measurements were performed at different oxygen concentrations, which had been equilibrated over a range of normal and pathologic values. The oxygen partial pressure distribution was determined from 3He-MRI using newly developed software allowing for mapping of oxygen partial pressure. The reference bulk oxygen partial pressure inside the phantom was measured by conventional respiratory gas analysis. In two-dimensional measurements, image-based and gas-analysis results correlated with r=0.98; in three-dimensional measurements, the between-methods correlation coefficient was r=0.89. The signal-to-noise ratio of the three-dimensional measurements was about half that of the two-dimensional measurements and became critical (below 3) in some data sets. Oxygen-sensitive 3He-MRI allows for noninvasive determination of the two- and three-dimensional distribution of oxygen partial pressure in gas-filled airspaces.
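Such mapping software typically fits the per-voxel decay of the hyperpolarized 3He signal, whose longitudinal relaxation rate is approximately proportional to the local oxygen partial pressure. The sketch below uses the commonly cited model and constant (xi ≈ 2.61 bar·s at body temperature) and a simplified RF-depletion correction; the pulse-counting convention may differ from the one used here, and this is a generic reconstruction, not the authors' newly developed software.

```python
import numpy as np

def fit_po2(signals, times, flip_angle_rad, xi_bar_s=2.61):
    """Per-voxel oxygen partial pressure (bar) from a hyperpolarized 3He
    image series, assuming S(t_n) = S0 * cos(alpha)^n * exp(-pO2 * t_n / xi).
    signals: array (n_images, n_voxels); times: acquisition times (s)."""
    n = np.arange(len(times))
    # remove the known RF-depletion term, then fit the exponential decay
    log_s = np.log(signals) - n[:, None] * np.log(np.cos(flip_angle_rad))
    slope = np.polyfit(times, log_s, 1)[0]     # per-voxel decay rate
    return -slope * xi_bar_s                   # pO2 map (bar)
```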
Abstract:
At first sight, experimenting and modeling form two distinct modes of scientific inquiry. This spurs philosophical debates about how the distinction should be drawn (e.g. Morgan 2005, Winsberg 2009, Parker 2009). But much scientific practice casts serious doubt on the idea that the distinction makes much sense. There are two worries. First, the practices of modeling and experimenting are often intertwined in intricate ways, because much modeling involves experimenting, and the interpretation of many experiments relies upon models. Second, there are borderline cases that seem to blur the distinction between experiment and model (if there is any). My talk tries to defend the philosophical project of distinguishing models from experiments and to advance the related philosophical debate. I begin by providing a minimalist framework for conceptualizing experimenting and modeling and their mutual relationships. The methods are conceptualized as different types of activities, each characterized by a primary goal. The minimalist framework, which should be uncontroversial, suffices to accommodate the first worry. I address the second worry by suggesting several ways to conceptualize the distinction in a more flexible manner, and I make a concrete suggestion of how the distinction may be drawn. I use examples from the history of science to argue my case. The talk concentrates on models and experiments, but I will comment on simulations too.
Abstract:
Patients with amnestic mild cognitive impairment (MCI) are at high risk of developing Alzheimer's disease. Besides episodic memory dysfunction, they show deficits in accessing the contextual knowledge that further specifies a general spatial navigation task or an executive function (EF) such as virtual action planning. Virtual reality (VR) environments have already been used successfully in cognitive rehabilitation and show increasing potential for use in neuropsychological evaluation, allowing for greater ecological validity while being more engaging and user friendly. In our study we employed the in-house virtual action planning museum (VAP-M) platform and a sample of 25 MCI patients and 25 controls, in order to investigate deficits in spatial navigation, prospective memory, and executive function. In addition, we used the morphology of late components in event-related potential (ERP) responses as a marker for cognitive dysfunction. The related measurements were fed to a common classification scheme, facilitating the direct comparison of both approaches. Our results indicate that both the VAP-M and ERP averages were able to differentiate between healthy elders and patients with amnestic mild cognitive impairment, and agree with the findings of the virtual action planning supermarket (VAP-S). The sensitivity (specificity) was 100% (98%) for the VAP-M data and 87% (90%) for the ERP responses. Considering that ERPs have been shown to advance the early detection and diagnosis of "presymptomatic AD," the suggested VAP-M platform appears to be an appealing alternative.
Abstract:
Humans possess a highly developed sensitivity for facial features. This sensitivity is also deployed toward non-human beings and inanimate objects such as cars. In the present study we aimed to investigate whether car design has a bearing on the behaviour of pedestrians. Methods: An immersive virtual reality environment with a zebra crossing was used to determine a) whether the minimum accepted distance for crossing the street is bigger for cars with a dominant appearance than for cars with a friendly appearance (Block 1) and b) whether the speed of dominant cars is overestimated compared to friendly cars (Block 2). In Block 1, the participants' task was to cross the road in front of an approaching car at the latest possible moment. The points in time when entering and leaving the street were measured. In Block 2, participants were asked to estimate the speed of each passing car. An independent sample rated the dominant cars as more dominant, angry, and hostile than the friendly cars. Results: None of the predictions regarding car design was confirmed. Instead, there was an effect of starting position: from the centre island, participants entered the road significantly later (smaller accepted distance) and left the road later than when starting from the pavement. Consistently, the speed of the cars was estimated to be significantly lower when standing on the centre island than on the pavement. When entering the visual size of the cars as a factor (instead of dominance), we found that participants started to cross the road significantly later in front of small cars than in front of big cars, and that the speed of smaller cars was overestimated compared to big cars (size-speed bias). Conclusions: Car size and starting position, not car design, seem to influence road-crossing behaviour.