Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is largely neglected. This paper presents a user study demonstrating the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we compared impassive and emotional facial expression simulation for their impact on chatting; second, we examined whether people enjoy chatting within a 3D graphical environment. Our model proposes only facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions related to the emotional model are used as nonverbal cues. Motion-captured animations of hand gestures, such as cleaning glasses, were played randomly to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions induced from the text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from text and depicts it on a virtual character [GAS11]. Second, as our goal was not the automatic generation of text, we compared the impact of nonverbal cues in conversation with a chatbot versus with a human operator, using a Wizard of Oz approach. Among the main results, the within-group study (involving 40 subjects) suggests that subtle facial expressions significantly impact not only the quality of experience but also dialog understanding.
Abstract:
This paper reports on a Virtual Reality theater experiment named Il était Xn fois, conducted by artists and computer scientists working in cognitive science. It offered the opportunity for the exchange of knowledge and ideas between these groups, highlighting the benefits of this kind of collaboration. Section 1 explains the link between enaction in cognitive science and virtual reality, and specifically the need to develop an autonomous entity that enhances presence in an artificial world. Section 2 argues that enactive artificial intelligence is able to produce such autonomy. This was demonstrated by the theatrical experiment "Il était Xn fois" (in English: Once upon Xn time), explained in Section 3. Its first public performance was given in 2009 by the company Dérézo. The last section offers the view that enaction can form a common ground between the artistic and computer science communities.
Abstract:
Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize the overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from the several sensors. However, the tracking sensors typically encountered in Mixed Reality applications are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using Time Delay Estimation, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and feasibility of this approach, we examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
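As an illustration of the Time Delay Estimation idea mentioned above, here is a minimal sketch (not the paper's implementation) that estimates the offset between two equally sampled 1D signals from the peak of their normalized cross-correlation; the signal names and sample rate are illustrative assumptions.

```python
import numpy as np

def estimate_time_delay(sig_a, sig_b, sample_rate_hz):
    """Estimate the temporal offset between two equally sampled 1D
    signals via the peak of their normalized cross-correlation."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)  # 0 means already aligned
    return lag / sample_rate_hz           # offset in seconds

# Example: sig_b is sig_a delayed by 5 samples at 200 Hz.
t = np.linspace(0.0, 1.0, 200)
sig_a = np.exp(-((t - 0.5) ** 2) / 0.01)   # a smooth pulse
sig_b = np.roll(sig_a, 5)
print(estimate_time_delay(sig_a, sig_b, 200.0))  # ~ -0.025 (sig_b lags by 25 ms)
```

In an on-line setting one would feed in a shared motion component observed by both trackers (e.g., position along one axis) and re-estimate the lag over a sliding window.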
Abstract:
The competitive industrial context compels companies to speed up the design of every new product. In order to keep designing products that meet the needs of the end user, a human-centered concurrent product design methodology has been proposed. Its adoption is complicated by the difficulties of collaboration between the experts involved in the design process. In order to ease this collaboration, we propose the use of virtual reality as an intermediate design representation in the form of lightweight, specialized immersive convergence-support applications. In this paper, we present the As Soon As Possible (ASAP) methodology, which makes the development of these tools possible while ensuring their usefulness and usability. The relevance of this approach is validated by an industrial use case through the design of an ergonomic-style convergence-support tool.
Abstract:
We present in this paper several contributions to collision detection optimization centered on hardware performance. We focus on the broad phase, the first step of the collision detection process, and propose three new parallelizations of the well-known Sweep and Prune algorithm. We first developed a multi-core model that takes into account the number of available cores. The multi-core architecture enables us to distribute geometric computations using multi-threading. Critical writing sections and thread idling have been minimized by introducing new data structures for each thread. Programming with directives, as with OpenMP, appears to be a good compromise for code portability. We then propose a new GPU-based algorithm, also based on Sweep and Prune, that has been adapted to multi-GPU architectures. Our technique uses a spatial subdivision method to distribute computations among GPUs. Results show that a significant speed-up can be obtained by scaling from one to four GPUs in a large-scale environment.
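For readers unfamiliar with the broad phase, here is a minimal single-threaded sketch of the classic Sweep and Prune idea that the paper parallelizes across cores and GPUs; the AABB representation as (min, max) corner tuples is an assumption for illustration.

```python
def sweep_and_prune(aabbs):
    """Broad-phase collision detection: sort axis-aligned bounding boxes
    (AABBs) by minimum x, then sweep along x, keeping an active list of
    boxes whose x-intervals still overlap; candidate pairs are confirmed
    on the remaining axes."""
    # Each AABB is (min_xyz, max_xyz), both 3-tuples.
    order = sorted(range(len(aabbs)), key=lambda i: aabbs[i][0][0])
    pairs, active = [], []
    for i in order:
        min_i, max_i = aabbs[i]
        # Drop boxes whose x-interval ends before this one starts.
        active = [j for j in active if aabbs[j][1][0] >= min_i[0]]
        for j in active:
            min_j, max_j = aabbs[j]
            # x already overlaps; check y and z.
            if (min_i[1] <= max_j[1] and min_j[1] <= max_i[1] and
                    min_i[2] <= max_j[2] and min_j[2] <= max_i[2]):
                pairs.append((j, i))
        active.append(i)
    return pairs

boxes = [((0, 0, 0), (2, 2, 2)), ((1, 1, 1), (3, 3, 3)), ((5, 5, 5), (6, 6, 6))]
print(sweep_and_prune(boxes))  # [(0, 1)]
```

The inner sweep over the active list is the part that multi-core and multi-GPU variants distribute, e.g., by splitting the sorted sequence or the space into regions handled by separate threads or devices.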
Abstract:
Recent developments in the area of interactive entertainment have suggested combining stereoscopic visualization with multi-touch displays, which has the potential to open up new vistas for natural interaction with interactive three-dimensional (3D) applications. However, the question arises of how the user interfaces for system control in such 3D setups should be designed in order to provide an effective user experience. In this article we introduce 3D GUI widgets for interaction with stereoscopic touch displays. The design of the widgets was inspired by skeuomorphism and affordances, such that the user should be able to operate the virtual objects in the same way as their real-world equivalents. We evaluated the developed widgets and compared them with their 2D counterparts in the scope of an example application in order to analyze their usability and the user behavior they elicit. The results reveal differences in user behavior with and without stereoscopic display during touch interaction, and show that both the 2D and the 3D GUI widgets can be used effectively in different applications.
Abstract:
This manuscript details a technique for estimating gesture accuracy within the context of motion-based health video games using the Microsoft Kinect. We created a physical therapy game that requires players to imitate clinically significant reference gestures. Player performance is represented by the degree of similarity between the performed and reference gestures; it is quantified by collecting the Euler angles of the player's gestures, converting them to a three-dimensional vector, and comparing the magnitude of the difference between the vectors. Lower difference values represent greater gestural correspondence and therefore greater player performance. A group of thirty-one subjects was tested. Subjects achieved gestural correspondence sufficient to complete the game's objectives while also improving their ability to perform the reference gestures accurately.
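A minimal sketch of the similarity scoring described above, assuming per-joint Euler angle triples are compared as 3D vectors and the per-joint difference magnitudes are averaged (the paper's exact aggregation may differ):

```python
import numpy as np

def gesture_difference(player_angles, reference_angles):
    """Score a performed gesture against a reference: each pose is a set
    of per-joint Euler angles (roll, pitch, yaw) treated as 3D vectors;
    the score is the mean magnitude of the per-joint difference vectors.
    Lower values indicate closer correspondence."""
    p = np.asarray(player_angles, dtype=float)    # shape (n_joints, 3)
    r = np.asarray(reference_angles, dtype=float)
    return float(np.mean(np.linalg.norm(p - r, axis=1)))

# Example: a two-joint pose a few degrees off the reference.
player = [[10.0, 5.0, 0.0], [45.0, 0.0, 90.0]]
reference = [[12.0, 5.0, 0.0], [45.0, 2.0, 88.0]]
print(gesture_difference(player, reference))  # small value -> good match
```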
Abstract:
Wind and warmth sensations have proven able to enhance users' sense of presence in Virtual Reality applications. Still, only a few projects deal with their detailed effect on the user and with general ways of implementing such stimuli. This work tries to fill this gap: after analyzing hardware and software requirements for wind and warmth simulation, a hardware and software setup for use in a CAVE environment is proposed. The setup is evaluated with regard to technical details and requirements, but also, in the form of a pilot study, with a view to user experience and presence. Our setup proved to comply with the requirements and leads to satisfactory results. To our knowledge, the low-cost simulation system (approx. 2,200 Euro) presented here is one of the most extensive, most flexible, and best-evaluated systems for creating wind and warmth stimuli in CAVE-based VR applications.
Abstract:
Skin segmentation is a challenging task due to several influences such as unknown lighting conditions, skin-colored backgrounds, and camera limitations. Many skin segmentation approaches have been proposed in the past, including adaptive (in the sense of updating the skin color online) and non-adaptive approaches. In this paper, we compare three skin segmentation approaches that promise to work well for hand tracking, which is our main motivation for this work. Hand tracking has wide application in VR/AR, e.g., for navigation and object manipulation. The first skin segmentation approach is a well-known non-adaptive approach based on a simple, pre-computed skin color distribution. The second and third approaches adaptively estimate the skin color in each frame using clustering algorithms: the second uses hierarchical clustering for simultaneous image and color-space segmentation, while the third is a pure color-space clustering with a more sophisticated clustering method. For evaluation, we compared the segmentation results of the approaches against a ground-truth dataset. To obtain the ground-truth dataset, we labeled about 500 images captured under various conditions.
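As a rough illustration of per-frame adaptive skin-color estimation via color-space clustering (a generic sketch, not the specific hierarchical or more sophisticated clustering methods compared in the paper), one might cluster chroma values and keep the cluster nearest a canonical skin tone; the YCrCb reference values here are assumptions.

```python
import numpy as np

def adaptive_skin_mask(frame_ycrcb, k=2, iters=10, seed=0):
    """Adaptive skin segmentation sketch: cluster the Cr/Cb chroma of a
    YCrCb frame with a tiny k-means and keep the cluster whose center is
    closest to an assumed canonical skin chroma (Cr~150, Cb~110)."""
    h, w, _ = frame_ycrcb.shape
    chroma = frame_ycrcb[:, :, 1:3].reshape(-1, 2).astype(float)
    rng = np.random.default_rng(seed)
    centers = chroma[rng.choice(len(chroma), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(chroma[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers from the current assignment.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = chroma[labels == c].mean(axis=0)
    skin = np.linalg.norm(centers - np.array([150.0, 110.0]), axis=1).argmin()
    return (labels == skin).reshape(h, w)  # boolean skin mask
```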
Abstract:
Computer science classes, and programming classes in particular, are today an important part of school education. Simplified development environments that abstract typical programming concepts into graphical building blocks support this trend. Additional appeal is created by the use of exotic runtime environments (e.g., robots). The platform "ScratchDrone" presented in this paper complements these offerings by introducing a modern flying drone as an innovative runtime environment for Scratch programs. Thanks to a modular system architecture, programming can take place at different levels of abstraction, depending on the students' learning progress. Combined with a multi-stage didactic model, the challenge of movement in 3D space, and the natural human fascination with flying, a high level of learning motivation is achieved among young programming beginners.
Abstract:
Immersive virtual environments (IVEs) have the potential to afford natural interaction in the three-dimensional (3D) space around a user. However, interaction performance in 3D mid-air is often reduced and depends on a variety of ergonomic factors, such as the user's endurance, muscular strength, and fitness. In particular, in contrast to traditional desktop-based setups, users often cannot rest their arms in a comfortable pose during the interaction. In this article we analyze the impact of comfort on 3D selection tasks in an immersive desktop setup. First, in a pre-study we identified how comfortable or uncomfortable specific interaction positions and poses are for users who are standing upright. Then, we investigated differences in 3D selection task performance when users interact with their hands in a comfortable or an uncomfortable body pose while sitting on a chair in front of a table, with the VE displayed on a head-mounted display (HMD). We conducted a Fitts' Law experiment to evaluate selection performance in the different poses. The results suggest that users achieve significantly higher performance in a comfortable pose in which they can rest their elbow on the table.
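For reference, these are the quantities conventionally computed in a Fitts' Law selection experiment, in a minimal sketch using the Shannon formulation (the paper's exact analysis may differ):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D / W + 1), for target distance D and target width W."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput in bits/s for one selection trial: ID / MT."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 30 cm reach to a 3 cm target completed in 1.2 s.
print(index_of_difficulty(30, 3))  # ~3.46 bits
print(throughput(30, 3, 1.2))      # ~2.88 bits/s
```

Comparing throughput between the comfortable and uncomfortable poses then gives a single performance measure that accounts for both speed and task difficulty.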
Abstract:
Human behavior is a major factor modulating the consequences of road tunnel accidents. We investigated the effect of information and instruction on drivers' behavior, as well as the usability of virtual environments for simulating such emergency situations. Tunnel safety knowledge of the general population was assessed using an online questionnaire, and tunnel safety behavior was investigated in a virtual reality experiment. Forty-four participants completed three drives through a virtual road tunnel in which they were confronted with a traffic jam, no event, and an accident blocking the road. Participants were randomly assigned to a control group (no intervention), an informed group who read a brochure containing safety information prior to the tunnel drives, or an informed and instructed group who read the same brochure and received additional instructions during the emergency situation. Informed participants showed better and quicker safety behavior than the control group. Self-reports of anxiety were assessed three times during each drive; anxiety was elevated during and after the emergency situation. The findings demonstrate problematic safety behavior in the control group and show that knowledge of safety information fosters adequate behavior in tunnel emergencies. The elevated anxiety ratings during the emergency situation indicate the external validity of the virtual environment.
Abstract:
In order to display a homogeneous image using multiple projectors, differences in the projected intensities must be compensated. In this paper, we present novel approaches that combine and extend existing techniques for edge blending and luminance harmonization to achieve detailed luminance control. Furthermore, we apply techniques for improving the contrast ratio of multi-segment displays to the black offset correction as well. We also present a simple scheme that involves the displayed content in the correction process to dynamically improve the contrast of brighter images. In addition, we present a metric to evaluate the different methods and their influence on visual quality.
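As a small illustration of the edge-blending building block named above, here is a sketch of a gamma-compensated blend ramp for one projector's overlap region (a common textbook approach, not necessarily the paper's combined method); the gamma value and pixel dimensions are assumptions.

```python
import numpy as np

def blend_ramp(width_px, overlap_px, gamma=2.2):
    """Per-column blend weights for one projector whose right edge
    overlaps its neighbor by `overlap_px` columns. The falloff is linear
    in emitted light, so the ramp is pre-compensated for display gamma
    (output luminance ~ input ** gamma)."""
    w = np.ones(width_px)
    t = np.linspace(1.0, 0.0, overlap_px)            # linear light falloff
    w[width_px - overlap_px:] = t ** (1.0 / gamma)   # gamma pre-compensation
    return w

weights = blend_ramp(1920, 200)
print(weights[0], weights[-1])  # 1.0 in the interior, 0.0 at the edge
```

The neighboring projector applies the mirrored ramp, so the two emitted luminances sum to a constant across the overlap.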
Abstract:
In recent years, depth cameras have been widely utilized for camera tracking in augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant to inaccurate initialization, and tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP which efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that our approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
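To make the alignment step concrete, here is a minimal point-to-point ICP iteration with a closed-form SVD solve (a generic CPU sketch; the paper uses a GPGPU implementation operating on point clouds derived from rendered and captured depth maps).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest destination point, then solve for the best rigid transform
    (R, t) in closed form via SVD (Kabsch, no scaling).
    src, dst: (N, 3) and (M, 3) arrays of 3D points."""
    matches = cKDTree(dst).query(src)[1]   # nearest-neighbor indices
    d = dst[matches]
    src_c, d_c = src.mean(axis=0), d.mean(axis=0)
    H = (src - src_c).T @ (d - d_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = d_c - R @ src_c
    return src @ R.T + t, R, t             # transformed points, pose update
```

Iterating this step to convergence, seeded from the previous frame's pose, yields the incremental pose update described in the abstract.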