13 results for swd: Human computer interaction
in Digital Peer Publishing
Abstract:
The grasping of virtual objects has been an active research field for several years. Solutions providing realistic grasping rely on special hardware or require time-consuming parameterizations. Therefore, we introduce a flexible grasping algorithm enabling grasping without computationally complex physics. Objects can be grasped and manipulated with multiple fingers. In addition, multiple objects can be manipulated simultaneously with our approach. Through the use of contact sensors, the technique is easily configurable and versatile enough to be used in different scenarios.
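A minimal sketch of what a contact-sensor grasp heuristic could look like, assuming (not stated in the abstract) that an object counts as grasped once at least two distinct fingers report contact with it; the threshold, class names, and data layout are illustrative only, not the authors' algorithm.

```python
# Sketch: contact-sensor grasp heuristic (assumed details, not the paper's algorithm).
from dataclasses import dataclass, field

@dataclass
class ContactSensor:
    finger: str
    touching: set = field(default_factory=set)  # ids of objects currently in contact

class GraspController:
    MIN_CONTACTS = 2  # assumed threshold for a stable grasp

    def __init__(self, sensors):
        self.sensors = sensors
        self.grasped = set()

    def update(self):
        # Count how many distinct fingers touch each object.
        contacts = {}
        for s in self.sensors:
            for obj_id in s.touching:
                contacts.setdefault(obj_id, set()).add(s.finger)
        # Objects with enough finger contacts are grasped; all others are released.
        self.grasped = {o for o, fingers in contacts.items()
                        if len(fingers) >= self.MIN_CONTACTS}
        return self.grasped
```

Because each finger carries its own sensor, multiple hands can contribute contacts to different objects at once, which is one simple way to support simultaneous manipulation of several objects.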
Abstract:
Tracking a user’s visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user’s visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence, they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive depth information as required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used during the disambiguation of partly occluded objects.
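One common way to derive a 3D fixation point from binocular gaze data, not necessarily either of the algorithms compared in the article, is to intersect the two gaze rays approximately by taking the midpoint of their closest approach; the sketch below assumes gaze rays given as origin and direction vectors.

```python
# Sketch: 3D fixation point from two gaze rays via closest-approach midpoint.
import numpy as np

def fixation_point(origin_l, dir_l, origin_r, dir_r):
    """Midpoint of the shortest segment between the left and right gaze rays."""
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:           # near-parallel rays: depth is unreliable
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = origin_l + s * d1          # closest point on the left gaze ray
    p2 = origin_r + t * d2          # closest point on the right gaze ray
    return (p1 + p2) / 2.0
```

The vergence between the two rays is what carries the depth information that a single monocular tracker cannot provide.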
Abstract:
Recently, stable markerless 6 DOF video-based hand-tracking devices became available. These devices simultaneously track the positions and orientations of both user hands in different postures with at least 25 frames per second. Such hand tracking allows the human hands to be used as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, for coupling a virtual object’s movements to the user’s hand and decoupling them again (i.e. grabbing and releasing), a solution has to be found. In this paper, we introduce a novel technique for efficiently grabbing and releasing objects with two hands and intuitively manipulating them in the virtual space. This technique is integrated in a novel 3D interface for virtual manipulations. A user experiment shows the superior applicability of this new technique. Last but not least, we describe how this technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
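To make the button-free coupling problem concrete, here is a hedged sketch of a generic pinch-based grab/release state machine with hysteresis; this is not the technique introduced in the paper, and the thresholds and data layout are assumptions.

```python
# Sketch: button-free grab/release via a pinch gesture with hysteresis (generic, assumed).
import numpy as np

GRAB_DIST = 0.02     # metres; assumed pinch-close threshold
RELEASE_DIST = 0.05  # larger threshold avoids flickering between states

class PinchGrab:
    def __init__(self):
        self.held = None  # id of the object currently coupled to the hand

    def update(self, thumb_tip, index_tip, scene_objects):
        """scene_objects: dict mapping object id -> position as a numpy array."""
        pinch = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
        hand_pos = (np.asarray(thumb_tip) + np.asarray(index_tip)) / 2
        if self.held is None and pinch < GRAB_DIST and scene_objects:
            # Couple the nearest object to the hand.
            self.held = min(scene_objects,
                            key=lambda o: np.linalg.norm(scene_objects[o] - hand_pos))
        elif self.held is not None and pinch > RELEASE_DIST:
            self.held = None
        if self.held is not None:
            scene_objects[self.held] = hand_pos  # object follows the hand
        return self.held
```

Using two different thresholds for grabbing and releasing is a standard way to keep the coupling stable despite per-frame tracking noise.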
Abstract:
In this paper, we provide a framework that enables the rapid development of applications using non-standard input devices. Flash is chosen as the programming language since it can be used for quickly assembling applications. We overcome Flash's difficulty in accessing external devices by introducing a very generic concept: the state information generated by input devices is transferred to a PC, where a program collects it, interprets it, and makes it available on a web server. Application developers can now integrate a Flash component that accesses the data stored in XML format and use it directly in their application.
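The PC-side bridge described here can be illustrated with a minimal sketch: a tiny HTTP server that serves the latest device state as XML for a Flash component to poll. The element names, fields, and port below are assumptions, not the framework's actual schema.

```python
# Sketch: serve input-device state as XML over HTTP (assumed schema and port).
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from xml.sax.saxutils import escape

state = {"x": "0", "y": "0", "button": "up"}  # updated by a device-reader thread
lock = threading.Lock()

def to_xml(values):
    fields = "".join(f"<{k}>{escape(v)}</{k}>" for k, v in values.items())
    return f"<device>{fields}</device>"

class StateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with lock:
            body = to_xml(state).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StateHandler).serve_forever()
```

A Flash (or any other web) client then only needs to fetch and parse this XML document periodically, which is what makes the concept generic across devices.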
Abstract:
This article illustrates the detection of 6 degrees of freedom (DOF) for Virtual Environment interactions using a modified simple laser pointer device and a camera. The laser pointer is combined with a diffraction grating to project a unique laser grid onto the projection planes used in projection-based immersive VR setups. The distortion of the projected grid is used to calculate the translational and rotational degrees of freedom required for human-computer interaction purposes.
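As a rough illustration of recovering 6 DOF from the observed distortion of a known pattern, the sketch below uses a generic perspective-n-point solve with OpenCV; this is not the calculation described in the article, and the grid geometry and camera intrinsics are placeholders.

```python
# Sketch: generic pose-from-correspondences (PnP), not the article's method.
import cv2
import numpy as np

def pose_from_grid(image_points, grid_points_3d, camera_matrix, dist_coeffs=None):
    """image_points: Nx2 detected laser spots; grid_points_3d: Nx3 reference grid layout."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(grid_points_3d, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        camera_matrix,
        dist_coeffs if dist_coeffs is not None else np.zeros(5),
    )
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix
    return rotation, tvec               # the six degrees of freedom
```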
Abstract:
Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run on a 7x4 tile, 21 megapixel display wall with good performance. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to their sequential counterparts running on the display wall using Chromium. Parallel Q3A’s framerate is an order of magnitude higher than when using Chromium. The parallel version of Homeworld performed on par with its sequential counterpart, which did not run at all under Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.
Abstract:
Electronic appliances are increasingly a part of our everyday lives. In particular, mobile devices, with their reduced dimensions and power rivaling desktop computers, have substantially augmented our communication abilities, offering instant availability, anywhere, to everyone. These devices have become essential for human communication but also include a more comprehensive tool set to support productivity and leisure applications. However, the many applications commonly available are not adapted to people with special needs. Rather, most popular devices are targeted at teenagers or young adults with excellent eyesight and coordination. What is worse, most of the commonly used assistive control interfaces are not available in a mobile environment, where the user's position, accommodation and capacities can vary widely. To address people with special needs, new approaches and techniques are sorely needed. This paper presents a control interface to allow tetraplegic users to interact with electronic devices. Our method uses myographic information (Electromyography or EMG) collected from residually controlled body areas. User evaluations validate electromyography as a daily wearable interface. In particular, our results show that EMG can be used even in mobility contexts.
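A hedged sketch of how a raw EMG channel can be turned into a binary "click" event is shown below, using a common envelope-threshold scheme; the paper's actual processing chain is not detailed in the abstract, and the sampling rate, window length, and threshold are assumptions.

```python
# Sketch: detect muscle-contraction onsets in an EMG signal (assumed parameters).
import numpy as np

def emg_clicks(samples, fs=1000, win_ms=150, threshold=0.3):
    """Return sample indices where a contraction onset ("click") is detected."""
    x = np.abs(samples - np.mean(samples))               # rectify around baseline
    win = max(1, int(fs * win_ms / 1000))
    envelope = np.convolve(x, np.ones(win) / win, mode="same")  # moving-average smoothing
    envelope = envelope / (envelope.max() + 1e-12)        # normalise to [0, 1]
    active = envelope > threshold
    onsets = np.flatnonzero(active[1:] & ~active[:-1]) + 1  # rising edges only
    return onsets
```

In a wearable setting the threshold would typically be calibrated per user and per electrode site, since residual muscle control varies considerably between tetraplegic users.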
Abstract:
Incorporating physical activity and exertion into pervasive gaming applications can provide health and social benefits. Prior research has resulted in several prototypes of pervasive games that encourage exertion as a form of interaction; however, no detailed critical account of the various approaches exists. We focus on networked exertion games and detail some of our work while identifying the remaining issues towards providing a coherent framework. We outline common lessons learned and use them as the basis for generalizations about the design of networked exertion games. We propose possible directions of further investigation, hoping to provide guidance for future work to facilitate greater awareness and exposure of exertion games and their benefits.
Resumo:
In this paper, we propose the use of specific system architecture, based on mobile device, for navigation in urban environments. The aim of this work is to assess how virtual and augmented reality interface paradigms can provide enhanced location based services using real-time techniques in the context of these two different technologies. The virtual reality interface is based on faithful graphical representation of the localities of interest, coupled with sensory information on the location and orientation of the user, while the augmented reality interface uses computer vision techniques to capture patterns from the real environment and overlay additional way-finding information, aligned with real imagery, in real-time. The knowledge obtained from the evaluation of the virtual reality navigational experience has been used to inform the design of the augmented reality interface. Initial results of the user testing of the experimental augmented reality system for navigation are presented.
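For the augmented reality side, a hedged sketch of detecting a known reference pattern in a camera frame and anchoring overlay graphics to it via a homography is given below; this is generic OpenCV feature matching, not the system's actual pipeline, and the match threshold is an assumption.

```python
# Sketch: locate a known planar pattern in a camera frame (generic, assumed pipeline).
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_pattern(pattern_gray, frame_gray, min_matches=15):
    """Return the homography mapping the pattern into the camera frame, or None."""
    kp1, des1 = orb.detectAndCompute(pattern_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # warp way-finding overlays into the live frame with this homography
```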
Abstract:
Funded by the US-EU Atlantis Program, the International Cooperation in Ambient Computing Education Project is establishing an international knowledge-building community for developing a broader computer science curriculum aimed at preparing students for real-world problems in a multidisciplinary, global world. The project is a collaboration among Troy University (USA), University of Sunderland (UK), FernUniversität in Hagen (Germany), Universidade do Algarve (Portugal), University of Arkansas at Little Rock (USA) and San Diego State University (USA). The curriculum will include aspects of social science, cognitive science, human-computer interaction, organizational studies, global studies, and particular application areas as well as core computer science subjects. Programs offered at partner institutions will form trajectories through the curriculum. A degree will be defined in terms of combinations of trajectories which will satisfy degree requirements set by accreditation organizations. This is expected to lead to joint- or dual-degree programs among the partner institutions in the future. This paper describes the goals and activities of the project and discusses implementation issues.
Abstract:
Mobile learning, in the past defined as learning with mobile devices, now refers to any type of learning-on-the-go or learning that takes advantage of mobile technologies. This new definition shifted its focus from the mobility of technology to the mobility of the learner (O'Malley and Stanton 2002; Sharples, Arnedillo-Sanchez et al. 2009). Placing emphasis on the mobile learner’s perspective requires studying “how the mobility of learners augmented by personal and public technology can contribute to the process of gaining new knowledge, skills, and experience” (Sharples, Arnedillo-Sanchez et al. 2009). The demands of an increasingly knowledge-based society and the advances in mobile phone technology are combining to spur the growth of mobile learning. Around the world, mobile learning is predicted to be the future of online learning, and is slowly entering mainstream education. However, for mobile learning to attain its full potential, it is essential to develop more advanced technologies that are tailored to the needs of this new learning environment. A research field that allows putting the development of such technologies onto a solid basis is user experience design, which addresses how to improve usability and therefore user acceptance of a system. Although there is no consensus definition of user experience, simply stated it focuses on how a person feels about using a product, system or service. It is generally agreed that user experience adds subjective attributes and social aspects to a space that has previously concerned itself mainly with ease of use. In addition, it can include users’ perceptions of usability and system efficiency. Recent advances in mobile and ubiquitous computing technologies further underline the importance of human-computer interaction and user experience (feelings, motivations, and values) with a system. Today, there are plenty of reports on the limitations of mobile technologies for learning (e.g., small screen size, slow connection), but there is a lack of research on user experience with mobile technologies. This dissertation fills this gap with a new approach to building a user experience-based mobile learning environment. The optimized user experience we suggest integrates three priorities, namely a) content, by improving the quality of delivered learning materials, b) the teaching and learning process, by enabling live and synchronous learning, and c) the learners themselves, by enabling a timely detection of their emotional state during mobile learning. In detail, the contributions of this thesis are as follows:
• A video codec optimized for screencast videos which achieves an unprecedented compression rate while maintaining a very high video quality, and a novel UI layout for video lectures, which together enable truly mobile access to live lectures.
• A new approach to HTTP-based multimedia delivery that exploits the characteristics of live lectures in a mobile context and enables a significantly improved user experience for mobile live lectures.
• A non-invasive affective learning model based on multi-modal emotion detection with very high recognition rates, which enables real-time emotion detection and subsequent adaptation of the learning environment on mobile devices (see the sketch after this abstract).
The technology resulting from the research presented in this thesis is in daily use at the School of Continuing Education of Shanghai Jiaotong University (SOCE), a blended-learning institution with 35,000 students.
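As a purely illustrative sketch of the multi-modal emotion detection contribution, the snippet below shows a simple late-fusion step that combines per-modality classifier outputs; the modality names, emotion labels, and weights are assumptions, and the thesis's actual affective model is not described at this level in the abstract.

```python
# Sketch: late fusion of per-modality emotion probabilities (labels/weights assumed).
import numpy as np

EMOTIONS = ["neutral", "confused", "bored", "engaged"]   # assumed label set

def fuse(modal_probs, weights=None):
    """modal_probs: dict of modality name -> probability vector over EMOTIONS."""
    mods = list(modal_probs)
    if weights is None:
        weights = {m: 1.0 / len(mods) for m in mods}      # equal weighting by default
    fused = np.zeros(len(EMOTIONS))
    for m in mods:
        fused += weights[m] * np.asarray(modal_probs[m])
    fused /= fused.sum()
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: combine a face-based and an interaction-log-based classifier.
label, probs = fuse({"face": [0.2, 0.5, 0.1, 0.2], "log": [0.1, 0.3, 0.2, 0.4]})
```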
Abstract:
In this paper we present a hybrid method to track human motions in real time. With simplified marker sets and monocular video input, the strengths of both marker-based and marker-free motion capturing are utilized: a cumbersome marker calibration is avoided, while the robustness of the marker-free tracking is enhanced by referencing the tracked marker positions. An improved inverse kinematics solver is employed for real-time pose estimation. A computer-vision-based approach is applied to refine the pose estimation and reduce the ambiguity of the inverse kinematics solutions. We use this hybrid method to capture typical table tennis upper body movements in a real-time virtual reality application.
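To show the kind of solver such real-time pose estimation rests on, here is a hedged sketch of a basic inverse-kinematics step, plain cyclic coordinate descent on a planar joint chain; the paper's improved solver and its vision-based refinement are not reproduced here.

```python
# Sketch: cyclic coordinate descent IK on a planar chain (generic, not the paper's solver).
import numpy as np

def forward(lengths, angles):
    """Joint positions of a planar chain rooted at the origin."""
    pts, pos, theta = [np.zeros(2)], np.zeros(2), 0.0
    for L, a in zip(lengths, angles):
        theta += a
        pos = pos + L * np.array([np.cos(theta), np.sin(theta)])
        pts.append(pos)
    return pts

def ccd_ik(lengths, angles, target, iters=50, tol=1e-3):
    """Iteratively rotate each joint so the end effector approaches the target."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = forward(lengths, angles)
            end, joint = pts[-1], pts[i]
            if np.linalg.norm(end - target) < tol:
                return angles
            a_end = np.arctan2(*(end - joint)[::-1])     # angle joint -> end effector
            a_tgt = np.arctan2(*(target - joint)[::-1])  # angle joint -> target
            angles[i] += a_tgt - a_end                   # rotate joint i toward the target
    return angles
```

Referencing tracked marker positions, as the abstract describes, is one way to pick among the many joint configurations such a solver can otherwise produce.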
Abstract:
Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has been mostly restricted to single-agent settings, where observed motions are adapted to new environment conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
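A minimal sketch of the interaction-model idea follows: reduce both partners' recorded poses to low-dimensional spaces and learn a mapping between them. Plain PCA and ridge regression are stand-ins here; the paper's actual motion models and mapping are not specified in the abstract.

```python
# Sketch: low-dimensional motion models plus a learned latent-to-latent mapping (assumed methods).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def train_interaction_model(poses_a, poses_b, dims=8):
    """poses_a, poses_b: (frames, joint_dims) arrays of synchronously captured motion."""
    pca_a, pca_b = PCA(n_components=dims), PCA(n_components=dims)
    za = pca_a.fit_transform(poses_a)        # low-dimensional model of partner A
    zb = pca_b.fit_transform(poses_b)        # low-dimensional model of partner B
    mapping = Ridge(alpha=1.0).fit(za, zb)   # mapping: latent A -> latent B
    return pca_a, pca_b, mapping

def respond(pose_a, pca_a, pca_b, mapping):
    """Generate the agent's pose in response to one observed partner pose."""
    za = pca_a.transform(pose_a.reshape(1, -1))
    zb = mapping.predict(za)
    return pca_b.inverse_transform(zb)[0]
```

Because the mapping operates in the low-dimensional spaces, a response pose can be generated per frame, which is what makes real-time responsiveness to the partner's body movements feasible.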