8 results for User-Computer Interface
in Digital Peer Publishing
Abstract:
Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run with good performance on a 7x4-tile, 21-megapixel display wall. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to that of their sequential counterparts running on the display wall using Chromium. Parallel Q3A’s frame rate is an order of magnitude higher than under Chromium. The parallel version of Homeworld performed on par with the sequential version, which did not run at all under Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.
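A minimal sketch of the event path such a camera-based, touch-free interface implies: tracked hand positions arrive from the camera cluster and are mapped onto display-wall coordinates as synthetic input events. The latency figures come from the abstract; the per-tile resolution, data structures, and event names are assumptions for illustration only.

    # Hypothetical event path for a camera-tracked, touch-free interface.
    import time
    from dataclasses import dataclass

    CAMERA_LATENCY_MS = 81   # camera-hardware share reported in the paper
    TOTAL_LATENCY_MS = 116   # end-to-end interaction latency reported

    @dataclass
    class HandSample:
        x: float             # normalized display-wall coordinate, 0..1
        y: float
        timestamp: float

    def to_game_event(sample, wall_w=7 * 1024, wall_h=4 * 768):
        # The 7x4 tile layout is from the paper; the per-tile resolution
        # of 1024x768 is an assumption chosen to match ~21 megapixels.
        return ("pointer_move", int(sample.x * wall_w), int(sample.y * wall_h))

    sample = HandSample(x=0.5, y=0.25, timestamp=time.time())
    print(to_game_event(sample))
    # Budget left for tracking software and game input injection:
    print(f"software latency budget: {TOTAL_LATENCY_MS - CAMERA_LATENCY_MS} ms")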
Abstract:
This paper proposes an extension to the television-watching paradigm that permits an end-user to enrich broadcast content. Examples of this enriched content are: virtual edits that allow the order of presentation within the content to be changed or the content to be subsetted; conditional text, graphic, or video objects that can be placed to appear within content and triggered by viewer interaction; and additional navigation links that structure how other users view the base content object. The enriched content can be viewed directly within the context of the TV viewing experience. It may also be shared with other users within a distributed peer group. Our architecture is based on a model that allows the original content to remain unaltered, and which respects DRM restrictions on content reuse. The fundamental approach we use is to define an intermediate content-enhancement layer based on the W3C’s SMIL language. Using a pen-based enhancement interface, end-users can manipulate content that is saved in a home PDR setting. This paper describes our architecture and provides several examples of how our system handles content enhancement. We also describe a reference implementation for creating and viewing enhancements.
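The separation between unaltered base content and a SMIL-based enhancement layer can be illustrated with a short sketch. The element names follow the W3C SMIL vocabulary; the file names, timing values, and builder function are invented for illustration and are not taken from the paper’s implementation.

    # Sketch: a SMIL fragment that plays the unaltered broadcast video and,
    # in parallel, a viewer-added overlay graphic triggered at a given time.
    import xml.etree.ElementTree as ET

    def build_enhancement(base_video_uri, overlay_png_uri, begin="10s", dur="5s"):
        smil = ET.Element("smil")
        body = ET.SubElement(smil, "body")
        par = ET.SubElement(body, "par")                 # play children in parallel
        ET.SubElement(par, "video", src=base_video_uri)  # base content, untouched
        ET.SubElement(par, "img", src=overlay_png_uri, begin=begin, dur=dur)
        return ET.tostring(smil, encoding="unicode")

    print(build_enhancement("broadcast.mpg", "viewer_note.png"))

Because the enhancement layer only references the base object, the original content stays unaltered and DRM restrictions on the underlying stream are respected.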
Abstract:
In this paper, we propose a specific system architecture, based on mobile devices, for navigation in urban environments. The aim of this work is to assess how virtual and augmented reality interface paradigms can provide enhanced location-based services using real-time techniques in the context of these two different technologies. The virtual reality interface is based on a faithful graphical representation of the localities of interest, coupled with sensor information on the location and orientation of the user, while the augmented reality interface uses computer vision techniques to capture patterns from the real environment and overlay additional way-finding information, aligned with the real imagery, in real time. The knowledge obtained from the evaluation of the virtual reality navigational experience has been used to inform the design of the augmented reality interface. Initial results of user testing of the experimental augmented reality system for navigation are presented.
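The alignment step of such an augmented reality interface can be sketched as follows: once a known pattern is located in the camera frame, a homography maps way-finding graphics from pattern coordinates into image coordinates. The sketch uses OpenCV; the detected corner positions are fabricated stand-ins for a real detector’s output.

    # Sketch: align a way-finding overlay with a pattern found in the image.
    import numpy as np
    import cv2

    # Pattern corners in their own coordinate frame (a unit square).
    pattern = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)

    # Where a detector located those corners in the camera frame (assumed).
    detected = np.array([[320, 200], [420, 210], [415, 310], [315, 300]],
                        dtype=np.float32)

    H, _ = cv2.findHomography(pattern, detected)

    # A way-finding arrow described in pattern coordinates...
    arrow = np.array([[[0.5, 0.2]], [[0.5, 0.8]]], dtype=np.float32)
    # ...projected into the live image so it stays aligned with the scene.
    print(cv2.perspectiveTransform(arrow, H).reshape(-1, 2))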
Abstract:
Electronic appliances are increasingly a part of our everyday lives. In particular, mobile devices, with their reduced dimensions and power rivaling desktop computers, have substantially augmented our communication abilities, offering instant availability, anywhere, to everyone. These devices have become essential for human communication and also include a more comprehensive tool set to support productivity and leisure applications. However, the many applications commonly available are not adapted to people with special needs. Rather, most popular devices are targeted at teenagers or young adults with excellent eyesight and coordination. Worse, most commonly used assistive control interfaces are not available in a mobile environment, where the user's position, accommodation, and capacities can vary widely. To address people with special needs, new approaches and techniques are sorely needed. This paper presents a control interface that allows tetraplegic users to interact with electronic devices. Our method uses myographic information (electromyography, or EMG) collected from residually controlled body areas. User evaluations validate electromyography as a daily wearable interface. In particular, our results show that EMG can be used even in mobility contexts.
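One plausible way to turn a raw EMG stream into discrete control events is to rectify the signal, smooth it into an envelope, and fire an action on a threshold crossing. The sketch below illustrates that generic pipeline; the sampling rate, window size, and threshold are assumptions, not values from the paper, and a deployed system would calibrate them per user.

    # Sketch: EMG activations as control events via envelope thresholding.
    import numpy as np

    FS = 1000          # assumed sampling rate, Hz
    WINDOW = 100       # smoothing window, samples
    THRESHOLD = 0.3    # assumed activation threshold

    def envelope(raw):
        rectified = np.abs(raw - np.mean(raw))   # remove offset, rectify
        return np.convolve(rectified, np.ones(WINDOW) / WINDOW, mode="same")

    def detect_activations(raw):
        above = envelope(raw) > THRESHOLD
        # Rising edges of the thresholded envelope become control events.
        edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
        return edges / FS                        # event times in seconds

    # Synthetic signal: noise plus a burst of muscle activity around 1 s.
    rng = np.random.default_rng(0)
    signal = rng.normal(0, 0.05, 2 * FS)
    signal[FS:FS + 300] += rng.normal(0, 1.0, 300)
    print(detect_activations(signal))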
Abstract:
After 20 years of silence, two recent references from the Czech Republic (Bezpečnostní softwarová asociace, Case C-393/09) and from the English High Court (SAS Institute, Case C-406/10) touch upon several questions that are fundamental to the extent of copyright protection for software under the Computer Program Directive 91/250 (now 2009/24) and the Information Society Directive 2001/29. In Case C-393/09, the European Court of Justice held that “the object of the protection conferred by that directive is the expression in any form of a computer program which permits reproduction in different computer languages, such as the source code and the object code.” As “any form of expression of a computer program must be protected from the moment when its reproduction would engender the reproduction of the computer program itself, thus enabling the computer to perform its task,” a graphical user interface (GUI) is not protected under the Computer Program Directive, as it does “not enable the reproduction of that computer program, but merely constitutes one element of that program by means of which users make use of the features of that program.” While the definition of computer program and the exclusion of GUIs mirror earlier jurisprudence in the Member States and therefore do not come as a surprise, the main significance of Case C-393/09 lies in its interpretation of the Information Society Directive. In confirming that a GUI “can, as a work, be protected by copyright if it is its author’s own intellectual creation,” the ECJ continues the Europeanization of the definition of “work” which began in Infopaq (Case C-5/08). Moreover, the Court elaborated this concept further by excluding expressions from copyright protection which are dictated by their technical function. Even more importantly, the ECJ held that a television broadcast of a GUI does not constitute a communication to the public, as the individuals cannot have access to the “essential element characterising the interface,” i.e., the interaction with the user. The exclusion of elements dictated by technical function from copyright protection and the interpretation of the right of communication to the public with reference to the “essential element characterising” the work may be seen as welcome limitations of copyright protection in the interest of a free public domain which were not yet apparent in Infopaq. While Case C-393/09 has given a first definition of the computer program, the pending reference in Case C-406/10 is likely to clarify the scope of protection against non-literal copying, namely how far the protection extends beyond the text of the source code to the design of a computer program, and where the limits of protection lie as regards the functionality of a program and mere “principles and ideas.” In light of the travaux préparatoires, it is submitted that the ECJ is also likely to grant protection for the design of a computer program, while excluding both the functionality and the underlying principles and ideas from protection under the European copyright directives.
Abstract:
Recently, stable markerless 6-DOF video-based hand-tracking devices have become available. These devices simultaneously track the positions and orientations of both of the user's hands in different postures at 25 or more frames per second. Such hand-tracking allows the human hands to be used as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, a solution has to be found for coupling and decoupling a virtual object’s movements to and from the user’s hand (i.e., grabbing and releasing). In this paper, we introduce a novel technique for efficiently grabbing and releasing objects with two hands and intuitively manipulating them in virtual space. This technique is integrated in a novel 3D interface for virtual manipulations. A user experiment shows the superior applicability of this new technique. Last but not least, we describe how this technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
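Without physical buttons, grabbing and releasing must be derived from the tracked posture itself. The sketch below shows one simple single-hand variant of this idea as a state machine: couple the object when a closed hand is near it, decouple when the hand opens. It is a didactic stand-in, not the paper’s actual two-handed technique, and all thresholds are assumptions.

    # Sketch: coupling/decoupling a virtual object to a tracked hand.
    from dataclasses import dataclass

    GRAB_DIST = 0.05   # metres, assumed proximity threshold

    @dataclass
    class Hand:
        position: tuple    # (x, y, z) in metres
        closed: bool       # posture reported by the tracker

    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    class Grabbable:
        def __init__(self, position):
            self.position = position
            self.held_by = None

        def update(self, hand):
            if self.held_by is None and hand.closed and \
               dist(hand.position, self.position) < GRAB_DIST:
                self.held_by = hand              # couple: grab
            elif self.held_by is hand and not hand.closed:
                self.held_by = None              # decouple: release
            if self.held_by is hand:
                self.position = hand.position    # object follows the hand

    obj = Grabbable((0.0, 0.0, 0.0))
    h = Hand(position=(0.01, 0.0, 0.0), closed=True)
    obj.update(h)                 # grabbed
    h.position = (0.2, 0.1, 0.0)
    obj.update(h)                 # follows the hand
    h.closed = False
    obj.update(h)                 # released; stays at the last position
    print(obj.position)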
Abstract:
Mobile learning, in the past defined as learning with mobile devices, now refers to any type of learning on the go, or learning that takes advantage of mobile technologies. This new definition shifted the focus from the mobility of technology to the mobility of the learner (O'Malley and Stanton 2002; Sharples, Arnedillo-Sanchez et al. 2009). Placing emphasis on the mobile learner’s perspective requires studying “how the mobility of learners augmented by personal and public technology can contribute to the process of gaining new knowledge, skills, and experience” (Sharples, Arnedillo-Sanchez et al. 2009). The demands of an increasingly knowledge-based society and the advances in mobile phone technology are combining to spur the growth of mobile learning. Around the world, mobile learning is predicted to be the future of online learning, and it is slowly entering mainstream education. However, for mobile learning to attain its full potential, it is essential to develop more advanced technologies that are tailored to the needs of this new learning environment. A research field that puts the development of such technologies on a solid basis is user experience design, which addresses how to improve the usability, and therefore the user acceptance, of a system. Although there is no consensus definition of user experience, simply stated it focuses on how a person feels about using a product, system, or service. It is generally agreed that user experience adds subjective attributes and social aspects to a space that has previously concerned itself mainly with ease of use. In addition, it can include users’ perceptions of usability and system efficiency. Recent advances in mobile and ubiquitous computing technologies further underline the importance of human-computer interaction and of the user experience (feelings, motivations, and values) with a system. Today, there are plenty of reports on the limitations of mobile technologies for learning (e.g., small screen size, slow connection), but there is a lack of research on the user experience with mobile technologies. This dissertation fills this gap with a new approach to building a user experience-based mobile learning environment. The optimized user experience we suggest integrates three priorities, namely a) content, by improving the quality of delivered learning materials, b) the teaching and learning process, by enabling live and synchronous learning, and c) the learners themselves, by enabling a timely detection of their emotional state during mobile learning. In detail, the contributions of this thesis are as follows:
• A video codec optimized for screencast videos, which achieves an unprecedented compression rate while maintaining very high video quality, and a novel UI layout for video lectures, which together enable truly mobile access to live lectures.
• A new approach to HTTP-based multimedia delivery that exploits the characteristics of live lectures in a mobile context and enables a significantly improved user experience for mobile live lectures.
• A non-invasive affective learning model based on multi-modal emotion detection with very high recognition rates, which enables real-time emotion detection and subsequent adaptation of the learning environment on mobile devices.
The technology resulting from the research presented in this thesis is in daily use at the School of Continuing Education of Shanghai Jiaotong University (SOCE), a blended-learning institution with 35,000 students.
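The screencast-codec contribution rests on a simple observation: successive frames of a lecture screencast are mostly identical, so only changed regions need to be encoded. The toy block-diff below illustrates that observation; it is a didactic stand-in, not the codec developed in the thesis.

    # Toy illustration: only 16x16 blocks that changed need transmission.
    import numpy as np

    BLOCK = 16

    def changed_blocks(prev, cur):
        h, w = cur.shape
        for r in range(0, h, BLOCK):
            for c in range(0, w, BLOCK):
                if not np.array_equal(prev[r:r+BLOCK, c:c+BLOCK],
                                      cur[r:r+BLOCK, c:c+BLOCK]):
                    yield r, c, cur[r:r+BLOCK, c:c+BLOCK].copy()

    # Two synthetic grayscale "screencast" frames; one small region differs.
    prev = np.zeros((480, 640), dtype=np.uint8)
    cur = prev.copy()
    cur[100:110, 200:220] = 255   # e.g., a slide annotation appeared
    updates = list(changed_blocks(prev, cur))
    print(f"{len(updates)} of {(480 // 16) * (640 // 16)} blocks changed")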
Abstract:
Life cycle assessment (LCA) of product systems serves to estimate their impact on the environment. A complete life-cycle perspective also requires the inclusion of intralogistics transport processes and equipment. LCAs are usually prepared with a computer program. The demo versions of three commercial software solutions (SimaPro, GaBi, and Umberto NXT LCA) and the full version of an open-source solution (openLCA) were analyzed from a software-ergonomics point of view. To this end, the tutorials provided were reproduced and, where applicable, custom product systems were modeled. The analysis comprised a comparative examination of the following aspects:
• origin, distribution, and target group;
• suitability of the tutorials and learnability;
• graphical user interface and customizability of the software;
• implementation of the requirements of the LCA standards;
• work steps necessary for preparing an LCA.
The paper includes an introduction to the essential principles of life cycle assessment and the fundamentals of software ergonomics, which are subsumed into software-ergonomic criteria for LCA software solutions. The results of the software comparison are then presented, followed by a summary of the findings.