877 results for hybrid human-computer
Abstract:
Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run on a 7x4-tile, 21-megapixel display wall with good performance. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to that of their sequential counterparts running on the display wall using Chromium. Parallel Q3A's framerate is an order of magnitude higher than with Chromium. The parallel version of Homeworld performed on par with the sequential version, which did not run at all using Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.
Abstract:
Electronic appliances are increasingly a part of our everyday lives. In particular, mobile devices, with their reduced dimensions and power rivaling that of desktop computers, have substantially augmented our communication abilities, offering instant availability, anywhere, to everyone. These devices have become essential for human communication but also include a more comprehensive tool set to support productivity and leisure applications. However, the many applications commonly available are not adapted to people with special needs. Rather, most popular devices are targeted at teenagers or young adults with excellent eyesight and coordination. What is worse, most commonly used assistive control interfaces are not available in a mobile environment, where the user's position, accommodation and capacities can vary widely. To address people with special needs, new approaches and techniques are sorely needed. This paper presents a control interface that allows tetraplegic users to interact with electronic devices. Our method uses myographic information (electromyography, or EMG) collected from residually controlled body areas. User evaluations validate electromyography as a daily wearable interface. In particular, our results show that EMG can be used even in mobility contexts.
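The core idea of an EMG-driven control interface can be illustrated in a few lines: a sliding-window RMS of the muscle signal, crossed against an activation threshold, yields discrete "click" events. This is a hedged sketch only; the window size, threshold, and refractory period below are illustrative values, not taken from the paper.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of EMG samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_clicks(samples, window_size=50, threshold=0.3, refractory=100):
    """Return sample indices where a muscle activation ("click") starts.

    A click fires when the RMS of the trailing window exceeds the
    threshold; the refractory period suppresses repeated triggers
    from the same contraction. All parameters are illustrative.
    """
    clicks = []
    last = -refractory
    for i in range(window_size, len(samples)):
        if i - last < refractory:
            continue
        if rms(samples[i - window_size:i]) > threshold:
            clicks.append(i)
            last = i
    return clicks
```

A real wearable system would add per-user calibration and band-pass filtering of the raw signal, but the threshold-on-envelope structure is the common starting point.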
Abstract:
Incorporating physical activity and exertion into pervasive gaming applications can provide health and social benefits. Prior research has produced several prototypes of pervasive games that encourage exertion as a form of interaction; however, no detailed critical account of the various approaches exists. We focus on networked exertion games and detail some of our work while identifying the remaining issues towards providing a coherent framework. We outline common lessons learned and use them as the basis for generalizations for the design of networked exertion games. We propose possible directions for further investigation, hoping to provide guidance for future work and to facilitate greater awareness and exposure of exertion games and their benefits.
Abstract:
Tracking a user's visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communication with virtual and real agents benefits greatly from analysis of the user's visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers; hence, they are restricted to interpreting two-dimensional fixations relative to a defined area of projection. The study presented in this article compares the precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive the depth information required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task, in which a binocular eye tracker and an adaptive neural network algorithm are used to disambiguate partly occluded objects.
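The geometric intuition behind deriving depth from a binocular tracker can be sketched as follows: each eye contributes a gaze ray, and the 3D fixation point can be estimated as the midpoint of the shortest segment connecting the two rays. This is textbook closest-point-between-lines math, not the paper's specific algorithms (which are what the study compares).

```python
import numpy as np

def fixation_point(o_left, d_left, o_right, d_right):
    """Estimate a 3D fixation point from two gaze rays.

    Each ray is given by an eye origin o and a direction d.
    Returns the midpoint of the shortest segment between the rays,
    or None when the rays are near-parallel (depth undefined).
    """
    d_l = d_left / np.linalg.norm(d_left)
    d_r = d_right / np.linalg.norm(d_right)
    w = o_left - o_right
    a = d_l @ d_l
    b = d_l @ d_r
    c = d_r @ d_r
    d = d_l @ w
    e = d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # parallel gaze: no usable vergence
        return None
    s = (b * e - c * d) / denom    # parameter on the left ray
    t = (a * e - b * d) / denom    # parameter on the right ray
    p_l = o_left + s * d_l
    p_r = o_right + t * d_r
    return (p_l + p_r) / 2
```

In practice, measurement noise in the vergence angle makes the depth estimate increasingly unstable with distance, which is one reason adaptive (e.g. neural network) approaches are attractive for selection tasks.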
Abstract:
Funded by the US-EU Atlantis Program, the International Cooperation in Ambient Computing Education Project is establishing an international knowledge-building community for developing a broader computer science curriculum aimed at preparing students for real-world problems in a multidisciplinary, global world. The project is a collaboration among Troy University (USA), University of Sunderland (UK), FernUniversität in Hagen (Germany), Universidade do Algarve (Portugal), University of Arkansas at Little Rock (USA) and San Diego State University (USA). The curriculum will include aspects of social science, cognitive science, human-computer interaction, organizational studies, global studies, and particular application areas as well as core computer science subjects. Programs offered at partner institutions will form trajectories through the curriculum. A degree will be defined in terms of combinations of trajectories that satisfy degree requirements set by accreditation organizations. This is expected to lead to joint- or dual-degree programs among the partner institutions in the future. This paper describes the goals and activities of the project and discusses implementation issues.
Abstract:
Recently, stable markerless 6 DOF video based handtracking devices became available. These devices simultaneously track the positions and orientations of both user hands in different postures with at least 25 frames per second. Such hand-tracking allows for using the human hands as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy to use 3D interface on top of such a device. In particular, for coupling and decoupling a virtual object’s movements to the user’s hand (i.e. grabbing and releasing) a solution has to be found. In this paper, we introduce a novel technique for efficient two-handed grabbing and releasing objects and intuitively manipulating them in the virtual space. This technique is integrated in a novel 3D interface for virtual manipulations. A user experiment shows the superior applicability of this new technique. Last but not least, we describe how this technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
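The coupling/decoupling problem described above — deciding when a buttonless tracked hand has "grabbed" or "released" an object — is commonly handled with a threshold on some hand measure plus hysteresis, so tracking jitter near the threshold does not cause spurious events. The sketch below uses a pinch distance with invented threshold values; it illustrates the problem, not the paper's actual two-handed technique.

```python
# Illustrative thresholds (cm); hysteresis gap prevents flicker.
GRAB_BELOW = 2.0      # pinch closes tighter than this -> grab
RELEASE_ABOVE = 3.5   # pinch opens wider than this   -> release

class PinchGrabber:
    """State machine coupling a virtual object to one tracked hand."""

    def __init__(self):
        self.grabbing = False

    def update(self, pinch_distance_cm):
        """Feed one tracking frame; return 'grab', 'release', or None."""
        if not self.grabbing and pinch_distance_cm < GRAB_BELOW:
            self.grabbing = True
            return "grab"
        if self.grabbing and pinch_distance_cm > RELEASE_ABOVE:
            self.grabbing = False
            return "release"
        return None
```

While `grabbing` is true, the object's transform is slaved to the hand's tracked pose; the hysteresis gap between the two thresholds is what makes the coupling feel stable despite per-frame noise.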
Abstract:
Mobile learning, in the past defined as learning with mobile devices, now refers to any type of learning on the go, or learning that takes advantage of mobile technologies. This new definition shifts the focus from the mobility of the technology to the mobility of the learner (O'Malley and Stanton 2002; Sharples, Arnedillo-Sanchez et al. 2009). Placing emphasis on the mobile learner’s perspective requires studying “how the mobility of learners augmented by personal and public technology can contribute to the process of gaining new knowledge, skills, and experience” (Sharples, Arnedillo-Sanchez et al. 2009). The demands of an increasingly knowledge-based society and the advances in mobile phone technology are combining to spur the growth of mobile learning. Around the world, mobile learning is predicted to be the future of online learning, and it is slowly entering mainstream education. However, for mobile learning to attain its full potential, it is essential to develop more advanced technologies tailored to the needs of this new learning environment. A research field that puts the development of such technologies on a solid basis is user experience design, which addresses how to improve the usability, and therefore the user acceptance, of a system. Although there is no consensus definition of user experience, simply stated it concerns how a person feels about using a product, system or service. It is generally agreed that user experience adds subjective attributes and social aspects to a space that has previously concerned itself mainly with ease of use. In addition, it can include users’ perceptions of usability and system efficiency. Recent advances in mobile and ubiquitous computing technologies further underline the importance of human-computer interaction and user experience (feelings, motivations, and values) with a system.
Today, there are plenty of reports on the limitations of mobile technologies for learning (e.g., small screen size, slow connections), but there is a lack of research on user experience with mobile technologies. This dissertation fills this gap with a new approach to building a user experience-based mobile learning environment. The optimized user experience we suggest integrates three priorities, namely a) content, by improving the quality of delivered learning materials, b) the teaching and learning process, by enabling live and synchronous learning, and c) the learners themselves, by enabling timely detection of their emotional state during mobile learning. In detail, the contributions of this thesis are as follows:
• A video codec optimized for screencast videos, which achieves an unprecedented compression rate while maintaining very high video quality, and a novel UI layout for video lectures, which together enable truly mobile access to live lectures.
• A new approach to HTTP-based multimedia delivery that exploits the characteristics of live lectures in a mobile context and enables a significantly improved user experience for mobile live lectures.
• A non-invasive affective learning model based on multi-modal emotion detection with very high recognition rates, which enables real-time emotion detection and subsequent adaptation of the learning environment on mobile devices.
The technology resulting from the research presented in this thesis is in daily use at the School of Continuing Education of Shanghai Jiaotong University (SOCE), a blended-learning institution with 35,000 students.
Abstract:
In his influential article about the evolution of the Web, Berners-Lee [1] envisions a Semantic Web in which humans and computers alike are capable of understanding and processing information. This vision has yet to materialize. The main obstacle to the Semantic Web vision is that in today's Web, meaning is most often rooted not in formal semantics but in natural language and, in the sense of semiology, emerges only through interpretation and processing. Yet an automated form of interpretation and processing can be achieved by precisiating raw natural language. To do that, Web agents extract fuzzy grassroots ontologies through induction from existing Web content. Inductive fuzzy grassroots ontologies thus constitute organically evolved knowledge bases that resemble automated gradual thesauri, which allow precisiating natural language [2]. The Web agents' underlying dynamic, self-organizing, best-effort induction enables a sub-syntactical, bottom-up learning of semiotic associations. Thus, knowledge is induced from users' natural use of language in mutual Web interactions and stored in a gradual, thesaurus-like lexical world-knowledge database as a top-level ontology, eventually allowing a form of computing with words [3]. Since in computing with words the objects of computation are words, phrases and propositions drawn from natural languages, it proves to be a practical notion for yielding emergent semantics for the Semantic Web. In the end, an improved understanding by computers should, on the one hand, upgrade human-computer interaction on the Web and, on the other hand, allow an initial form of human-intelligence amplification through the Web.
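The "gradual thesaurus" idea — graded rather than crisp associations between terms, induced bottom-up from how words co-occur in actual Web content — can be illustrated with a deliberately simple co-occurrence statistic. The sketch below is a toy stand-in for the paper's induction machinery: it grades the association of term a with term b as the fraction of a's documents that also contain b.

```python
from collections import Counter
from itertools import combinations

def induce_associations(documents):
    """Induce graded (fuzzy) word associations from co-occurrence.

    mu[(a, b)] = P(b appears in a document | a appears in it),
    an asymmetric membership degree in [0, 1]. A toy illustration
    of bottom-up induction, not the paper's actual algorithm.
    """
    word_count = Counter()
    pair_count = Counter()
    for doc in documents:
        words = set(doc.lower().split())
        word_count.update(words)
        pair_count.update(combinations(sorted(words), 2))
    mu = {}
    for (a, b), n in pair_count.items():
        mu[(a, b)] = n / word_count[a]   # degree of a -> b
        mu[(b, a)] = n / word_count[b]   # degree of b -> a
    return mu
```

The asymmetry is the point: "semantics" may strongly imply "web" while "web" only weakly implies "semantics", which is exactly the kind of graded, direction-sensitive knowledge a crisp thesaurus cannot capture.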
Abstract:
This paper introduces a novel vision for further enhanced Internet of Things services. Based on a variety of data (such as location data, ontology-backed search queries, and indoor and outdoor conditions), the Prometheus framework is intended to support users with helpful recommendations and information preceding a search for context-aware data. Adapting concepts from artificial intelligence, Prometheus proposes user-adjusted answers based on a multitude of conditions. A number of potential Prometheus framework applications are illustrated. Added value and possible future studies are discussed in the conclusion.
Abstract:
OBJECTIVE: To identify and describe unintended adverse consequences related to clinical workflow when implementing or using computerized provider order entry (CPOE) systems. METHODS: We analyzed qualitative data from field observations and formal interviews gathered over a three-year period at five hospitals in three organizations. Five multidisciplinary researchers worked together to identify themes related to the impacts of CPOE systems on clinical workflow. RESULTS: CPOE systems can affect clinical work by 1) introducing or exposing human/computer interaction problems, 2) altering the pace, sequencing, and dynamics of clinical activities, 3) providing only partial support for the work activities of all types of clinical personnel, 4) reducing clinical situation awareness, and 5) poorly reflecting organizational policy and procedure. CONCLUSIONS: As CPOE systems evolve, those involved must take care to mitigate the many unintended adverse effects these systems have on clinical workflow. Workflow issues resulting from CPOE can be mitigated by iteratively altering both clinical workflow and the CPOE system until a satisfactory fit is achieved.
Abstract:
The majority of sensor network research deals with land-based networks, which are essentially two-dimensional, and thus the majority of simulation and animation tools also only handle such networks. Underwater sensor networks, on the other hand, are essentially 3D networks, because the depth at which a sensor node is located needs to be considered as well. Due to that additional dimension, specialized tools need to be used when conducting simulations for experimentation. The School of Engineering’s Underwater Sensor Network (UWSN) lab is conducting research on underwater sensor networks and requires simulation tools for 3D networks. The lab has extended NS-2, a widely used network simulator, so that it can simulate three-dimensional networks. However, NAM, a widely used network animator, currently only supports two-dimensional networks, and no extensions have been implemented to give it three-dimensional capabilities. In this project, we develop a network visualization tool that functions similarly to NAM but is able to render network environments in full 3D. It takes as input an NS-2 trace file (the same file taken as input by NAM), creates the environment, positions the sensor nodes, and animates the events of the simulation. Furthermore, the visualization tool is easy to use, and especially friendly to NAM users, as it is designed to follow interfaces and functions similar to those of NAM. So far, the development has delivered the basic functionality. Future work includes fully functional visualization capabilities and much-improved user interfaces.
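The first step such a visualizer needs is extracting per-event node positions from the trace. The sketch below assumes trace lines in the style of NS-2's "new" wireless trace format (`-t` time, `-Ni` node id, `-Nx`/`-Ny` position), with a `-Nz` depth field assumed to be added by the lab's 3D extension; the lab's actual trace format may well differ.

```python
import re

# Assumed line shape (illustrative, not the lab's verified format):
#   s -t 1.0 -Hs 0 -Hd -2 -Ni 0 -Nx 25.0 -Ny 50.0 -Nz 120.0 ...
FIELDS = re.compile(
    r"^(?P<ev>[srdf]) .*?-t (?P<t>[\d.]+) .*?-Ni (?P<id>\d+) "
    r".*?-Nx (?P<x>[\d.-]+) .*?-Ny (?P<y>[\d.-]+) .*?-Nz (?P<z>[\d.-]+)"
)

def parse_trace(lines):
    """Extract (event, time, node, 3D position) records for animation;
    lines that do not match the assumed format are skipped."""
    events = []
    for line in lines:
        m = FIELDS.match(line)
        if m:
            events.append({
                "event": m.group("ev"),          # send/recv/drop/forward
                "time": float(m.group("t")),
                "node": int(m.group("id")),
                "pos": (float(m.group("x")),
                        float(m.group("y")),
                        float(m.group("z"))),
            })
    return events
```

The renderer would then replay `events` in time order, moving node markers in 3D and drawing transmissions, which is essentially what NAM does in 2D.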
Abstract:
Usability is the capability of the software product to be understood, learned, used and found attractive by the user when used under specified conditions. Many studies demonstrate the benefits of usability, yet to this day software products continue to exhibit consistently low levels of this quality attribute. Furthermore, poor usability in software systems contributes largely to software failing in actual use. One of the main disciplines involved in usability is that of Human-Computer Interaction (HCI). Over the past two decades the HCI community has proposed specific features that should be present in applications to improve their usability, yet incorporating them into software continues to be far from trivial for software developers. These difficulties are due to multiple factors, including the high level of abstraction at which these HCI recommendations are made and how far removed they are from actual software implementation. In order to bridge this gap, the Software Engineering community has long proposed software design solutions to help developers include usability features in software; however, the problem remains an open research question. This doctoral thesis addresses the problem of helping software developers include specific usability features in their applications by providing them with structured and tangible guidance in the form of a process, which we have termed the Usability-Oriented Software Development Process. This process is supported by a set of Software Usability Guidelines that help developers incorporate a set of eleven usability features with high impact on software design. After their development, the Usability-Oriented Software Development Process and the Software Usability Guidelines were validated across multiple academic projects and proven to help software developers include such usability features in their software applications.
Their use significantly reduced development time and improved the quality of the resulting designs in these projects. Furthermore, in this work we propose a software tool to automate the application of the proposed process. In sum, this work contributes to the integration of the Software Engineering and HCI disciplines by providing a framework that helps software developers create usable applications efficiently.
Abstract:
For years, the Human-Computer Interaction (HCI) community has crafted usability guidelines that clearly define what characteristics a software system should have in order to be easy to use. However, the Software Engineering (SE) community keeps falling short of successfully incorporating these recommendations into software projects. From an SE perspective, the process of incorporating usability features into software is not always straightforward, as a large number of these features have heavy implications for the underlying software architecture. For example, successfully including an “undo” feature in an application requires the design and implementation of many complex, interrelated data structures and functionalities. Our work focuses on providing developers with a set of software design patterns to assist them in the process of designing more usable software. This contributes to the proper inclusion of specific usability features with high impact on the software design. Preliminary validation data show that usage of the guidelines also has positive effects on development time and overall software design quality.
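The "undo" example above illustrates why a usability feature can cut deep into the architecture: every user action must be reified as an object that knows how to reverse itself, and all mutations must be routed through a history stack. The classic Command-pattern sketch below shows the shape of that machinery; the class and method names are illustrative, not from the thesis.

```python
class InsertText:
    """One reified user action: inserting text into a document."""

    def __init__(self, doc, pos, text):
        self.doc, self.pos, self.text = doc, pos, text

    def execute(self):
        b = self.doc.buffer
        self.doc.buffer = b[:self.pos] + self.text + b[self.pos:]

    def undo(self):
        b = self.doc.buffer
        self.doc.buffer = b[:self.pos] + b[self.pos + len(self.text):]

class Document:
    """All mutations must flow through apply() for undo to stay correct."""

    def __init__(self):
        self.buffer = ""
        self.history = []

    def apply(self, command):
        command.execute()
        self.history.append(command)

    def undo(self):
        if self.history:
            self.history.pop().undo()
```

Note the architectural cost: every future feature that changes the buffer must be written as a command class and dispatched through `apply()`, which is exactly the kind of cross-cutting constraint that makes retrofitting usability features expensive.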