982 results for Interaction Techniques


Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the problems associated with interaction in immersive virtual reality and makes recommendations as to how best to deal with these problems, thereby producing a usable virtual reality interactive environment. Immersive virtual reality means that the users are immersed or contained inside the environment in which they are working. For example, they are able to turn their heads and look around, as well as use their bodies to control the system.

The work in progress involves a study of various virtual reality input devices, some designed and implemented as part of the project. Additionally, the paper describes a simple framework for separating the interaction and application parts of a virtual reality system, in order to facilitate an object-oriented approach to the implementation of the recommendations and to the building of future virtual reality applications that incorporate these ideas.
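As a rough illustration of the interaction/application separation this abstract describes, here is a minimal, hypothetical Python sketch; all names (SelectEvent, InteractionTechnique, Application) are invented for illustration, and the paper itself does not prescribe an implementation.

```python
# Hypothetical sketch of separating interaction from application logic.
# All names here are illustrative, not from the paper.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SelectEvent:
    """Device-independent event emitted by an interaction technique."""
    target_id: int


class InteractionTechnique(ABC):
    """Wraps a concrete input device and emits abstract events."""
    @abstractmethod
    def poll(self) -> list[SelectEvent]: ...


class Application:
    """Application logic that sees only abstract events, never devices."""
    def handle(self, event: SelectEvent) -> None:
        print(f"object {event.target_id} selected")


def run_frame(technique: InteractionTechnique, app: Application) -> None:
    # Supporting a new device only requires a new InteractionTechnique
    # subclass; the application side is untouched.
    for event in technique.poll():
        app.handle(event)
```

The point of such a split is that input devices, which the project is still studying, can be swapped without rewriting the application.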

Relevance:

100.00%

Publisher:

Abstract:

Over the past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality renders interacting with an environment more natural and intuitive, the development cycle of such an application is often long and expensive. In our overall field of research, we investigate how model-based design can facilitate the development process by designing environments through the use of high-level diagrams. In this scope, we present ‘NiMMiT’, a graphical notation for expressing and evaluating multimodal user interaction; we elaborate on the NiMMiT primitives and demonstrate its use by means of a comprehensive example.
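The abstract names the notation's primitives without defining their semantics, but the general shape of a state/event/task-chain model of the kind it describes can be hinted at with a small, entirely illustrative sketch; the class, event and task names below are assumptions, not NiMMiT itself.

```python
# Illustrative sketch of a NiMMiT-style machine: states respond to
# (possibly multimodal) events by executing a chain of tasks and moving
# to a next state. Structure and names are assumptions.
from typing import Callable

Task = Callable[[], None]


class NimmitLikeMachine:
    def __init__(self, initial: str):
        self.state = initial
        # (state, event) -> (task chain, next state)
        self.transitions: dict[tuple[str, str], tuple[list[Task], str]] = {}

    def on(self, state: str, event: str, tasks: list[Task], next_state: str):
        self.transitions[(state, event)] = (tasks, next_state)

    def fire(self, event: str) -> None:
        tasks, next_state = self.transitions[(self.state, event)]
        for task in tasks:   # run the task chain in order
            task()
        self.state = next_state


machine = NimmitLikeMachine("idle")
machine.on("idle", "speech:select", [lambda: print("highlight object")], "selecting")
machine.fire("speech:select")
```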

Relevance:

100.00%

Publisher:

Abstract:

Desktop user interface design originates from the fact that users are stationary and can devote all of their visual resource to the application with which they are interacting. In contrast, users of mobile and wearable devices are typically in motion whilst using their device, which means that they cannot devote all, or any, of their visual resource to interaction with the mobile application; it must remain with the primary task, often for safety reasons. Additionally, such devices have limited screen real estate, and traditional input and output capabilities are generally restricted. Consequently, if we are to develop effective applications for use on mobile or wearable technology, we must embrace a paradigm shift with respect to the interaction techniques we employ for communication with such devices. This paper discusses why such a paradigm shift is necessary and presents, as empirical examples of the potential to achieve it, two novel multimodal interaction techniques that are effective alternatives to traditional, visual-centric interface designs on mobile devices.

Relevance:

100.00%

Publisher:

Abstract:

Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment, making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a range of different audio designs showed that egocentric sounds reduced task completion time and perceived annoyance, and allowed users to walk closer to their preferred walking speed. The second is a sonically enhanced 2D gesture recognition system for use on a belt-mounted PDA. An evaluation of the system with and without audio feedback showed users' gestures were more accurate when dynamically guided by audio feedback. These novel interaction techniques demonstrate effective alternatives to visual-centric interface designs on mobile devices.
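The paper's contribution is in the audio design, but the geometric core of a radial pie menu driven by head orientation can be sketched compactly; the sector mapping and nod-to-confirm threshold below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: selecting from a radial pie menu by head yaw.
# The mapping and thresholds are assumptions, not the paper's design.

def sector_for_yaw(yaw_degrees: float, n_items: int) -> int:
    """Map head yaw (0 = straight ahead, increasing clockwise) to one of
    n_items sectors arranged in a circle around the listener's head."""
    sector_width = 360.0 / n_items
    # Centre the first sector on straight ahead, then wrap into [0, 360).
    angle = (yaw_degrees + sector_width / 2.0) % 360.0
    return int(angle // sector_width)


def nod_confirms(pitch_degrees: float, threshold: float = 20.0) -> bool:
    """A downward nod past a pitch threshold confirms the highlighted item."""
    return pitch_degrees > threshold


assert sector_for_yaw(0.0, 8) == 0    # looking straight ahead: item 0
assert sector_for_yaw(90.0, 8) == 2   # a quarter turn: item 2
```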

Relevance:

100.00%

Publisher:

Abstract:

This paper examines the application of commercial, non-invasive electroencephalography (EEG)-based brain-computer interfaces (BCIs) with serious games. Two different EEG-based BCI devices were used to fully control the same serious game. The first device (NeuroSky MindSet) uses only a single dry electrode and requires no calibration. The second device (Emotiv EPOC) uses 14 wet sensors and requires additional training of a classifier. User testing was performed on both devices with sixty-two participants, measuring the player experience as well as key aspects of serious games, primarily learnability, satisfaction, performance and effort. Recorded feedback indicates that the current state of BCIs can be used in the future as alternative game interfaces, after familiarisation and, in some cases, calibration. Comparative analysis showed significant differences between the two devices: the first device provides more satisfaction to the players, whereas the second is more effective in terms of adaptation and interaction with the serious game.

Relevance:

100.00%

Publisher:

Abstract:

Users need to be able to address in-air gesture systems, which means finding where to perform gestures and how to direct them towards the intended system. This is necessary for input to be sensed correctly and without unintentionally affecting other systems. This thesis investigates novel interaction techniques which allow users to address gesture systems properly, helping them find where and how to gesture. It also investigates audio, tactile and interactive light displays for multimodal gesture feedback; these can be used by gesture systems with limited output capabilities (like mobile phones and small household controls), allowing the interaction techniques to be used by a variety of device types. Tactile and interactive light displays are investigated in greater detail, as these are not as well understood as audio displays.

Experiments 1 and 2 explored tactile feedback for gesture systems, comparing an ultrasound haptic display to wearable tactile displays at different body locations and investigating feedback designs. These experiments found that tactile feedback improves the user experience of gesturing by reassuring users that their movements are being sensed. Experiment 3 investigated interactive light displays for gesture systems, finding this novel display type effective for giving feedback and presenting information. It also found that interactive light feedback is enhanced by audio and tactile feedback.

These feedback modalities were then used alongside audio feedback in two interaction techniques for addressing gesture systems: sensor strength feedback and rhythmic gestures. Sensor strength feedback is multimodal feedback that tells users how well they can be sensed, encouraging them to find where to gesture through active exploration. Experiment 4 found that users can do this with 51 mm accuracy, with combinations of audio and interactive light feedback leading to the best performance. Rhythmic gestures are continuously repeated gesture movements which can be used to direct input. Experiment 5 investigated the usability of this technique, finding that users can match rhythmic gestures well and with ease.

Finally, these interaction techniques were combined, resulting in a single new interaction for addressing gesture systems. Using this interaction, users could direct their input with rhythmic gestures while using the sensor strength feedback to find a good location for addressing the system. Experiment 6 studied the effectiveness and usability of this technique, as well as the design space for combining the two types of feedback. It found that the combined interaction was successful, with users matching 99.9% of rhythmic gestures with 80 mm accuracy from target points. The findings show that gesture systems could successfully use this interaction technique to allow users to address them. Novel design recommendations for using rhythmic gestures and sensor strength feedback were created, informed by the experiment findings.
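The thesis's two techniques lend themselves to a compact illustration. Below is a hedged Python sketch, assuming sensor strength is reported as a 0..1 value and a rhythmic gesture arrives as timestamps of repeated movement peaks; these representations, names and tolerances are assumptions, not the thesis's implementation.

```python
# Hedged sketch of sensor strength feedback and rhythmic gesture matching.
# Representations and thresholds are illustrative assumptions.

def feedback_intensity(sensor_strength: float) -> float:
    """Map sensing quality (0 = not sensed, 1 = ideally placed) directly
    to multimodal feedback intensity, so users can home in on a good
    gesturing location by active exploration."""
    return max(0.0, min(1.0, sensor_strength))


def matches_rhythm(peak_times: list[float], period: float,
                   tol: float = 0.15) -> bool:
    """A repeated movement matches a target rhythm if its inter-peak
    intervals all fall within a tolerance of the target period."""
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return len(intervals) >= 2 and all(
        abs(i - period) <= tol * period for i in intervals)


assert matches_rhythm([0.0, 0.5, 1.0, 1.5], period=0.5)   # steady repeat
assert not matches_rhythm([0.0, 0.5, 1.2], period=0.5)    # drifting repeat
```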

Relevance:

70.00%

Publisher:

Abstract:

Highlights

► Provides a review of the history and development of locative media.
► Outlines different human-computer interaction techniques applied in locative media.
► Discusses how locative media applications have changed interaction affordances in and of physical spaces.
► Discusses practices of people in urban settings that evolved through these new affordances.
► Provides an overview of methods to investigate and elaborate design principles for future locative media.

Relevance:

70.00%

Publisher:

Abstract:

This article deals with embodied user interfaces for handheld augmented reality games, which consist of both physical and virtual components. We have developed a number of spatial interaction techniques that optically capture the device's movement and orientation relative to a visual marker. Such physical interactions in 3-D space enable manipulative control of mobile games. In addition to acting as a physical controller that recognizes multiple game-dependent gestures, the mobile device augments the camera view with graphical overlays. We describe three game prototypes that use ubiquitous product packaging and other passive media as backgrounds for handheld augmentation. The prototypes can be realized on widely available off-the-shelf hardware and require only minimal setup and infrastructure support.
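The abstract describes optically capturing the device's pose relative to a marker and mapping it to game control; one plausible, purely illustrative mapping is sketched below. The pose representation and control scheme are assumptions, and the marker tracking itself is taken as given.

```python
# Illustrative sketch: turning a tracked device-relative-to-marker pose
# into game input. The mapping is hypothetical, not the prototypes' code.
from dataclasses import dataclass


@dataclass
class MarkerPose:
    x: float     # device position relative to the marker, metres
    y: float
    z: float     # distance from the marker, metres
    yaw: float   # device rotation about the marker normal, degrees


def game_controls(pose: MarkerPose) -> dict[str, float]:
    """Map physical device movement to game parameters: distance zooms,
    lateral offset pans, rotation steers."""
    return {
        "zoom": 1.0 / max(pose.z, 0.05),   # closer device, larger view
        "pan_x": pose.x,
        "steer": pose.yaw / 45.0,          # normalise to roughly [-1, 1]
    }


print(game_controls(MarkerPose(x=0.02, y=0.0, z=0.25, yaw=-15.0)))
```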

Relevance:

70.00%

Publisher:

Abstract:

Recently, stable markerless 6-DOF video-based hand-tracking devices have become available. These devices simultaneously track the positions and orientations of both of the user's hands in different postures, at a rate of at least 25 frames per second. Such hand-tracking allows the human hands to be used as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, a solution has to be found for coupling and decoupling a virtual object's movements to and from the user's hand (i.e. grabbing and releasing). In this paper, we introduce a novel technique for efficiently grabbing and releasing objects with two hands and intuitively manipulating them in the virtual space. This technique is integrated in a novel 3D interface for virtual manipulations. A user experiment shows the superior applicability of this new technique. Last but not least, we describe how this technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
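Button-less grabbing ultimately comes down to deciding when to couple an object's transform to the hand. A minimal sketch, assuming the tracker reports a grab-posture confidence per hand; the hysteresis thresholds and data representation are illustrative assumptions, not the paper's technique.

```python
# Hedged sketch of button-less grabbing: the object's transform follows
# the hand while a grab posture is held. Hysteresis (two thresholds)
# keeps noisy posture detection from causing spurious releases.

GRAB_ON, GRAB_OFF = 0.8, 0.5   # enter/exit thresholds (hysteresis), assumed


class Grabber:
    def __init__(self) -> None:
        self.holding = False
        self.offset = (0.0, 0.0, 0.0)   # object position relative to hand

    def update(self, grab_confidence: float,
               hand_pos: tuple[float, float, float],
               obj_pos: tuple[float, float, float]):
        if not self.holding and grab_confidence > GRAB_ON:
            self.holding = True    # grab: remember the relative offset
            self.offset = tuple(o - h for o, h in zip(obj_pos, hand_pos))
        elif self.holding and grab_confidence < GRAB_OFF:
            self.holding = False   # release: object stays where it is
        if self.holding:
            # Object follows the hand, preserving the grab offset.
            obj_pos = tuple(h + d for h, d in zip(hand_pos, self.offset))
        return obj_pos
```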

Relevance:

70.00%

Publisher:

Abstract:

In the proceedings of CASCON'2003 we published an article entitled "A Paradigm Shift: Alternative Interaction Techniques for Use with Mobile & Wearable Devices" and were honoured to receive the Best Paper Award for that year's conference. A decade on, we are equally honoured to have the same paper recognised as the Most Influential Paper of the 2003 conference. To this end, we have been asked to provide a brief overview of our intervening research in relation to that original publication, and so that is the purpose of this extended abstract.

Relevance:

70.00%

Publisher:

Abstract:

We are currently witnessing an era where interaction with computers is no longer limited to conventional methods (i.e. keyboard and mouse). Human-Computer Interaction (HCI), as a progressive field of research, has opened up alternatives to the traditional interaction techniques. Embedded infrared (IR) sensors, accelerometers and RGBD cameras have become common inputs for devices to recognize gestures and body movements. These sensors are vision-based, and as a result the devices that incorporate them are reliant on the presence of light. Ultrasonic sensors, on the other hand, do not suffer this limitation, as they utilize the properties of sound waves. These sensors, however, have mainly been used for distance detection and not with HCI devices. This paper presents our approach in developing a multi-dimensional interaction input method and tool, Ultrasonic Gesture-based Interaction (UGI), that utilizes ultrasonic sensors. We demonstrate how these sensors can detect object movements and recognize gestures. We present our approach in building the device and demonstrate sample interactions with it. We have also conducted a user study to evaluate our tool and its distance and micro-gesture detection accuracy. This paper reports these results and outlines our future work in the area.
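The physical principle behind ultrasonic distance detection is time of flight: the echo travels to the object and back at the speed of sound. The snippet below illustrates that arithmetic, plus one assumed way a swipe could be inferred from two adjacent sensors; the two-sensor scheme is an illustrative assumption, not necessarily UGI's design.

```python
# Time-of-flight distance and a hypothetical two-sensor swipe detector.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C


def distance_m(echo_time_s: float) -> float:
    """The pulse travels to the object and back, hence the division by 2."""
    return SPEED_OF_SOUND * echo_time_s / 2.0


def swipe_direction(left_hit_t: float, right_hit_t: float) -> str:
    """If the left sensor detects the hand before the right one, the hand
    is moving left-to-right, and vice versa (assumed scheme)."""
    return "left-to-right" if left_hit_t < right_hit_t else "right-to-left"


assert abs(distance_m(0.001) - 0.1715) < 1e-6   # a 1 ms echo is ~17 cm
```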

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we present TiltZoom, a collection of tilt-based interaction techniques designed for easy one-handed zooming on mobile devices. TiltZoom comprises novel gestural interaction techniques implemented using rate-of-rotation readings from a gyroscope, a sensor commonly embedded in current-generation smart phones. We designed and experimented with three variants of TiltZoom: Tilt Level, Tilt and Hold, and Flip Gesture. The design decisions for all three variants are discussed in this paper, and their performance and subjective user experience are evaluated and compared against conventional touch-based zooming techniques. TiltZoom appears to be a worthy addition to the current established collection of gesture-based mobile interaction techniques for zooming controls, especially when the user has only one hand available while moving about.
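The abstract names the sensor (gyroscope rate-of-rotation readings) but not the control laws. The sketch below shows one way a variant in the spirit of Tilt Level might integrate the rate into a tilt angle that drives zoom speed; the dead zone, gain and clamps are all assumptions.

```python
# Hypothetical tilt-to-zoom control law, not TiltZoom's actual code.

DEAD_ZONE = 5.0   # degrees of tilt ignored, so the rest posture is stable
GAIN = 0.02       # fractional zoom change per degree of excess tilt per frame


def update_zoom(zoom: float, tilt_deg: float,
                pitch_rate_dps: float, dt: float) -> tuple[float, float]:
    """One frame of the zoom loop; returns the new (zoom, tilt_deg)."""
    tilt_deg += pitch_rate_dps * dt              # gyro rate -> tilt angle
    if abs(tilt_deg) > DEAD_ZONE:
        excess = tilt_deg - DEAD_ZONE if tilt_deg > 0 else tilt_deg + DEAD_ZONE
        factor = max(0.2, 1.0 + GAIN * excess)   # clamp to avoid inversion
        zoom = min(max(zoom * factor, 0.1), 10.0)
    return zoom, tilt_deg
```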

Relevance:

60.00%

Publisher:

Abstract:

This research proposes the development of interfaces to support collaborative, community-driven inquiry into data, which we refer to as Participatory Data Analytics. Since the investigation is led by local communities, it is not possible to anticipate which data will be relevant and what questions are going to be asked. Therefore, users have to be able to construct and tailor visualisations to their own needs. The poster presents early work towards defining a suitable compositional model, which will allow users to mix, match, and manipulate data sets to obtain visual representations with little-to-no programming knowledge. Following a user-centred design process, we are subsequently planning to identify appropriate interaction techniques and metaphors for generating such visual specifications on wall-sized, multi-touch displays.
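The compositional model is explicitly still being defined, but the kind of mix-and-match specification the poster aims at can be gestured at with a small, entirely hypothetical sketch: a visualisation as a data set plus a mark plus field-to-channel encodings. None of these names come from the poster.

```python
# Hypothetical compositional visualisation spec, for illustration only.
from dataclasses import dataclass


@dataclass
class Encoding:
    field: str      # column in the chosen data set
    channel: str    # e.g. "x", "y", "colour", "size"


@dataclass
class VisualisationSpec:
    dataset: str
    mark: str                   # e.g. "bar", "point", "line"
    encodings: list[Encoding]


# A community member could compose a view by picking parts, not coding:
spec = VisualisationSpec(
    dataset="air_quality",
    mark="line",
    encodings=[Encoding("timestamp", "x"), Encoding("pm25", "y")],
)
```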

Relevance:

60.00%

Publisher:

Abstract:

Through ubiquitous computing and location-based social media, information is spreading outside the traditional domains of home and work into the urban environment. Digital technologies have changed the way people relate to the urban form, supporting discussion on multiple levels and allowing more citizens to be heard in new ways (Fredericks et al. 2013; Houghton et al. 2014; Caldwell et al. 2013). Face-to-face and digitally mediated discussions, facilitated by tangible and hybrid interaction such as multi-touch screens and media façades, are initiated through a telephone-booth-inspired portable structure: the InstaBooth. The InstaBooth prototype employs a multidisciplinary approach to engage local communities in a situated debate on the future of their urban environment. With it, we capture citizens' past stories and opinions on the use and design of public places.

The way public consultations are currently done often engages only a section of the population involved in a proposed development; the more vocal citizens are not necessarily the most representative of their communities (Jenkins 2006). Alternative ways to engage urban dwellers in the debate about the built environment are currently being explored, including the use of social media and online tools (Foth 2009). This project fosters innovation by providing pathways for communities to participate in the decision-making process that informs the urban form. The InstaBooth promotes dialogue and mediation between a bottom-up and a top-down approach to urban design, with the aim of promoting community connectedness with the urban environment.

The InstaBooth provides an engagement and discussion platform that leverages a number of locally developed display and interaction technologies in order to facilitate a dialogue of ideas and commentary. It combines multiple interaction techniques into a hybrid (digital and analogue) media space. Through the InstaBooth, urban design and architectural proposals are displayed, encouraging commentary from visitors. Inside the InstaBooth, visitors can activate a multi-touch screen in order to browse media, write a note, or draw a picture to provide feedback. The purpose of the InstaBooth is to engage with a broader section of society, including those who are often marginalised. The specific design of the internal and external interfaces, the mutual relationship between these interfaces with regard to information display and interaction, and the question of how visitors can engage with the system are part of the research agenda of the project.

Relevance:

60.00%

Publisher:

Abstract:

Cognitive neuroscience defines the sense of agency as the experience of controlling one's own actions and, through this control, affecting the external world. We believe that the sense of personal agency is a key factor in how people experience interactions with technology. This paper draws on theoretical perspectives in cognitive neuroscience and describes two implicit methods through which personal agency can be empirically investigated. We report two experiments applying these methods to HCI problems. One shows that a new input modality, skin-based interaction, can substantially increase users' sense of agency. The second demonstrates that variations in the parameters of assistance techniques, such as predictive mouse acceleration, can have a significant impact on users' sense of agency. The methods presented provide designers with new ways of evaluating and refining empowering interaction techniques and interfaces, in which users experience an instinctive sense of control and ownership over their actions.
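As a toy illustration of the kind of parameterised assistance the second experiment varies, here is a pointer-acceleration gain function whose aggressiveness is a single parameter; the sigmoid form and constants are illustrative assumptions, not the technique used in the study.

```python
# Hypothetical parameterised pointer-acceleration gain: slow device
# movement gets near-unity gain (precision), fast movement gets higher
# gain (coverage). `level` controls how aggressive the assistance is.
import math


def pointer_gain(speed: float, level: float = 1.0) -> float:
    return 1.0 + level * 2.0 / (1.0 + math.exp(-(speed - 3.0)))


def cursor_dx(device_dx: float, speed: float, level: float = 1.0) -> float:
    """Screen displacement for one input report."""
    return device_dx * pointer_gain(speed, level)


# Varying `level` changes how much the system, rather than the user,
# shapes the cursor's motion: the kind of parameter that may modulate
# the sense of agency.
print(cursor_dx(1.0, speed=6.0, level=1.0))
```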