415 results for Gesture.
Abstract:
This research explores gestures used in the context of activities in the workplace and in everyday life in order to understand requirements and devise concepts for the design of gestural information appliances. A collaborative method of video interaction analysis devised to suit design explorations, the Video Card Game, was used to capture and analyse how gesture is used in the context of six different domains: the dentist's office; PDA and mobile phone use; the experimental biologist's laboratory; a city ferry service; a video cassette player repair shop; and a factory flowmeter assembly station. Findings are presented in the form of gestural themes, derived from the tradition of qualitative analysis but bearing some similarity to Alexandrian patterns. Implications for the design of gestural devices are discussed.
Abstract:
There is a mismatch between the kinds of movements used in gesture interfaces and our existing theoretical understandings of gesture. We need to re-examine the assumptions of gesture research and develop theory more suited to gesture interface design. In addition to improved theory, we need ways for participants in the design process to adapt, extend and develop theory for their own design contexts. Gesture interface designers should approach theory as a contingent resource for design actions that is responsive to the needs of the design process.
Abstract:
Maps are used to represent three-dimensional space and are integral to a range of everyday experiences. They are increasingly used in mathematics, being prominent both in school curricula and as a form of assessing students' understanding of mathematical ideas. In order to successfully interpret maps, students need to understand that maps represent space, have their own perspective and scale, and have their own set of symbols and texts. Despite the increased prevalence of maps in society and school, there is evidence to suggest that students have difficulty interpreting them. This study investigated 43 primary-aged students' (aged 9-12 years) verbal and gestural behaviours as they engaged with and solved map tasks. Within a multiliteracies framework that focuses on spatial, visual, linguistic, and gestural elements, the study investigated how students interpret map tasks. Specifically, the study sought to understand the skills and approaches students used to solve map tasks and the gestural behaviours they exhibited as they engaged with those tasks. The investigation was undertaken using the Knowledge Discovery in Data (KDD) design. The design of this study capitalised on existing research data to carry out a more detailed analysis of students' interpretation of map tasks. Video data from an existing data set was reorganised according to two distinct episodes, Task Solution and Task Explanation, and analysed within the multiliteracies framework. Content Analysis was used with these data and, through anticipatory data reduction techniques, patterns of behaviour were identified in relation to each specific map task by looking at task solution, task correctness and gesture use. The findings of this study revealed that students had a relatively sound understanding of general mapping knowledge, such as identifying landmarks and using keys, compass points and coordinates. However, their understanding of mathematical concepts pertinent to map tasks, including location, direction, and movement, was less developed. Successful students were able to interpret the map tasks and apply relevant mathematical understanding to navigate the spatial demands of the tasks, while unsuccessful students were only able to interpret and understand basic map conventions. In terms of gesture use, the more difficult the task, the more likely students were to exhibit gestural behaviours to solve it. The most common form of gestural behaviour was deictic, that is, a pointing gesture. Deictic gestures not only aided the students' capacity to explain how they solved the map tasks but also served as a tool that assisted them to navigate and monitor their spatial movements when solving the tasks. A number of implications for theory, learning and teaching, and test and curriculum design arise from the study. From a theoretical perspective, the findings suggest that gesturing is an important element of multimodal engagement in mapping tasks. In terms of teaching and learning, implications include the need for students to utilise gesturing techniques when first faced with new or novel map tasks. As students become more proficient in solving such tasks, they should be encouraged to move beyond a reliance on gesture in order to progress to more sophisticated understandings of map tasks. Additionally, teachers need to provide students with opportunities to interpret and attend to multiple modes of information when interpreting map tasks.
Abstract:
With the release of the Nintendo Wii in 2006, the use of haptic force gestures has become a very popular form of input for interactive entertainment. However, current gesture recognition techniques utilised in Nintendo Wii games offer little control when it comes to recognising simple gestures. This paper presents a simple gesture recognition technique called Peak Testing which gives greater control over gesture interaction. This recognition technique locates force peaks in continuous force data (provided by a gesture device such as the Wiimote) and then cancels any peaks which are not meant for input. Peak Testing is therefore able to identify movements in any direction. This paper applies the recognition technique to the control of virtual instruments and investigates how users respond to this interaction. The technique is then explored as the basis for a robust way to navigate menus with a simple flick of the wrist. We propose that this flick-based interaction could be a very intuitive way to navigate Nintendo Wii menus instead of the pointer techniques currently implemented.
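As a rough illustration (not the authors' implementation), the Python sketch below detects force peaks in a continuous stream of accelerometer samples and cancels peaks that arrive within a refractory window of an accepted one, such as the follow-through of a flick. The function name, magnitude proxy, threshold and window size are all illustrative assumptions.

```python
# Hypothetical sketch in the spirit of Peak Testing: locate force peaks
# in continuous force data and cancel peaks not meant as input.
# All constants are assumed, not taken from the paper.
from typing import Iterable, List, Tuple

PEAK_THRESHOLD = 1.8     # minimum magnitude to count as a deliberate peak (assumed)
REFRACTORY_SAMPLES = 20  # samples ignored after an accepted peak (assumed)

def detect_peaks(samples: Iterable[Tuple[float, float, float]]) -> List[int]:
    """Return indices of accepted force peaks in an (x, y, z) sample stream."""
    mags = [abs(x) + abs(y) + abs(z) for x, y, z in samples]  # cheap magnitude proxy
    accepted: List[int] = []
    last = -REFRACTORY_SAMPLES
    for i in range(1, len(mags) - 1):
        local_max = mags[i - 1] < mags[i] >= mags[i + 1]
        if local_max and mags[i] > PEAK_THRESHOLD:
            if i - last >= REFRACTORY_SAMPLES:
                accepted.append(i)  # genuine input peak
                last = i
            # otherwise: cancelled as unintended follow-through
    return accepted
```

Because peaks are found on an axis-independent magnitude, a flick in any direction produces a candidate peak, consistent with the paper's claim that the technique can identify movements in any direction.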
Abstract:
This paper illustrates the complexity of pointing as it is employed in a design workshop. Using the method of interaction analysis, we argue that pointing is not merely employed to index, locate, or fix reference to an object. It also constitutes a practice for reestablishing intersubjectivity and solving interactional trouble such as misunderstandings or disagreements by virtue of enlisting something as part of the participants' shared experience. We use this analysis to discuss implications for how such practices might be supported with computer mediation, arguing for a "bricolage" approach to systems development that emphasizes the provision of resources for users to collaboratively negotiate the accomplishment of intersubjectivity rather than systems that try to support pointing as a specific gestural action.
Abstract:
Gesture interfaces are an attractive avenue for human-computer interaction, given the range of expression that people are able to engage when gesturing. Consequently, there is a long-running stream of research into gesture as a means of interaction in the field of human-computer interaction. However, most of this research has focussed on the technical challenges of detecting and responding to people's movements, or on exploring the interaction possibilities opened up by technical developments. There has been relatively little research on how to actually design gesture interfaces, or on the kinds of understandings of gesture that might be most useful to gesture interface designers. Running parallel to research in gesture interfaces, there is a body of research into human gesture, which would seem a useful source from which to draw knowledge that could inform gesture interface design. However, there is a gap between the ways that gesture is conceived of in gesture interface research compared to gesture research. In this dissertation, I explore this gap and reflect on the appropriateness of existing research into human gesturing for the needs of gesture interface design. Through a participatory design process, I designed, prototyped and evaluated a gesture interface for the work of the dental examination. Against this grounding experience, I undertook an analysis of the work of the dental examination, with particular focus on the roles that gestures play in that work, in order to compare and discuss existing gesture research. I take the work of the gesture researcher McNeill as a point of focus, because he is widely cited within the gesture interface research literature. I show that although McNeill's research into human gesture can be applied to some important aspects of the gestures of dentistry, there remains a range of gestures that McNeill's work does not deal with directly, yet which play an important role in the work and could usefully be responded to with gesture interface technologies. I discuss some other strands of gesture research, which are less widely cited within gesture interface research, but offer a broader conception of gesture that would be useful for gesture interface design. Ultimately, I argue that the gap in conceptions of gesture between gesture interface research and gesture research is an outcome of the different interests that each community brings to bear on the research. What gesture interface research requires is attention to the problems of designing gesture interfaces for authentic contexts of use, and assessment of existing theory in light of this.
Abstract:
In this paper, we describe an interactive artwork that uses large body gestures as its primary interactive mode. The artist intends the work to provoke active reflection in the audience by way of gesture and content. The technology is not the focus; rather, the aim is to provoke memory and to elicit feelings of connective human experiences in a required-to-participate audience. We find the work provokes a diverse and contradictory set of responses. The methods used to understand this include qualitative methods common to evaluating interactive artworks, as well as in-depth discussions with the artist herself. This paper is relevant to the Human-Centered Computing track because in all stages of the design of the work, as well as the evaluation, the focus is on the human aspect; the computing is designed to enable all-too-human responses.
Abstract:
This paper discusses the idea and demonstrates an early prototype of a novel method of interacting with security surveillance footage using natural user interfaces in place of traditional mouse and keyboard interaction. Current surveillance monitoring stations and systems present the user with a vast array of video feeds from multiple locations on a video wall, relying on the user's ability to distinguish the locations of the live feeds from experience or from list-based key-value pairs of locations and camera IDs. During an incident, this method of interaction may cause the user to spend increased amounts of time obtaining situational and location awareness, which is counter-productive. The system proposed in this paper demonstrates how a multi-touch screen and natural interaction can enable surveillance monitoring station users to quickly identify the location of a security camera and efficiently respond to an incident.
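To make the contrast with key-value lookup concrete, a minimal sketch (purely illustrative; the registry format, names and search radius are assumptions, not the paper's system) might resolve a touch on the map view to the nearest registered camera:

```python
# Illustrative only: spatial lookup of a camera from a touch point on a
# map view, replacing a memorised list of location/camera-ID pairs.
import math
from typing import Dict, Optional, Tuple

# camera_id -> (x, y) position in map-view coordinates (assumed layout)
CAMERA_REGISTRY: Dict[str, Tuple[float, float]] = {
    "CAM-01": (120.0, 80.0),
    "CAM-02": (420.0, 310.0),
    "CAM-03": (640.0, 95.0),
}

def camera_at_touch(touch: Tuple[float, float],
                    max_radius: float = 50.0) -> Optional[str]:
    """Return the ID of the camera nearest the touch point, if one lies
    within max_radius pixels; otherwise None (the touch missed)."""
    best_id, best_dist = None, max_radius
    for cam_id, (cx, cy) in CAMERA_REGISTRY.items():
        dist = math.hypot(touch[0] - cx, touch[1] - cy)
        if dist <= best_dist:
            best_id, best_dist = cam_id, dist
    return best_id
```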