991 results for Multi-touch


Relevance: 100.00%

Abstract:

We report on an alternative OCGM (Objects, Containers, Gestures, Manipulations) interface for a bulletin board, where a user can pin a note or a drawing and actually share content. By exploiting direct and continuous manipulations, as opposed to discrete gestures, to explore containers, the proposed interface supports a more natural and immediate interaction. It also manages the presence of several simultaneous users, allowing the creation of local multimedia content and connection to social networks, and providing a suitable working environment for cooperative and collaborative tasks in multi-touch setups such as touch-tables, interactive walls, or multimedia boards.

Relevance: 100.00%

Abstract:

This paper discusses the idea and demonstrates an early prototype of a novel method of interacting with security surveillance footage using natural user interfaces in place of traditional mouse and keyboard interaction. Current surveillance monitoring stations and systems provide the user with a vast array of video feeds from multiple locations on a video wall, relying on the user's ability to distinguish the locations of the live feeds from experience or from list-based key-value pairs of locations and camera IDs. During an incident, this method of interaction may cause the user to spend increased amounts of time obtaining situational and location awareness, which is counter-productive. The system proposed in this paper demonstrates how a multi-touch screen and natural interaction can enable surveillance monitoring station users to quickly identify the location of a security camera and efficiently respond to an incident.

Relevance: 100.00%

Abstract:

The research project takes place within the technology acceptability framework, which tries to understand the use made of new technologies, and concentrates more specifically on the factors that influence the acceptance of multi-touch devices (MTD) and the intention to use them. Why be interested in MTD? Nowadays, this technology is used in all kinds of human activities, e.g. leisure, study or work activities (Rogowski and Saeed, 2012). However, handling or entering data by means of gestures on a multi-touch-sensitive screen imposes a number of constraints and consequences which remain mostly unknown (Park and Han, 2013). Currently, few studies in ergonomic psychology have examined the implications of these new human-computer interactions for task fulfillment.

This research project aims to investigate the cognitive, sensorimotor and motivational processes taking place during the use of those devices. The project will analyze the influence of the use of gestures and of the type of gesture used, simple or complex (Lao, Heng, Zhang, Ling, and Wang, 2009), as well as of the feeling of personal self-efficacy in the use of MTD, on task engagement, attention mechanisms and perceived disorientation (Chen, Linen, Yen, and Linn, 2011) when confronted with the use of MTD. For that purpose, the various above-mentioned concepts will be measured in a usability laboratory (U-Lab) with self-reported methods (questionnaires) and objective indicators (physiological indicators, eye tracking). Globally, the whole research aims to understand the processes at stake, as well as the advantages and drawbacks of this new technology, in order to favor a better compatibility and adequacy between gestures, executed tasks and MTD. The conclusions will allow some recommendations for the use of MTD in specific contexts (e.g. learning contexts).

Relevance: 100.00%

Abstract:

Nowadays multi-touch devices (MTD) can be found in all kinds of contexts. In the learning context, MTD availability leads many teachers to use them in their classroom, to support the use of the devices by students, or to assume that they will enhance the learning processes. Despite the rising interest in MTD, few studies exist on their impact in terms of performance or on the suitability of the technology for the learning context. However, even if the use of touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web surfing), we notice that the use of MTD may lead to a less favourable outcome. The complexity of generating an accurate finger gesture and the split attention it requires (multi-tasking effect) make the use of gestures to interact with a touch-sensitive screen more difficult compared to traditional laptop use. More precisely, it is hypothesized that efficacy and efficiency decrease, as well as the available cognitive resources, making the users' task engagement more difficult. Furthermore, the presented study takes into account the moderating effect of previous experience with MTD. Two key factors from technology adoption theories were included in the study: familiarity and self-efficacy with the technology.

Sixty university students, invited to a usability lab, were asked to perform information search tasks on an online encyclopaedia. The different tasks were created so as to exercise the most commonly used mouse actions (e.g. right click, left click, scrolling, zooming, keyword entry…). Two conditions were created: (1) MTD use and (2) laptop use (with keyboard and mouse). The cognitive load, self-efficacy, familiarity and task engagement scales were adapted to the MTD context. Furthermore, the eye-tracking measurements offer additional information about user behaviour and cognitive load.

Our study aims to clarify some important aspects of the usage of MTD and their added value compared to a laptop in a student learning context. More precisely, the outcomes will clarify the suitability of MTD for the processes at stake and the role of previous knowledge in the adoption process, and provide some interesting insights into the user experience with such devices.

Relevance: 100.00%

Abstract:

Over the last decade, multi-touch devices (MTD) have spread across a range of contexts. In the learning context, MTD accessibility leads more and more teachers to use them in their classroom, assuming that this will improve learning activities. Despite a growing interest, only a few studies have focused on the impact of MTD use in terms of performance and suitability in a learning context. However, even if the use of touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web surfing), we notice that the use of MTD may lead to a less favorable outcome. More precisely, tasks that require users to generate complex and/or less common gestures may increase extraneous cognitive load and impair performance, especially for intrinsically complex tasks. It is hypothesized that task and gesture complexity will affect users' cognitive resources and decrease task efficacy and efficiency. Because MTD are supposed to be more appealing, it is assumed that they will also impact cognitive absorption. The present study also takes into account the user's prior knowledge of MTD use and gestures by using experience with MTD as a moderator. Sixty university students were asked to perform information search tasks on an online encyclopedia. Tasks were set up so that users had to generate the most commonly used mouse actions (e.g. left/right click, scrolling, zooming, text entry…). Two conditions were created, MTD use and laptop use (with mouse and keyboard), in order to compare the two devices. An eye-tracking device was used to measure the user's attention and cognitive load. Our study sheds light on some important aspects of the use of MTD and their added value compared to a laptop in a student learning context.

Relevance: 100.00%

Abstract:

Variability management is one of the main activities in the Software Product Line Engineering process. Common and variable features of related products are modelled along with the dependencies and relationships among them. With the increase in size and complexity of product lines, and the more holistic systems approach to the design process, managing the ever-growing variability models has become a challenge. In this paper, we present MUSA, a tool for managing variability and features in large-scale models. MUSA adopts the Separation of Concerns design principle by providing multiple perspectives on the model, each conveying a different set of information. The demonstration is conducted using a real-life model (comprising 1000+ features), particularly showing the Structural View, which is displayed using a mind-mapping visualisation technique (hyperbolic trees), and the Dependency View, which is displayed graphically using logic gates.
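The abstract does not show how the logic-gate Dependency View evaluates a feature selection; as a rough illustration only, the following sketch models feature dependencies with AND/OR gates and a "requires" constraint. All class, feature, and function names here are invented for illustration and are not taken from MUSA.

```python
# Hypothetical sketch of logic-gate style dependencies between features,
# in the spirit of a Dependency View for a feature model (names invented).

class Feature:
    def __init__(self, name, selected=False):
        self.name = name
        self.selected = selected

def and_gate(*features):
    """All input features must be selected."""
    return all(f.selected for f in features)

def or_gate(*features):
    """At least one input feature must be selected."""
    return any(f.selected for f in features)

def requires(source, target):
    """Selecting `source` is only valid if `target` is also selected."""
    return (not source.selected) or target.selected

# Toy configuration check: "Bluetooth" requires "Wireless", and the
# product needs at least one of "USB" / "Wireless".
bluetooth = Feature("Bluetooth", selected=True)
wireless = Feature("Wireless", selected=True)
usb = Feature("USB", selected=False)

valid = requires(bluetooth, wireless) and or_gate(usb, wireless)
print(valid)  # True for this selection
```

The gates compose: validating a whole model amounts to conjoining one such check per dependency, which is what a graphical logic-gate view lets stakeholders read off directly.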

Relevance: 100.00%

Abstract:

In many creative and technical areas, professionals make use of paper sketches to develop and express concepts and models. Paper offers an almost constraint-free environment in which they have as much freedom to express themselves as they need. However, paper does have some disadvantages, such as its size and the inability to manipulate its content (other than removing it or scratching it out), which can be overcome by creating systems that offer the same freedom paper gives but none of its disadvantages and limitations. Only in recent years has the technology that allows doing precisely that become widely available, with the development of touch-sensitive screens that can also interact with a stylus. In this project a prototype was created with the objective of finding a set of the most useful and usable interactions composed of combinations of multi-touch and pen. The project selected Computer Aided Software Engineering (CASE) tools as its application domain, because it addresses a solid and well-defined discipline with still sufficient room for new developments. This choice resulted from the research conducted to find an application domain, which involved analyzing sketching tools from several possible areas and domains. User studies were conducted using Model Driven Inquiry (MDI) to gain a better understanding of human sketch creation activities and of the concepts devised. The prototype was then implemented, through which it was possible to run user evaluations of the interaction concepts created. The results validated most interactions, given that only limited testing was possible at the time. Users had more problems using the pen; however, handwriting and ink recognition were very effective, and users quickly learned the manipulations and gestures of the Natural User Interface (NUI).

Relevance: 100.00%

Abstract:

This thesis argues for the possibility of supporting deictic gestures through handheld multi-touch devices in remote presentation scenarios. In [1], Clark distinguishes the indicative techniques of placing-for and directing-to, where placing-for refers to placing a referent into the addressee's attention, and directing-to refers to directing the addressee's attention towards a referent. Keynote, PowerPoint, FuzeMeeting and others support placing-for efficiently with slide transitions and animations, but provide little to no support for directing-to. The traditional pointing feature present in some presentation tools comes as a virtual laser pointer or mouse cursor. [12, 13] have shown that the mouse cursor and laser pointer offer very little informational expressiveness and do not do justice to human communicative gestures. In this project, a prototype application was implemented for the iPad in order to explore, develop, and test the concept of pointing in remote presentations. The prototype offers visualizing and navigating the slides as well as pointing and zooming. To further investigate the problem and possible solutions, a theoretical framework was designed representing the relationships between the presenter's intention and gesture and the resulting visual effect (cursor) that enables audience members to interpret the meaning of the effect and the presenter's intention. Two studies were performed to investigate people's appreciation of different ways of presenting remotely: an initial qualitative study performed in The Hague, followed by an online quantitative user experiment. The results indicate that subjects found pointing to be helpful for understanding and concentrating, while the detached video feed of the presenter was considered distracting. The positive qualities of having the video feed were the emotion and social presence that it adds to a presentation. For a number of subjects, pointing displayed some of the same social and personal qualities [2] that video affords, though less intensely. The combination of pointing and video proved the most successful, with 10 out of 19 subjects scoring it highest, while pointing alone came in a close second at 8 out of 19. Video alone was the least preferred, with only one subject preferring it. We suggest that the research performed here could provide a basis for future work and could be applied in a variety of distributed collaborative settings.
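The thesis describes linking the presenter's touch gesture to a cursor on the audience's slide view, but gives no implementation detail here. Purely as a hypothetical sketch (coordinate conventions, sizes, and the function name are all invented), mapping a touch on the presenter's device to a cursor position on a differently sized remote slide could look like:

```python
# Hypothetical sketch: map a touch point on the presenter's device to a
# cursor position on the audience's slide view. Conventions are invented
# for illustration; this is not code from the thesis prototype.

def touch_to_cursor(touch_x, touch_y, device_size, slide_size):
    """Normalize the touch point on the device, then scale to slide pixels."""
    dev_w, dev_h = device_size
    slide_w, slide_h = slide_size
    nx, ny = touch_x / dev_w, touch_y / dev_h  # normalized 0..1
    return (round(nx * slide_w), round(ny * slide_h))

# A touch at the centre of a 1024x768 touch screen lands at the centre
# of a 1920x1080 slide view.
print(touch_to_cursor(512, 384, (1024, 768), (1920, 1080)))  # (960, 540)
```

Normalizing first keeps the mapping independent of the two displays' resolutions, which matters when each audience member renders the slides at a different size.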

Relevance: 100.00%

Abstract:

With the introduction of new input devices, such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch 3D navigation, and how users can explore 3D virtual worlds using a multi-touch, non-stereo, desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe, where each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The design of the experiment also included an equation that determined the subjects' level of video game console expertise, which was used to break users down into two groups: casual users and experienced users. The study found that experienced gamers performed significantly faster with the GamePad than casual users. When looking at the groups separately, casual gamers performed significantly better using the multi-touch display compared to the GamePad. Additional results are found in this dissertation.
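The abstract names Yield as the algorithm that resolves the ambiguity of multiple candidate gestures reported by the initial classification, but does not describe how it works. As a minimal hypothetical sketch of the general problem (this is not the actual Yield algorithm; names and confidence scores are invented), one simple resolution strategy picks the highest-confidence candidate above a threshold:

```python
# Hypothetical sketch of resolving gesture ambiguity by confidence score.
# NOT the actual Yield algorithm from the dissertation; it only
# illustrates the problem a classifier disambiguator has to solve.

def resolve_gesture(candidates, threshold=0.5):
    """Pick the most confident candidate gesture, or None if all are weak.

    `candidates` is a list of (gesture_name, confidence) pairs, as a
    classification library might report for one touch sequence.
    """
    best = max(candidates, key=lambda c: c[1], default=None)
    if best is None or best[1] < threshold:
        return None
    return best[0]

# The classifier reports several plausible gestures for one touch sequence.
reported = [("pinch-zoom", 0.82), ("two-finger-pan", 0.74), ("rotate", 0.31)]
print(resolve_gesture(reported))  # pinch-zoom
```

A real disambiguator would also use context (e.g. which gestures the current 3D navigation mode accepts), but the core step of reducing many reported gestures to one action is the same.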


Relevance: 80.00%

Abstract:

Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run with good performance on a 7x4-tile, 21-megapixel display wall. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to that of their sequential counterparts running on the display wall using Chromium. Parallel Q3A's framerate is an order of magnitude higher than with Chromium. The parallel version of Homeworld performed on par with the sequential version, which did not run at all using Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.

Relevance: 70.00%

Abstract:

CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology's (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. CubIT was built to make the Cube facility accessible to QUT's academic and student population. The system allows users to upload, interact with and share media content on the Cube's very large display surfaces. CubIT implements a unique combination of features including RFID authentication, content management through multiple interfaces, multi-user shared workspace support, drag-and-drop upload and sharing, dynamic state control between different parts of the system, and execution and synchronisation of the system across multiple computing nodes.

Relevance: 70.00%

Abstract:

CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology's (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. The CubIT system allows users to upload, interact with and share their own content on the Cube's display surfaces. This paper outlines the collaborative features of CubIT, which are implemented via three user interfaces: a large-screen multi-touch interface, a mobile phone and tablet application, and a web-based content management system. Each of these applications plays a different role and supports different interaction mechanisms, together enabling a wide range of collaborative features including multi-user shared workspaces, drag-and-drop upload and sharing between users, session management, and dynamic state control between different parts of the system.

Relevance: 70.00%

Abstract:

Analyzing and redesigning business processes is a complex task, which requires the collaboration of multiple actors. Current approaches focus on collaborative modeling workshops where process stakeholders verbally contribute their perspective on a process while modeling experts translate their contributions and integrate them into a model using traditional input devices. Limiting participants to verbal contributions affects not only the outcome of collaboration but also collaboration itself. We created CubeBPM, a system that allows groups of actors to interact with process models through a touch-based interface on a large interactive touch display wall. We are currently conducting a study that aims at assessing the impact of CubeBPM on collaboration and modeling performance. Initial results presented in this paper indicate that the setting helped participants to become more active in collaboration.