939 results for Human Computer Interaction (HCI)
Abstract:
Control and automation of residential environments (domotics) is an emerging area of computing application. The development of computational systems for domotics is complex, due to the diversity of potential users and because it is immersed in a context of emotional relationships and family construction. Currently, the development of this kind of system focuses mainly on physical and technological aspects. For this reason, the present research investigates gestural interaction from the perspective of Human-Computer Interaction (HCI). First, we approach the subject through the construction of a conceptual framework for discussing challenges in the area, integrating the dimensions of people, interaction mode and domotics. A further analysis of the domain is carried out using the theoretical-methodological framework of Organizational Semiotics. Next, we define recommendations for diversity that ground and inspire inclusive design, guided by physical, perceptual and cognitive abilities, which aim to better represent the diversity concerned. Although developers are supported by gesture recognition technologies that speed up development, they face additional difficulty when the application's gestural commands are not restricted to the standard gestures provided by development frameworks. Therefore, an abstraction of gestural interaction was conceived through a formalization, described syntactically by building blocks that give rise to a grammar of gestural interaction and, semantically, interpreted from the perspective of the residential system. We then define a set of metrics, grounded in the recommendations, that are described with information from the pre-established grammar, and, on the foundation of this grammar, we design and implement in Java a residential system based on gestural interaction for use with the Microsoft Kinect. Lastly, we conduct an experiment with potential end users of the system, aiming to better analyze the research results.
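The abstract mentions a grammar of gestural interaction built from syntactic building blocks and a Java implementation targeting the Microsoft Kinect, but gives no detail of either. The sketch below is an illustration only, assuming hypothetical Posture and Movement blocks composed into gestural commands and mapped to residential-system actions; none of the names or rules are taken from the thesis.

```java
// Illustrative sketch only: hypothetical building blocks (Posture, Movement)
// composed into gestural commands and mapped to residential-system actions.
// The block names and rules are assumptions, not the grammar from the thesis.
import java.util.HashMap;
import java.util.Map;

enum Posture { OPEN_HAND, CLOSED_HAND, POINTING }
enum Movement { SWIPE_LEFT, SWIPE_RIGHT, RAISE, LOWER, HOLD }

record GestureCommand(Posture posture, Movement movement) { }

class ResidentialGrammar {
    // Each grammar rule maps a composed gesture to a domotic action.
    private final Map<GestureCommand, String> rules = new HashMap<>();

    void addRule(Posture p, Movement m, String action) {
        rules.put(new GestureCommand(p, m), action);
    }

    String interpret(Posture p, Movement m) {
        return rules.getOrDefault(new GestureCommand(p, m), "unrecognized gesture");
    }
}

public class GrammarSketch {
    public static void main(String[] args) {
        ResidentialGrammar grammar = new ResidentialGrammar();
        grammar.addRule(Posture.OPEN_HAND, Movement.RAISE, "lights: increase brightness");
        grammar.addRule(Posture.CLOSED_HAND, Movement.SWIPE_LEFT, "blinds: close");
        System.out.println(grammar.interpret(Posture.OPEN_HAND, Movement.RAISE));
    }
}
```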
Abstract:
The design of simulation environments applied to the study, learning and acquisition of competences is an active field of work at all educational levels. The successive advances in Information and Communication Technologies (ICT), the platforms that support them, and the studies and research carried out in the field of Human-Computer Interaction (HCI) support and make possible the development of tools such as the one proposed here. It is a simulation environment designed with two major objectives, the most notable being to foster active learning and to improve academic performance. To this end we have chosen acoustic feedback in sound reinforcement systems, a phenomenon that requires a detailed study of the parameters that control it and which, presented in the way proposed here, will allow better use of the scarce time available for experimentation in real situations.
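The abstract does not spell out the condition under which acoustic feedback builds up. As a standard reference point (not taken from the paper), the loop between microphone, amplifier and loudspeaker becomes unstable at any frequency where the open-loop gain satisfies

\[ |G(\omega)\,H(\omega)| \ge 1 \quad\text{and}\quad \arg\big(G(\omega)H(\omega)\big) = 2\pi n,\ n \in \mathbb{Z}, \]

where \(G\) is the electro-acoustic gain of the reinforcement chain and \(H\) the acoustic path from loudspeaker back to microphone. The parameters such a simulator can expose (system gain, microphone-loudspeaker distance and directivity, equalization) all act on this product.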
Abstract:
Usability is the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions. Many studies demonstrate the benefits of usability, yet to this day software products continue to exhibit consistently low levels of this quality attribute. Furthermore, poor usability in software systems contributes largely to software failing in actual use. One of the main disciplines involved in usability is that of Human-Computer Interaction (HCI). Over the past two decades the HCI community has proposed specific features that should be present in applications to improve their usability, yet incorporating them into software continues to be far from trivial for software developers. These difficulties are due to multiple factors, including the high level of abstraction at which these HCI recommendations are made and how far removed they are from actual software implementation. In order to bridge this gap, the Software Engineering community has long proposed software design solutions to help developers include usability features in software; however, the problem remains an open research question. This doctoral thesis addresses the problem of helping software developers include specific usability features in their applications by providing them with structured and tangible guidance in the form of a process, which we have termed the Usability-Oriented Software Development Process. This process is supported by a set of Software Usability Guidelines that help developers incorporate a set of eleven usability features with high impact on software design. After developing the Usability-Oriented Software Development Process and the Software Usability Guidelines, they were validated across multiple academic projects and proven to help software developers include such usability features in their software applications. In doing so, their use significantly reduced development time and improved the quality of the resulting designs of these projects. Furthermore, in this work we propose a software tool to automate the application of the proposed process. In sum, this work contributes to the integration of the Software Engineering and HCI disciplines by providing a framework that helps software developers create usable applications in an efficient way.
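The abstract does not name the eleven usability features. As a hedged illustration of why such functionality-related features carry a high impact on software design, the sketch below implements one feature commonly discussed in this line of work (undo) with a command pattern; the feature choice and class names are assumptions for illustration, not content of the Software Usability Guidelines.

```java
// Illustrative sketch only: an "undo" usability feature realised with a
// command pattern. Supporting undo forces every user-visible operation to
// be captured as a reversible command, which is why such features affect
// the software design rather than just the user interface layer.
import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
    void undo();
}

class RenameDocument implements Command {
    private final StringBuilder title;
    private final String newTitle;
    private String oldTitle;

    RenameDocument(StringBuilder title, String newTitle) {
        this.title = title;
        this.newTitle = newTitle;
    }

    public void execute() {
        oldTitle = title.toString();
        title.setLength(0);
        title.append(newTitle);
    }

    public void undo() {
        title.setLength(0);
        title.append(oldTitle);
    }
}

public class UndoSketch {
    public static void main(String[] args) {
        Deque<Command> history = new ArrayDeque<>();
        StringBuilder title = new StringBuilder("draft");

        Command rename = new RenameDocument(title, "final report");
        rename.execute();
        history.push(rename);
        System.out.println(title);   // final report

        history.pop().undo();        // user presses Ctrl+Z
        System.out.println(title);   // draft
    }
}
```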
Abstract:
The Software Engineering (SE) community has historically focused on working with models to represent functionality and persistence, pushing interaction modelling into the background, where it has been covered by the Human-Computer Interaction (HCI) community. Recently, adequately modelling interaction, and specifically usability, has come to be considered a key factor for user acceptance, making the integration of the SE and HCI communities more necessary. If we focus on the Model-Driven Development (MDD) paradigm, we notice that there is a lack of proposals to deal with usability features from the very first steps of the software development process. In general, usability features are manually implemented once the code has been generated from models. This contradicts the MDD paradigm, which claims that all the analysts' effort must be focused on building models, with code generation relegated to model-to-code transformations. Moreover, usability features related to functionality may involve important changes in the system architecture if they are not considered from the early steps. We state that these usability features related to functionality can be represented abstractly in a conceptual model, and that their implementation can be carried out automatically.
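The abstract argues that functionality-related usability features can live in the conceptual model and be generated automatically. The fragment below is only a toy illustration of that idea, assuming a hypothetical model element with a cancellation feature flag that a model-to-code transformation turns into generated code; none of the names come from the thesis.

```java
// Illustrative sketch only: a conceptual-model element that carries a
// functionality-related usability feature (cancellation), which a
// model-to-code transformation turns into generated source code instead of
// the feature being patched in by hand after generation. Names are invented.
record ServiceModel(String name, boolean supportsCancellation) { }

public class ModelToCodeSketch {

    static String generate(ServiceModel model) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(model.name()).append("Service {\n");
        src.append("    public void run() { /* generated business logic */ }\n");
        if (model.supportsCancellation()) {
            // The usability feature becomes part of the generated architecture.
            src.append("    public void cancel() { /* generated cancellation hook */ }\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate(new ServiceModel("ReportExport", true)));
    }
}
```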
Abstract:
In this project, we propose the implementation of a 3D object recognition system optimized to operate under demanding time constraints. The system must be robust so that objects can be recognized properly in poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power mobile GPU computing platform (NVIDIA Jetson TK1) so that it can be integrated into mobile robotics, ambient intelligence or ambient assisted living applications. The acquisition system is based on color and depth (RGB-D) data streams provided by low-cost 3D sensors such as the Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated is quite broad, ranging from the acquisition, outlier removal or filtering of the input data and the segmentation or characterization of regions of interest in the scene to the object recognition and pose estimation itself. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models, reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
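The abstract enumerates the pipeline stages (acquisition, outlier removal/filtering, segmentation, recognition and pose estimation) without further detail. The sketch below only shows how such stages could be wired together as composable steps; the types and placeholder bodies are assumptions and do not model the GPU implementation or the RGB-D sensors.

```java
// Illustrative sketch only: the pipeline stages named in the abstract
// (filtering, segmentation, recognition/pose estimation) wired together as
// composable functions. The placeholder bodies do no real point-cloud work.
import java.util.List;
import java.util.function.Function;

record Point3D(float x, float y, float z, int rgb) { }
record Recognition(String objectId, float[] pose6Dof) { }

public class RecognitionPipelineSketch {

    static List<Point3D> removeOutliers(List<Point3D> cloud) {
        return cloud;            // placeholder: statistical outlier removal
    }

    static List<List<Point3D>> segment(List<Point3D> cloud) {
        return List.of(cloud);   // placeholder: extract regions of interest
    }

    static List<Recognition> recognize(List<List<Point3D>> clusters) {
        return List.of();        // placeholder: match against the 3D model dataset
    }

    public static void main(String[] args) {
        Function<List<Point3D>, List<Recognition>> pipeline =
            ((Function<List<Point3D>, List<Point3D>>) RecognitionPipelineSketch::removeOutliers)
                .andThen(RecognitionPipelineSketch::segment)
                .andThen(RecognitionPipelineSketch::recognize);

        List<Recognition> results = pipeline.apply(List.of());  // empty input cloud
        System.out.println("Recognized objects: " + results.size());
    }
}
```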
Abstract:
The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action and Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application and need to be specified with the help of domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model we are calibrating the probability and timing models that comprise each process using data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data and provides an extrapolation to classifications in scenarios outside the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
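The paper describes the operator as a cycle of Scan, Classify, Decide Action and Perform Action, with the Classify step calibrated by logistic regression. The sketch below only illustrates that structure; the feature names (miss distance, time to closest approach) and all coefficients are invented for illustration and are not the calibrated values from the experiments.

```java
// Illustrative sketch only: the OCM-style cycle (Scan, Classify, Decide
// Action, Perform Action) with a logistic-regression-style classification
// step. Weights and the truncation behaviour are made up for illustration.
import java.util.Random;

public class OperatorCycleSketch {
    private static final Random rng = new Random(42);

    // Probability of classifying a scenario as a conflict, as a logistic
    // function of invented scenario features.
    static double conflictProbability(double missDistance, double timeToCpa) {
        double z = 2.0 - 0.8 * missDistance - 0.3 * timeToCpa;
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        for (int cycle = 0; cycle < 3; cycle++) {
            // Scan: observe a scenario.
            double missDistance = 1.0 + rng.nextDouble() * 4.0; // nautical miles
            double timeToCpa = rng.nextDouble() * 10.0;         // minutes
            // Classify: probabilistic conflict judgement.
            boolean conflict = rng.nextDouble() < conflictProbability(missDistance, timeToCpa);
            // A cycle may truncate here and return to Scan.
            if (!conflict) continue;
            // Decide Action / Perform Action (placeholder).
            System.out.printf("Cycle %d: conflict detected, issuing resolution%n", cycle);
        }
    }
}
```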
Abstract:
This research was conducted at the Space Research and Technology Centre of the European Space Agency at Noordwijk in the Netherlands. ESA is an international organisation that brings together a range of scientists, engineers and managers from 14 European member states. The motivation for the work was to enable decision-makers, in a culturally and technologically diverse organisation, to share information for the purpose of making decisions that are well informed about the risk-related aspects of the situations they seek to address. The research examined the use of decision support system (DSS) technology to facilitate decision-making of this type. This involved identifying the technology available and its application to risk management. Decision-making is a complex activity that does not lend itself to exact measurement or precise understanding at a detailed level. In view of this, a prototype DSS was developed through which to understand the practical issues to be accommodated and to evaluate alternative approaches to supporting decision-making of this type. The problem of measuring the effect upon the quality of decisions has been approached through expert evaluation of the software developed. The practical orientation of this work was informed by a review of the relevant literature in decision-making, risk management, decision support and information technology. Communication and information technology unite the major themes of this work. This allows correlation of the interests of the research with European public policy. The principles of communication were also considered in the topic of information visualisation; this emerging technology exploits flexible modes of human-computer interaction (HCI) to improve the cognition of complex data. Risk management is itself an area characterised by complexity, and risk visualisation is advocated for application in this field of endeavour. The thesis provides recommendations for future work in the fields of decision-making, DSS technology and risk management.
Abstract:
With the introduction of new input devices, such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch, 3D navigation, and how users can explore 3D virtual worlds using a multi-touch, non-stereo, desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe, where each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The design of the experiment also included an equation that determined the subjects' level of video game console expertise, which was used to break users down into two groups: casual users and experienced users. The study found that experienced gamers performed significantly faster with the GamePad than casual users. When looking at the groups separately, casual gamers performed significantly better using the multi-touch display, compared to the GamePad. Additional results are found in this dissertation.
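The dissertation names Yield as the algorithm that resolves ambiguity when the initial classification reports several candidate gestures. The sketch below is not Yield itself; it is only a minimal illustration of that kind of decision, picking one candidate by an assumed application priority and classifier confidence.

```java
// Illustrative sketch only: choosing one gesture when a classifier reports
// several candidates for the same touch input. This is NOT the Yield
// algorithm; priorities, confidences and names are invented.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

record Candidate(String gesture, double confidence, int priority) { }

class AmbiguityResolver {
    // Prefer higher application-defined priority, then higher confidence.
    Optional<Candidate> resolve(List<Candidate> candidates) {
        return candidates.stream()
            .max(Comparator.comparingInt(Candidate::priority)
                           .thenComparingDouble(Candidate::confidence));
    }
}

public class GestureSelectionSketch {
    public static void main(String[] args) {
        List<Candidate> reported = List.of(
            new Candidate("pan", 0.72, 1),
            new Candidate("rotate", 0.68, 2),
            new Candidate("zoom", 0.81, 1));
        new AmbiguityResolver().resolve(reported)
            .ifPresent(c -> System.out.println("Selected gesture: " + c.gesture()));
    }
}
```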
Abstract:
The observation chart is for many health professionals (HPs) the primary source of objective information relating to the health of a patient. Information Systems (IS) research has demonstrated the positive impact of good interface design on decision making, and it follows that good observation chart design can positively impact healthcare decision making. Despite this potential, there is a paucity of observation chart design literature, with the primary source of literature leveraging Human-Computer Interaction (HCI) literature to design better charts. While this approach has been successful, it introduces a gap between understanding of the tasks performed by HPs when using charts and the design features implemented in the chart. Good IS allow for the collection and manipulation of data so that it can be presented in a timely manner that supports specific tasks. Good interface design should therefore consider the specific tasks being performed prior to designing the interface. This research adopts a Design Science Research (DSR) approach to formalise a framework of design principles that incorporates knowledge of the tasks performed by HPs when using observation charts and knowledge pertaining to visual representations of data and the semiology of graphics. The research is presented in three phases. The initial two phases seek to discover and formalise design knowledge embedded in two situated observation charts: the paper-based NEWS chart developed by the Health Service Executive in Ireland and the electronically generated eNEWS chart developed by the Health Information Systems Research Centre in University College Cork. A comparative evaluation of each chart is also presented in the respective phases. Throughout each of these phases, tentative versions of a design framework for electronic vital sign observation charts are presented, with each subsequent iteration of the framework (versions Alpha, Beta, V0.1 and V1.0) representing a refinement of the design knowledge. The design framework is named the framework for the Retrospective Evaluation of Vital Sign Information from Early Warning Systems (REVIEWS). Phase 3 of the research presents the deductive process for designing and implementing V0.1 of the framework, with evaluation of the instantiation allowing for the final iteration, V1.0, of the framework. This study makes a number of contributions to academic research. First, the research demonstrates that the cognitive tasks performed by nurses during clinical reasoning can be supported through good observation chart design. Second, the research establishes the utility of electronic vital sign observation charts in terms of supporting the cognitive tasks performed by nurses during clinical reasoning. Third, the framework for REVIEWS represents a comprehensive set of design principles which, if applied to chart design, will improve the usefulness of the chart in terms of supporting clinical reasoning. Fourth, the electronic observation chart that emerges from this research is demonstrated to be significantly more useful than previously designed charts and represents a significant contribution to practice. Finally, the research presents a research design that employs a combination of inductive and deductive design activities to iterate on the design of situated artefacts.
Abstract:
This paper describes some simple but useful computer vision techniques for human-robot interaction. First, an omnidirectional camera setting is described that can detect people in the surroundings of the robot, giving their angular positions and a rough estimate of the distance. The device can be easily built with inexpensive components. Second, we comment on a color-based face detection technique that can alleviate skin-color false positives. Third, a simple head nod and shake detector is described, suitable for detecting affirmative/negative, approval/disapproval, and understanding/disbelief head gestures.
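The paper gives no implementation detail for the nod/shake detector. The sketch below is only an illustration of that kind of detector, assuming a tracked face centre and comparing accumulated vertical against horizontal displacement; the thresholds are invented and are not the paper's method.

```java
// Illustrative sketch only: classify a head gesture from a tracked face
// centre by comparing accumulated vertical vs. horizontal motion.
// Thresholds (in pixels) are invented for illustration.
import java.util.List;

record FaceCenter(double x, double y) { }

class NodShakeDetector {
    String classify(List<FaceCenter> track) {
        double dx = 0, dy = 0;
        for (int i = 1; i < track.size(); i++) {
            dx += Math.abs(track.get(i).x() - track.get(i - 1).x());
            dy += Math.abs(track.get(i).y() - track.get(i - 1).y());
        }
        if (dy > 2 * dx && dy > 20) return "nod (affirmative)";
        if (dx > 2 * dy && dx > 20) return "shake (negative)";
        return "none";
    }
}

public class NodShakeSketch {
    public static void main(String[] args) {
        List<FaceCenter> verticalTrack = List.of(
            new FaceCenter(100, 100), new FaceCenter(101, 115),
            new FaceCenter(100, 98),  new FaceCenter(101, 116));
        System.out.println(new NodShakeDetector().classify(verticalTrack));
    }
}
```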
Abstract:
Effective interaction with personal computers is a basic requirement for many of the functions that are performed in our daily lives. With the rapid emergence of the Internet and the World Wide Web, computers have become one of the premier means of communication in our society. Unfortunately, these advances have not become equally accessible to physically handicapped individuals. In reality, a significant number of individuals with severe motor disabilities, due to a variety of causes such as Spinal Cord Injury (SCI) and Amyotrophic Lateral Sclerosis (ALS), may not be able to use the computer mouse as a vital input device for computer interaction. The purpose of this research was to further develop and improve an existing alternative input device for computer cursor control to be used by individuals with severe motor disabilities. This thesis describes the development and the underlying principle of a practical hands-off human-computer interface based on Electromyogram (EMG) signals and Eye Gaze Tracking (EGT) technology, compatible with the Microsoft Windows operating system (OS). Results of the software developed in this thesis show a significant improvement in the performance and usability of the EMG/EGT cursor control HCI.
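The abstract describes a cursor driven by eye gaze, with EMG signals providing the selection action, but gives no implementation detail. The sketch below only illustrates that division of labour, using java.awt.Robot as a stand-in output layer; the threshold, the normalised EMG value and the gaze coordinates are assumptions and the thesis' actual signal acquisition and processing are not represented.

```java
// Illustrative sketch only: gaze position moves the cursor, an EMG channel
// crossing a threshold triggers the click. The threshold and the normalised
// activation value are invented; real EMG/EGT signal processing is omitted.
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;

class EmgEgtCursor {
    private final Robot robot;
    private final double emgClickThreshold = 0.6; // normalised muscle activation

    EmgEgtCursor() throws AWTException { this.robot = new Robot(); }

    void update(int gazeX, int gazeY, double emgActivation) {
        robot.mouseMove(gazeX, gazeY);           // gaze drives cursor position
        if (emgActivation > emgClickThreshold) { // EMG drives selection
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
        }
    }
}

public class EmgEgtSketch {
    public static void main(String[] args) throws AWTException {
        EmgEgtCursor cursor = new EmgEgtCursor();
        // One simulated frame: gaze at (640, 360), muscle relaxed (no click).
        cursor.update(640, 360, 0.1);
    }
}
```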
Abstract:
Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of ‘familiar’ interface design, which builds upon users’ knowledge of real-world interactions and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects and are manipulated like their real-world counterparts, rather than with the buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions that are based upon familiar real-world tasks, which supports multitouch interaction, involves few buttons and icons, and has no menus, no right-clicks or double-clicks and no dialogs. Using an email client as an example to test the principles of “familiarity”, the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway.
Abstract:
This paper looks at how implant and electrode technology can be employed to create biological brains for robots, to enable human enhancement and to diminish the effects of certain neural illnesses. In all cases the end result is to increase the range of abilities of the recipients. An indication is given of a number of areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking a biological brain directly with computer technology. The emphasis is placed on practical scientific studies that have been and are being undertaken and reported on. The area of focus is the use of electrode technology, where either a connection is made directly with the cerebral cortex and/or nervous system or where implants into the human body are involved. The paper also considers robots that have biological brains, in which human neurons can be employed as the sole thinking machine for a real-world robot body.