849 results for HUMAN SYSTEM INTERACTION
Abstract:
This paper discusses the idea and demonstrates an early prototype of a novel method of interacting with security surveillance footage using natural user interfaces in place of traditional mouse and keyboard interaction. Current surveillance monitoring stations and systems present the user with a vast array of video feeds from multiple locations on a video wall, relying on the user's ability to identify the locations of the live feeds from experience or from a list-based pairing of locations and camera IDs. During an incident, this method of interaction may cause the user to spend more time obtaining situational and location awareness, which is counter-productive. The system proposed in this paper demonstrates how a multi-touch screen and natural interaction can enable surveillance monitoring station users to quickly identify the location of a security camera and respond efficiently to an incident.
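The abstract gives no implementation details, but the core interaction idea, selecting a camera by touching its position on a facility map rather than recalling a camera-ID-to-location pairing, can be sketched as a nearest-camera lookup. The camera registry, coordinates, and distance threshold below are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical camera registry: ID -> (x, y) position on the facility map (assumed data).
CAMERAS = {
    "CAM-01": (12.0, 4.5),
    "CAM-02": (30.2, 18.0),
    "CAM-03": (7.5, 22.3),
}

def camera_near_touch(touch_x, touch_y, max_dist=5.0):
    """Return the camera closest to a touch point on the map, if within max_dist."""
    best_id, best_dist = None, float("inf")
    for cam_id, (cx, cy) in CAMERAS.items():
        d = math.hypot(touch_x - cx, touch_y - cy)
        if d < best_dist:
            best_id, best_dist = cam_id, d
    return best_id if best_dist <= max_dist else None

# Example: a touch at map coordinates (11.0, 5.0) selects CAM-01.
print(camera_near_touch(11.0, 5.0))
```

In a monitoring station this lookup would be driven by the touch events of the multi-touch display and would switch the video wall to the selected feed, so the operator gets location context without memorizing camera IDs.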
Abstract:
Bug fixing is a highly cooperative work activity in which developers, testers, product managers and other stakeholders collaborate using a bug tracking system. In the context of Global Software Development (GSD), where software development is distributed across different geographical locations, we focus on understanding the role of bug trackers in supporting software bug fixing activities. We carried out small-scale ethnographic fieldwork in a software product team distributed between Finland and India at a multinational engineering company. Using semi-structured interviews and in-situ observations of 16 bug cases, we show that the bug tracker 1) supported the information needs of different stakeholders, 2) established common ground, and 3) reinforced issues related to ownership, performance and power. Consequently, we provide implications for design around these findings.
Abstract:
This thesis examines the interaction between the user and the digital archive. The aim of the study is to support an in-depth examination of the interaction process, with a view to making recommendations and tools, for system designers and archival professionals, to promote development of the digital archive domain. Following a comprehensive literature review process, an urgent requirement for models was identified. The Model of Contextual Interaction presented in this thesis aims to provide a conceptual model through which the interaction process between the user and the digital archive can be examined. Using a five-phased research development framework, the study presents a structured account of its methods, adopting a multi-method methodology to ensure robust data collection and analysis. The findings of the study are presented across the Model of Contextual Interaction and provide a basis on which recommendations and tools for system designers have been made. The thesis concludes with a summary of key findings and a reflective account of how the findings and the Model of Contextual Interaction have impacted digital provision within the archive domain and how the model could be applied to other domains.
Abstract:
In this paper, a novel framework for visual tracking of human body parts is introduced. The approach presented demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera by using a limb-tracking system based on a 2-D articulated model and a double-tracking strategy. Its key contribution is that the 2-D model is constrained only by biomechanical knowledge about human bipedal motion, instead of relying on constraints that are linked to a specific activity or camera view. These characteristics make our approach suitable for real visual surveillance applications. Experiments on a set of indoor and outdoor sequences demonstrate the effectiveness of our method in tracking human lower body parts. Moreover, a detailed comparison with current tracking methods is presented.
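The paper's double-tracking strategy is not described in enough detail here to reproduce, but the flavour of a 2-D articulated limb model constrained only by biomechanical limits can be sketched as follows. The segment lengths and joint-angle ranges are placeholder assumptions, not values from the paper.

```python
import numpy as np

# Minimal 2-D articulated leg model: joint angles (hip, knee) and segment lengths.
# The limits below stand in for generic biomechanical constraints on bipedal motion.
SEGMENT_LENGTHS = np.array([0.45, 0.42])          # thigh, shank (metres, assumed)
ANGLE_LIMITS = np.array([[-0.5, 2.0],             # hip flexion/extension (rad)
                         [0.0, 2.4]])             # knee flexion (rad)

def clip_to_biomechanical_limits(angles):
    """Clip a candidate pose proposed by the tracker to the joint-angle limits."""
    return np.clip(angles, ANGLE_LIMITS[:, 0], ANGLE_LIMITS[:, 1])

def forward_kinematics(hip_xy, angles):
    """Return 2-D positions of hip, knee and ankle for the given joint angles,
    measured from the downward vertical."""
    hip = np.asarray(hip_xy, dtype=float)
    knee = hip + SEGMENT_LENGTHS[0] * np.array([np.sin(angles[0]), -np.cos(angles[0])])
    ankle = knee + SEGMENT_LENGTHS[1] * np.array([np.sin(angles[0] - angles[1]),
                                                  -np.cos(angles[0] - angles[1])])
    return hip, knee, ankle

# Example: an infeasible candidate pose is clipped, then projected to image space.
candidate = np.array([2.5, -0.3])
hip, knee, ankle = forward_kinematics([0.0, 1.0], clip_to_biomechanical_limits(candidate))
print(ankle)
```

A tracker built on such a model can score candidate limb configurations against image evidence while rejecting anatomically impossible ones, without assuming a particular activity or camera view.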
Abstract:
The SAL system embodies a new kind of human-computer interaction, where a person and a computer carry out a fluent, emotionally coloured conversation. Because that kind of capability is new, evaluating systems that have it is a new challenge. This paper outlines techniques that have been developed to evaluate SAL interactions, and uses the case to highlight the range of variables that become relevant in dealing with systems of this order of complexity.
Abstract:
This thesis addresses the problem of word learning in computational agents. The motivation behind this work lies in the need to support language-based communication between service robots and their human users, as well as grounded reasoning using symbols relevant to the assigned tasks. The research focuses on the problem of grounding human vocabulary in a robotic agent's sensorimotor perception. Words have to be grounded in bodily experiences, which emphasizes the role of appropriate embodiments. On the other hand, language is a cultural product created and acquired through social interactions, which emphasizes the role of society as a source of linguistic input. Taking these aspects into account, an experimental scenario is set up in which a human instructor teaches a robotic agent the names of the objects present in a visually shared environment. The agent grounds the names of these objects in visual perception. Word learning is an open-ended problem; therefore, the learning architecture of the agent has to be able to acquire words and categories in an open-ended manner. In this work, four learning architectures were designed that can be used by robotic agents for long-term and open-ended word and category acquisition. The learning methods used in these architectures are designed to scale up incrementally to larger sets of words and categories. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot's vocabulary will be limited by its discriminatory capacity, which, in turn, depends on its sensors and perceptual capabilities. An extensive set of systematic experiments, in multiple experimental settings, was carried out to thoroughly evaluate the described learning approaches. The results indicate that all approaches were able to incrementally acquire new words and categories. Although some of the approaches could not scale up to larger vocabularies, one approach was shown to learn up to 293 categories, with potential for learning many more.
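The four learning architectures themselves are not described in the abstract. As a minimal sketch of the open-ended teach-and-ask loop it refers to, an incremental nearest-centroid learner (with assumed visual feature vectors as input) can acquire new categories without a fixed vocabulary size:

```python
import numpy as np

class IncrementalWordLearner:
    """Minimal sketch of open-ended word/category acquisition: each taught word is
    grounded in a running mean of feature vectors extracted from the robot's visual
    perception (feature extraction assumed to happen elsewhere)."""

    def __init__(self):
        self.centroids = {}   # word -> mean feature vector
        self.counts = {}      # word -> number of teaching examples seen

    def teach(self, word, features):
        """Instructor shows an object and names it; update (or create) the category."""
        x = np.asarray(features, dtype=float)
        if word not in self.centroids:
            self.centroids[word] = x.copy()
            self.counts[word] = 1
        else:
            self.counts[word] += 1
            self.centroids[word] += (x - self.centroids[word]) / self.counts[word]

    def ask(self, features):
        """Name the shown object with the closest learned category (None if nothing learned)."""
        if not self.centroids:
            return None
        x = np.asarray(features, dtype=float)
        return min(self.centroids, key=lambda w: np.linalg.norm(self.centroids[w] - x))

# Example teaching interaction with made-up 3-D visual features.
learner = IncrementalWordLearner()
learner.teach("cup", [0.9, 0.1, 0.3])
learner.teach("ball", [0.2, 0.8, 0.5])
print(learner.ask([0.85, 0.15, 0.25]))   # -> "cup"
```

The thesis's evaluation point carries over directly to such a learner: its usable vocabulary is bounded by how well the chosen features discriminate between the objects the instructor names.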
Abstract:
in RoboCup 2007: Robot Soccer World Cup XI
Abstract:
This paper discusses the implementation details of a child-friendly, good-quality English text-to-speech (TTS) system that is phoneme-based, concatenative, easy to set up and use, and requires little memory. Direct waveform concatenation and linear prediction coding (LPC) are used. Most existing TTS systems are unit-selection based and use standard speech databases available in neutral adult voices. Here, reduced memory is achieved by concatenating phonemes and by replacing phonetic wave files with their LPC coefficients. Linguistic analysis was used to reduce the algorithmic complexity instead of signal processing techniques. A sufficient degree of customization and generalization catering to the needs of the child user has been included through the provision for vocabulary and voice selection to suit the requisites of the child. Prosody has also been incorporated. This inexpensive TTS system was implemented in MATLAB, with the synthesis presented by means of a graphical user interface (GUI), thus making it child-friendly. It can be used not only as an interesting language-learning aid for the normal child but also as a speech aid to the vocally disabled child. The quality of the synthesized speech was evaluated using the mean opinion score (MOS).
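The paper's MATLAB implementation and LPC coefficient storage are not reproduced here; the sketch below (Python, with an assumed 16 kHz sample rate and placeholder waveforms) only illustrates the direct waveform-concatenation step, joining pre-recorded phoneme units with a short cross-fade at each boundary.

```python
import numpy as np

SAMPLE_RATE = 16000  # Hz (assumed)

def concatenate_phonemes(phoneme_waves, crossfade_ms=10):
    """Direct waveform concatenation of pre-recorded phoneme units, with a short
    linear cross-fade at each join to reduce audible discontinuities.
    `phoneme_waves` is a list of 1-D numpy arrays, one per phoneme, assumed to
    come from the system's phoneme database."""
    n_fade = int(SAMPLE_RATE * crossfade_ms / 1000)
    out = phoneme_waves[0].astype(float)
    for wave in phoneme_waves[1:]:
        wave = wave.astype(float)
        fade = np.linspace(0.0, 1.0, n_fade)
        out[-n_fade:] = out[-n_fade:] * (1 - fade) + wave[:n_fade] * fade
        out = np.concatenate([out, wave[n_fade:]])
    return out

# Example with synthetic placeholder "phonemes" (noise standing in for recordings).
rng = np.random.default_rng(0)
units = [rng.standard_normal(2000) for _ in range(3)]
speech = concatenate_phonemes(units)
print(speech.shape)
```

In the system described, the stored units would additionally be kept as LPC coefficients and resynthesized before concatenation, which is what yields the reduced memory footprint.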
Abstract:
This thesis investigates a method for human-robot interaction (HRI) that upholds the productivity of industrial robots, for example by minimizing operation time, while ensuring human safety, for example through collision avoidance. To solve such problems, an online motion planning approach for robotic manipulators with HRI is proposed. The approach is based on model predictive control (MPC) with embedded mixed-integer programming. The planning strategies for the robotic manipulators considered in the thesis are performed directly in the workspace for easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP). It is further reformulated so that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by the human operators, can be generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions based on probability density functions in the form of single points, such as most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace as a region which is possibly occupied by the human at future instances of time. The method is further enhanced by combining it with reachability analysis to increase the prediction accuracy. These safety-relevant regions can subsequently serve as safety constraints when the motion is planned by optimization. In this way one arrives at motion plans that are safe, i.e. plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator. The task of the industrial robot is to drive its end-effector according to a nominal sequence of gripping, motion and releasing operations while no collision with a human arm occurs.
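The thesis's exact formulation, including the MIP reformulation and the reachability analysis, is not given in the abstract. The sketch below only illustrates the general idea of an HMM-style occupancy prediction over discretized workspace cells, thresholded into a safety-relevant region that could then enter the motion-planning problem as constraints; the transition matrix, horizon, and probability mass are assumptions.

```python
import numpy as np

def safety_relevant_region(transition, p0, horizon, mass=0.95):
    """Sketch of an HMM-style prediction of workspace cells the human may occupy.
    `transition[i, j]` is the (assumed, learned offline) probability of moving from
    cell i to cell j in one time step; `p0` is the current occupancy distribution.
    For each step up to `horizon`, the smallest set of cells covering at least
    `mass` probability is added to the safety-relevant region."""
    region = set()
    p = np.asarray(p0, dtype=float)
    for _ in range(horizon):
        p = p @ transition                      # one-step prediction
        order = np.argsort(p)[::-1]             # most likely cells first
        covered, k = 0.0, 0
        while covered < mass and k < len(p):
            covered += p[order[k]]
            region.add(int(order[k]))
            k += 1
    return region

# Toy example with 4 workspace cells; the human starts in cell 0.
T = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.1, 0.6, 0.3, 0.0],
              [0.0, 0.1, 0.6, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
print(safety_relevant_region(T, [1.0, 0.0, 0.0, 0.0], horizon=3))
```

The resulting cell set would be translated into collision-avoidance constraints of the MPC problem, so that the planned robot motion avoids the predicted region with at least the required probability.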
Abstract:
The interface between humans and technology is a rapidly changing field. In particular, as technological methods have improved dramatically, interaction has become possible that could only be speculated about even a decade earlier. This interaction can, though, take on a wide range of forms. Indeed, standard buttons and dials with televisual feedback are perhaps a common example. But now virtual reality systems, wearable computers and, most of all, implant technology are throwing up a completely new concept, namely a symbiosis of human and machine. No longer is it sensible simply to consider how a human interacts with a machine, but rather how the human-machine symbiotic combination interacts with the outside world. In this paper we take a look at some of the recent approaches, putting implant technology in context. We also consider some specific practical examples which may well alter the way we look at this symbiosis in the future. The main area of interest as far as symbiotic studies are concerned is clearly the use of implant technology, particularly where a connection is made between technology and the human brain and/or nervous system. Often pilot tests and experimentation have been carried out a priori to investigate the eventual possibilities before human subjects are themselves involved. Some of the more pertinent animal studies are discussed briefly here. The paper, however, concentrates on human experimentation, in particular that carried out by the authors themselves, firstly to indicate what possibilities exist as of now with available technology, but perhaps more importantly to show what might be possible with such technology in the future and how this may well have extensive social effects. The driving force behind the integration of technology with humans on a neural level has historically been to restore lost functionality in individuals who have suffered neurological trauma such as spinal cord damage, or who suffer from a debilitating disease such as amyotrophic lateral sclerosis. Very few would argue against the development of implants to enable such people to control their environment, or some aspect of their own body functions. Indeed, this technology in the short term has applications for the amelioration of symptoms in the physically impaired, such as alternative senses being bestowed on a blind or deaf individual. However, the issue becomes distinctly more complex when it is proposed that such technology be used on those with no medical need, but who instead wish to enhance and augment their own bodies, particularly in terms of their mental attributes. These issues are discussed here in the light of practical experimental test results and their ethical consequences.
Abstract:
Enhanced production of proinflammatory bradykinin-related peptides, the kinins, has been suggested to contribute to the pathogenesis of periodontitis, a common inflammatory disease of human gingival tissues. In this report, we describe a plausible mechanism of activation of the kinin-generating system, also known as the contact system or kininogen-kallikrein-kinin system, by the adsorption of its plasma-derived components such as high-molecular-mass kininogen (HK), prekallikrein (PK), and Hageman factor (FXII) to the cell surface of periodontal pathogen Porphyromonas gingivalis. The adsorption characteristics of mutant strains deficient in selected proteins of the cell envelope suggested that the surface-associated cysteine proteinases, gingipains, bearing hemagglutinin/adhesin domains (RgpA and Kgp) serve as the major platforms for HK and FXII adhesion. These interactions were confirmed by direct binding tests using microplate-immobilized gingipains and biotinylated contact factors. Other bacterial cell surface components such as fimbriae and lipopolysaccharide were also found to contribute to the binding of contact factors, particularly PK. Analysis of kinin release in plasma upon contact with P. gingivalis showed that the bacterial surface-dependent mechanism is complementary to the previously described kinin generation system dependent on HK and PK proteolytic activation by the gingipains. We also found that several P. gingivalis clinical isolates differed in the relative significance of these two mechanisms of kinin production. Taken together, these data show the importance of this specific type of bacterial surface-host homeostatic system interaction in periodontal infections.
Abstract:
Ewaso Incident Reporting System: reports on human-elephant interaction in Laikipia
Abstract:
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.
Abstract:
This paper predicts speech synthesis, speech recognition, and speaker recognition technology for the year 2001, and it describes the most important research problems to be solved in order to arrive at these ultimate synthesis and recognition systems. The problems for speech synthesis include natural and intelligible voice production, prosody control based on meaning, capability of controlling synthesized voice quality and choosing individual speaking style, multilingual and multidialectal synthesis, choice of application-oriented speaking styles, capability of adding emotion, and synthesis from concepts. The problems for speech recognition include robust recognition against speech variations, adaptation/normalization to variations due to environmental conditions and speakers, automatic knowledge acquisition for acoustic and linguistic modeling, spontaneous speech recognition, naturalness and ease of human-machine interaction, and recognition of emotion. The problems for speaker recognition are similar to those for speech recognition. The research topics related to all these techniques include the use of articulatory and perceptual constraints and evaluation methods for measuring the quality of technology and systems.
Abstract:
New low-cost sensors and new open, free libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, human detection, and gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and to track the fingers is proposed. This new method is based on point clouds from RGBD range images. It does not require visual markers, camera calibration, knowledge of the environment, or complex and expensive acquisition systems. Furthermore, this method has been implemented to create a human interface for moving a robot hand. The human hand is recognized and the movement of the fingers is analyzed. Afterwards, the movement is imitated by a Barrett hand, using communication events programmed in ROS.
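The ROS-based imitation with the Barrett hand is not reproduced here; the sketch below only illustrates the kind of marker-free, calibration-light hand segmentation from a depth image that such a method can build on. The pinhole intrinsics and the assumption that the hand is the object closest to the camera are illustrative, not taken from the paper.

```python
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (metres) to a 3-D point cloud using assumed
    pinhole intrinsics (typical RGBD-camera defaults, not from the paper)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                     # drop invalid (zero-depth) pixels

def segment_hand(points, band=0.10):
    """Assume the hand is the closest object to the camera: keep points within
    `band` metres of the minimum depth."""
    z_min = points[:, 2].min()
    return points[points[:, 2] < z_min + band]

# Toy example with a synthetic depth image: a near "hand" blob on a far background.
depth = np.full((480, 640), 1.5)
depth[200:280, 300:380] = 0.6
hand = segment_hand(depth_to_points(depth))
print(hand.shape)
```

In a full pipeline, the segmented hand cloud would then be analyzed for finger positions and the resulting joint commands published to the robot hand over ROS topics.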