863 results for Human-robot interaction
Abstract:
in RoboCup 2007: Robot Soccer World Cup XI
Abstract:
Public Display Systems (PDS) have an increasingly strong presence in our cities. These systems provide information and advertising specifically tailored to audiences in spaces such as airports, train stations, and shopping centers. A large number of public displays are also deployed for entertainment. Designing and prototyping PDS can be a laborious, complex, and costly task. This dissertation focuses on the design and evaluation of PDS in early development phases, with the aim of enabling low-effort, rapid design and evaluation of interactive PDS. The study centers on the IPED Toolkit, a tool for designing, prototyping, and evaluating public display systems by replicating real-world scenes in the lab. This research aims to identify the benefits and drawbacks of different means of placing overlays/virtual displays over panoramic video footage recorded at real-world locations. The means of interaction studied in this work are, on the one hand, the keyboard and mouse and, on the other, the tablet with two different techniques of use. To carry out this study, an Android application was developed that allows users to interact with the IPED Toolkit using the tablet; additionally, the toolkit itself was modified and adapted to tablets using different web technologies. Finally, a user study compares the different means of interaction.
Abstract:
This research project focuses on the contemporary eagle-taming falconry practised by the Altaic Kazakh animal-herding society in Bayan Ulgii Province in Western Mongolia. It aims to contribute both theoretical and empirical criteria for the cultural preservation of Asian falconry. This cultural as well as environmental discourse is illustrated with concentrated field research framed by ecological anthropology and ethno-ornithology, from the viewpoints of “Human-Animal Interaction (HAI)” and “Human-Animal Behavior (HAB)”. Part I (Chapters 2 & 3) explores ethno-archaeological and ethno-ornithological dimensions through interpretive research on archaeological artefacts that trace the historical depth of Asian falconry culture. Part II (Chapters 4 & 5) provides an extensive ethnographic narrative of Altaic Kazakh falconry, which is the central part of this research project. The “Traditional Art and Knowledge (TAK)” of human-raptor interaction, comprising the entire cycle of capture, perch, feeding, training, hunting, and release, is presented with specific emphasis on its relation to the environmental and societal context. Traditional falconry as an integral part of the nomadic lifestyle faces critical problems today, and action is needed to prevent the complete disappearance of this outstanding indigenous cultural heritage. Part III (Chapters 6 & 7) thus focuses on the cultural sustainability of Altaic Kazakh falconry. Changing livelihoods, sedentarisation, and decontextualisation are identified as the major threats. The role of Golden Eagle Festivals is critically analysed with regard to both positive and negative impacts. This part also contributes to the academic definition of eagle falconry as an intangible cultural heritage, provides scientific criteria for a preservation master plan, and seeks to strengthen local resilience by pointing to the successive actions needed for conservation.
This research project concludes that the cultural sustainability of Altaic Kazakh falconry needs to be supported from three theoretical angles: (1) cultural affairs for protection, based on the concept of nature-guardianship in its cultural domain; (2) sustainable development and improvement of animal-herding productivity and herders' livelihoods; and (3) natural resource management, especially supporting the population of Golden Eagles, their potential prey animals, and their nesting environment.
Abstract:
One of the main challenges for developers of new human-computer interfaces is to provide a more natural way of interacting with computer systems, avoiding excessive use of hand and finger movements. This also provides a valuable alternative communication pathway for people with motor disabilities. This paper describes the construction of a low-cost eye tracker using a fixed-head setup: a webcam, a laptop, and an infrared lighting source were used together with a simple frame to fix the user's head. Furthermore, the various image processing techniques used to locate the centre of the pupil, and the different methods used to calculate the point of gaze, are discussed in detail. An overall accuracy of 1.5 degrees was obtained while keeping the hardware cost of the device below 100 euros.
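The abstract mentions image processing techniques for locating the centre of the pupil. As a minimal sketch (not the authors' actual pipeline): under infrared illumination the pupil appears as a dark blob, so a common first step is to threshold the dark pixels and take their centroid. The threshold value and the toy frame below are illustrative assumptions.

```python
# Hedged sketch: estimate the pupil centre as the centroid of pixels darker
# than a threshold. Real pipelines add blob filtering, corneal-reflection
# removal, and ellipse fitting; this shows only the core idea.

def pupil_centre(gray, threshold=50):
    """Return the (row, col) centroid of pixels darker than `threshold`."""
    rows = cols = count = 0
    for r, line in enumerate(gray):
        for c, value in enumerate(line):
            if value < threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None  # no pupil candidate found
    return rows / count, cols / count

# Toy 5x5 grayscale frame: a dark 2x2 "pupil" on a bright background.
frame = [
    [200, 200, 200, 200, 200],
    [200,  10,  10, 200, 200],
    [200,  10,  10, 200, 200],
    [200, 200, 200, 200, 200],
    [200, 200, 200, 200, 200],
]
print(pupil_centre(frame))  # centroid of the dark block: (1.5, 1.5)
```

Mapping this pupil position to a point of gaze then requires a calibration step, e.g. a polynomial fit against known on-screen targets.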
Abstract:
The interface between humans and technology is a rapidly changing field. In particular, as technological methods have improved dramatically, interaction has become possible that could only be speculated about even a decade earlier. This interaction can, though, take a wide range of forms. Standard buttons and dials with televisual feedback are perhaps the most common example. But now virtual reality systems, wearable computers and, most of all, implant technology are raising a completely new concept, namely a symbiosis of human and machine. It is no longer sensible simply to consider how a human interacts with a machine, but rather how the human-machine symbiotic combination interacts with the outside world. In this paper we look at some of the recent approaches, putting implant technology in context. We also consider some specific practical examples which may well alter the way we look at this symbiosis in the future. The main area of interest for symbiotic studies is clearly implant technology, particularly where a connection is made between technology and the human brain and/or nervous system. Often pilot tests and experimentation have been carried out a priori to investigate the eventual possibilities before human subjects are themselves involved. Some of the more pertinent animal studies are discussed briefly here. The paper, however, concentrates on human experimentation, in particular that carried out by the authors themselves: firstly to indicate what is possible now with available technology, but perhaps more importantly to show what might be possible with such technology in the future, and how this may well have extensive social effects.
The driving force behind the integration of technology with humans on a neural level has historically been to restore lost functionality in individuals who have suffered neurological trauma such as spinal cord damage, or who suffer from a debilitating disease such as amyotrophic lateral sclerosis. Very few would argue against the development of implants that enable such people to control their environment, or some aspect of their own body functions. Indeed, in the short term this technology has applications in the amelioration of symptoms for the physically impaired, such as bestowing alternative senses on a blind or deaf individual. However, the issue becomes distinctly more complex when it is proposed that such technology be used on those with no medical need who instead wish to enhance and augment their own bodies, particularly in terms of their mental attributes. These issues are discussed here in the light of practical experimental test results and their ethical consequences.
Abstract:
This paper describes experiments relating to the perception of the roughness of simulated surfaces via the haptic and visual senses. Subjects used a magnitude estimation technique to judge the roughness of “virtual gratings” presented via a PHANToM haptic interface device, and a standard visual display unit. It was shown that under haptic perception, subjects tended to perceive roughness as decreasing with increased grating period, though this relationship was not always statistically significant. Under visual exploration, the exact relationship between spatial period and perceived roughness was less well defined, though linear regressions provided a reliable approximation to individual subjects’ estimates.
Examining the relationships between Holocene climate change, hydrology, and human society in Ireland
Abstract:
This thesis explores human-environment interactions during the Mid-Late Holocene in raised bogs in central Ireland. The raised bogs of central Ireland are widely recognised for their considerable palaeoenvironmental and archaeological resources: research over the past few decades has established the potential of such sites to preserve sensitive records of Holocene climatic variability, expressed as changes in bog surface wetness (BSW); meanwhile, archaeological investigations over the past century have uncovered hundreds of peatland archaeological features dating from the Neolithic through to the Post-Medieval period, including wooden trackways, platforms, and deposits of high-status metalwork. Previous studies have attempted to explore the relationship between records of past environmental change and the occurrence of peatland archaeological sites, reaching varying conclusions. More recently, environmentally deterministic models of human-environment interaction in Irish raised bogs at the regional scale have been explicitly tested, leading to the conclusion that there is no relationship between BSW and past human activity. These relationships are examined in more detail on a site-by-site basis in this thesis. To that end, testate amoebae-derived BSW records were produced from nine milled former raised bogs in central Ireland with known and dated archaeological records. Relationships between BSW records and environmental conditions within the study area were explored both through the development of a new central Ireland testate amoebae transfer function and through comparisons between recent BSW records and instrumental weather data. Compilation of BSW records from the nine fossil study sites shows evidence both for climate forcing, particularly during 3200-2400 cal BP, and for considerable inter-site variability. Considerable inter-site variability was also evident in the archaeological records of the same sites.
Whilst comparisons between BSW and archaeological records do not show a consistent linear relationship, examination of the records on a site-by-site basis was shown to reveal interpretatively important contingent relationships. It is therefore concluded that future research on human-environment interactions should focus on individual sites and should utilise theoretical approaches from the humanities, in order to avoid the twin pitfalls of masking important local patterns of change and of environmental determinism.
Abstract:
This paper presents a usability evaluation of the MTE (Ministry of Labor and Employment) website, measuring effectiveness, efficiency, and user satisfaction with the website. The participants were 12 users (7 female and 5 male). The results indicate that, despite the participants' education level and computing experience, many of them had difficulty finding information and would not recommend the site. © 2013 Springer-Verlag Berlin Heidelberg.
Abstract:
The grasping of virtual objects has been an active research field for several years. Existing solutions for realistic grasping rely on special hardware or require time-consuming parameterization. We therefore introduce a flexible grasping algorithm that enables grasping without computationally complex physics. Objects can be grasped and manipulated with multiple fingers, and multiple objects can be manipulated simultaneously with our approach. Through the use of contact sensors, the technique is easily configurable and versatile enough to be used in different scenarios.
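The abstract does not spell out how the contact sensors decide that a grasp has occurred. One common heuristic in physics-free grasping (an illustrative assumption, not necessarily the authors' rule) is to consider an object grasped when at least two sensors touch it with roughly opposing contact normals, i.e. the fingers can pinch it. The angle threshold below is likewise an assumption.

```python
# Hedged sketch: declare a grasp when any pair of unit contact normals
# opposes by at least `min_opposition_deg` degrees (a pinch condition).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_grasped(contact_normals, min_opposition_deg=120.0):
    """True if any pair of unit contact normals opposes by >= the threshold."""
    cos_limit = math.cos(math.radians(min_opposition_deg))
    for i in range(len(contact_normals)):
        for j in range(i + 1, len(contact_normals)):
            if dot(contact_normals[i], contact_normals[j]) <= cos_limit:
                return True
    return False

# Thumb pressing from +x, index finger from -x: opposing normals -> grasped.
print(is_grasped([(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]))  # True
# Two fingers pushing from the same side cannot hold the object.
print(is_grasped([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]))   # False
```

While the grasp condition holds, the object can simply be parented to the hand's transform, which is what makes the approach cheap compared to full physics simulation.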
Abstract:
Tracking the user's visual attention is a fundamental aspect of the novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communication with virtual and real agents benefit greatly from analysis of the user's visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers and are hence restricted to interpreting two-dimensional fixations relative to a defined area of projection. The study presented in this article compares the precision, accuracy, and application performance of two binocular eye tracking devices, and compares two algorithms that derive the depth information required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used to disambiguate partly occluded objects.
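One standard way to derive the depth information the abstract mentions (offered here as a sketch, not necessarily one of the two algorithms the article compares) is to intersect the two gaze rays, one per eye: since measured rays rarely intersect exactly, the 3D point of regard is taken as the midpoint of the shortest segment connecting them. The eye positions and target below are illustrative.

```python
# Hedged sketch: 3D point of regard from binocular gaze rays, computed as
# the midpoint of the common perpendicular between the two rays.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def gaze_point_3d(p1, d1, p2, d2):
    """Midpoint of the closest approach of rays p1 + t*d1 and p2 + s*d2."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None  # rays (nearly) parallel: no usable vergence
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))  # closest point on the left-eye ray
    q2 = add(p2, scale(d2, s))  # closest point on the right-eye ray
    return scale(add(q1, q2), 0.5)

# Eyes 6 cm apart, both verging on a target 40 cm straight ahead.
left, right = (-0.03, 0.0, 0.0), (0.03, 0.0, 0.0)
target = (0.0, 0.0, 0.4)
print(gaze_point_3d(left, sub(target, left), right, sub(target, right)))
```

In practice, noise in the measured gaze directions makes the recovered depth increasingly uncertain with distance, which is why the article's comparison of devices and algorithms matters for 3D selection tasks.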
Abstract:
Recently, stable markerless 6-DOF video-based hand-tracking devices have become available. These devices simultaneously track the positions and orientations of both of the user's hands in different postures at 25 or more frames per second. Such hand-tracking allows the human hands to be used as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, a solution has to be found for coupling and decoupling a virtual object's movements to and from the user's hand (i.e. grabbing and releasing). In this paper, we introduce a novel technique for efficiently grabbing and releasing objects with two hands and intuitively manipulating them in the virtual space. This technique is integrated into a novel 3D interface for virtual manipulation. A user experiment shows the superior applicability of this new technique. Last but not least, we describe how the technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
Abstract:
Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may be conceived not only as sources of sensor information but also, through interaction with their users, as producers of highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of these different sources of information can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario in which drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web, and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting that the approach is sound.
Abstract:
This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems that acquire the user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used to identify and separate the different components found in the human-machine interaction forces. This technique allowed the components related to the navigational commands to be isolated, and a fuzzy logic controller was developed to guide the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, and was well accepted by spinal cord injury patients and clinical staff.
Abstract:
Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a means of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices that is able to recognize a collection of 10 different hand gestures. The system was conceived to be lightweight and to operate in a user-independent manner in real time. The recognition system was implemented on a smartphone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques with lower computational complexity. The system was also used to build a human-robot interface that enables a wheeled robot to be controlled with gestures made with the mobile phone.
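The abstract does not name the classifier, so the sketch below shows one widely used lightweight approach for accelerometer gestures: dynamic time warping (DTW) distance against stored templates, with the nearest template deciding the gesture. The gesture names and sample sequences are illustrative, not from the paper.

```python
# Hedged sketch: nearest-template gesture recognition under DTW, which
# tolerates gestures performed faster or slower than the stored templates.

def dtw_distance(a, b):
    """DTW distance between two 1-D acceleration sequences."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # insertion
                                    cost[i][j - 1],      # deletion
                                    cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

def recognise(sample, templates):
    """Return the name of the template closest to the sample under DTW."""
    return min(templates, key=lambda name: dtw_distance(sample, templates[name]))

templates = {
    "shake": [0, 5, -5, 5, -5, 0],
    "lift":  [0, 1, 2, 3, 4, 5],
}
# A slightly time-warped "lift" should still match the "lift" template.
print(recognise([0, 1, 1, 2, 3, 4, 5], templates))  # lift
```

A real implementation would use 3-axis (or magnitude) signals and per-user or user-independent template sets, but the matching core stays this small, which is consistent with the low computational complexity the abstract claims.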