992 results for robot interaction
Abstract:
This article introduces an effective robot design method called remote-brain, in which the brain and the body are separated. The brain is left in the mother environment, by which we mean the environment in which the brain's software is developed, and talks to its body over wireless links. The article also presents a real robot, TUT06-B, based on this method, which includes human-machine interaction, vision systems, a manipulator, etc. It then discusses the robot's path planning method based on the ant colony algorithm in detail, especially the Ant-cycle model, and analyzes the parameters of the algorithm that affect convergence. Finally, it gives the program flow chart of the algorithm.
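The Ant-cycle model mentioned above deposits pheromone only after each ant completes a full path, in proportion to Q / L_k for a path of length L_k. A minimal sketch on a toy graph follows; the graph, α, β, ρ and Q values are illustrative, not taken from the article:

```python
import random

# Toy roadmap as an adjacency dict: node -> {neighbor: distance}.
graph = {
    'A': {'B': 1.0, 'C': 4.0},
    'B': {'A': 1.0, 'C': 1.5, 'D': 3.0},
    'C': {'A': 4.0, 'B': 1.5, 'D': 1.0},
    'D': {'B': 3.0, 'C': 1.0},
}

ALPHA, BETA = 1.0, 2.0   # pheromone vs. heuristic (1/distance) weighting
RHO, Q = 0.5, 1.0        # evaporation rate, deposit constant
tau = {(i, j): 1.0 for i in graph for j in graph[i]}  # initial pheromone

def walk(start, goal, rng):
    """One ant builds a path from start to goal, never revisiting a node."""
    path, node = [start], start
    while node != goal:
        options = [n for n in graph[node] if n not in path]
        if not options:
            return None, float('inf')   # dead end
        weights = [(tau[(node, n)] ** ALPHA) *
                   ((1.0 / graph[node][n]) ** BETA) for n in options]
        node = rng.choices(options, weights=weights)[0]
        path.append(node)
    length = sum(graph[a][b] for a, b in zip(path, path[1:]))
    return path, length

def iterate(n_ants, start, goal, rng):
    """Ant-cycle model: deposit Q / L_k only after complete paths."""
    tours = [walk(start, goal, rng) for _ in range(n_ants)]
    for edge in tau:                       # evaporation on every edge
        tau[edge] *= (1.0 - RHO)
    for path, length in tours:
        if path is None:
            continue
        for a, b in zip(path, path[1:]):   # deposit along the whole path
            tau[(a, b)] += Q / length
            tau[(b, a)] += Q / length
    return min(tours, key=lambda t: t[1])

rng = random.Random(0)
best = min((iterate(10, 'A', 'D', rng) for _ in range(20)), key=lambda t: t[1])
print(best)   # shortest A -> D path found by the colony
```

The convergence behaviour discussed in the article corresponds to the interplay of ρ (too low and old trails dominate) and α/β (too high an α and the colony locks onto an early path).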
Abstract:
M. J. Neal and J. Timmis. Timidity: A useful mechanism for robot control? Informatica, Special Issue on Perception and Emotion Based Control, 27(4):197-204, 2003.
Abstract:
In this study we investigate the coordination between rhythmic flexion-extension (FE) and supination-pronation (SP) movements at the elbow joint-complex, while manipulating the intersegmental dynamics by means of a 2-degree-of-freedom (df) robot arm. We hypothesized that constraints imposed by the structure of the neuromuscular-skeletal system would (1) result in predominant pattern(s) of coordination in the absence of interaction torques and (2) influence the capabilities of participants to exploit artificially induced interaction torques. Two experiments were conducted in which different conditions of interaction torques were applied on the SP axis as a function of FE movements. These conditions promoted different patterns of coordination between the two df. Control trials conducted in the absence of interaction torques revealed that both the in-phase (supination synchronized with flexion) and the anti-phase (pronation synchronized with flexion) patterns were spontaneously established by participants. The predominance of these patterns of coordination is explained in terms of the mechanical action of bi-articular muscles acting at the elbow joint-complex, and in terms of the reflexes that link the activity of the muscles involved. Results obtained in the different conditions of interaction torques revealed that those neuromuscular-skeletal constraints either impede or favor the exploitation of intersegmental dynamics depending on the context. Interaction torques were indeed found to be exploited to a greater extent in conditions in which the profiles of interaction torques favored one of the two predominant patterns of coordination (i.e., in-phase or anti-phase) as opposed to other patterns of coordination (e.g., 90 degrees or 270 degrees). These results are discussed in relation to recent studies reporting exploitation of interaction torques in the context of rhythmic movements.
Abstract:
This research aims to understand the fundamental dynamic behavior of servo-controlled machinery in response to various types of sensory feedback. As an example of such a system, we study robot force control, a scheme which promises to greatly expand the capabilities of industrial robots by allowing manipulators to interact with uncertain and dynamic tasks. Dynamic models are developed which allow the effects of actuator dynamics, structural flexibility, and workpiece interaction to be explored in the frequency and time domains. The models are used first to explain the causes of robot force control instability, and then to find methods of improving force control performance.
Abstract:
This thesis presents the development of hardware, theory, and experimental methods to enable a robotic manipulator arm to interact with soils and estimate soil properties from interaction forces. Unlike the majority of robotic systems interacting with soil, our objective is parameter estimation, not excavation. To this end, we design our manipulator with a flat plate for easy modeling of interactions. By using a flat plate, we take advantage of the wealth of research on the similar problem of earth pressure on retaining walls. There are a number of existing earth pressure models. These models typically provide estimates of force whose relation to the true force is uncertain. A recent technique, known as numerical limit analysis, provides upper and lower bounds on the true force. Predictions from the numerical limit analysis technique are shown to be in good agreement with other accepted models. Experimental methods for plate insertion, soil-tool interface friction estimation, and control of applied forces on the soil are presented. In addition, a novel graphical technique for inverting the soil models is developed, which is an improvement over standard nonlinear optimization. This graphical technique utilizes the uncertainties associated with each set of force measurements to obtain all possible parameters which could have produced the measured forces. The system is tested on three cohesionless soils, two in a loose state and one in both a loose and a dense state. The results are compared with friction angles obtained from direct shear tests. The results highlight a number of key points. Common assumptions are made in soil modeling, most notably the Mohr-Coulomb failure law and perfectly plastic behavior. In the direct shear tests, a marked dependence of friction angle on the normal stress at low stresses is found. This has ramifications for any study of friction done at low stresses.
In addition, gradual failures are often observed for vertical tools and tools inclined away from the direction of motion. After accounting for the change in friction angle at low stresses, the results show good agreement with the direct shear values.
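The graphical inversion idea can be sketched generically: keep every friction angle whose predicted force agrees, within measurement uncertainty, with all of the measurements, and report the surviving set rather than a single optimum. The Rankine passive-pressure coefficient below is a textbook stand-in for the thesis's numerical limit-analysis bounds, and all soil and plate parameters are invented:

```python
import math

def rankine_kp(phi_deg):
    """Classical Rankine passive earth-pressure coefficient."""
    phi = math.radians(phi_deg)
    return (1 + math.sin(phi)) / (1 - math.sin(phi))

def predicted_force(phi_deg, gamma=16e3, depth=0.1, width=0.2):
    """Resultant passive force (N) on a vertical plate, Rankine theory.
    gamma: soil unit weight (N/m^3); depth, width: plate dimensions (m)."""
    return 0.5 * rankine_kp(phi_deg) * gamma * depth**2 * width

def consistent_angles(measurements, tol=0.15):
    """All friction angles (deg) whose prediction lies within +/-tol
    (relative) of every measured force: the set-valued inverse."""
    candidates = [phi / 10.0 for phi in range(200, 451)]  # 20.0 .. 45.0 deg
    return [phi for phi in candidates
            if all(abs(predicted_force(phi) - m) <= tol * m
                   for m in measurements)]

# Three noisy "measurements" at the same depth on a soil with phi ~ 30 deg
measured = [predicted_force(30.0) * k for k in (0.95, 1.0, 1.08)]
angles = consistent_angles(measured)
print(min(angles), max(angles))   # interval of admissible friction angles
```

The output is an interval of friction angles rather than a point estimate, which is the advantage the thesis claims over standard nonlinear optimization: every parameter consistent with the measured forces is retained.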
Abstract:
Augmented Reality (AR) is an emerging technology that utilizes computer vision methods to overlay virtual objects onto the real world scene so as to make them appear to co-exist with the real objects. Its main objective is to enhance the user’s interaction with the real world by providing the right information needed to perform a certain task. Applications of this technology in manufacturing include maintenance, assembly and telerobotics. In this paper, we explore the potential of teaching a robot to perform an arc welding task in an AR environment. We present the motivation, features of a system using the popular ARToolkit package, and a discussion on the issues and implications of our research.
Abstract:
It is usually expected that the intelligent controlling mechanism of a robot is a computer system. Research is however now ongoing in which biological neural networks are being cultured and trained to act as the brain of an interactive real-world robot - thereby either completely replacing or operating in a cooperative fashion with a computer system. Studying such neural systems can give a distinct insight into biological neural structures, and such research therefore has immediate medical implications. In particular, the use of rodent primary dissociated cultured neuronal networks for the control of mobile 'animats' (artificial animals; a contraction of 'animal' and 'materials') is a novel approach to discovering the computational capabilities of networks of biological neurones. A dissociated culture of this nature requires appropriate embodiment in some form, to enable development in a controlled environment within which stimuli may be received via sensory data but ultimate influence over motor actions is retained. The principal aims of the present research are to assess the computational and learning capacity of dissociated cultured neuronal networks with a view to advancing network-level processing of artificial neural networks. This will be approached by the creation of an artificial hybrid system (animat) involving closed-loop control of a mobile robot by a dissociated culture of rat neurons. This closed-loop interaction with the environment through both sensing and effecting will enable investigation of its learning capacity. This paper details the components of the overall animat closed-loop system and reports on the evaluation of the results from the experiments being carried out with regard to robot behaviour.
Abstract:
Stroke is a leading cause of disability, particularly affecting older people. Although the causes of stroke are well known and it is possible to reduce these risks, there is still a need to improve rehabilitation techniques. Early studies in the literature suggest that early intensive therapies can enhance a patient's recovery. According to the physiotherapy literature, attention and motivation are key factors for motor relearning following stroke. Machine-mediated therapy offers the potential to improve the outcome of stroke patients engaged in rehabilitation for upper limb motor impairment. Haptic interfaces are a particular group of robots that are attractive due to their ability to safely interact with humans. They can enhance traditional therapy tools, provide therapy "on demand" and can present accurate objective measurements of a patient's progression. Our recent studies suggest the use of tele-presence and VR-based systems can potentially motivate patients to exercise for longer periods of time. The creation of human-like trajectories is essential for retraining upper limb movements of people who have lost manipulation functions following stroke. By coupling models of human arm movement with haptic interfaces and VR technology it is possible to create a new class of robot-mediated neurorehabilitation tools. This paper provides an overview of different approaches to robot-mediated therapy and describes a system based on haptics and virtual reality visualisation techniques, where particular emphasis is given to different control strategies for interaction derived from minimum jerk theory and the aid of virtual and mixed reality based exercises.
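The minimum jerk theory mentioned above has a well-known closed form for point-to-point reaching (Flash and Hogan): x(t) = x0 + (xf − x0)(10s³ − 15s⁴ + 6s⁵), with s = t/T, which gives zero velocity and zero acceleration at both endpoints. A minimal sketch of such a reference trajectory, which a haptic controller could servo toward point by point:

```python
def min_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a point-to-point reach of
    duration T from x0 to xf (Flash & Hogan closed form)."""
    s = min(max(t / T, 0.0), 1.0)   # normalised time, clamped to [0, 1]
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

# A 0.3 m reach over 1 s, sampled at 100 Hz: starts and ends at rest,
# passes through the midpoint (0.15 m) at half time by symmetry.
xs = [min_jerk(0.0, 0.3, 1.0, t / 100) for t in range(101)]
print(xs[0], xs[50], xs[100])
```

Feeding this profile to the haptic interface as the desired trajectory is one common way such systems generate the "human-like trajectories" the abstract calls essential.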
Abstract:
The AURORA project is investigating the possibility of using a robotic platform as a therapy aid for children with autism. Because of the nature of this disability, the robot could be beneficial in its ability to present the children with a safe and comfortable environment and allow them to explore and learn about the interaction space involved in social situations. The robotic platform is able to present information along a limited number of channels and in a manner which the children are familiar with from television and cartoons. Also, the robot is potentially able to adapt its behaviour and to allow the children to develop at their own rates. Initial trial results are presented and discussed, along with the rationale behind the project and its goals and motivations. The trial procedure and methodology are explained and future work is highlighted.
Abstract:
Increasingly, socially intelligent agents (software or robotic) are used in education, rehabilitation and therapy. This paper discusses the role of interactive, mobile robots as social mediators in the particular domain of autism therapy. This research is part of the AURORA project, which studies how mobile robots can be used to teach children with autism basic interaction skills that are important in social interactions among humans. Results from a particular series of trials involving pairs of two children and a mobile robot are described. The results show that the scenario with pairs of children and a robot creates a very interesting social context which gives rise to a variety of different social and non-social interaction patterns, demonstrating the specific problems but also the abilities of children with autism in social interactions. Future work will include a closer analysis of interactional structure in human-human and robot-human interaction. We outline a particular framework that we are investigating.
Abstract:
In this paper a look is taken at how the use of implant and electrode technology can be employed to create biological brains for robots, to enable human enhancement and to diminish the effects of certain neural illnesses. In all cases the end result is to increase the range of abilities of the recipients. An indication is given of a number of areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking a biological brain directly with computer technology. The emphasis is placed on practical scientific studies that have been and are being undertaken and reported on. The area of focus is the use of electrode technology, where either a connection is made directly with the cerebral cortex and/or nervous system or where implants into the human body are involved. The paper also considers robots that have biological brains in which human neurons can be employed as the sole thinking machine for a real world robot body.
Abstract:
Severely disabled children have little chance of environmental and social exploration and discovery. This lack of interaction and independence may foster the idea that they are unable to do anything by themselves. In an attempt to help children in this situation, educational robotics can offer an aid, since it can provide them with a certain degree of independence in exploring the environment. The system developed in this work allows the child to transmit commands to a robot through myoelectric and movement sensors. The sensors are placed on the child's body to obtain information from body inclination and muscle contraction, thus allowing the child to command, through wireless communication, the mobile entertainment robot to carry out tasks such as playing with objects and drawing. In this paper, the details of the robot design and control architecture are presented and discussed. With this system, disabled children achieve better cognitive development and social interaction, offsetting to some extent the negative effects of their disabilities. © 2012 IEEE.
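A purely hypothetical sketch of how body inclination and myoelectric readings might be mapped to discrete robot commands; the thresholds, signal names and command set are invented, not taken from the paper:

```python
def command(tilt_deg, emg_rms, emg_on=0.15, tilt_dead=10.0):
    """Map a body-inclination angle (degrees) and a muscle-activity level
    (RMS of the myoelectric signal) to a discrete robot command.
    emg_on: contraction threshold; tilt_dead: dead zone half-width."""
    if emg_rms >= emg_on:
        return "act"        # muscle contraction triggers the current action
    if tilt_deg > tilt_dead:
        return "right"      # lean right past the dead zone: steer right
    if tilt_deg < -tilt_dead:
        return "left"       # lean left past the dead zone: steer left
    return "stop"           # inside the dead zone, muscles relaxed: hold

print(command(2.0, 0.05))   # "stop"
print(command(25.0, 0.05))  # "right"
print(command(0.0, 0.3))    # "act"
```

A dead zone around the upright posture, as sketched here, is a common way to keep involuntary small movements from producing spurious commands.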
Abstract:
In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot, therefore, is a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object: the physical attributes that may influence a grasp. Humans can solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding grasps for an object using information about already known objects. But humans can select the best grasp from a vast repertoire not only by considering the physical attributes of the object to grasp, but also to obtain a certain effect. This is why, in our case, the study in the area of robot manipulation is focused on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network to encode the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing sensor noise to be dealt with, encodes notions of causality, and provides a unified network for learning.
Since the current network is hand-built from human expert knowledge, it is of great interest to implement an automated method to learn its structure: as more tasks and object features are introduced in the future, a complex network designed only from expert knowledge can become unreliable. Because structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the expert-modeled network, implement a feasible structure learning approach and compare the results with the network designed by the expert in order to possibly enhance it.
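Score-based structure learning of the kind the thesis considers can be illustrated with a tiny example: compare the BIC score of a candidate edge against the no-edge alternative on data. The variables (a binary "task" and two binary sensor features) and the synthetic data are invented for illustration, not taken from the thesis:

```python
import math
import random
from collections import Counter

rng = random.Random(1)

# Synthetic dataset: "wide_grasp" depends on "task"; "heavy" is independent.
TASK, WIDE, HEAVY = 0, 1, 2
data = []
for _ in range(500):
    task = rng.random() < 0.5
    wide = rng.random() < (0.8 if task else 0.2)   # real dependency
    heavy = rng.random() < 0.5                     # pure noise
    data.append((task, wide, heavy))

def bic_edge(child, parent=None):
    """BIC score of a binary child with zero or one binary parent:
    max log-likelihood minus (k/2) * log(N) complexity penalty."""
    joint, marg = Counter(), Counter()
    for row in data:
        p = row[parent] if parent is not None else None
        joint[(p, row[child])] += 1
        marg[p] += 1
    ll = sum(n * math.log(n / marg[p]) for (p, _c), n in joint.items())
    n_params = 2 if parent is not None else 1   # free probabilities
    return ll - 0.5 * n_params * math.log(len(data))

# The real dependency survives the BIC penalty; the spurious one usually doesn't.
print(bic_edge(WIDE, TASK) > bic_edge(WIDE))    # keep task -> wide_grasp
print(bic_edge(HEAVY, TASK) > bic_edge(HEAVY))  # typically drop task -> heavy
```

A greedy structure learner repeats exactly this comparison over candidate edges; the weaknesses the thesis alludes to (local optima, sensitivity to sample size) arise because each such decision is made locally.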
Abstract:
We have developed a haptic-based approach for retraining of interjoint coordination following stroke called time-independent functional training (TIFT) and implemented this mode in the ARMin III robotic exoskeleton. The ARMin III robot was developed by Drs. Robert Riener and Tobias Nef at the Swiss Federal Institute of Technology Zurich (Eidgenossische Technische Hochschule Zurich, or ETH Zurich), in Zurich, Switzerland. In the TIFT mode, the robot maintains arm movements within the proper kinematic trajectory via haptic walls at each joint. These arm movements focus training of interjoint coordination with highly intuitive real-time feedback of performance; arm movements advance within the trajectory only if their movement coordination is correct. In initial testing, 37 nondisabled subjects received a single session of learning of a complex pattern. Subjects were randomized to TIFT, visual demonstration, or moving along with the robot as it moved through the pattern (time-dependent [TD] training). We examined visual demonstration to separate the effects of action observation on motor learning from the effects of the two haptic guidance methods. During these training trials, TIFT subjects reduced error and interaction forces between the robot and arm, while TD subject performance did not change. All groups showed significant learning of the trajectory during unassisted recall trials, but we observed no difference in learning between groups, possibly because this learning task is dominated by vision. Further testing in stroke populations is warranted.
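The haptic walls described above can be sketched as a one-sided stiffness/damping force that acts only when a joint leaves its allowed corridor; inside the corridor the arm moves freely, which is what makes progress along the trajectory contingent on correct coordination. The gains and corridor width below are illustrative, not ARMin III's:

```python
def wall_torque(q, qd, q_min, q_max, k=40.0, b=2.0):
    """Restoring torque (N*m) for joint angle q (rad) with velocity qd
    (rad/s) when q leaves the corridor [q_min, q_max]; zero inside it.
    k: wall stiffness, b: wall damping (damps only motion into the wall)."""
    if q > q_max:                          # pressing on the upper wall
        return -k * (q - q_max) - b * max(qd, 0.0)
    if q < q_min:                          # pressing on the lower wall
        return -k * (q - q_min) - b * min(qd, 0.0)
    return 0.0                             # inside the corridor: free movement

print(wall_torque(0.0, 0.0, -0.1, 0.1))    # inside the corridor: no torque
print(wall_torque(0.2, 0.5, -0.1, 0.1))    # pushed back toward the corridor
```

Damping only the wall-penetrating velocity component, as here, is a standard way to keep a virtual wall from feeling sticky on exit; the interaction forces the TIFT subjects reduced correspond to how hard they pressed on these walls.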