322 results for Humanoid Robot

at Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

This paper describes experiments conducted to simultaneously tune 15 joints of a humanoid robot. Two Genetic Algorithm (GA) based tuning methods were developed and compared against a hand-tuned solution. The system was tuned to minimise tracking error while achieving smooth joint motion. Joint smoothness is crucial for accurate online ZMP estimation, a prerequisite for a closed-loop, dynamically stable humanoid walking gait. Results in both simulation and on a real robot are presented, demonstrating the superior smoothness performance of the GA-based methods.
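
As an illustration only (nothing below is from the paper), a GA-based tuning loop of this kind could combine a tracking-error term with a joint-smoothness penalty in its fitness function. The sketch tunes the PD gains of a toy one-joint model; the model, weights and GA settings are all placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_joint(kp, kd, dt=0.01, steps=500):
    """Toy 1-DOF joint tracking a sinusoidal reference under PD control."""
    theta, omega = 0.0, 0.0
    t = np.arange(steps) * dt
    ref = 0.5 * np.sin(2 * np.pi * 0.5 * t)
    traj = np.zeros(steps)
    for i in range(steps):
        torque = kp * (ref[i] - theta) - kd * omega
        omega += dt * (torque - 0.5 * omega)      # crude unit inertia plus damping
        theta += dt * omega
        traj[i] = theta
    return ref, traj

def fitness(gains):
    """Penalise tracking error and non-smooth motion (squared second difference)."""
    ref, traj = simulate_joint(*gains)
    tracking = np.mean((ref - traj) ** 2)
    smoothness = np.mean(np.diff(traj, 2) ** 2)
    return tracking + 10.0 * smoothness           # weighting is a guess

# Plain generational GA: truncation selection plus Gaussian mutation.
pop = rng.uniform([1.0, 0.0], [50.0, 5.0], size=(20, 2))
for gen in range(30):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[:10]]
    children = np.abs(parents + rng.normal(0, [1.0, 0.1], size=parents.shape))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(g) for g in pop])]
print("best gains (kp, kd):", best)
```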

Relevance: 100.00%

Abstract:

Traditional approaches to joint control require accurate modelling of the system dynamics of the plant in question. Fuzzy Associative Memory (FAM) control schemes allow adequate control without a model of the system to be controlled. This paper presents a FAM-based joint controller implemented on a humanoid robot. An empirically tuned PI velocity control loop is augmented with this feed-forward FAM, with a considerable reduction in joint position error achieved online and with minimal additional computational overhead.
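
One way to picture the scheme, assuming a FAM indexed by commanded velocity and an estimated load whose defuzzified output is simply added to the PI effort; the membership layout, rule table and gains below are invented for illustration and are not the paper's values:

```python
import numpy as np

class FAM:
    """Tiny fuzzy associative memory: triangular memberships over (velocity
    command, load estimate) and a rule table of feed-forward efforts."""
    def __init__(self, vel_centres, load_centres, rule_table):
        self.vel_centres = np.asarray(vel_centres, dtype=float)
        self.load_centres = np.asarray(load_centres, dtype=float)
        self.rules = np.asarray(rule_table, dtype=float)

    @staticmethod
    def memberships(x, centres):
        width = centres[1] - centres[0]
        return np.clip(1.0 - np.abs(x - centres) / width, 0.0, 1.0)

    def output(self, vel_cmd, load):
        mv = self.memberships(vel_cmd, self.vel_centres)
        ml = self.memberships(load, self.load_centres)
        w = np.outer(mv, ml)                       # rule firing strengths
        return float((w * self.rules).sum() / (w.sum() + 1e-9))

class PIWithFAM:
    """Empirically tuned PI velocity loop plus the FAM feed-forward term."""
    def __init__(self, kp, ki, fam, dt=0.01):
        self.kp, self.ki, self.fam, self.dt = kp, ki, fam, dt
        self.integral = 0.0

    def effort(self, vel_cmd, vel_meas, load):
        err = vel_cmd - vel_meas
        self.integral += err * self.dt
        return self.kp * err + self.ki * self.integral + self.fam.output(vel_cmd, load)

# Illustrative rule table: more effort for larger commanded velocity and load.
fam = FAM(vel_centres=[-1, 0, 1], load_centres=[0, 5, 10],
          rule_table=[[-2, -4, -6], [0, 0, 0], [2, 4, 6]])
ctrl = PIWithFAM(kp=1.5, ki=0.4, fam=fam)
print(ctrl.effort(vel_cmd=0.8, vel_meas=0.5, load=4.0))
```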

Relevance: 100.00%

Abstract:

The GuRoo is a 1.2 m tall, 23 degree-of-freedom humanoid constructed at the University of Queensland for research into humanoid robotics. The key challenge being addressed by the GuRoo project is the development of appropriate learning strategies for control and coordination of the robot's many joints. The development of learning strategies is seen as a way to side-step the inherent intricacy of modelling a multi-DOF biped robot. This paper outlines the approach taken to generate an appropriate control scheme for the joints of the GuRoo. The paper demonstrates the determination of local feedback control parameters using a genetic algorithm. The feedback loop is then augmented by a predictive modulator that learns a form of feed-forward control to overcome the irregular loads experienced at each joint during the gait cycle. The predictive modulator is based on the CMAC architecture. Results from tests on the GuRoo platform show that both systems provide improvements in stability and tracking of joint control.
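
The CMAC-based predictive modulator might be pictured roughly as in the sketch below: a set of offset tilings over gait phase whose weights are trained on the residual feedback effort, so that on later cycles a feed-forward term carries most of the load. The tiling sizes, learning rate and training signal are assumptions made for illustration, not details taken from the paper:

```python
import numpy as np

class PhaseCMAC:
    """CMAC over gait phase in [0, 1): several offset 1-D tilings; the output
    is the mean weight of the active cell in each tiling."""
    def __init__(self, n_tilings=8, n_cells=32, lr=0.1):
        self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
        self.weights = np.zeros((n_tilings, n_cells))
        self.offsets = np.linspace(0.0, 1.0 / n_cells, n_tilings, endpoint=False)

    def _active_cells(self, phase):
        return ((phase + self.offsets) * self.n_cells).astype(int) % self.n_cells

    def predict(self, phase):
        return self.weights[np.arange(self.n_tilings), self._active_cells(phase)].mean()

    def update(self, phase, target):
        cells = self._active_cells(phase)
        error = target - self.predict(phase)
        self.weights[np.arange(self.n_tilings), cells] += self.lr * error

# Train the modulator on the feedback effort left over at each gait phase, so
# that on the next cycle the feed-forward term supplies most of it.
cmac = PhaseCMAC()
for cycle in range(50):
    for phase in np.linspace(0, 1, 100, endpoint=False):
        residual_feedback = 3.0 * np.sin(2 * np.pi * phase)   # stand-in signal
        cmac.update(phase, residual_feedback)
print(cmac.predict(0.25))   # should approach ~3.0
```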

Relevance: 100.00%

Abstract:

The joints of a humanoid robot experience disturbances of markedly different magnitudes during the course of a walking gait. Consequently, simple feedback control techniques track desired joint trajectories poorly. This paper explores the addition of a control system inspired by the architecture of the cerebellum to improve system response. This system learns to compensate for the changes in load that occur during a cycle of motion. The joint compensation scheme, called Trajectory Error Learning, augments the existing feedback control loop on a humanoid robot. Results from tests on the GuRoo platform show an improvement in system response when the system is augmented with the cerebellar compensator.

Relevance: 100.00%

Abstract:

This paper describes a walking gait for a humanoid robot with a distributed control system. The motion for the robot is calculated in real time on a central controller and sent over a CAN bus to the distributed control system. The distributed control system loosely follows the motion patterns from the central controller, while also acting to maintain stability and balance. There is no global feedback control system; the system maintains its balance through the interaction between the central gait and the soft control of the actuators. The paper illustrates a straight-line walking gait and shows the interaction between gait generation and the control system. The analysis of the data shows that successful walking can be achieved without maintaining strict local joint control, and without explicit global balance coordination.
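
A rough sketch of how a central gait generator might stream joint setpoints to distributed controllers over CAN; the frame layout, identifiers and scaling below are invented for illustration (the paper does not specify them), and the actual bus transmission, e.g. via SocketCAN, is omitted:

```python
import math
import struct

def gait_setpoints(t, n_joints=23):
    """Stand-in gait generator: one phase-shifted sinusoid per joint (radians)."""
    return [0.3 * math.sin(2 * math.pi * 0.5 * t + 0.2 * j) for j in range(n_joints)]

def pack_frames(setpoints, base_id=0x200):
    """Pack setpoints into CAN-sized payloads: four joints per 8-byte frame,
    encoded as signed 16-bit milliradians. Layout and IDs are illustrative."""
    frames = []
    for i in range(0, len(setpoints), 4):
        chunk = setpoints[i:i + 4]
        mrad = [int(q * 1000) for q in chunk] + [0] * (4 - len(chunk))
        frames.append((base_id + i // 4, struct.pack("<4h", *mrad)))
    return frames

# Central controller tick: compute the gait in real time and emit the frames;
# the distributed controllers would follow these targets only loosely.
for frame_id, payload in pack_frames(gait_setpoints(t=0.1)):
    print(hex(frame_id), payload.hex())
```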

Relevance: 100.00%

Abstract:

This paper describes a process for evolving a stable humanoid walking gait that is based around parameterised loci of motion. The parameters of the loci are chosen by an evolutionary process based on the criterion that the robot's ZMP (zero moment point) follows a desirable path. The paper illustrates the evolution of a straight-line walking gait. The gait has been tested on a 1.2 m tall humanoid robot (GuRoo). The results, apart from illustrating a successful walk, illustrate the effectiveness of the ZMP path criterion in not only ensuring a stable walk, but also in achieving efficient use of the actuators.
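
The ZMP path criterion could be expressed as a fitness term along the lines of the sketch below: penalise deviation of the simulated ZMP trajectory from a desired path, plus a small actuator-effort term. The desired path, weighting and array shapes are placeholders, not the paper's values; in the evolutionary loop each candidate set of loci parameters would be simulated and scored in this way:

```python
import numpy as np

def zmp_fitness(zmp_xy, desired_xy, joint_torques, effort_weight=0.01):
    """Lower is better: mean ZMP tracking error plus an actuator-effort penalty.

    zmp_xy, desired_xy : (T, 2) arrays of ZMP positions over one gait cycle
    joint_torques      : (T, n_joints) array of commanded torques
    """
    zmp_error = np.mean(np.linalg.norm(zmp_xy - desired_xy, axis=1))
    effort = np.mean(joint_torques ** 2)
    return zmp_error + effort_weight * effort

# Toy check: a perfect ZMP track with zero torques scores (near) zero.
T = 100
path = np.column_stack([np.linspace(0.0, 0.3, T),
                        0.02 * np.sin(np.linspace(0.0, 4.0 * np.pi, T))])
print(zmp_fitness(path, path, np.zeros((T, 12))))
```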

Relevance: 100.00%

Abstract:

We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks for robot vision, collision avoidance and machine learning developed in our lab, when combined, allow for safe interaction with the environment. This works even with noisy control signals, such as the operator's hand acceleration and their electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching and grasping of objects) on the 7-DOF arm.
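
As a toy illustration of one step in such a pipeline (not the learned mapping used in the paper), a raw EMG channel can be rectified and smoothed into an activation envelope and then thresholded into a grasp/release command; the sampling rate, window and threshold below are arbitrary:

```python
import numpy as np

def emg_envelope(raw, fs=1000, window_ms=150):
    """Rectify the EMG signal and smooth it with a moving-average window."""
    win = max(1, int(fs * window_ms / 1000))
    rectified = np.abs(raw - np.mean(raw))              # remove offset, rectify
    return np.convolve(rectified, np.ones(win) / win, mode="same")

def grasp_command(envelope, threshold=0.2):
    """Map muscle activation to a binary grasp/release command per sample."""
    return envelope > threshold

# Synthetic recording with a burst of muscle activity between 1 s and 2 s.
fs = 1000
t = np.arange(0, 3, 1 / fs)
raw = 0.05 * np.random.randn(t.size)
burst = (t > 1) & (t < 2)
raw[burst] += 0.6 * np.random.randn(np.count_nonzero(burst))
cmd = grasp_command(emg_envelope(raw, fs))
print("fraction of samples commanding a grasp:", round(float(cmd.mean()), 2))
```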

Relevance: 80.00%

Abstract:

Locomotion and autonomy in humanoid robots are of utmost importance in integrating them into social and community-service roles. However, the limited range and speed of these robots severely limit their ability to be deployed in situations where a fast response is necessary. While the ability for a humanoid to drive a vehicle would aid in increasing its overall mobility, the ability to mount and dismount a vehicle designed for human occupants is a non-trivial problem. To address this issue, this paper presents an innovative approach to enabling a humanoid robot to mount and dismount a vehicle by proposing a simple mounting bracket involving no moving parts. In conjunction with a purpose-built robotic vehicle, the mounting bracket successfully allowed a humanoid Nao robot to mount, dismount and drive the vehicle.

Relevance: 70.00%

Abstract:

We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles - other objects detected in the visual stream - while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used in both autonomous and tele-operation scenarios.
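
The reach-while-avoid behaviour can be pictured with a simple potential-field style command, recomputed every control step from the current visual detections so that a moved obstacle immediately deflects the motion. This is a generic stand-in, not the controller actually used on the iCub; all positions and gains are invented:

```python
import numpy as np

def reach_velocity(hand, target, obstacles, attract=0.8, repulse=0.05, radius=0.15):
    """End-effector velocity command: a pull toward the target plus a push away
    from every detected obstacle closer than `radius` (units: metres)."""
    v = attract * (target - hand)
    for obs in obstacles:
        offset = hand - obs
        dist = np.linalg.norm(offset)
        if dist < radius:
            v += repulse * offset / (dist ** 2 + 1e-6)   # stronger when closer
    return v

hand = np.array([0.20, 0.05, 0.10])
target = np.array([0.30, 0.10, 0.10])
obstacle = np.array([0.25, 0.07, 0.10])     # e.g. another object seen on the table

print("command, clear table:   ", np.round(reach_velocity(hand, target, []), 3))
print("command, obstacle ahead:", np.round(reach_velocity(hand, target, [obstacle]), 3))
```

Because the command is recomputed from the latest detections at every step, dragging the obstacle into or out of the path simply changes which terms contribute, which is one way to realise the on-the-fly adaptation described above.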

Relevance: 70.00%

Abstract:

We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.

Relevance: 60.00%

Abstract:

Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze, or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience while controlling the actual iCub hardware in real time. To the best of our knowledge, this is the first ever embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment.
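
A minimal sketch of information-gain-driven exploration on a discrete state-action space: keep transition counts (a simple Markov world model), score each action by how uncertain its outcome still is, and always take the most informative one. The discretisation, toy dynamics and the entropy heuristic are assumptions for illustration; the framework in the paper operates on the real iCub under a reactive, constraint-respecting controller:

```python
import numpy as np

def predictive_entropy(counts):
    """Uncertainty about where an action leads, from Laplace-smoothed counts;
    used here as a simple stand-in for expected information gain."""
    p = (counts + 1.0) / (counts.sum() + counts.size)
    return -np.sum(p * np.log(p))

def curious_rollout(n_states=6, n_actions=3, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    # Hidden toy dynamics the agent has to discover: deterministic transitions.
    true_next = rng.integers(0, n_states, size=(n_states, n_actions))
    counts = np.zeros((n_states, n_actions, n_states))   # learned world model
    state, visits = 0, np.zeros(n_states, dtype=int)
    for _ in range(steps):
        gains = [predictive_entropy(counts[state, a]) for a in range(n_actions)]
        action = int(np.argmax(gains))                    # most informative action
        nxt = true_next[state, action]
        counts[state, action, nxt] += 1
        visits[state] += 1
        state = nxt
    return visits

print("state visit counts:", curious_rollout())
```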

Relevance: 60.00%

Abstract:

Scene understanding has been investigated mainly from a visual-information point of view. Recently, depth has provided an extra wealth of information, allowing more geometric knowledge to be fused into scene understanding. Yet to form a holistic view, especially in robotic applications, one can create even more data by interacting with the world. In fact, humans, when growing up, seem to investigate the world around them heavily by haptic exploration. We show an application of haptic exploration on a humanoid robot in cooperation with a learning method for object segmentation. The actions, performed consecutively, improve the segmentation of objects in the scene.

Relevance: 30.00%

Abstract:

Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together, helping and coexisting with humans in daily life. In all of these, a clear need to deal with a more unstructured, changing environment arises. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, the combination of domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable and robust, and requires only a small training set (it was tested with 5 to 10 images per experiment). Additionally, it can generate human-readable programs that can be further customised and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown. It allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
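
A highly simplified, self-contained sketch of the CGP-IP idea, for orientation only: a fixed-length genome encodes a feed-forward graph of image operations, and a (1+4) evolutionary strategy searches for a graph whose binarised output matches a labelled mask. The function set, toy image and parameters are stand-ins; CGP-IP itself uses a much larger set of image-processing primitives and real training images:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny function set over 2-D float arrays (CGP-IP uses far richer primitives).
FUNCS = [
    lambda a, b: (a + b) / 2.0,
    lambda a, b: np.abs(a - b),
    lambda a, b: np.maximum(a, b),
    lambda a, b: np.minimum(a, b),
    lambda a, b: (a > a.mean()).astype(float),    # adaptive threshold on input a
]
N_NODES, N_INPUTS = 12, 1

def random_genome():
    """One gene per node: (function index, source index, source index)."""
    return [(rng.integers(len(FUNCS)), rng.integers(N_INPUTS + i), rng.integers(N_INPUTS + i))
            for i in range(N_NODES)]

def evaluate(genome, image):
    values = [image]
    for f_idx, a_idx, b_idx in genome:
        values.append(FUNCS[f_idx](values[a_idx], values[b_idx]))
    return values[-1] > 0.5                       # binarise the last node's output

def fitness(genome, image, mask):
    """Fraction of pixels where the evolved detector agrees with the label."""
    return float(np.mean(evaluate(genome, image) == mask))

# Toy training data: a bright square on a noisy background, one labelled mask.
img = rng.random((32, 32)) * 0.3
img[8:20, 8:20] += 0.6
mask = np.zeros((32, 32), dtype=bool)
mask[8:20, 8:20] = True

# (1+4) evolutionary strategy with per-gene mutation, as commonly used with CGP.
parent = random_genome()
for gen in range(200):
    children = [[g if rng.random() > 0.1 else random_genome()[i]
                 for i, g in enumerate(parent)] for _ in range(4)]
    parent = max(children + [parent], key=lambda g: fitness(g, img, mask))
print("training accuracy:", fitness(parent, img, mask))
```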