480 results for bilateral haptic robot control
Abstract:
Traditional approaches to joint control require accurate modelling of the dynamics of the plant in question. Fuzzy Associative Memory (FAM) control schemes allow adequate control without a model of the system to be controlled. This paper presents a FAM-based joint controller implemented on a humanoid robot. An empirically tuned PI velocity control loop is augmented with this feed-forward FAM, achieving a considerable reduction in joint position error online and with minimal additional computational overhead.
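The scheme this abstract describes can be sketched as a PI velocity loop plus a fuzzy-associative-memory feed-forward term. The rule table, membership functions, and gains below are invented for illustration and are not the paper's tuned values.

```python
import numpy as np

# Hypothetical FAM: a coarse rule table indexed by fuzzified joint position
# error and commanded velocity; cell (i, j) holds the feed-forward effort
# for error set i and velocity set j.
ERROR_BINS = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # fuzzy set centres (rad)
VEL_BINS = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])     # fuzzy set centres (rad/s)
FAM_TABLE = 0.3 * ERROR_BINS[:, None] + 0.1 * VEL_BINS[None, :]

def triangular_membership(x, centres):
    """Normalised degree of membership of x in each triangular fuzzy set."""
    width = centres[1] - centres[0]
    mu = np.clip(1.0 - np.abs(x - centres) / width, 0.0, None)
    s = mu.sum()
    return mu / s if s > 0 else mu

def fam_feedforward(pos_error, vel_cmd):
    """Product inference over all fired rules, centroid-style defuzzification."""
    firing = np.outer(triangular_membership(pos_error, ERROR_BINS),
                      triangular_membership(vel_cmd, VEL_BINS))
    return float((firing * FAM_TABLE).sum())

class PIVelocityLoop:
    def __init__(self, kp=2.0, ki=0.5, dt=0.01):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

    def step(self, vel_error):
        self.integral += vel_error * self.dt
        return self.kp * vel_error + self.ki * self.integral

# Combined command: empirically tuned PI feedback plus the FAM feed-forward.
pi = PIVelocityLoop()
effort = pi.step(vel_error=0.2) + fam_feedforward(pos_error=0.1, vel_cmd=1.0)
```

The appeal, as the abstract notes, is that the table lookup adds a corrective effort with no plant model and very little computation per control cycle.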
Abstract:
The GuRoo is a 1.2 m tall, 23-degree-of-freedom humanoid constructed at the University of Queensland for research into humanoid robotics. The key challenge being addressed by the GuRoo project is the development of appropriate learning strategies for the control and coordination of the robot's many joints. The development of learning strategies is seen as a way to side-step the inherent intricacy of modeling a multi-DOF biped robot. This paper outlines the approach taken to generate an appropriate control scheme for the joints of the GuRoo. The paper demonstrates the determination of local feedback control parameters using a genetic algorithm. The feedback loop is then augmented by a predictive modulator that learns a form of feed-forward control to overcome the irregular loads experienced at each joint during the gait cycle. The predictive modulator is based on the CMAC architecture. Results from tests on the GuRoo platform show that both systems provide improvements in stability and tracking of joint control.
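A CMAC-style predictive modulator of the kind described can be sketched as a set of offset tilings over the gait phase, trained online with an LMS rule. The layout below (8 tilings, 32 cells, and a sinusoidal stand-in for the phase-dependent joint load) is assumed for illustration and is not the GuRoo implementation.

```python
import numpy as np

# Minimal CMAC: each tiling quantises the gait phase with a small offset;
# the output is the sum of one weight per tiling, trained toward the
# observed load target with the LMS rule.
class CMAC:
    def __init__(self, n_tilings=8, n_cells=32, lr=0.1):
        self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
        self.weights = np.zeros((n_tilings, n_cells))

    def _active_cells(self, phase):
        # phase in [0, 1); tiling t is shifted by t/8 of a cell width
        return [int((phase + t / (self.n_tilings * self.n_cells))
                    * self.n_cells) % self.n_cells
                for t in range(self.n_tilings)]

    def predict(self, phase):
        return sum(self.weights[t, c]
                   for t, c in enumerate(self._active_cells(phase)))

    def train(self, phase, target):
        error = target - self.predict(phase)
        for t, c in enumerate(self._active_cells(phase)):
            self.weights[t, c] += self.lr * error / self.n_tilings

# Learn a phase-dependent load profile, standing in for gait disturbances.
cmac = CMAC()
for _ in range(200):
    for phase in np.linspace(0.0, 1.0, 50, endpoint=False):
        cmac.train(phase, np.sin(2 * np.pi * phase))
```

After training, the table supplies a feed-forward effort per gait phase, leaving the feedback loop to handle only the residual error, which matches the feed-forward role the abstract describes.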
Abstract:
To date, most quad-rotor aerial robots have been based on flying toys. Although such systems can be used as prototypes, they are not sufficiently robust to serve as experimental robotics platforms. We have developed the X-4 Flyer, a quad-rotor robot with a custom-built chassis and avionics and off-the-shelf motors and batteries, to be a highly reliable experimental platform. The vehicle uses tuned plant dynamics with an onboard embedded attitude controller to stabilise flight. A linear SISO controller was designed to regulate flyer attitude.
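A linear SISO attitude loop of the kind mentioned can be sketched as PD feedback around a torque-driven double integrator for a single axis. The inertia, gains, and time step below are assumed values for illustration, not the X-4 Flyer's.

```python
# One-axis attitude regulation: roll modelled as a double integrator
# (torque -> angular acceleration), stabilised by a PD law and integrated
# with semi-implicit Euler steps.
def simulate_roll(kp=8.0, kd=4.0, inertia=0.02, dt=0.002, steps=3000,
                  theta0=0.3):
    theta, omega = theta0, 0.0          # initial roll angle (rad), rate (rad/s)
    for _ in range(steps):
        torque = -kp * theta - kd * omega   # PD attitude law
        omega += (torque / inertia) * dt    # integrate dynamics
        theta += omega * dt
    return theta

final = simulate_roll()   # roll angle after 6 s, starting from 0.3 rad
```

With these gains the closed loop is heavily damped, so an initial 0.3 rad disturbance decays to a negligible angle within the simulated interval.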
Abstract:
The joints of a humanoid robot experience disturbances of markedly different magnitudes during the course of a walking gait. Consequently, simple feedback control techniques track desired joint trajectories poorly. This paper explores the addition of a control system inspired by the architecture of the cerebellum to improve system response. This system learns to compensate for the changes in load that occur during a cycle of motion. The joint compensation scheme, called Trajectory Error Learning, augments the existing feedback control loop on a humanoid robot. Results from tests on the GuRoo platform show an improvement in system response when the system is augmented with the cerebellar compensator.
Abstract:
This paper describes a walking gait for a humanoid robot with a distributed control system. The motion for the robot is calculated in real time on a central controller, and sent over CAN bus to the distributed control system. The distributed control system loosely follows the motion patterns from the central controller, while also acting to maintain stability and balance. There is no global feedback control system; the system maintains its balance by the interaction between central gait and soft control of the actuators. The paper illustrates a straight line walking gait and shows the interaction between gait generation and the control system. The analysis of the data shows that successful walking can be achieved without maintaining strict local joint control, and without explicit global balance coordination.
Abstract:
In this paper we explore a recent model-based learning technique, Receding Horizon Locally Weighted Regression (RH-LWR), for learning temporally dependent systems. In particular, this paper investigates the application of RH-LWR to learning control of multiple-input, multiple-output (MIMO) robot systems. RH-LWR is demonstrated by learning joint velocity and position control of a three-degree-of-freedom (DoF) rigid-body robot.
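The locally weighted regression at the core of RH-LWR can be sketched as follows; the receding-horizon machinery is omitted, and the bandwidth and training data are illustrative, not from the paper.

```python
import numpy as np

# Locally weighted regression: for each query point, fit a linear model by
# weighted least squares, with Gaussian weights centred on the query.
def lwr_predict(X, y, x_query, bandwidth=0.3):
    X1 = np.column_stack([np.ones(len(X)), X])            # add bias column
    w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))
    W = np.diag(w)
    beta = np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y)   # weighted LSQ
    return beta[0] + beta[1] * x_query

# Noisy samples of a nonlinear function, as a stand-in for logged robot data.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100)
y = np.sin(X) + 0.05 * rng.standard_normal(100)
pred = lwr_predict(X, y, x_query=1.0)   # close to sin(1.0)
```

Because each prediction fits only the neighbourhood of the query, the method tracks nonlinear dynamics without a global model, which is what makes it attractive for the MIMO control problem the abstract describes.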
Abstract:
It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, it has only been collected from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded.
Abstract:
Draglines are extremely large machines that are widely used in open-cut coal mines for overburden stripping. Since 1994 we have been working toward the development of a computer control system capable of automatically driving a dragline for a large portion of its operating cycle. This has necessitated the development and experimental evaluation of sensor systems, machine models, closed-loop controllers, and an operator interface. This paper describes our steps toward this goal through scale-model and full-scale field experimentation.
Abstract:
Purpose – The purpose of this paper is to describe an innovative compliance control architecture for hybrid multi-legged robots. The approach was verified on the hybrid legged-wheeled robot ASGUARD, which was inspired by quadruped animals. The adaptive compliance controller allows the system to cope with a variety of stairs and very rough terrain, and is also able to move with high velocity on flat ground without changing the control parameters. Design/methodology/approach – The paper shows how this adaptivity results in a versatile controller for hybrid legged-wheeled robots. For the locomotion control we use an adaptive model of motion pattern generators. The control approach takes into account the proprioceptive information of the torques applied to the legs. The controller itself is embedded on an FPGA-based, custom-designed motor control board. Additional proprioceptive inclination feedback is used to make the same controller more robust in terms of stair-climbing capability. Findings – The robot is well suited for disaster mitigation as well as for urban search and rescue missions, where it is often necessary to place sensors or cameras in dangerous or inaccessible areas to give rescue personnel better situational awareness before they enter a possibly dangerous area. A rugged, waterproof and dust-proof body and the ability to swim are additional features of the robot. Originality/value – Contrary to existing approaches, a pre-defined walking pattern for stair-climbing was not used; instead, an adaptive approach based only on internal sensor information was adopted. In contrast to many other walking-pattern-based robots, direct proprioceptive feedback was used to modify the internal control loop, thus adapting the compliance of each leg on-line.
Abstract:
The inspection of marine vessels is currently performed manually. Inspectors use tools (e.g. cameras and devices for non-destructive testing) to detect damaged areas, cracks, and corrosion in large cargo holds, tanks, and other parts of a ship. Due to the size and complex geometry of most ships, ship inspection is time-consuming and expensive. The EU-funded project INCASS develops concepts for a marine inspection robotic assistant system to improve and automate ship inspections. In this paper, we introduce our magnetic wall-climbing robot: the Marine Inspection Robotic Assistant (MIRA). This semi-autonomous lightweight system is able to climb a vessel's steel frame to deliver on-line visual inspection data. In addition, we describe the design of the robot and its subsystems, as well as its hardware and software components.
Abstract:
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system. The robot navigates using solely information from the bearing sensor space. Most existing robot navigation systems require a ground frame (a 2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. Commonly used sensors such as laser range scanners, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process. Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home, but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering. This thesis introduces a navigation method based on reflex actions in sensor space.
The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterised pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like those of other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimised, not the distance moved on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self-Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and in experiments on a real robot.
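The reflex action described above can be sketched as gradient descent on the squared bearing error, with the gradient estimated by finite differences. The three landmark positions, the start and target points, and the step size below are assumptions for illustration; the thesis's actual controller acts in sensor space without this ground-frame bookkeeping, which the sketch uses only to simulate the bearings.

```python
import numpy as np

LANDMARKS = np.array([[0.0, 5.0], [5.0, 0.0], [-4.0, -3.0]])  # assumed layout

def bearings(p):
    """Sensor vector: bearing of each landmark from position p."""
    d = LANDMARKS - p
    return np.arctan2(d[:, 1], d[:, 0])

def bearing_error(p, target_bearings):
    diff = bearings(p) - target_bearings
    e = np.arctan2(np.sin(diff), np.cos(diff))   # wrap to (-pi, pi]
    return 0.5 * float(e @ e)

def reflex_step(p, target_bearings, step=0.05, eps=1e-4):
    grad = np.zeros(2)
    for k in range(2):                           # finite-difference gradient
        dp = np.zeros(2)
        dp[k] = eps
        grad[k] = (bearing_error(p + dp, target_bearings)
                   - bearing_error(p - dp, target_bearings)) / (2 * eps)
    return p - step * grad / (np.linalg.norm(grad) + 1e-12)

target = np.array([1.0, 1.0])
tb = bearings(target)                            # sensor vector at the target
p = np.array([-1.0, -1.5])
for _ in range(400):
    p = reflex_step(p, tb)
```

As the abstract notes, the resulting path minimises sensor error rather than ground distance, so it need not be the shortest ground path.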
Abstract:
We consider multi-robot systems that include sensor nodes and aerial or ground robots networked together. Such networks are suitable for tasks such as large-scale environmental monitoring or for command and control in emergency situations. We present a sensor network deployment method using autonomous aerial vehicles, describe in detail the algorithms used for deployment and for measuring network connectivity, and provide experimental data collected from field trials. A particular focus is on determining gaps in the connectivity of the deployed network and generating a plan for repair, to restore full connectivity. This project is the result of a collaboration between three robotics labs (CSIRO, USC, and Dartmouth). © Springer-Verlag Berlin/Heidelberg 2006.
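The gap-detection-and-repair step can be sketched as a connectivity check over a disc communication model: nodes within radio range are linked, connected components expose the gaps, and a repair node is proposed at the midpoint of the closest inter-component pair. The node layout and radio range below are invented for illustration, not the field-trial data.

```python
import numpy as np
from itertools import combinations

def components(nodes, radio_range):
    """Connected components of the disc-model communication graph (union-find)."""
    n = len(nodes)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if np.linalg.norm(nodes[i] - nodes[j]) <= radio_range:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def repair_point(nodes, comps):
    """Midpoint of the closest pair of nodes spanning two components."""
    best = None
    for a, b in combinations(comps, 2):
        for i in a:
            for j in b:
                d = np.linalg.norm(nodes[i] - nodes[j])
                if best is None or d < best[0]:
                    best = (d, (nodes[i] + nodes[j]) / 2)
    return best[1]

# A line of nodes with one gap too wide for the 10 m radio range.
nodes = np.array([[0.0, 0.0], [8.0, 0.0], [16.0, 0.0], [40.0, 0.0], [48.0, 0.0]])
comps = components(nodes, radio_range=10.0)
fix = repair_point(nodes, comps)   # a drop location that bridges the gap
```

In the deployed system the repair plan would be flown by the aerial vehicle, dropping a node at each proposed bridge point.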
Abstract:
This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.
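The basic image-based visual servoing law underlying this line of work is the velocity command v = -λ L⁺ (s - s*), where s are the image features, s* the desired features, and L the interaction matrix. The sketch below applies it to camera translation with two point features of known depth; the geometry, gain, and time step are assumed for illustration.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Translational columns of the point-feature interaction matrix."""
    return np.array([[-1.0 / Z, 0.0, x / Z],
                     [0.0, -1.0 / Z, y / Z]])

def project(points_cam):
    """Pinhole projection with unit focal length."""
    return points_cam[:, :2] / points_cam[:, 2:3]

# Two 3-D points (camera frame at the origin) and a goal camera position.
points = np.array([[0.2, 0.1, 1.0], [-0.1, 0.3, 1.5]])
t_goal = np.array([0.1, -0.05, 0.2])         # pose that realises the view s*
s_star = project(points - t_goal).ravel()    # desired image features

lam, dt = 1.5, 0.05
t = np.zeros(3)                              # current camera position
for _ in range(200):
    p_cam = points - t                       # points in the camera frame
    s = project(p_cam).ravel()
    L = np.vstack([interaction_matrix(sx, sy, Z)
                   for (sx, sy), Z in zip(s.reshape(-1, 2), p_cam[:, 2])])
    v = -lam * np.linalg.pinv(L) @ (s - s_star)   # IBVS velocity command
    t = t + v * dt
```

Driving the image error to zero drives the camera to the goal pose, which is the closed-perception-action loop that visual servoing formalises.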
Abstract:
This paper describes how many of the navigation techniques developed by the robotics research community over the last decade may be applied to a class of underground mining vehicles (LHDs and haul trucks). We review the current state of the art in this area and conclude that essentially two basic methods of navigation are applicable. We describe an implementation of a reactive navigation system on a 30-tonne LHD which has achieved full-speed operation at a production mine.