980 results for Robotics
Abstract:
Many state-of-the-art vision-based Simultaneous Localisation And Mapping (SLAM) and place recognition systems compute the salience of visual features in their environment. As computing salience can be problematic in radically changing environments, new low-resolution, feature-less systems such as SeqSLAM have been introduced, all of which consider the whole image. In this paper, we implement a supervised classifier system (UCS) to learn the salience of image regions for place recognition by feature-less systems. SeqSLAM only slightly benefits from the results of training on the challenging real-world Eynsham dataset, as it already appears to filter out less useful regions of a panoramic image. However, when recognition is limited to specific image regions, performance improves by more than an order of magnitude by utilising the learnt image region saliency. We then investigate whether the region salience generated from the Eynsham dataset generalizes to another car-based dataset using a perspective camera. The results suggest the general applicability of an image region salience mask for optimizing route-based navigation applications.
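As a rough illustration of the idea (not the paper's implementation), the sketch below shows how a learned region salience mask might weight a SeqSLAM-style sum-of-absolute-differences between low-resolution images; the image size, mask values and the masked_image_difference helper are illustrative assumptions.

    import numpy as np

    def masked_image_difference(img_a, img_b, salience_mask):
        """Sum of absolute differences between two low-resolution images,
        weighted by a per-region salience mask with values in [0, 1]."""
        diff = np.abs(img_a.astype(float) - img_b.astype(float))
        return float(np.sum(diff * salience_mask) / (np.sum(salience_mask) + 1e-9))

    # Illustrative 8x32 low-resolution panoramic frames and a mask that
    # suppresses the lower half of the image (a hypothetical learnt salience).
    rng = np.random.default_rng(0)
    query = rng.integers(0, 256, (8, 32))
    reference = rng.integers(0, 256, (8, 32))
    mask = np.ones((8, 32))
    mask[4:, :] = 0.2
    print(masked_image_difference(query, reference, mask))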
Abstract:
Current state-of-the-art robot mapping and navigation systems produce impressive performance under a narrow range of robot platform, sensor and environmental conditions, in contrast to animals such as rats, which produce “good enough” maps that enable them to function in an incredible range of situations. In this paper we present a rat-inspired, featureless sensor-fusion system that assesses the usefulness of multiple sensor modalities based on their utility and coherence for place recognition during a navigation task, without knowledge as to the type of sensor. We demonstrate the system on a Pioneer robot in indoor and outdoor environments with abrupt lighting changes. Through dynamic weighting of the sensors, the system is able to perform correct place recognition and mapping where a static sensor weighting approach fails.
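A minimal sketch of the kind of dynamic sensor weighting the abstract describes, assuming each sensor returns a vector of place-match scores; the entropy-based weighting and the dynamic_sensor_fusion helper are illustrative assumptions, not the paper's method.

    import numpy as np

    def dynamic_sensor_fusion(match_scores):
        """Fuse per-sensor place-match scores, weighting each sensor by how
        sharply it discriminates between the known places (flat, ambiguous
        score vectors receive low weight)."""
        fused = None
        for scores in match_scores.values():
            s = np.asarray(scores, dtype=float)
            p = s / (s.sum() + 1e-9)                  # normalise to a distribution
            entropy = -np.sum(p * np.log(p + 1e-12))  # flat scores -> high entropy
            weight = 1.0 / (1.0 + entropy)            # coherent sensors weigh more
            fused = weight * p if fused is None else fused + weight * p
        return int(np.argmax(fused)), fused

    # Illustrative: the camera is ambiguous after a lighting change, the sonar is not.
    best, fused = dynamic_sensor_fusion({
        "camera": [0.31, 0.33, 0.36],   # nearly flat -> down-weighted
        "sonar":  [0.05, 0.85, 0.10],   # peaked -> up-weighted
    })
    print(best, fused)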
Abstract:
This paper proposes an efficient and online learning control system that uses the successful Model Predictive Control (MPC) method in a model-based, locally weighted learning framework. The new approach, named Locally Weighted Learning Model Predictive Control (LWL-MPC), has been proposed as a solution for learning to control complex and nonlinear Elastic Joint Robots (EJR). Elastic Joint Robots are generally difficult to learn to control because their elastic properties prevent standard model learning techniques, such as learning computed torque control, from being used. This paper demonstrates the capability of LWL-MPC to perform online and incremental learning while controlling the joint positions of a real three Degree of Freedom (DoF) EJR. An experiment on a real EJR is presented, and LWL-MPC is shown to successfully learn to control the system to follow two different figure-of-eight trajectories.
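The sketch below is a toy reading of the LWL-MPC idea, not the paper's controller: a locally weighted regression forward model learned from (state, action) samples is rolled out inside a simple random-shooting MPC loop. The first-order system used to generate the data, and the lwr_predict and mpc_action helpers, are illustrative assumptions.

    import numpy as np

    def lwr_predict(X, Y, x_query, bandwidth=0.5):
        """Locally weighted regression: predict the next state from stored
        (state, action) samples X -> next states Y, weighting samples by
        their distance to the query point."""
        w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
        W = np.diag(w)
        beta, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ Y, rcond=None)
        return x_query @ beta

    def mpc_action(X, Y, state, goal, horizon=5, candidates=64, rng=None):
        """Random-shooting MPC: return the first action of the candidate
        sequence whose predicted rollout ends closest to the goal."""
        rng = rng or np.random.default_rng(0)
        best_action, best_cost = 0.0, np.inf
        for _ in range(candidates):
            actions = rng.uniform(-1, 1, size=horizon)
            s = np.array(state, dtype=float)
            for a in actions:
                s = lwr_predict(X, Y, np.append(s, a))
            cost = np.linalg.norm(s - goal)
            if cost < best_cost:
                best_cost, best_action = cost, actions[0]
        return best_action

    # Illustrative data from a hypothetical first-order system s' = 0.9 s + 0.3 a.
    rng = np.random.default_rng(1)
    S = rng.uniform(-1, 1, (200, 1))
    A = rng.uniform(-1, 1, (200, 1))
    X = np.hstack([S, A])
    Y = 0.9 * S + 0.3 * A + 0.01 * rng.normal(size=(200, 1))
    print(mpc_action(X, Y, state=[0.0], goal=np.array([0.5]), rng=rng))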
Abstract:
This paper presents a shared autonomy control scheme for a quadcopter that is suited for inspection of vertical infrastructure — tall man-made structures such as streetlights, electricity poles or the exterior surfaces of buildings. Current approaches to inspection of such structures are slow, expensive, and potentially hazardous. Low-cost aerial platforms with an ability to hover now have sufficient payload and endurance for this kind of task, but require significant human skill to fly. We develop a control architecture that enables synergy between the ground-based operator and the aerial inspection robot. An unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure. The operator uses their domain knowledge and problem-solving skills to guide the robot to difficult-to-reach locations to inspect and assess the condition of the infrastructure. The operator commands the robot in a local task coordinate frame with limited degrees of freedom (DOF), for instance up/down, left/right, and toward/away with respect to the infrastructure. We therefore avoid the problems of global mapping and navigation while providing an intuitive interface to the operator. We describe algorithms for pole detection, robot velocity estimation with respect to the pole, and position estimation in 3D space, as well as the control algorithms and overall system architecture. We present initial results of shared autonomy of a quadrotor with respect to a vertical pole, and robot performance is evaluated by comparison with motion capture data.
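A small sketch of what commanding the robot in a pole-relative task frame could look like, assuming an estimated bearing to the pole and a standoff-distance error are available; the frame conventions, gains and the task_to_body_velocity helper are illustrative assumptions rather than the paper's architecture.

    import numpy as np

    def task_to_body_velocity(cmd_toward, cmd_lateral, cmd_up, pole_bearing_rad,
                              standoff_error_m=0.0, k_standoff=0.5):
        """Map low-DOF operator commands given in a pole-relative task frame
        (toward/away, left/right, up/down) to a body-frame velocity setpoint.

        pole_bearing_rad is the estimated bearing of the pole in the body
        frame (0 = straight ahead). When the toward/away stick is released,
        a proportional term holds the standoff distance instead."""
        toward = cmd_toward if cmd_toward != 0.0 else -k_standoff * standoff_error_m
        u_toward = np.array([np.cos(pole_bearing_rad), np.sin(pole_bearing_rad), 0.0])
        u_lateral = np.array([-np.sin(pole_bearing_rad), np.cos(pole_bearing_rad), 0.0])
        u_up = np.array([0.0, 0.0, 1.0])
        return toward * u_toward + cmd_lateral * u_lateral + cmd_up * u_up

    # Illustrative: pole bearing 20 degrees, operator slides laterally at 0.3 m/s
    # while a small standoff error of 0.1 m is corrected automatically.
    print(task_to_body_velocity(0.0, 0.3, 0.0, np.deg2rad(20), standoff_error_m=0.1))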
Abstract:
This paper details the progress to date toward developing a small autonomous helicopter. We describe the system architecture, avionics, visual state estimation, custom IMU design and aircraft modelling, as well as various linear and neuro/fuzzy control algorithms. Experimental results are presented for state estimation using fused stereo vision and IMU data, heading control, and attitude control. FAM attitude and velocity controllers have been shown to be effective in simulation.
Abstract:
The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace means collision avoidance is of primary concern in an uncooperative airspace environment. The ability to replicate a pilot’s see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, collision avoidance has no direct way to guarantee a level of safety. Collision scenario flight tests with two aircraft and a monocular camera threat detection and tracking system were used to study the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied by simulation. The results show that whilst large angle measurement errors can significantly affect minimum ranging characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
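To make the bearing-only limitation concrete, the sketch below shows a deliberately simple reactive rule, assumed for illustration only (it is not the paper's guidance law): a threat whose image-derived bearing stays nearly constant is treated as being on a near-collision course, and a turn away from it is commanded.

    import numpy as np

    def reactive_avoidance_turn_rate(bearings_rad, dt, k_turn=1.0, drift_threshold=0.02):
        """Bearing-only reactive avoidance sketch: command a turn rate (rad/s,
        positive = right) away from a threat whose bearing is not drifting;
        a rapidly drifting bearing is assumed to pass clear."""
        bearing_rate = (bearings_rad[-1] - bearings_rad[0]) / (dt * (len(bearings_rad) - 1))
        if abs(bearing_rate) > drift_threshold:
            return 0.0                                     # bearing drifting: no manoeuvre
        if bearings_rad[-1] == 0.0:
            return k_turn                                  # dead ahead: pick a side
        return -k_turn * float(np.sign(bearings_rad[-1]))  # turn away from the threat

    print(reactive_avoidance_turn_rate([0.10, 0.10, 0.10, 0.10], dt=0.1))  # -1.0: avoid
    print(reactive_avoidance_turn_rate([0.10, 0.14, 0.18, 0.22], dt=0.1))  #  0.0: passing clear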
Abstract:
GPS is a commonly used and convenient technology for determining absolute position in outdoor environments, but its high power consumption leads to rapid battery depletion in mobile devices. An obvious solution is to duty cycle the GPS module, which prolongs the device lifetime at the cost of increased position uncertainty while the GPS is off. This article addresses the trade-off between energy consumption and localization performance in a mobile sensor network application. The focus is on augmenting GPS location with more energy-efficient location sensors to bound position estimate uncertainty while GPS is off. Empirical GPS and radio contact data from a large-scale animal tracking deployment is used to model node mobility, radio performance, and GPS. Because GPS takes a considerable, and variable, time after powering up before it delivers a good position measurement, we model the GPS behaviour through empirical measurements of two GPS modules. These models are then used to explore duty cycling strategies for maintaining position uncertainty within specified bounds. We then explore the benefits of using short-range radio contact logging alongside GPS as an energy-inexpensive means of lowering uncertainty while the GPS is off, and we propose strategies that use RSSI ranging and GPS back-offs to further reduce energy consumption. Results show that our combined strategies can cut node energy consumption by one third while still meeting application-specific positioning criteria.
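The sketch below illustrates the duty-cycling trade-off described above with a toy policy, assuming uncertainty grows at a fixed rate while GPS is off and is clamped by a short-range radio contact; the numbers and the gps_duty_cycle helper are illustrative assumptions, not the article's models.

    def gps_duty_cycle(steps, uncertainty_bound_m, drift_rate_m_per_s,
                       contact_uncertainty_m=50.0, post_fix_uncertainty_m=10.0):
        """Toy duty-cycling policy: position uncertainty grows while GPS is
        off, is clamped when a radio contact with a recently-fixed neighbour
        is logged, and a GPS fix is triggered whenever the uncertainty would
        exceed the application-specific bound.

        steps: list of (dt_seconds, had_radio_contact) tuples.
        Returns (number_of_gps_fixes, final_uncertainty_m)."""
        uncertainty, fixes = 0.0, 0
        for dt, had_contact in steps:
            uncertainty += drift_rate_m_per_s * dt
            if had_contact:
                uncertainty = min(uncertainty, contact_uncertainty_m)
            if uncertainty > uncertainty_bound_m:
                fixes += 1                          # power up GPS and take a fix
                uncertainty = post_fix_uncertainty_m
        return fixes, uncertainty

    # Illustrative hour of one-minute steps with a radio contact every ten minutes.
    steps = [(60, i % 10 == 0) for i in range(60)]
    print(gps_duty_cycle(steps, uncertainty_bound_m=100.0, drift_rate_m_per_s=0.5))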
Abstract:
This paper describes the implementation of the first portable, embedded data acquisition unit (BabelFuse) that is able to acquire and timestamp generic sensor data and trigger General Purpose I/O (GPIO) events against a microsecond-accurate wirelessly-distributed ‘global’ clock. A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM) where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment especially if non-deterministic communication hardware (such as IEEE-802.11-based wireless) and inaccurate clock synchronisation protocols are used. The issue of differing timebases makes correlation of data difficult and prevents the units from reliably performing synchronised operations or manoeuvres. By utilising hardware-assisted timestamping, clock synchronisation protocols based on industry standards and firmware designed to minimise indeterminism, an embedded data acquisition unit capable of microsecond-level clock synchronisation is presented.
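As a rough illustration of why hardware timestamps still need a common timebase, the sketch below maps a locally captured timestamp onto the shared 'global' clock using an offset and drift estimate from the synchronisation protocol; the to_global_time helper and its sign conventions are illustrative assumptions, not the BabelFuse firmware.

    def to_global_time(local_timestamp_us, offset_us, drift_ppm, last_sync_local_us):
        """Convert a locally captured hardware timestamp (microseconds) to the
        wirelessly synchronised global clock, given the offset measured at the
        last synchronisation point and the estimated local clock drift in
        parts per million since then."""
        elapsed_us = local_timestamp_us - last_sync_local_us
        drift_correction_us = elapsed_us * drift_ppm / 1_000_000
        return local_timestamp_us + offset_us + drift_correction_us

    # Illustrative: a sensor event 2 s after the last sync, on a clock that is
    # 1500 us behind the global clock and runs 12 ppm fast (hence -12 here).
    print(to_global_time(local_timestamp_us=2_000_000, offset_us=1500,
                         drift_ppm=-12, last_sync_local_us=0))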
Abstract:
Sophisticated models of human social behaviour are fast becoming highly desirable in an increasingly complex and interrelated world. Here, we propose that rather than taking established theories from the physical sciences and naively mapping them into the social world, the advanced concepts and theories of social psychology should be taken as a starting point, and used to develop a new modelling methodology. In order to illustrate how such an approach might be carried out, we attempt to model the low elaboration attitude changes of a society of agents in an evolving social context. We propose a geometric model of an agent in context, where individual agent attitudes are seen to self-organise to form ideologies, which then serve to guide further agent-based attitude changes. A computational implementation of the model is shown to exhibit a number of interesting phenomena, including a tendency for a measure of the entropy in the system to decrease, and a potential for externally guiding a population of agents towards a new desired ideology.
Abstract:
The contextuality of changing attitudes makes them extremely difficult to model. This paper scales up Quantum Decision Theory (QDT) to a social setting, using it to model the manner in which social contexts can interact with the process of low elaboration attitude change. The elements of this extended theory are presented, along with a proof of concept computational implementation in a low dimensional subspace. This model suggests that a society's understanding of social issues will settle down into a static or frozen configuration unless that society consists of a range of individuals with varying personality types and norms.
Abstract:
This paper presents an Image Based Visual Servo control design for Fixed Wing Unmanned Aerial Vehicles tracking locally linear infrastructure in the presence of wind using a body-fixed imaging sensor. Visual servoing offers improved data collection by posing the tracking task as one of controlling a feature as viewed by the inspection sensor, although it is complicated by the introduction of wind, as aircraft heading and course angle no longer align. In this work it is shown that the effect of wind alters the desired line angle required for continuous tracking so that it equals the wind correction angle, as would be calculated to hold a desired course. A control solution is then sought by linearizing the interaction matrix about the new feature pose such that the kinematics of the feature can be augmented with the lateral dynamics of the aircraft, from which a state feedback control design is developed. Simulation results are presented comparing no compensation, integral control and the proposed controller using the wind correction angle, followed by an assessment of the response to atmospheric disturbances in the form of turbulence and wind gusts.
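A worked sketch of the wind correction (crab) angle that the abstract says becomes the desired line angle, assuming only the standard crosswind relation; the wind_correction_angle helper and the numbers are illustrative assumptions, not the paper's control design.

    import numpy as np

    def wind_correction_angle(airspeed, wind_speed, wind_from_deg, course_deg):
        """Crab (wind correction) angle, in degrees, needed to hold a ground
        course: the crosswind component must be cancelled by the across-track
        component of the airspeed vector."""
        wind_to = np.deg2rad(wind_from_deg + 180.0)
        course = np.deg2rad(course_deg)
        crosswind = wind_speed * np.sin(wind_to - course)
        return np.rad2deg(np.arcsin(np.clip(-crosswind / airspeed, -1.0, 1.0)))

    # Illustrative: 20 m/s airspeed, 5 m/s wind from the west while tracking a
    # northerly line; the desired image line angle would be offset by this
    # crab angle so that the tracked feature stays centred while the nose is
    # pointed into the wind.
    print(wind_correction_angle(airspeed=20.0, wind_speed=5.0,
                                wind_from_deg=270.0, course_deg=0.0))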
Abstract:
Robotics is a valuable tool for engaging students in the hands-on application of science, technology, engineering, and mathematics (STEM) concepts. Robotics competitions such as FIRST LEGO League (FLL) can increase students’ interest in the STEM subjects and can foster their problem solving and teamwork skills. This paper reports on a study investigating students’ perceptions on the influence of participating in a FLL competition on their learning. The students completed questionnaires regarding their perceptions of their learning during the FLL challenge and were also interviewed to gain a deeper understanding of their questionnaire responses. The results show that the students were engaged with the FLL challenge and held positive views regarding their experience. The results also suggest that students involved with the FLL challenge improved their learning about real-world applications, problem solving, engagement, communication, and the application of the technology/engineering cycle.
Abstract:
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under/over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been a growth of interest in lifelong autonomy for robots, which brings with it the challenge, in outdoor environments, of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to only learning visual scenes at one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
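A minimal sketch of the low-resolution, patch-normalised image comparison that underlies this kind of lightweight, feature-less place recognition; the patch size, image size and the patch_normalise/scene_difference helpers are illustrative assumptions rather than the exact front end used with RatSLAM.

    import numpy as np

    def patch_normalise(img, patch=4):
        """Normalise each patch of a low-resolution grayscale image to zero
        mean and unit variance, discarding much of the illumination change."""
        out = np.zeros(img.shape, dtype=float)
        h, w = img.shape
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                block = img[y:y + patch, x:x + patch].astype(float)
                out[y:y + patch, x:x + patch] = (block - block.mean()) / (block.std() + 1e-9)
        return out

    def scene_difference(img_a, img_b):
        """Mean absolute difference between patch-normalised images; low
        values suggest the same place is being revisited."""
        return float(np.mean(np.abs(patch_normalise(img_a) - patch_normalise(img_b))))

    # Illustrative 16x32 low-resolution frames: a uniform exposure change is
    # removed by the normalisation, while a genuinely new scene is not.
    rng = np.random.default_rng(2)
    frame = rng.integers(0, 180, (16, 32))
    print(scene_difference(frame, frame + 60))                       # near zero
    print(scene_difference(frame, rng.integers(0, 180, (16, 32))))   # much larger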
Abstract:
Biorobotics has the potential to provide an integrated understanding from neural systems to behavior that is neither ethical nor technically feasible with living systems. Robots that can interact with animals in their natural environment open new possibilities for empirical studies in neuroscience. However, designing a robot that can interact with a rodent requires considerations that span a range of disciplines. For the rat's safety, the body form and movements of the robot need to take into consideration the safety of the animal, an appropriate size for the rodent arenas, and suitable behaviors for interaction. For the robot's safety, its form must be robust in the face of the typically inquisitive and potentially aggressive behaviors of the rodent, which can include chewing on exposed parts, including electronics, and deliberate or accidental fouling. We designed a rat-sized robot, the iRat (intelligent rat animat technology), for studies in neuroscience. The iRat is about the same size as a rat and has the ability to navigate autonomously around small environments. In this study we report the first interactions between the iRat and real rodents in a free exploration task. Studies with five rats show that the rats and the iRat interact safely for both parties.
Abstract:
For humans and robots to communicate using natural language, it is necessary for the robots to develop concepts and associated terms that correspond to the human use of words. Time and space are foundational concepts in human language, and to develop a set of words that correspond to human notions of time and space, it is necessary to take into account the way that they are used in natural human conversations, where terms and phrases such as 'soon', 'in a while', or 'near' are often used. We present language-learning robots called Lingodroids that can learn and use simple terms for time and space. In previous work, the Lingodroids were able to learn terms for space. In this work we extend their abilities by adding temporal variables which allow them to learn terms for time. The robots build their own maps of the world and interact socially to form a shared lexicon for location and duration terms. The robots successfully use the shared lexicons to communicate places and times to meet again.
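The sketch below is a toy illustration of the kind of language game that lets two agents converge on a shared lexicon; the Lexicon class, the where_are_we_game function and the word-invention scheme are illustrative assumptions, not the Lingodroids implementation.

    import random

    class Lexicon:
        """Toy word-meaning table: usage counts linking invented words to the
        locations (or durations) an agent has experienced."""
        def __init__(self):
            self.counts = {}   # (word, meaning) -> usage count

        def word_for(self, meaning):
            known = [(w, c) for (w, m), c in self.counts.items() if m == meaning]
            if known:
                return max(known, key=lambda wc: wc[1])[0]
            return "w%04d" % random.randrange(10000)   # invent a new word

        def hear(self, word, meaning):
            self.counts[(word, meaning)] = self.counts.get((word, meaning), 0) + 1

    def where_are_we_game(speaker, listener, shared_location):
        """One language game: the speaker names the current location and both
        agents reinforce the association, so their lexicons converge."""
        word = speaker.word_for(shared_location)
        speaker.hear(word, shared_location)
        listener.hear(word, shared_location)
        return word

    random.seed(0)
    a, b = Lexicon(), Lexicon()
    for _ in range(20):
        where_are_we_game(a, b, shared_location="kitchen")
    print(a.word_for("kitchen") == b.word_for("kitchen"))   # True: a shared term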