256 results for Robots mòbils -- Teledetecció
Abstract:
In vegetated environments, reliable obstacle detection remains a challenge for state-of-the-art methods, which are usually based on geometric representations of the environment built from LIDAR and/or visual data. In practice, field robots could often traverse vegetation safely, thereby avoiding costly detours, yet vegetation is frequently misinterpreted as an obstacle. Classifying vegetation is insufficient, since an obstacle might be hidden behind or within it. Ultra-wideband (UWB) radar can penetrate vegetation and thus help distinguish actual obstacles from obstacle-free vegetation; however, these sensors provide noisy, low-accuracy data. In this work we therefore address the problem of reliable traversability estimation in vegetation by augmenting LIDAR-based traversability mapping with UWB radar data. A sensor model is learned from experimental data using a support vector machine to convert the radar data into occupancy probabilities, which are then fused with the LIDAR-based traversability data. The resulting augmented traversability maps retain the fine resolution of LIDAR-based maps while preventing safely traversable foliage from being interpreted as an obstacle. We validate the approach experimentally using sensors mounted on two different mobile robots navigating in two different environments.
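As a rough illustration of the fusion step this abstract describes, the sketch below converts UWB radar features into occupancy probabilities with a probabilistic SVM and combines them with a LIDAR-based estimate in log-odds space. The feature layout, labels, and fusion rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: learn a radar sensor model with a probabilistic SVM,
# then fuse its occupancy probability with a LIDAR-based one in log-odds
# space. Features, labels, and the independence assumption are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
radar_features = rng.random((200, 3))              # e.g. echo amplitude, range, spread
labels = (radar_features[:, 0] > 0.5).astype(int)  # placeholder obstacle/free labels
sensor_model = SVC(probability=True).fit(radar_features, labels)

def fuse(p_radar: float, p_lidar: float) -> float:
    """Fuse two occupancy probabilities for the same map cell in log-odds space."""
    log_odds = np.log(p_radar / (1 - p_radar)) + np.log(p_lidar / (1 - p_lidar))
    return float(1.0 / (1.0 + np.exp(-log_odds)))

p_radar = sensor_model.predict_proba(radar_features[:1])[0, 1]
p_cell = fuse(p_radar, p_lidar=0.8)  # LIDAR alone would mark this cell as an obstacle
```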
Abstract:
'Design: Our Future' was an important and exciting call to arms for Queensland Design and Technology teachers at the INTAD State Conference 2015, held at Harristown State High School, Toowoomba, on 25 June. As the Australian Government increasingly recognises design thinking as "a ubiquitous capability for innovation" (Commonwealth of Australia, 2013:90) to support a viable manufacturing sector in the Asian century, this represents an opportunity for Design and Technology teachers to provide leadership in cultivating these generic skills, behaviours and mindsets through secondary school education in Australia. This article, based on the conference keynote speech, outlines the value of design in education for the creative knowledge economy, the implications for Australian design and technology teachers, and the challenges ahead in ensuring our future workforce is not superseded by robots.
Abstract:
A key component of robotic path planning is ensuring that a vehicle can be reliably navigated to a desired location. In addition, when the features of interest are dynamic and move with ocean currents, vehicle speed plays an important role in planning, to ensure that vehicles are in the right place at the right time. Aquatic robot design is moving towards using the environment for propulsion rather than traditional motors and propellers. These new vehicles realize significantly increased endurance; however, the mission planning problem in turn becomes more difficult, as the vehicle velocity is not directly controllable. In this paper, we examine Gaussian process models applied to existing wave model data to predict the behavior, i.e., the velocity, of a Wave Glider Autonomous Surface Vehicle. Using training data from an on-board sensor and forecasts from the WAVEWATCH III model, our probabilistic regression models provide an effective method for forecasting Wave Glider velocity.
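To make the regression idea concrete, the sketch below fits a Gaussian process mapping wave-forecast features to vehicle speed. The feature choice, kernel, and toy data are assumptions for illustration, not the paper's model.

```python
# Sketch: Gaussian process regression from wave-forecast features
# (e.g. significant wave height, period) to vehicle speed. The kernel
# and the toy training data are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X_train = np.array([[0.5, 6.0], [1.0, 8.0], [2.0, 10.0], [3.0, 12.0]])  # height (m), period (s)
y_train = np.array([0.2, 0.4, 0.9, 1.3])                                # measured speed (m/s)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# Predict speed, with uncertainty, for a forecast wave state
mean, std = gp.predict(np.array([[1.5, 9.0]]), return_std=True)
print(f"predicted speed: {mean[0]:.2f} +/- {std[0]:.2f} m/s")
```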
Abstract:
Interest in collaborative Unmanned Aerial Vehicles (UAVs) within a multi-agent system is growing as a way to complement the strengths and weaknesses of the human-machine relationship. To achieve effective management of multiple heterogeneous UAVs, the agents must communicate their status models to one another. This paper presents the effects on operator Cognitive Workload (CW), Situation Awareness (SA), trust and performance of increasing the transparency of the UAVs' autonomy capability through text-based communication to the human agents. The results revealed a reduction in CW, an increase in SA, increases in the Competence, Predictability and Reliability dimensions of trust, and improved operator performance.
Abstract:
There is increased interest in measuring the amount of greenhouse gases produced by farming practices. This paper describes an integrated solar-powered Unmanned Aerial Vehicle (UAV) and Wireless Sensor Network (WSN) gas sensing system for greenhouse gas emissions in agricultural lands. The system uses a generic gas sensing payload measuring CH4 and CO2 concentrations with metal oxide (MOX) and non-dispersive infrared sensors, a new solar cell encapsulation method to power the unmanned aerial system (UAS), and a data management platform to store, analyze and share the information with operators and external users. The system was successfully field tested at ground level and at low altitudes, collecting, storing and transmitting data in real time to a central node for analysis and 3D mapping. The system can be used in a wide range of outdoor applications at a relatively low operational cost. In particular, agricultural environments are increasingly subject to emissions mitigation policies, and accurate measurements of CH4 and CO2, with their temporal and spatial variability, can provide farm managers with key information for planning agricultural practices. A video of the bench and flight tests performed can be seen at the following link: https://www.youtube.com/watch?v=Bwas7stYIxQ
Abstract:
The vision sense of standalone robots is limited by line of sight and by onboard camera capabilities, but processing video from remote cameras places a high computational burden on robots. This paper describes the Distributed Robotic Vision Service (DRVS), which implements an on-demand distributed visual object detection service. Robots specify their visual information requirements in terms of regions of interest and object detection algorithms. DRVS dynamically distributes the object detection computation to remote vision systems with processing capabilities, and the robots receive high-level object detection information. DRVS relieves robots of managing sensor discovery and reduces data transmission compared with image-sharing models of distributed vision. Navigating a sensorless robot using remote vision systems is demonstrated in simulation as a proof of concept.
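As a rough sketch of the request/response pattern this abstract describes, a robot might send a compact detection request and receive only high-level object information back. The message fields and function names below are assumptions for illustration, not DRVS's actual API.

```python
# Sketch of an on-demand detection exchange in the style of DRVS.
# All field and function names are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectionRequest:
    region: Tuple[float, float, float, float]  # region of interest: x_min, y_min, x_max, y_max
    object_type: str                           # e.g. "person"

@dataclass
class Detection:
    object_type: str
    position: Tuple[float, float]              # world coordinates, not raw pixels

def handle_request(req: DetectionRequest, local_detections: List[Detection]) -> List[Detection]:
    """Runs on a remote camera node: filter locally detected objects to the
    requested type and region, returning only high-level information."""
    x0, y0, x1, y1 = req.region
    return [d for d in local_detections
            if d.object_type == req.object_type
            and x0 <= d.position[0] <= x1 and y0 <= d.position[1] <= y1]

# The robot receives compact Detection records instead of raw video frames.
found = handle_request(DetectionRequest((0, 0, 5, 5), "person"),
                       [Detection("person", (2.0, 3.0)), Detection("chair", (1.0, 1.0))])
```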
Abstract:
In recent years, more and more complex humanoid robots have been developed; at the same time, programming these systems has become more difficult. There is a clear need for such robots to be able to adapt and perform certain tasks autonomously, or even to learn by themselves how to act. An important issue to tackle is closing the sensorimotor loop: for humanoids especially, tight integration of perception with action allows for improved behaviours, embedding adaptation at the lower levels of the system.
Abstract:
There is increased interest in the use of UAVs for environmental research, such as tracking bush fires, volcanic eruptions, chemical accidents or pollution sources. The aim of this paper is to describe the theory and results of a bio-inspired plume tracking algorithm. A method for generating sparse plumes in a virtual environment was also developed. Results indicate the ability of the algorithms to track plumes in 2D and 3D. The system has been tested in hardware-in-the-loop (HIL) simulations and in flight using a CO2 gas sensor mounted on a multi-rotor UAV, with the UAV controlled by the plume tracking algorithm running on the ground control station (GCS).
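For flavour, the sketch below implements a generic "surge and cast" step, a common bio-inspired plume-tracking strategy; the threshold, cast angle, and side-alternation rule are illustrative and not necessarily the algorithm used in the paper.

```python
# Generic "surge and cast" plume-tracking step. All parameters are
# illustrative assumptions, not the paper's algorithm.
import math

def plume_step(concentration, upwind, step,
               threshold=0.1, cast_angle=math.radians(60)):
    """Return a new heading: surge upwind while the plume is sensed,
    otherwise cast back and forth across the upwind direction."""
    if concentration > threshold:
        return upwind                      # in plume: surge towards the source
    side = 1.0 if step % 2 == 0 else -1.0  # alternate casting side each step
    return upwind + side * cast_angle

# e.g. plume lost while the wind blows from the north (upwind = 90 degrees)
print(math.degrees(plume_step(0.02, upwind=math.pi / 2, step=3)))
```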
Abstract:
There is increased interest in the use of Unmanned Aerial Vehicles for load transportation, from environmental remote sensing to construction and parcel delivery. One of the main challenges is accurate control of the load position and trajectory. This paper presents an assessment of real flight trials of the control of an autonomous multi-rotor with a suspended slung load, using only visual feedback to determine the load position. The method uses an onboard camera and a common visual marker detection algorithm to robustly detect the load location. The load position is calculated using an onboard processor and transmitted over a wireless network to a ground station, which integrates MATLAB/Simulink, the Robot Operating System (ROS) and a Model Predictive Controller (MPC) to control both the load and the UAV. To evaluate the system's performance, the load position determined by the visual detection system in real flight is compared with data from a motion tracking system. The multi-rotor's position tracking performance is also analyzed by conducting flight trials using perfect load position data and data obtained only from the visual system. Results show very accurate estimation of the load position (~5% offset) using only the visual system, demonstrating that an external motion tracking system is not needed for this task.
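To make the visual feedback step concrete, a minimal sketch of turning a downward-facing camera's marker detection into a load position estimate is shown below. The camera intrinsics, cable length, and pendulum geometry are assumptions for illustration, not the paper's implementation.

```python
# Sketch: estimate slung-load swing angles from the marker's pixel position
# in a downward-facing camera, then place the load on the cable's sphere.
# Intrinsics (fx, fy, cx, cy) and cable length are illustrative assumptions.
import math

def swing_angles(u, v, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Convert the marker's pixel coordinates (u, v) into swing angles
    about the camera axes, assuming the camera looks straight down."""
    phi = math.atan2(u - cx, fx)    # lateral swing (rad)
    theta = math.atan2(v - cy, fy)  # longitudinal swing (rad)
    return phi, theta

def load_position(phi, theta, cable_length=1.0):
    """Load position relative to the attachment point (taut-cable pendulum)."""
    x = cable_length * math.sin(theta)
    y = cable_length * math.sin(phi)
    z = -cable_length * math.cos(phi) * math.cos(theta)
    return x, y, z

print(load_position(*swing_angles(370, 255)))
```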
Abstract:
The use of UAVs for remote sensing tasks, e.g. agriculture and search and rescue, is increasing. The ability for UAVs to autonomously find a target and perform on-board decision making, such as descending to a new altitude or landing next to a target, is a desired capability. Computer-vision functionality allows the Unmanned Aerial Vehicle (UAV) to follow a designated flight plan, detect an object of interest, and change its planned path. In this paper we describe a low-cost, open-source system in which all image processing is performed on board the UAV using a Raspberry Pi 2 interfaced with a camera. The Raspberry Pi and the autopilot are physically connected through a serial link and communicate via MAVProxy. The Raspberry Pi continuously monitors the flight path in real time through a USB camera module, and the algorithm checks whether the target has been captured. If the target is detected, the position of the object in the frame is expressed in Cartesian coordinates and converted into estimated GPS coordinates. In parallel, the autopilot receives the approximate GPS location of the target and decides how to guide the UAV to the new location. The system also has potential uses in precision agriculture, for example detecting plant pests and disease outbreaks, which cause detrimental financial damage to crop yields if not detected early. Results show the algorithm detects 99% of objects of interest and that the UAV is capable of navigating and making decisions on board.
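A minimal sketch of the pixel-to-GPS conversion this abstract mentions is given below, assuming a downward-facing camera at a known altitude over flat ground; the camera intrinsics and the north-aligned camera axes are illustrative assumptions, not the paper's parameters.

```python
# Sketch: project a target's pixel offset from the image centre onto the
# ground, then shift the UAV's GPS fix by that offset. Flat ground,
# nadir-pointing camera, and the intrinsics are illustrative assumptions.
import math

EARTH_RADIUS = 6378137.0  # metres

def pixel_to_gps(u, v, uav_lat, uav_lon, altitude,
                 fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Convert pixel (u, v) to ground offsets in metres, then to an
    approximate GPS coordinate (camera x-axis east, y-axis south)."""
    east = altitude * (u - cx) / fx
    north = -altitude * (v - cy) / fy
    lat = uav_lat + math.degrees(north / EARTH_RADIUS)
    lon = uav_lon + math.degrees(east / (EARTH_RADIUS * math.cos(math.radians(uav_lat))))
    return lat, lon

# e.g. target seen 50 px right of centre while hovering at 20 m
print(pixel_to_gps(370, 240, -27.4698, 153.0251, 20.0))
```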
Abstract:
Robotic vision is limited by line of sight and by onboard camera capabilities. Robots can acquire video or images from remote cameras, but processing the additional data carries a computational burden. This paper applies the Distributed Robotic Vision Service (DRVS) to robot path planning using data from outside the robot's line of sight. DRVS implements a distributed visual object detection service that distributes the computation to remote camera nodes with processing capabilities. Robots request task-specific object detection from DRVS by specifying a geographic region of interest and an object type; the remote camera nodes perform the visual processing and send high-level object information to the robot. Additionally, DRVS relieves robots of sensor discovery by dynamically distributing object detection requests to the remote camera nodes. Tested on two different indoor path planning tasks, DRVS showed a dramatic reduction in mobile robot compute load and wireless network utilization.
Abstract:
This paper shows that, using only symbolic language phrases, a mobile robot can purposefully navigate to specified rooms in previously unexplored environments. The robot intelligently organises a symbolic language description of the unseen environment and "imagines" a representative map, called the abstract map. The abstract map is an internal representation of the topological structure and spatial layout of symbolically defined locations. To perform goal-directed exploration, the abstract map provides a high-level semantic plan for reasoning about spaces beyond the robot's known world. While completing the plan, the robot uses the metric guidance provided by the spatial layout, together with grounded observations of door labels, to efficiently guide its navigation. The system is shown to complete navigation in unexplored spaces while travelling only 13.3% further than the optimal path.
Abstract:
Robotics is taught in many Australian ICT classrooms, in both primary and secondary schools. Robotics activities, including those developed using the LEGO Mindstorms NXT technology, are mathematics-rich and provide fertile ground for learners to develop and extend their mathematical thinking. However, this context for learning mathematics is often under-exploited. In this paper a variant of the model construction sequence (Lesh, Cramer, Doerr, Post, & Zawojewski, 2003) is proposed, with the purpose of explicitly integrating robotics and mathematics teaching and learning. Lesh et al.'s model construction sequence, and the model-eliciting activities it embeds, were initially researched in primary mathematics classrooms and more recently in university engineering courses. The model construction sequence involves learners working collaboratively on product-focussed tasks, through which they develop and expose their conceptual understanding. The integrating model proposed in this paper has been used to design and analyse a sequence of activities in an Australian Year 4 classroom, in which more traditional classroom learning was complemented by programming LEGO-based robots to 'act out' the addition and subtraction of simple fractions (tenths) on a number line. The framework was found to be useful for planning the sequence of learning and, more importantly, gave the participating teacher the ability to critically reflect on robotics technology as a tool to scaffold the learning of mathematics.
Abstract:
Nature is a school for scientists and engineers: the inherent multiscale structures of biological materials exhibit multifunctional integration. The lotus, the water strider, and the flying bird each evolved different, optimized biological solutions to survive. In this contribution, inspired by the lotus leaf with its superhydrophobic self-cleaning, the water strider leg with its durable and robust superhydrophobicity, and the lightweight bird bone with its hollow structures, multifunctional metallic foams with multiscale structures are fabricated, demonstrating low-adhesion superhydrophobic self-cleaning, striking loading capacity, stable corrosion resistance, and superior repellency towards different corrosive solutions. This approach provides an effective avenue for the development of water strider robots and other aquatic smart devices floating on water. Furthermore, the resulting multifunctional metallic foam can be used to construct an oil/water separation apparatus exhibiting high separation efficiency and long-term repeatability. The presented approach should provide a promising route to the design and construction of other multifunctional metallic foams at large scale for practical applications in the petrochemical field.
Abstract:
Scene understanding has mainly been investigated from a visual-information point of view. Recently, depth sensing has provided an extra wealth of information, allowing more geometric knowledge to be fused into scene understanding. Yet to form a holistic view, especially in robotic applications, one can create even more data by interacting with the world; indeed, humans, while growing up, seem to investigate the world around them heavily through haptic exploration. We show an application of haptic exploration on a humanoid robot in cooperation with a learning method for object segmentation: the actions, performed consecutively, improve the segmentation of objects in the scene.