739 results for Automation.
Abstract:
This technical report describes a Light Detection and Ranging (LiDAR) augmented methodology for optimal path planning at low-level flight for remote sensing and sampling Unmanned Aerial Vehicles (UAVs). The UAV is used to perform remote air sampling and data acquisition from a network of sensors on the ground. The terrain data, in the form of 3D point-cloud maps, is processed by the algorithms to find an optimal path. The results show that the method and algorithm are able to use the LiDAR data to avoid obstacles when planning a path from a start point to a target point. The report compares the performance of the method as the resolution of the LiDAR map is increased and when a Digital Elevation Model (DEM) is included. From a practical point of view, the optimal path plan loads and works seamlessly with the UAV ground station, and the report also shows the UAV ground station software augmented with more accurate LiDAR data.
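The abstract does not name the planning algorithm, so the following is only a minimal sketch, assuming a plain A* search over a rasterised elevation grid (a DEM-like product of a LiDAR point cloud). The function name, grid values and climb threshold are hypothetical, not taken from the report.

```python
# Illustrative only: a simple A* search over a synthetic elevation grid,
# treating steep elevation changes as obstacles. Not the report's planner.
import heapq
import numpy as np

def astar_over_dem(dem, start, goal, max_climb=2.0):
    """Plan a grid path from start to goal, refusing steps whose elevation
    change exceeds max_climb (a stand-in for obstacle avoidance)."""
    rows, cols = dem.shape
    open_set = [(0.0, start)]
    came_from, g = {}, {start: 0.0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if abs(dem[nxt] - dem[cur]) > max_climb:   # steep cell: treat as obstacle
                continue
            cost = g[cur] + 1.0
            if cost < g.get(nxt, float("inf")):
                g[nxt] = cost
                came_from[nxt] = cur
                heuristic = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(open_set, (cost + heuristic, nxt))
    return None  # no feasible path

dem = np.zeros((50, 50)); dem[20:30, 10:40] = 10.0   # synthetic ridge as an obstacle
print(astar_over_dem(dem, (0, 0), (49, 49))[:5])
```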
Abstract:
This report documents my learning experiences and the design of the Green Falcon solar-powered UAV. Only the aspects for which I was responsible are discussed in this report. Using solar power captured by its solar panels, the aircraft can fly all day and also store power for night flying. Its major advantage lies in the fact that it is simple and versatile, which makes it applicable to a large range of UAVs of different wingspans. The Green Falcon UAV is designed as a supporting tool for scientists to gain a deeper understanding of gas exchange between the ground plane and the atmosphere.
Development of multi-rotor localised surveillance using multi-spectral sensors for plant biosecurity
Abstract:
This report describes a proof of concept for multi-rotor localised surveillance using a multi-spectral sensor for plant biosecurity applications. A literature review was conducted on previous applications of airborne multispectral imaging for plant biosecurity purposes. A ready-built platform was purchased and modified in order to fit, and provide suitable clearance for, a Tetracam Mini-MCA multispectral camera. The appropriate risk management documents were developed, allowing the platform and the multi-spectral camera to be tested extensively. However, due to technical difficulties with the platform, the Mini-MCA was not mounted to the platform. Once a suitable platform is developed, future work can investigate the suitability of the Mini-MCA for airborne surveillance of Australian crops.
Abstract:
A number of hurdles must be overcome in order to integrate unmanned aircraft into civilian airspace for routine operations. The ability of the aircraft to land safely in an emergency is essential to reduce the risk to people, infrastructure and aircraft. To date, few field-demonstrated systems have been presented that show online re-planning and repeatability from failure to touchdown. This paper presents the development of the Guidance, Navigation and Control (GNC) component of an Automated Emergency Landing System (AELS) intended to address this gap, suited to a variety of fixed-wing aircraft. The system was field-tested on both a fixed-wing UAV and a Cessna 172R during repeated emergency landing experiments: a trochoid-based path planner computes feasible trajectories and a simplified control system executes the required manoeuvres to guide the aircraft towards touchdown on a predefined landing site. This is achieved in zero-thrust conditions, with the engine forced to idle to simulate failure. During an autonomous landing, the controller uses airspeed, inertial and GPS data to track motion and maintains essential flight parameters to guarantee flyability, while the planner monitors the glide ratio and re-plans to ensure the approach arrives at the correct altitude. Simulations show the reliability of the system in a variety of wind conditions and its repeated ability to land within the boundary of a predefined landing site. Results from field tests of the two aircraft demonstrate the effectiveness of the proposed GNC system in live operation. They show that the system is capable of guiding the aircraft to close proximity of a predefined keyhole in nearly 100% of cases.
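The paper's planner is trochoid-based, but its construction is not given in the abstract. The sketch below only traces the trochoidal ground track of a constant-rate turn flown in steady wind, which is the geometric primitive such planners build on; the function name and parameter values are illustrative.

```python
# Illustrative only: ground track of a constant-airspeed, constant-turn-rate
# arc in a steady wind (a trochoid). Not the paper's planner.
import numpy as np

def trochoid_segment(airspeed, turn_rate, wind, t_end, dt=0.1, heading0=0.0):
    """airspeed [m/s], turn_rate [rad/s], wind = (wx, wy) [m/s]."""
    t = np.arange(0.0, t_end, dt)
    heading = heading0 + turn_rate * t
    # Ground velocity = rotating air-relative velocity + constant wind.
    vx = airspeed * np.cos(heading) + wind[0]
    vy = airspeed * np.sin(heading) + wind[1]
    x = np.cumsum(vx) * dt
    y = np.cumsum(vy) * dt
    return np.stack([x, y], axis=1)

track = trochoid_segment(airspeed=30.0, turn_rate=np.deg2rad(9.0),
                         wind=(5.0, 0.0), t_end=40.0)
print(track[-1])  # final ground position of the arc
```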
Abstract:
In this report, an artificial neural network (ANN) based automated emergency landing site selection system for unmanned aerial vehicles (UAVs) and general aviation (GA) is described. The system aims to increase the safety of UAV operation by emulating pilot decision making in emergency landing scenarios, using an ANN to select a safe landing site from the available candidates. The ability of an ANN to model complex input relationships makes it well suited to the multi-criteria decision making (MCDM) process of emergency landing site selection. The ANN operates by identifying the more favorable of two landing sites when provided with an input vector derived from both landing sites' parameters, the aircraft's current state and wind measurements. The system consists of a feed-forward ANN, a pre-processor class which produces ANN input vectors, and a class in charge of creating a ranking of landing site candidates using the ANN. The system was successfully implemented in C++ using the FANN C++ library and ROS. Results obtained from ANN training and from simulations using landing sites randomly generated by a site detection simulator verify the feasibility of an ANN based automated emergency landing site selection system.
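The described system is implemented in C++ with FANN and ROS; the Python sketch below is only meant to illustrate the pairwise-comparison ranking idea, with a hand-written comparator standing in for the trained ANN. Site names and criteria are hypothetical.

```python
# Illustrative only: rank landing-site candidates using pairwise comparisons.
from functools import cmp_to_key

def compare_sites(site_a, site_b):
    """Stand-in for the ANN: return -1 if site_a is the more favorable of the
    pair, +1 otherwise. A real system would feed a vector built from both
    sites' parameters, the aircraft state and wind into the trained network."""
    score_a = site_a["size"] - site_a["slope"]      # hypothetical criteria
    score_b = site_b["size"] - site_b["slope"]
    return -1 if score_a >= score_b else 1

def rank_candidates(sites):
    """Order candidates using only pairwise comparisons, best first."""
    return sorted(sites, key=cmp_to_key(compare_sites))

candidates = [
    {"name": "field A", "size": 80.0, "slope": 3.0},
    {"name": "field B", "size": 120.0, "slope": 10.0},
    {"name": "road C", "size": 60.0, "slope": 1.0},
]
print([s["name"] for s in rank_candidates(candidates)])
```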
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together, helping and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of the research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and has been successfully demonstrated on many different problem domains. The approach requires only a small training set (it was tested with 5 to 10 images per experiment) and is fast, scalable and robust. Additionally, it can generate human-readable programs that can be further customised and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concepts that integrate the motion and action sides. First, reactive reaching and grasping is shown. It allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving the visual detection through object manipulation actions.
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and make a decision on which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predictive probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to the assessment of the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
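The paper's exact Bayesian model is not reproduced in the abstract. As a minimal illustration, assuming a Beta-Binomial model with a uniform Beta(1, 1) prior, the posterior predictive probability of a correct outcome after s correct results in n independent trials is (s + 1)/(n + 2); the short sketch below computes this for two hypothetical algorithms and shows how a small test set tempers a high raw accuracy.

```python
# Illustrative only: Beta-Binomial posterior predictive probability of a
# correct detection, under a uniform prior. Test counts are made up.
def posterior_predictive_correct(successes, trials, alpha=1.0, beta=1.0):
    """Posterior predictive P(next outcome correct) under a Beta(alpha, beta) prior."""
    return (alpha + successes) / (alpha + beta + trials)

for name, s, n in [("algorithm A", 46, 50), ("algorithm B", 9, 10)]:
    p = posterior_predictive_correct(s, n)
    print(f"{name}: {s}/{n} correct -> predictive P(correct) = {p:.3f}")
```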
Abstract:
The world is rich with information such as signage and maps to assist humans to navigate. We present a method to extract topological spatial information from a generic bitmap floor plan and build a topometric graph that can be used by a mobile robot for tasks such as path planning and guided exploration. The algorithm first detects and extracts text in an image of the floor plan. Using the locations of the extracted text, flood fill is used to find the rooms and hallways. Doors are found by matching SURF features and these form the connections between rooms, which are the edges of the topological graph. Our system is able to automatically detect doors and differentiate between hallways and rooms, which is important for effective navigation. We show that our method can extract a topometric graph from a floor plan and is robust against ambiguous cases most commonly seen in floor plans including elevators and stairwells.
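The full pipeline also involves text extraction and SURF-based door matching; the sketch below, which is not the authors' code, only illustrates the flood-fill step that segments rooms in a binary floor-plan image and links them into a small graph. The image, seed points and door location are synthetic.

```python
# Illustrative only: flood-fill room segmentation on a synthetic floor plan,
# then a crude room-connectivity check around a given door location.
import cv2
import numpy as np

# Synthetic floor plan: white free space, a black wall splitting it into two rooms.
plan = np.full((100, 200), 255, dtype=np.uint8)
plan[:, 98:102] = 0                                   # dividing wall

seeds = {"room_1": (50, 50), "room_2": (150, 50)}     # (x, y) inside each room
rooms = {}
for name, seed in seeds.items():
    mask = np.zeros((plan.shape[0] + 2, plan.shape[1] + 2), dtype=np.uint8)
    # Fill the free space reachable from the seed; walls (value 0) stop the fill.
    cv2.floodFill(plan, mask, seed, 255, loDiff=10, upDiff=10,
                  flags=4 | cv2.FLOODFILL_MASK_ONLY)
    rooms[name] = mask[1:-1, 1:-1].astype(bool)       # drop the 1-pixel border

def touches(mask, xy, radius=4):
    """True if the filled region comes within `radius` pixels of (x, y)."""
    x, y = xy
    return mask[max(0, y - radius):y + radius, max(0, x - radius):x + radius].any()

door = (100, 50)   # door location, e.g. the output of a door-detection step
edges = ([("room_1", "room_2")]
         if touches(rooms["room_1"], door) and touches(rooms["room_2"], door) else [])
print({k: int(v.sum()) for k, v in rooms.items()}, edges)
```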
Abstract:
This paper presents a global-optimisation framework for the design of a manipulator for harvesting capsicum (peppers) in the field. The framework uses a simulated capsicum scenario with automatically generated robot models based on DH parameters. Each automatically generated robot model is then placed in the simulated capsicum scenario and the ability of the robot model to reach several goals (capsicums with varying orientations and positions) is rated using two criteria: the length of a collision-free path and the dexterity of the end-effector. These criteria form the basis of the objective function used to perform a global optimisation. The paper shows a preliminary analysis and results that demonstrate the potential of this method to choose suitable robot models with varying degrees of freedom.
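The optimisation itself is not detailed in the abstract; the sketch below only shows the standard DH transform chain that such automatically generated robot models rely on, with a made-up three-joint parameter table.

```python
# Illustrative only: forward kinematics for a DH-parameterised candidate arm.
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one link, standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles):
    """End-effector pose for a candidate robot model."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Hypothetical auto-generated 3-DOF candidate: (a, alpha, d) per joint.
candidate = [(0.3, np.pi / 2, 0.2), (0.4, 0.0, 0.0), (0.3, 0.0, 0.0)]
pose = forward_kinematics(candidate, joint_angles=[0.1, -0.5, 0.8])
print(pose[:3, 3])   # end-effector position to test against a capsicum goal
```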
Abstract:
In this paper, we address the problem of stabilisation of robots subject to nonholonomic constraints and external disturbances using port-Hamiltonian theory and smooth time-invariant control laws. This should be contrasted with the commonly used switched or time-varying laws. We propose a control design that provides asymptotic stability of a manifold (also called a relative equilibrium); due to the Brockett condition, this is the only type of stabilisation possible using smooth time-invariant control laws. The equilibrium manifold can be shaped to a certain extent to satisfy specific control objectives. The proposed control law also incorporates integral action, and thus the closed-loop system is robust to unknown constant disturbances. A key step in the proposed design is a change of coordinates not only in the momentum but also in the position vector, which differs from coordinate transformations previously proposed in the literature for the control of nonholonomic systems. The theoretical properties of the control law are verified via numerical simulation based on a model of a robotic ground vehicle with differential traction wheels and a non-coaxial centre of mass and point of contact.
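For reference, and not reproduced from the paper, the general input-state-output port-Hamiltonian form commonly taken as the starting point for such designs is shown below, with Hamiltonian (total energy) H, skew-symmetric interconnection matrix J, positive semi-definite damping matrix R, input matrix g, input u and passive output y:

```latex
\[
\begin{aligned}
\dot{x} &= \bigl(J(x) - R(x)\bigr)\,\nabla H(x) + g(x)\,u,
  &\qquad J(x) &= -J^{\top}(x),\\
y &= g^{\top}(x)\,\nabla H(x),
  &\qquad R(x) &= R^{\top}(x) \succeq 0 .
\end{aligned}
\]
```

Integral action of the kind mentioned in the abstract is typically added on the passive output y; the paper's specific constrained model and coordinate change are not shown here.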
Abstract:
One of the major impediments to the use of UAVs in civilian environments is the capability to replicate some of the functionality of safe manned aircraft operations. One critical aspect is emergency landing. Once the possible landing sites have been rated, a decision on the most suitable site to land on is required. This is a multi-criteria decision making (MCDM) problem which needs to take various factors into account in the selection of a landing site. This report summarises relevant literature on MCDM in the context of emergency forced landing, and proposes and compares two algorithms for this task.
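The report's two candidate algorithms are not named in the abstract. As a generic illustration of MCDM scoring, the sketch below ranks landing-site candidates with a simple weighted sum of normalised criteria; the criteria, weights and values are hypothetical.

```python
# Illustrative only: weighted-sum MCDM baseline for ranking landing sites.
def weighted_sum_rank(sites, weights):
    """Score each site as a weighted sum of normalised criteria (higher is better)."""
    ranked = []
    for site in sites:
        score = sum(weights[c] * site[c] for c in weights)
        ranked.append((score, site["name"]))
    return sorted(ranked, reverse=True)

# Criteria already normalised to [0, 1], with 1 the most favourable value.
weights = {"size": 0.4, "surface": 0.3, "distance": 0.2, "population_clear": 0.1}
sites = [
    {"name": "paddock", "size": 0.9, "surface": 0.7, "distance": 0.5, "population_clear": 1.0},
    {"name": "beach",   "size": 0.6, "surface": 0.5, "distance": 0.9, "population_clear": 0.4},
]
print(weighted_sum_rank(sites, weights))
```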
Abstract:
This paper presents an approach, based on the Lean production philosophy, for rationalising the processes involved in the production of specification documents for construction projects. Current construction literature erroneously depicts the process for the creation of construction specifications as a linear one. This traditional understanding of the specification process often culminates in process wastes. On the contrary, the evidence suggests that, though generalised, the activities involved in producing specification documents are nonlinear. Drawing on the outcome of participant observation, this paper presents an optimised approach for representing construction specifications. Consequently, the actors typically involved in producing specification documents are identified, the processes suitable for automation are highlighted, and the central role of tacit knowledge is integrated into a conceptual template of construction specifications. By applying the transformation, flow, value (TFV) theory of Lean production, the paper argues that value creation can be realised by eliminating the wastes associated with the traditional preparation of specification documents, with a view to integrating specifications into digital models such as Building Information Models (BIM). Therefore, the paper presents an approach for rationalising the TFV theory as a method for optimising current approaches to generating construction specifications, based on a revised specification-writing model.
Abstract:
This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.
Abstract:
Game strategies have been developed over past decades and used in the fields of economics, engineering, computer science and biology due to their efficiency in solving design optimisation problems. In addition, research on Multi-Objective (MO) and Multidisciplinary Design Optimisation (MDO) has focused on developing robust and efficient optimisation methods to produce quality solutions with less computational time. In this paper, a new optimisation method, Hybrid Game Strategy, for MO problems is introduced and compared to a CMA-ES based optimisation approach. Numerical results obtained from both optimisation methods are compared in terms of computational expense and model quality. The benefits of using game strategies are demonstrated.
Abstract:
This paper describes a vision-only system for place recognition in environments that are traversed at different times of day, when changing conditions drastically affect visual appearance, and at different speeds, where places aren't visited at a consistent linear rate. The major contribution is the removal of wheel-based odometry from the previously presented algorithm (SMART), allowing the technique to operate on any camera-based device; in our case a mobile phone. While we show that the direct application of visual odometry to our night-time datasets does not achieve the level of performance typically needed, the VO requirements of SMART are orthogonal to typical usage: firstly, only the magnitude of the velocity is required, and secondly, the calculated velocity signal only needs to be repeatable in any one part of the environment over day and night cycles, but not necessarily globally consistent. Our results show that the smoothing effect of motion constraints is highly beneficial for achieving a locally consistent, lighting-independent velocity estimate. We also show that the advantage of our patch-based technique used previously for frame recognition, surprisingly, does not transfer to VO, where SIFT demonstrates equally good performance. Nevertheless, we present the SMART system using only vision, which performs sequence-based place recognition in extreme low-light conditions where standard 6-DOF VO fails and which improves place recognition performance over odometry-less benchmarks, approaching that of wheel odometry.
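The SMART implementation itself is not shown here. The following is a minimal sketch of sequence-based place recognition in the spirit the abstract describes, matching short aligned sequences in a frame-difference matrix rather than single frames; the data is synthetic and the velocity-normalisation step discussed in the paper is omitted.

```python
# Illustrative only: sequence-based matching over a frame-difference matrix.
import numpy as np

def difference_matrix(query_frames, ref_frames):
    """Mean absolute difference between every query and reference frame."""
    q = query_frames.reshape(len(query_frames), -1).astype(np.float32)
    r = ref_frames.reshape(len(ref_frames), -1).astype(np.float32)
    return np.abs(q[:, None, :] - r[None, :, :]).mean(axis=2)

def best_match(diff, seq_len=5):
    """For the latest query frame, pick the reference index whose preceding
    diagonal sequence of length seq_len has the lowest mean difference."""
    n_q, n_r = diff.shape
    scores = np.full(n_r, np.inf)
    for j in range(seq_len - 1, n_r):
        idx_q = np.arange(n_q - seq_len, n_q)
        idx_r = np.arange(j - seq_len + 1, j + 1)
        scores[j] = diff[idx_q, idx_r].mean()
    return int(np.argmin(scores)), scores

rng = np.random.default_rng(0)
reference = rng.random((30, 8, 8))                         # 30 low-res reference frames
query = reference[10:20] + 0.05 * rng.random((10, 8, 8))   # revisit of frames 10..19
match, _ = best_match(difference_matrix(query, reference))
print("last query frame matched to reference frame", match)   # expect ~19
```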