148 results for "Ground-based tracking"
Abstract:
This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.
Abstract:
Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.
Abstract:
This paper summarises the achievements of the Smart Skies Project, a three-year, multi-award-winning international project that researched, developed and extensively flight-tested four enabling aviation technologies: an electro-optical mid-air collision avoidance system, a static obstacle avoidance system, a mobile ground-based air traffic surveillance system, and a global automated airspace separation management system. The project included the development of manned and unmanned flight test aircraft, which were used to characterise the performance of the prototype systems for a range of realistic scenarios under a variety of environmental conditions. In addition to the collection of invaluable flight data, the project achieved world-firsts in the demonstration of future automated collision avoidance and separation management concepts. This paper summarises these outcomes, the overall objectives of the project, the research and development of the prototype systems, the engineering of the flight test systems, and the results obtained from flight testing.
Abstract:
This paper presents a shared autonomy control scheme for a quadcopter that is suited for inspection of vertical infrastructure — tall man-made structures such as streetlights, electricity poles or the exterior surfaces of buildings. Current approaches to inspecting such structures are slow, expensive, and potentially hazardous. Low-cost aerial platforms with an ability to hover now have sufficient payload and endurance for this kind of task, but require significant human skill to fly. We develop a control architecture that enables synergy between the ground-based operator and the aerial inspection robot. An unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure. The operator uses their domain knowledge and problem-solving skills to guide the robot into difficult-to-reach locations to inspect and assess the condition of the infrastructure. The operator commands the robot in a local task coordinate frame with limited degrees of freedom (DOF): for instance, up/down, left/right, toward/away with respect to the infrastructure. We therefore avoid problems of global mapping and navigation while providing an intuitive interface to the operator. We describe algorithms for pole detection, robot velocity estimation with respect to the pole, and position estimation in 3D space, as well as the control algorithms and overall system architecture. We present initial results of shared autonomy of a quadrotor with respect to a vertical pole; robot performance is evaluated by comparison with motion capture data.
Abstract:
Recently, vision-based systems have been deployed in professional sports to track the ball and players to enhance analysis of matches. Due to their unobtrusive nature, vision-based approaches are preferred to wearable sensors (e.g. GPS or RFID sensors) as they do not require players or balls to be instrumented prior to matches. Unfortunately, in continuous team sports where players need to be tracked continuously over long periods of time (e.g. 35 minutes in field hockey or 45 minutes in soccer), current vision-based tracking approaches are not reliable enough to provide fully automatic solutions. As such, human intervention is required to fix up missed or false detections. However, when the sheer amount of data being generated makes human intervention impractical, the data cannot be used because of the missing or noisy detections. In this paper, we investigate two representations based on raw player detections (and not tracking) which are immune to missed and false detections. Specifically, we show that both team occupancy maps and centroids can be used to detect team activities, while the occupancy maps can be used to retrieve specific team activities. An evaluation on over 8 hours of field hockey data captured at a recent international tournament demonstrates the validity of the proposed approach.
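As a rough illustration of the occupancy-map representation described in this abstract, raw per-frame player detections can be binned into a coarse grid over the pitch; the grid resolution and pitch dimensions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def occupancy_map(detections, field=(91.4, 55.0), grid=(12, 8)):
    """Accumulate raw (x, y) player detections into a coarse grid.

    detections: iterable of (x, y) positions in metres on the field
    (91.4 x 55.0 m is a standard field-hockey pitch). Because the map
    is built from detections rather than tracks, a missed or false
    detection only perturbs one cell count.
    """
    occ = np.zeros(grid)
    for x, y in detections:
        i = min(int(x / field[0] * grid[0]), grid[0] - 1)
        j = min(int(y / field[1] * grid[1]), grid[1] - 1)
        occ[i, j] += 1
    total = occ.sum()
    return occ / total if total else occ  # normalise to a distribution

# Example frame: ten detections clustered toward one attacking end.
frame = [(80.0, 30.0), (85.1, 25.2), (78.3, 40.0), (82.0, 27.5),
         (88.9, 33.3), (75.0, 20.0), (81.5, 45.1), (79.9, 29.0),
         (84.2, 31.7), (86.6, 26.4)]
m = occupancy_map(frame)
```

Normalised maps like `m` can then be compared frame-to-frame (e.g. by histogram distance) to detect or retrieve team activities.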
Abstract:
Over the past decade, vision-based tracking systems have been successfully deployed in professional sports such as tennis and cricket for enhanced broadcast visualizations as well as aiding umpiring decisions. Despite the high level of accuracy of the tracking systems and the sheer volume of spatiotemporal data they generate, the use of this high-quality data for quantitative player performance analysis and prediction has been lacking. In this paper, we present a method which predicts the location of a future shot based on the spatiotemporal parameters of the incoming shots (i.e. shot speed, location, angle and feet location) from such a vision system. The ability to accurately predict future short-term events has enormous implications for automatic sports broadcasting, in addition to the coaching and commentary domains. Using Hawk-Eye data from the 2012 Australian Open Men's draw, we utilize a Dynamic Bayesian Network to model player behaviors and use an online model adaptation method to match the player's behavior and enhance shot predictability. To show the utility of our approach, we analyze the shot predictability of the top three seeds in the tournament (Djokovic, Federer and Nadal), as they played the most games.
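A much-simplified stand-in for the Dynamic Bayesian Network described above is a first-order transition model over discretised shot locations, updated online as new shots are observed. The region labels and counts here are purely illustrative, not the paper's model or data.

```python
from collections import defaultdict, Counter

class ShotPredictor:
    """First-order model: next shot region conditioned on incoming region.

    A simplification of the paper's DBN (which also conditions on speed,
    angle and feet location); online adaptation is mimicked by simply
    updating the counts after every observed rally.
    """
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, incoming_region, next_region):
        self.counts[incoming_region][next_region] += 1

    def predict(self, incoming_region):
        c = self.counts[incoming_region]
        return c.most_common(1)[0][0] if c else None

p = ShotPredictor()
for inc, nxt in [("deuce", "ad"), ("deuce", "ad"),
                 ("deuce", "centre"), ("ad", "deuce")]:
    p.update(inc, nxt)
# After three observed replies from the deuce court, "ad" is the mode.
```

In practice one would predict a full distribution rather than the mode, and decay old counts so the model tracks a player's changing tactics.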
Abstract:
Aerial applications of granular insecticides are preferable because they can effectively penetrate vegetation, there is less drift, and there is no loss of product due to evaporation. We aimed to 1) assess the field efficacy of VectoBac G to control Aedes vigilax (Skuse) in saltmarsh pools, 2) develop a stochastic-modeling procedure to monitor application quality, and 3) assess the distribution of VectoBac G after an aerial application. Because ground-based studies with Ae. vigilax immatures found that VectoBac G provided effective control below the recommended label rate of 7 kg/ha, we trialed a nominated aerial rate of 5 kg/ha as a case study. Our distribution pattern modeling method indicated that the variability in the number of VectoBac G particles captured in catch trays was greater than expected for 5 kg/ha, and that the widely accepted contour mapping approach to visualizing the deposition pattern produced spurious results and was therefore not statistically appropriate. Based on the results of distribution pattern modeling, we calculated the catch-tray size required to analyze the distribution of aerially applied granular formulations. The minimum catch-tray size for products with large granules was 4 m² for Altosid pellets and 2 m² for VectoBac G. In contrast, the minimum catch-tray size for Altosid XRG, Aquabac G, and Altosand, with smaller granule sizes, was 1 m². Little gain in precision would be made by increasing the catch-tray size further, when the increased workload and infrastructure are considered. Our improved methods for monitoring the distribution pattern of aerially applied granular insecticides can be adapted for use by both public health and agricultural contractors.
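One standard way to quantify the kind of excess variability in catch-tray counts that this abstract reports is the variance-to-mean ratio (dispersion index): under uniform Poisson deposition at the nominal rate it is near 1, while clumped deposition drives it well above 1. This is a generic statistical sketch, not the paper's specific stochastic-modeling procedure, and the counts are invented for illustration.

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of granule counts across catch trays.

    Uses the unbiased sample variance. Values near 1 are consistent
    with uniform (Poisson) deposition; values well above 1 indicate
    clumped, uneven deposition.
    """
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

# Illustrative counts only (not field data): visibly clumped deposition.
trays = [2, 0, 9, 1, 14, 0, 3, 11, 1, 0]
d = dispersion_index(trays)  # well above 1 for these clumped counts
```

A formal version would compare `d * (n - 1)` against a chi-squared distribution with `n - 1` degrees of freedom to test the uniform-deposition hypothesis.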
Abstract:
This thesis is a study of control methods for six-legged robots, based on mathematical modeling and simulation. A new joint controller is proposed and tested in simulation that uses joint angles and leg reaction force as inputs to generate a torque, and a method to optimise this controller is formulated and validated. Simulation shows that a hexapod can walk on flat ground using PID controllers with just four target configurations and a set of leg coordination rules, which provided the basis for the design of the new controller.
Abstract:
Sensor networks for environmental monitoring present enormous benefits to the community and society as a whole. Currently there is a need for low-cost, compact, solar-powered sensors suitable for deployment in rural areas. The purpose of this research is to develop both a ground-based wireless sensor network and data collection using unmanned aerial vehicles (UAVs). The ground-based sensor system is capable of measuring environmental data such as temperature or air quality using cost-effective, low-power sensors. Each sensor is configured so that its data is stored on an ATMega16 microcontroller, which can communicate with a UAV flying overhead using UAV communication protocols. The data is then either sent to the ground in real time or stored on the UAV by a microcontroller until it lands or is close enough to transmit the data to the ground station.
Abstract:
Australian farmers have used precision agriculture technology for many years in the form of ground-based and satellite systems. However, ground-based systems require the use of vehicles to analyse a wide area, which can be time-consuming and cost-ineffective, and satellite imagery may not be accurate enough for analysis. Low-cost unmanned aerial vehicles (UAVs) present an effective method of analysing large plots of agricultural fields. As a UAV can travel over long distances and fly over multiple plots, it allows more data to be captured by a sampling device such as a multispectral camera and analysed thereafter. This would allow farmers to analyse the health of their crops and thus focus their efforts on the areas that need attention. This project evaluates a multispectral camera for use on a UAV for agricultural applications.
Abstract:
An amorphous silicon carbonitride (Si1-x-yCxNy, x = 0.43, y = 0.31) coating was deposited on a polyimide substrate using the magnetron-sputtering method. Exposure tests of the coated polyimide under an atomic oxygen beam and vacuum ultraviolet radiation were performed in a ground-based simulator. Erosion kinetics measurements indicated that the erosion yield of the Si0.26C0.43N0.31 coating was about 1.5 × 10-26 and 1.8 × 10-26 cm3/atom during exposure to the atomic oxygen beam alone and to the simultaneous atomic oxygen beam and vacuum ultraviolet radiation, respectively. These values were 2 orders of magnitude lower than that of the bare polyimide substrate. Scanning electron and atomic force microscopy, X-ray photoelectron spectroscopy, and Fourier-transform infrared spectroscopy investigations indicated that during exposure an oxide-rich layer composed of SiO2 and minor Si-C-O formed on the surface of the Si0.26C0.43N0.31 coating, which was the main reason for its excellent resistance to atomic oxygen attack. Moreover, vacuum ultraviolet radiation could promote the breakage of chemical bonds with low binding energy, such as C-N, C=N, and C-C, and slightly enhance the atomic oxygen erosion rate.
Abstract:
Performance evaluation of object tracking systems is typically performed after the data has been processed, by comparing tracking results to ground truth. While this approach is fine for offline testing, it does not allow real-time analysis of the system's performance, which may be of use for live systems to either automatically tune the system or report reliability. In this paper, we propose three metrics that can be used to dynamically assess the performance of an object tracking system. Outputs and results from various stages of the tracking system are used to obtain measures that indicate the performance of motion segmentation, object detection and object matching. The proposed dynamic metrics are shown to accurately indicate tracking errors when metric results are visually compared to tracking output, and to display trends similar to the ETISEO metrics when different tracking configurations are compared.
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of the sensors involved (as opposed to radar). This paper describes the development and evaluation of a vision-based collision detection algorithm suitable for fixed-wing aerial robotics. The system was evaluated using highly realistic vision data of the moments leading up to a collision. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We make use of the enormous potential of graphics processing units to achieve processing rates of 30 Hz (for images of size 1024-by-768). Currently, integration into the final platform is under way.