907 results for Vehicle frames.
Abstract:
We present algorithms, systems, and experimental results for underwater data muling. In data muling, a mobile agent interacts with static agents to upload, download, or transport data to a different physical location. We consider a system comprising an Autonomous Underwater Vehicle (AUV) and many static Underwater Sensor Nodes (USN) networked together optically and acoustically. The AUV can locate the static nodes using vision and hover above them for data upload. We describe the hardware and software architecture of this underwater system and present experimental data. © 2006 IEEE.
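The abstract does not give the hover controller itself; purely as an illustration of the kind of visual servoing such a system needs, the sketch below drives an AUV to centre a detected node in a downward-looking image before starting an upload. The `detect_node_pixel` helper, gains, and sign conventions are hypothetical assumptions, not details from the paper.

```python
# Hypothetical sketch: centre a visually detected sensor node under a hovering AUV.
# detect_node_pixel(), send_velocity(), and the gains are illustrative assumptions.

def hover_over_node(detect_node_pixel, send_velocity, image_w=640, image_h=480,
                    kp=0.002, tol_px=10):
    """Proportional visual servoing: command horizontal velocities that reduce
    the pixel offset between the detected node and the image centre."""
    u, v = detect_node_pixel()          # node position in the downward-looking image
    err_x = u - image_w / 2.0           # pixels right of centre
    err_y = v - image_h / 2.0           # pixels below centre
    if abs(err_x) < tol_px and abs(err_y) < tol_px:
        send_velocity(0.0, 0.0)         # centred: hold station, begin data upload
        return True
    # Map pixel error to body-frame surge/sway commands (m/s); signs assumed.
    send_velocity(-kp * err_y, kp * err_x)
    return False
```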
Abstract:
The development of autonomous air vehicles can be an expensive research pursuit. To alleviate some of the financial burden of this process, we have constructed a system consisting of four winches, each attached to a central pod (the simulated air vehicle) via cables: a cable-array robot. The system is capable of precisely controlling the three-dimensional position of the pod, allowing effective testing of sensing and control strategies before experimentation on a free-flying vehicle. In this paper, we present a brief overview of the system and provide a practical control strategy for such a system. © 2005 IEEE.
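The paper's control strategy is not reproduced here; as a minimal sketch of the geometry any such cable-array system must solve, the code below computes the cable length each winch must pay out to place the pod at a desired position. The anchor coordinates are illustrative assumptions, not the dimensions of the actual rig.

```python
# Minimal sketch of cable-array inverse kinematics: the length each of the four
# cables must have for the pod to sit at a desired 3D position. Anchor positions
# are illustrative assumptions only.
import numpy as np

ANCHORS = np.array([               # winch attachment points (x, y, z) in metres
    [0.0, 0.0, 5.0],
    [10.0, 0.0, 5.0],
    [10.0, 10.0, 5.0],
    [0.0, 10.0, 5.0],
])

def cable_lengths(pod_position):
    """Return the four cable lengths for a given pod position (x, y, z)."""
    pod = np.asarray(pod_position, dtype=float)
    return np.linalg.norm(ANCHORS - pod, axis=1)

# Example: pod near the centre of the workspace.
print(cable_lengths([5.0, 5.0, 2.0]))   # -> four lengths of about 7.7 m each
```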
Abstract:
We describe a sensor network deployment method using autonomous flying robots. Such networks are suitable for tasks such as large-scale environmental monitoring or for command and control in emergency situations. We describe in detail the algorithms used for deployment and for measuring network connectivity, and we provide experimental data collected from field trials. A particular focus is on determining gaps in connectivity of the deployed network and generating a plan for a second, repair pass to complete the connectivity. This project is the result of a collaboration between three robotics labs (CSIRO, USC, and Dartmouth).
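The deployment and connectivity algorithms are given in the paper; purely to illustrate the gap-finding step, the sketch below builds a connectivity graph from measured links and lists the groups of nodes that ended up isolated, which would then be targeted on the repair pass. The link-quality threshold and data format are assumptions.

```python
# Illustrative sketch: find connectivity gaps in a deployed sensor network so a
# repair pass can be planned. Node IDs, link measurements, and the quality
# threshold are hypothetical, not the paper's field data.

def connected_components(nodes, links, min_quality=0.5):
    """Group nodes into connected components using only links whose measured
    quality exceeds the threshold; multiple components indicate coverage gaps."""
    good = {frozenset((a, b)) for a, b, q in links if q >= min_quality}
    neighbours = {n: set() for n in nodes}
    for pair in good:
        a, b = tuple(pair)
        neighbours[a].add(b)
        neighbours[b].add(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(neighbours[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Nodes 5 and 6 only hear each other, so they form a gap needing a repair pass.
nodes = [1, 2, 3, 4, 5, 6]
links = [(1, 2, 0.9), (2, 3, 0.8), (3, 4, 0.7), (4, 5, 0.2), (5, 6, 0.9)]
print(connected_components(nodes, links))   # -> [{1, 2, 3, 4}, {5, 6}]
```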
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and TCAS). This paper describes the development and evaluation of a real-time vision-based collision detection system suitable for fixed-wing aerial robotics. Using two fixed-wing UAVs to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of 8 to 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units found on commercial off-the-shelf graphics devices. Our chosen GPU device, suitable for integration onto UAV platforms, can be expected to handle real-time processing of 1024 by 768 pixel image frames at a rate of approximately 30 Hz. Flight trials using manned Cessna aircraft, with all processing performed onboard, will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
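As a quick check on the figures quoted above, the warning time is simply detection range divided by closing speed; the sketch below runs that arithmetic for a few assumed head-on closing speeds. The speeds are illustrative values, not figures reported from the trials.

```python
# Back-of-envelope sketch: advance warning time = detection range / closing speed.
# Closing speeds are assumed values for two light fixed-wing aircraft head-on;
# they are not numbers from the paper.

def warning_time(detection_range_m, closing_speed_mps):
    return detection_range_m / closing_speed_mps

for rng in (400.0, 900.0):
    for speed in (50.0, 90.0):      # roughly 97 kt and 175 kt combined closing speed
        print(f"range {rng:5.0f} m, closing {speed:3.0f} m/s "
              f"-> {warning_time(rng, speed):4.1f} s warning")
```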
Abstract:
The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative approach to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45-metre indoor loop and a 1.6 km outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM, with minimal geometric interpretation of the environment.
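The abstract does not spell out how the velocities are extracted; the sketch below shows one simple landmark-free possibility: estimating rotation from the horizontal shift that best aligns column-averaged intensity profiles of consecutive frames, with the residual difference serving as a crude translation cue. This is an assumption about the general idea, not the paper's actual vision module.

```python
# Hedged sketch of a landmark-free visual odometry cue: rotation from the pixel
# shift that best aligns 1D intensity profiles of consecutive frames, and a
# crude translation cue from the residual difference. Illustrative only.
import numpy as np

def profile(frame):
    """Collapse a grayscale frame (H x W array) to a 1D column-intensity profile."""
    return frame.mean(axis=0)

def frame_motion_cues(prev_frame, curr_frame, max_shift=40):
    """Return the horizontal pixel shift (proxy for yaw rate) and the residual
    profile difference (proxy for translational speed) between two frames."""
    p_prev, p_curr = profile(prev_frame), profile(curr_frame)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = p_prev[max(0, s):len(p_prev) + min(0, s)]
        b = p_curr[max(0, -s):len(p_curr) + min(0, -s)]
        err = np.mean(np.abs(a - b))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift, best_err   # shift -> rotation cue, residual -> translation cue
```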
Abstract:
A research project was conducted at Queensland University of Technology on the relationship between the forces at the wheel-rail interface and the rate of track degradation. Data for the study were obtained from an instrumented vehicle that ran repeatedly over a section of Queensland Rail's track in Central Queensland during a 6-month period. The wheel-rail forces had to be correlated with the elements of roughness in the test track profile, which were measured with a variety of equipment. At low frequencies there was strong correlation between forces and profile, as expected, but the correlation diminished as frequencies increased.
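To make the frequency-dependent correlation concrete, the sketch below computes magnitude-squared coherence between a force signal and a track-profile signal band by band with SciPy. The synthetic signals are placeholders standing in for the instrumented-vehicle data, which is not reproduced here.

```python
# Illustrative sketch: frequency-dependent correlation between wheel-rail force
# and track profile via magnitude-squared coherence. Signals are synthetic
# placeholders, not the study's measurements.
import numpy as np
from scipy.signal import coherence

fs = 200.0                                   # assumed sample rate, Hz
rng = np.random.default_rng(0)
n = int(60 * fs)                             # one minute of synthetic data

track_profile = rng.standard_normal(n)       # stand-in for measured roughness
# Force follows the low-frequency content of the profile (moving average acts
# as a low-pass) plus noise, mimicking the trend reported in the study.
force = (np.convolve(track_profile, np.ones(25) / 25, mode="same")
         + 0.3 * rng.standard_normal(n))

f, Cxy = coherence(force, track_profile, fs=fs, nperseg=1024)
for lo, hi in [(0, 5), (5, 20), (20, 50)]:
    band = (f >= lo) & (f < hi)
    print(f"{lo:2d}-{hi:2d} Hz band: mean coherence {Cxy[band].mean():.2f}")
```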
Dynamic analysis of on-board mass data to determine tampering in heavy vehicle on-board mass systems
Abstract:
Transport Certification Australia Limited, jointly with the National Transport Commission, has undertaken a project to investigate the feasibility of on-board mass monitoring (OBM) devices for regulatory purposes. OBM increases jurisdictional confidence in operational heavy vehicle compliance. This paper covers technical issues regarding the potential use of dynamic data from OBM systems to indicate that tampering has occurred. The tamper-evidence and accuracy of current OBM systems needed to be determined before any regulatory schemes were put in place for their use. Tests were performed to determine the potential for, and ease of, tampering. An algorithm was developed to detect tamper events, and its results are detailed.
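The project's tamper-detection algorithm is not described in the abstract; purely to illustrate the kind of check dynamic data makes possible, the sketch below flags windows where the on-board mass signal stops showing normal dynamic variation, one plausible symptom of a sensor being clamped or substituted. The window length and threshold are assumptions.

```python
# Hypothetical sketch of a dynamic-data tamper check for on-board mass (OBM)
# readings: a live load cell on a moving vehicle should show continual small
# fluctuations, so a window with near-zero variation is suspicious. Window
# length and threshold are illustrative, not the project's algorithm.
import numpy as np

def flag_suspect_windows(mass_kg, window=200, min_std_kg=2.0):
    """Return start indices of windows whose standard deviation falls below the
    expected dynamic variation of a genuine OBM signal."""
    mass_kg = np.asarray(mass_kg, dtype=float)
    suspect = []
    for start in range(0, mass_kg.size - window + 1, window):
        if mass_kg[start:start + window].std() < min_std_kg:
            suspect.append(start)
    return suspect

# Example: a genuine dynamic signal followed by a flat, possibly tampered, span.
rng = np.random.default_rng(1)
signal = np.concatenate([42000 + 50 * rng.standard_normal(1000),
                         np.full(400, 42000.0)])
print(flag_suspect_windows(signal))   # -> start indices inside the flat span
```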
Abstract:
Objective:
• Feasibility programme for on-board mass (OBM) monitoring of heavy vehicles (HVs)
• Australian road authorities through Transport Certification Australia (TCA)
• Accuracy of contemporary, commercially-available OBM units in Australia
• Results need to be addressed/incorporated into specifications for Stage 2 of the Intelligent Access Program (IAP) by Transport Certification Australia
Abstract:
If mobile robots are to perform useful tasks in the real world, they will require a catalog of fundamental navigation competencies and a means to select between them. In this paper we describe our work on strongly vision-based competencies: road following, person or vehicle following, and pose and position stabilization. Results from experiments on an outdoor autonomous tractor, a car-like vehicle, are presented.
Abstract:
Maintenance is a time-consuming and expensive task for any golf course or driving range manager. For a golf course the primary tasks are grass mowing and maintenance (fertilizer and herbicide spreading), while a driving range requires mowing, maintenance, and ball collection. All these tasks require an operator to drive a vehicle along paths which are generally predefined. This paper presents some preliminary in-field testing results for an automated tractor vehicle performing golf ball collection on an actual driving range, and mowing on difficult unstructured terrain.
Abstract:
In this paper, we outline the sensing system used for the visual pose control of our experimental car-like vehicle, the autonomous tractor. The sensing system consists of a magnetic compass, an omnidirectional camera and a low-resolution odometry system. In this work, information from these sensors is fused using complementary filters. Complementary filters provide a means of fusing information from sensors with different characteristics in order to produce a more reliable estimate of the desired variable. Here, the range and bearing of landmarks observed by the vision system are fused with odometry information and a vehicle model, providing a more reliable estimate of these states. We also present a method of combining a compass sensor with odometry and a vehicle model to improve the heading estimate.
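As a minimal sketch of the compass-plus-odometry fusion described above, the complementary filter below integrates the heading rate predicted from odometry through a kinematic bicycle (vehicle) model and slowly corrects it toward the compass reading. The gain, wheelbase, and time step are assumed values, not the tractor's parameters.

```python
# Minimal complementary-filter sketch for heading estimation: the heading rate
# predicted from odometry and a kinematic bicycle model is integrated (high-pass
# path), then nudged toward the magnetic compass reading (low-pass path).
# Gain, wheelbase, and time step are assumed values.
import math

WHEELBASE = 1.5     # m, assumed
DT = 0.05           # s, assumed update period
ALPHA = 0.98        # trust odometry over short horizons, the compass over long ones

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def update_heading(theta_est, speed, steer_angle, compass_heading):
    """One complementary-filter step returning the new heading estimate (rad)."""
    # Odometry / vehicle-model prediction of heading change (kinematic bicycle).
    theta_pred = theta_est + (speed / WHEELBASE) * math.tan(steer_angle) * DT
    # Blend: mostly the smooth odometry prediction, corrected toward the compass.
    innovation = wrap(compass_heading - theta_pred)
    return wrap(theta_pred + (1.0 - ALPHA) * innovation)
```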
Abstract:
In this paper, we develop the switching controller presented by Lee et al. for the pose control of a car-like vehicle to allow the use of an omnidirectional vision sensor. To this end we incorporate an extension to a hypothesis on the navigation behaviour of the desert ant, Cataglyphis bicolor, which leads to a correspondence-free, landmark-based vision technique. The method we present allows positioning to a learnt location based on feature bearing-angle and range discrepancies between the robot's current view of the environment and the view at the learnt location. We present simulations and experimental results, the latter obtained using our outdoor mobile platform.
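The control law itself is developed in the paper; purely to illustrate the underlying idea of homing on bearing and range discrepancies without landmark correspondences, the sketch below summarises each view by the average of its landmark vectors and uses the difference between the current and learnt summaries as an approximate home vector. This average-landmark-vector style summary is a stand-in for the paper's technique, and both views are assumed to be expressed in a compass-aligned frame.

```python
# Illustrative sketch of correspondence-free visual homing in the spirit of the
# average-landmark-vector idea: each view is reduced to the mean of its landmark
# vectors (from bearing and range), and the difference between the current and
# learnt summaries points approximately back to the learnt location. Both views
# are assumed to be in a shared, compass-aligned frame. Not the paper's controller.
import math

def average_landmark_vector(observations):
    """observations: list of (bearing_rad, range_m) pairs from one viewpoint."""
    x = sum(r * math.cos(b) for b, r in observations) / len(observations)
    y = sum(r * math.sin(b) for b, r in observations) / len(observations)
    return x, y

def home_vector(current_obs, learnt_obs):
    """Approximate displacement to drive along to return to the learnt location,
    computed from the two per-view summaries alone (no landmark matching)."""
    cx, cy = average_landmark_vector(current_obs)
    lx, ly = average_landmark_vector(learnt_obs)
    return cx - lx, cy - ly
```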