980 results for Autonomous underwater vehicle
Abstract:
This report summarizes the current state of the art in cooperative vehicle-highway automation systems in Europe and Asia based on a series of meetings, demonstrations, and site visits, combined with the results of a literature review. This review covers systems that provide drivers with a range of automation capabilities, from driver assistance to fully automated driving, with an emphasis on cooperative systems that involve active exchanges of information between the vehicles and the roadside and among separate vehicles. The trends in development and deployment of these systems are examined by country, and the similarities and differences relative to the U.S. situation are noted, leading toward recommendations for future U.S. action. The Literature Review on Recent International Activity in Cooperative Vehicle-Highway Automation Systems is published separately as FHWA-HRT-13-025.
Abstract:
This literature review supports the report, Recent International Activity in Cooperative Vehicle-Highway Automation Systems. It reviews the published literature in English dating from 2007 or later about non-U.S.-based work on cooperative vehicle-highway automation systems. This review covers work performed in Europe and Japan, with application to transit buses, heavy trucks, and passenger cars. In addition to fully automated driving of the vehicles (without human intervention), it also covers partial automation systems, which automate subsets of the total driving process. Recent International Activity in Cooperative Vehicle-Highway Automation Systems is published separately as FHWA-HRT-12-033.
Abstract:
A reliable perception of the real world is a key feature for an autonomous vehicle and for Advanced Driver Assistance Systems (ADAS). Obstacle detection (OD) is one of the main components for the correct reconstruction of the dynamic world. Historical approaches based on stereo vision and other 3D perception technologies (e.g. LIDAR) have been adapted first to ADAS and later to autonomous ground vehicles, providing excellent results. Obstacle detection is a very broad field, and many works have appeared in this domain in recent years. Academic research has clearly established the essential role of these systems in realizing active safety systems for accident prevention, reflecting the innovative systems introduced by industry. These systems need to accurately assess situational criticalities and simultaneously assess the driver's awareness of those criticalities; this requires obstacle detection algorithms to be reliable and accurate, providing real-time output, a stable and robust representation of the environment, and an estimation independent of lighting and weather conditions. Initial systems relied on only one exteroceptive sensor (e.g. radar or laser for ACC and a camera for LDW) in addition to proprioceptive sensors such as wheel speed and yaw rate sensors. However, current systems, such as ACC operating over the entire speed range or autonomous braking for collision avoidance, require multiple sensors, since no single sensor can meet these requirements on its own. This has led the community to combine sensors in order to exploit the benefits of each. Pedestrian and vehicle detection are among the major thrusts in the assessment of situational criticalities and remain an active area of research. ADAS are the most prominent use case of pedestrian and vehicle detection. Vehicles should be equipped with sensing capabilities able to detect and act on objects in dangerous situations where the driver would not be able to avoid a collision. With regard to pedestrians and vehicles, a full ADAS or autonomous vehicle would include not only detection but also tracking, orientation, intent analysis, and collision prediction. The system presented here detects obstacles using a probabilistic occupancy grid built from a multi-resolution disparity map. Obstacle classification is based on an AdaBoost SoftCascade trained on Aggregate Channel Features. A final stage of tracking and fusion guarantees the stability and robustness of the result.
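A minimal Python sketch of the kind of detection stage described above (an occupancy grid built from a stereo disparity map) is given below. The camera model (focal length f, principal point cx/cy, baseline b, camera height), the simple height gate used in place of ground-plane fitting, and all grid parameters are illustrative assumptions, not taken from the abstract.

import numpy as np

def disparity_to_occupancy(disparity, f, cx, cy, b, cam_height=1.5,
                           cell=0.2, x_range=(-10.0, 10.0), z_range=(0.0, 40.0),
                           hit=0.85, miss=-0.4):
    """Return a log-odds occupancy grid (rows: depth Z, cols: lateral X)."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0.5                      # drop invalid / near-infinite-range pixels
    d = disparity[valid].astype(np.float64)

    Z = f * b / d                                # depth from disparity
    X = (u[valid] - cx) * Z / f                  # lateral offset
    Y = (v[valid] - cy) * Z / f                  # height below the optical axis (image y grows downward)

    # Simple height gate instead of ground-plane fitting: points between roughly
    # 0.3 m and 3.0 m above the road are treated as obstacle evidence, the rest
    # (road surface, overhead structures) as free space.
    obstacle = (Y < cam_height - 0.3) & (Y > cam_height - 3.0)

    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    grid = np.zeros((nz, nx))

    ix = ((X - x_range[0]) / cell).astype(int)
    iz = ((Z - z_range[0]) / cell).astype(int)
    inside = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)

    # Log-odds update: obstacle points raise a cell, free-space points lower it.
    np.add.at(grid, (iz[inside & obstacle], ix[inside & obstacle]), hit)
    np.add.at(grid, (iz[inside & ~obstacle], ix[inside & ~obstacle]), miss)
    return grid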
Abstract:
Hardly a day goes by without the release of a handful of news stories about autonomous vehicles (or AVs for short). The proverbial "tipping point" of awareness has been reached in the public consciousness as AV technology quickly becomes the new focus of firms from Silicon Valley to Detroit and beyond. Automation has had, and will continue to have, far-reaching implications for many human activities, but for driving, the technology is here. Google has been in talks with automaker Ford (1), Elon Musk has declared that Tesla will have the appropriate technology in two years (2), GM has paired up with Lyft (3), Uber is in development mode (4), Microsoft and Volvo have announced a partnership (5), Apple has been piloting its top-secret project "Titan" (6), Toyota is working on its own technology (7), as is BMW (8). Audi (9) made a splash by sending a driverless A7 concept car 550 miles from San Francisco to Las Vegas just in time to roll into the 2016 Consumer Electronics Show. Clearly, the race is on.
Abstract:
Recent years have witnessed increased development of small, autonomous fixed-wing Unmanned Aerial Vehicles (UAVs). In order to unlock widespread applicability of these platforms, they need to be capable of operating under a variety of environmental conditions. Due to their small size, low weight, and low speeds, they must be able to cope with wind speeds approaching or even exceeding their nominal airspeed. In this thesis, a nonlinear-geometric guidance strategy is presented that addresses this problem. More broadly, a methodology is proposed for the high-level control of non-holonomic unicycle-like vehicles in the presence of strong flowfields (e.g. winds, underwater currents) which may exceed the maximum vehicle speed. The proposed strategy guarantees convergence to a safe and stable vehicle configuration with respect to the flowfield, while preserving some tracking performance with respect to the target path. As an alternative approach, an algorithm based on Model Predictive Control (MPC) is developed, and a comparison between the advantages and disadvantages of both approaches is drawn. Evaluations in simulation and a challenging real-world flight experiment in very windy conditions confirm the feasibility of the proposed guidance approach.
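The sketch below illustrates, in Python, the underlying problem setting: 2D unicycle kinematics with an additive flowfield. The heading command is a generic cross-track vector field plus a wind-triangle correction, with a fall-back to an upwind heading when the flowfield is stronger than the airspeed; it is not the nonlinear-geometric law from the thesis, and all gains and numbers are illustrative assumptions.

import numpy as np

def heading_command(pos, airspeed, wind, k=0.05):
    """Heading command to track the x-axis (y = 0)."""
    # If the wind exceeds the airspeed no ground course is achievable; point the
    # airspeed vector against the wind to minimize drift (safe-configuration heuristic).
    if np.linalg.norm(wind) >= airspeed:
        return np.arctan2(-wind[1], -wind[0])

    # Desired ground course from a simple cross-track vector field toward y = 0.
    course = np.arctan2(-k * pos[1], 1.0)
    vg_des = np.array([np.cos(course), np.sin(course)])

    # Wind-triangle correction: choose heading psi so that
    # airspeed*[cos psi, sin psi] + wind is parallel to the desired course.
    wind_cross = wind[0] * vg_des[1] - wind[1] * vg_des[0]
    return course + np.arcsin(np.clip(wind_cross / airspeed, -1.0, 1.0))

def simulate(steps=2000, dt=0.05, airspeed=10.0, wind=np.array([-8.0, 3.0]), k_psi=2.0):
    """Integrate unicycle kinematics with an additive flowfield."""
    pos, psi = np.array([0.0, 50.0]), 0.0
    for _ in range(steps):
        psi_cmd = heading_command(pos, airspeed, wind)
        err = np.arctan2(np.sin(psi_cmd - psi), np.cos(psi_cmd - psi))
        psi += k_psi * err * dt                  # first-order heading dynamics
        pos = pos + (airspeed * np.array([np.cos(psi), np.sin(psi)]) + wind) * dt
    return pos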
Abstract:
Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment while constructing a map of that environment at the same time. Mobile platforms that make use of SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM system consists of four main components, namely, experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of recognizing significant features in the unknown environment, such as corners, edges, walls, and interior features. In this work, an original feature extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. This algorithm has been constructed by combining the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The reconstructed maps obtained through simulations and experimental data with the fusion algorithm are compared to the maps obtained with existing feature extraction algorithms. Based on the results obtained, it is suggested that the proposed algorithm can be employed as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The feature extraction fusion algorithm requires the vehicle pose estimate as an input, which is obtained from a vehicle pose estimation model. For vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle, and different combinations of sensors are studied (e.g., encoder, gyroscope, or encoder and gyroscope). The different sensor fusion techniques for pose estimation are experimentally studied and compared, and the vehicle pose estimation model that produces the least error is used to generate inputs for the feature extraction algorithm. In the experimental studies, two different environmental configurations are used, one without interior features and another with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms for performing concurrent localization and mapping in harsh environments.
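As a minimal Python sketch of one of the sensor combinations mentioned above (wheel encoders plus a gyroscope), the function below advances a differential-drive pose estimate by dead reckoning with a simple complementary weighting of the two heading sources. The wheel base, the weight alpha, and the fusion rule itself are illustrative assumptions, not the author's pose estimation model.

import numpy as np

def update_pose(pose, d_left, d_right, gyro_rate, dt, wheel_base=0.35, alpha=0.9):
    """Advance (x, y, theta) by one time step.

    d_left, d_right : wheel displacements from the encoders [m]
    gyro_rate       : yaw rate from the gyroscope [rad/s]
    alpha           : weight on the gyro heading change vs. the encoder estimate
    """
    x, y, theta = pose
    d_center = 0.5 * (d_left + d_right)              # forward displacement
    dtheta_enc = (d_right - d_left) / wheel_base     # heading change from encoders
    dtheta_gyro = gyro_rate * dt                     # heading change from the gyro

    # Complementary fusion: trust the gyro for rotation, the encoders for translation.
    dtheta = alpha * dtheta_gyro + (1.0 - alpha) * dtheta_enc

    # Midpoint integration of the unicycle model, with heading wrapped to (-pi, pi].
    x += d_center * np.cos(theta + 0.5 * dtheta)
    y += d_center * np.sin(theta + 0.5 * dtheta)
    theta = np.arctan2(np.sin(theta + dtheta), np.cos(theta + dtheta))
    return np.array([x, y, theta])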
Abstract:
Assessment and prediction of the impact of vehicular traffic emissions on air quality and exposure levels require knowledge of vehicle emission factors. The aim of this study was the quantification of emission factors from an on-road measurement program conducted over twelve months at two sites in Brisbane: 1) a freeway-type site (free-flowing traffic at about 100 km/h, fleet dominated by small passenger cars: Tora St); and 2) a busy urban road with stop/start traffic and a fleet comprising a significant fraction of heavy-duty vehicles (Ipswich Rd). A physical model linking concentrations measured at the road under specific meteorological conditions with motor vehicle emission factors was applied for data analysis. The focus of the study was on submicrometer particles; however, the measurements also included supermicrometer particles, PM2.5, carbon monoxide, sulfur dioxide, and oxides of nitrogen. The results of the study are summarised in this paper. In particular, the emission factors for submicrometer particles were 6.08 × 10¹³ and 5.15 × 10¹³ particles vehicle⁻¹ km⁻¹ for Tora St and Ipswich Rd, respectively, and, for supermicrometer particles at Tora St, 1.48 × 10⁹ particles vehicle⁻¹ km⁻¹. Emission factors of diesel vehicles at both sites were about an order of magnitude higher than those of gasoline-powered vehicles. For submicrometer particles, the emission factors for gasoline vehicles were 6.08 × 10¹³ and 4.34 × 10¹³ particles vehicle⁻¹ km⁻¹ for Tora St and Ipswich Rd, respectively, and for diesel vehicles they were 5.35 × 10¹⁴ and 2.03 × 10¹⁴ particles vehicle⁻¹ km⁻¹ for Tora St and Ipswich Rd, respectively. For supermicrometer particles at Tora St, the emission factors were 2.59 × 10⁹ and 1.53 × 10¹² particles vehicle⁻¹ km⁻¹ for gasoline and diesel vehicles, respectively.
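As a back-of-the-envelope illustration only (not the paper's physical model), if the fleet-average emission factor is taken as a traffic-weighted mean of the gasoline and diesel factors, the reported Ipswich Rd values imply a diesel share of the fleet; the weighted-mean assumption and the resulting figure are purely illustrative.

def implied_diesel_fraction(ef_fleet, ef_gasoline, ef_diesel):
    """Solve ef_fleet = f*ef_diesel + (1 - f)*ef_gasoline for the diesel fraction f."""
    return (ef_fleet - ef_gasoline) / (ef_diesel - ef_gasoline)

# Submicrometer-particle factors for Ipswich Rd (particles per vehicle per km),
# taken from the abstract above.
f = implied_diesel_fraction(ef_fleet=5.15e13, ef_gasoline=4.34e13, ef_diesel=2.03e14)
print(f"Implied diesel share of the Ipswich Rd fleet: {f:.1%}")  # roughly 5% under this assumption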
Underwater Emissions from a Two-Stroke Outboard Engine: Can the Type of Lubricant Make a Difference?