933 results for Range finders
Abstract:
The registration of full 3-D models is an important task in computer vision. Range finders reconstruct only a partial view of the object. Many authors have proposed techniques to register 3D surfaces from multiple views, and there are basically two aspects to consider: first, coarse registration, in which some sort of correspondences are established; and second, accurate (fine) registration, which refines that initial estimate to obtain a better solution. A survey of the most common techniques is presented, including experimental results for some of them.
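The accurate (fine) registration step mentioned in this survey is commonly realised with variants of the Iterative Closest Point (ICP) algorithm. The Python sketch below is only a rough illustration of that idea, not taken from any of the surveyed papers: it repeatedly pairs nearest points between two partial views and solves for the best rigid transform with an SVD; the iteration count and convergence threshold are illustrative assumptions.

```python
# Minimal ICP-style fine registration sketch (illustrative, not a surveyed method).
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, iters=30, tol=1e-6):
    """Iteratively align `source` to `target`, assuming a coarse initial alignment."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        dist, idx = tree.query(src)                 # correspondences: nearest neighbours
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                          # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err
```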
Abstract:
For robots to operate in human environments, they must be able to make their own maps, because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally, robots have used sonar, infra-red, or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years, and they have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors, such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.
Abstract:
This paper describes an autonomous docking system and web interface that allow long-term unaided use of a sophisticated robot by untrained web users. These systems have been applied to the biologically inspired RatSLAM system as a foundation for testing both its long-term stability and its practicality. While docking and web interface systems already exist, this system allows for a significantly larger margin of error in docking accuracy due to the mechanical design, thereby increasing robustness against navigational errors. Also, a standard vision sensor is used for both long-range and short-range docking, in contrast to the many systems that require both omni-directional cameras and high-resolution laser range finders for navigation. The web interface has been designed to accommodate the significant delays experienced on the Internet, and to facilitate the non-Cartesian operation of the RatSLAM system.
Abstract:
We aim to demonstrate unaided visual 3D pose estimation and map reconstruction using both monocular and stereo vision techniques. To date, our work has focused on collecting data from Unmanned Aerial Vehicles, which generates a number of significant issues specific to the application. Such issues include scene reconstruction degeneracy from planar data, poor structure initialisation for monocular schemes, and difficult 3D reconstruction due to high feature covariance. Most modern Visual Odometry (VO) and related SLAM systems make use of a number of sensors to inform pose and map generation, including laser range-finders, radar, inertial units and vision [1]. By fusing sensor inputs, the advantages and deficiencies of each sensor type can be handled in an efficient manner. However, many of these sensors are costly, and each adds to the complexity of such robotic systems. With continual advances in the abilities, small size, passivity and low cost of visual sensors, along with the dense, information-rich data that they provide, our research focuses on the use of unaided vision to generate pose estimates and maps from robotic platforms. We propose that highly accurate (±5 cm) dense 3D reconstructions of large-scale environments can be obtained in addition to the localisation of the platform described in other work [2]. Using images taken from cameras, our algorithm simultaneously generates an initial visual odometry estimate and scene reconstruction from visible features, then passes this estimate to a bundle-adjustment routine to optimise the solution. From this optimised scene structure and the original images, we aim to create a detailed, textured reconstruction of the scene. By applying such techniques to a unique airborne scenario, we hope to expose new robotic applications of SLAM techniques. The ability to obtain highly accurate 3D measurements of an environment at a low cost is critical in a number of agricultural and urban monitoring situations. We focus on cameras as such sensors are small, cheap and light-weight, and can therefore be deployed in smaller aerial vehicles. This, coupled with the ability of small aerial vehicles to fly near to the ground in a controlled fashion, will assist in increasing the effective resolution of the reconstructed maps.
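As a rough illustration of the bundle-adjustment refinement step described above, the sketch below poses the problem as nonlinear least squares over camera poses and 3-D points, minimising reprojection error. The axis-angle pose parameterisation, simple pinhole model with a single focal length, and use of scipy are assumptions for illustration, not the authors' implementation.

```python
# Illustrative bundle-adjustment skeleton (assumed parameterisation, not the paper's code).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points, rvecs, tvecs, f):
    """Project 3-D points with per-observation camera pose into image coordinates."""
    R = Rotation.from_rotvec(rvecs)
    cam = R.apply(points) + tvecs          # world -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]    # simple pinhole projection

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_2d, f):
    """Reprojection error for every (camera, point) observation."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)   # [rotvec | translation] per camera
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    proj = project(pts[pt_idx], poses[cam_idx, :3], poses[cam_idx, 3:], f)
    return (proj - obs_2d).ravel()

# The initial poses and points would come from the visual-odometry front end:
# x0 = np.hstack([initial_poses.ravel(), initial_points.ravel()])
# result = least_squares(residuals, x0,
#                        args=(n_cams, n_pts, cam_idx, pt_idx, obs_2d, f))
```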
Abstract:
Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly change (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two to three year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open. The robotic approach to map building has been dominated by algorithms that optimise the geometry of the map based on measurements of distances to features. In a robotic approach, measurements of distance to features are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases estimates of depth from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and the measurement of self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo et al., 2003).
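As a minimal illustration of the probability-based geometric map optimisation referred to above, the sketch below estimates a single landmark position from noisy range readings taken at known robot poses using weighted nonlinear least squares. The poses, noise level, and solver are illustrative assumptions rather than any specific published SLAM system.

```python
# Tiny illustrative example: landmark position from noisy range measurements.
import numpy as np
from scipy.optimize import least_squares

robot_poses = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 1.0], [1.0, 3.0]])  # assumed known (x, y)
true_landmark = np.array([3.0, 4.0])
sigma_r = 0.05                                   # assumed range noise (m)

rng = np.random.default_rng(1)
ranges = np.linalg.norm(robot_poses - true_landmark, axis=1) \
         + rng.normal(0.0, sigma_r, len(robot_poses))

def residuals(landmark):
    """Whitened difference between predicted and measured ranges."""
    predicted = np.linalg.norm(robot_poses - landmark, axis=1)
    return (predicted - ranges) / sigma_r

estimate = least_squares(residuals, x0=np.array([2.0, 2.0])).x
print("estimated landmark position:", estimate)
```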
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception applications by anticipating situations in which laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial if conservative decisions are the most appropriate.
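A minimal sketch of the kind of supervised-learning step described in this abstract is given below: a k-nearest-neighbour classifier is trained on per-frame visual quality features to flag laser scans likely to be affected by smoke. The feature set, file names, and choice of k are illustrative assumptions, not the authors' exact pipeline.

```python
# Illustrative kNN classifier over image-quality features (assumed data layout).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# X: one row of visual quality features per camera frame (e.g. contrast, sharpness, entropy)
# y: 1 if the co-registered laser scan was affected by smoke, 0 otherwise
X = np.load("quality_features.npy")   # hypothetical file names
y = np.load("smoke_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```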
Abstract:
Describes the development and testing of a robotic system for charging blast holes in underground mining. The automation system supports four main tactical functions: detection of blast holes; teleoperated arm pose control; automatic arm pose control; and human-in-the-loop visual servoing. We present the system architecture and analyse the major components. Hole detection is crucial for automating the process, and we discuss theoretical and practical aspects in detail. The sensors used are laser range finders and cameras installed in the end effector. For automatic insertion, we consider image processing techniques to support visually servoing the tool to the hole. We also discuss issues surrounding the control of heavy-duty mining manipulators, in particular friction, stiction, and actuator saturation.
Abstract:
We describe our experiences with automating a large fork-lift-type vehicle that operates outdoors and in all weather. In particular, we focus on the use of independent and robust localisation systems for reliable navigation around the worksite. Two localisation systems are briefly described. The first is based on laser range finders and retro-reflective beacons, and the second uses a two-camera vision system to estimate the vehicle’s pose relative to a known model of the surrounding buildings. We show the results from an experiment in which the 20-tonne experimental vehicle, an autonomous Hot Metal Carrier, was conducting autonomous operations and one of the localisation systems was deliberately made to fail.
Abstract:
This paper presents the design and implementation of a high-repetition-rate, compact, micro-pulse, all-solid-state laser. A YAG crystal, an Nd:YAG crystal, and a Cr4+:YAG crystal are bonded into a single monolithic crystal that serves as the resonator. The initial transmittance of the Cr4+:YAG crystal, the output-coupler transmittance, and the pump spot size were optimised by calculation. The performance of the laser was tested, and the results show that it is suitable for space laser ranging.
Abstract:
Obtaining an automatic 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. This kind of system belongs to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between camera(s) and projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
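As a small illustration of the triangulation step this abstract refers to, the sketch below intersects the back-projected camera ray for a decoded pixel with the projector light plane identified by the code. The plane-based formulation and the calibration inputs are assumptions chosen for brevity, not the specific method of any surveyed system.

```python
# Illustrative camera-ray / projector-plane triangulation (assumed calibration model).
import numpy as np

def triangulate(pixel, K_cam, plane_n, plane_d):
    """Intersect the back-projected camera ray with a projector light plane.

    pixel   : (u, v) image coordinates of a decoded point
    K_cam   : 3x3 camera intrinsic matrix
    plane_n : light-plane normal in camera coordinates (from projector calibration)
    plane_d : plane offset, so points X on the plane satisfy plane_n . X + plane_d = 0
    """
    ray = np.linalg.inv(K_cam) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction
    depth = -plane_d / (plane_n @ ray)     # scale at which the ray meets the plane
    return depth * ray                     # 3-D point in the camera frame
```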
Abstract:
Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects, as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (Tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for Unmanned Aerial Vehicle (UAV) relative ground distance control. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the Tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on-board an experimental quadrotor UAV and is shown not only to successfully land and approach the ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
Abstract:
Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects because objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control for unmanned aerial vehicles. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented onboard an experimental quadrotor unmanned aerial vehicle and is shown not only to successfully land and approach the ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
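The tau strategy referred to in the two abstracts above regulates the time-to-contact tau = x/ẋ (gap over gap-closure rate) so that its time derivative stays at a constant k; for 0 < k ≤ 0.5 the gap closes with vanishing velocity, giving a gentle touchdown. The sketch below is only a 1-D simulation of that idea under assumed initial conditions and gains; onboard a real vehicle tau would be derived from visual and inertial measurements rather than from ground-truth state.

```python
# Illustrative simulation of a constant tau-dot descent (assumed parameters).
import numpy as np

def simulate_tau_landing(x0=10.0, v0=-1.0, k=0.4, dt=0.01, t_max=60.0):
    """1-D descent holding d(tau)/dt = k, with height x and vertical speed v (< 0 going down)."""
    x, v = x0, v0
    trajectory = []
    for _ in np.arange(0.0, t_max, dt):
        if x <= 0.01:                        # treat as touchdown
            break
        # From tau = x/v and tau_dot = 1 - x*a/v**2, holding tau_dot = k gives:
        a = (1.0 - k) * v**2 / x             # commanded (decelerating) acceleration
        v += a * dt
        x += v * dt
        trajectory.append((x, v, x / v))     # tau is negative while the gap is closing
    return np.array(trajectory)

# traj = simulate_tau_landing()   # columns: height, velocity, tau
```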
Abstract:
The first Brazilian mission to an asteroid is being planned. The target is the asteroid 2001 SN263, which has a NEA orbit of the Amor class. Spectral analysis indicated that this is a C-type asteroid. Asteroids of this type are dark and difficult to study from Earth. They hold clues to the initial stages of planetary formation and to the origin of water and life on Earth. In fact, radar data showed that 2001 SN263 is composed of three bodies with diameters of about 2.8 km, 1.1 km and 0.4 km. Therefore, the spacecraft will have the opportunity to explore three bodies on the same trip. The mission is scheduled to be launched in 2015, reaching the asteroid in 2018. A small spacecraft (150 kg), with 30 kg allocated to the payload, will be used. The set of scientific instruments being considered to explore the target of this mission includes an Imaging Camera, a Laser Rangefinder, an Infrared Spectrometer, a Synthetic Aperture Radar and a Mass Spectrometer. The main measurements to be made include the bulk properties (size, shape, mass, density, dynamics, spin state), the internal properties (structure, gravity field) and the surface properties (mineralogy, morphology, elemental composition). The mission also opens an opportunity for some relevant experiments not directly related to the target. Two such experiments will benefit from being on board the spacecraft during the journey to the asteroid system, which will take about three years. The first is an astrobiology experiment. The main goal of this experiment is to determine the viability of microorganism survival in extraterrestrial environments simulated in the laboratory (chemical atmosphere, temperature, desiccation, vacuum, microgravity and radiation). The second experiment is a plasma package. The main objectives of this experiment are to study the structure and electrodynamics of the plasma along the trajectory, the plasma instability processes, and the density and temperature of plasma of solar-wind origin along the trajectory and near the asteroids. This mission represents a great challenge for the Brazilian space program. It is being structured to allow the full engagement of Brazilian universities and technological companies in all the necessary developments to be carried out. In this paper, we present some aspects of this mission, details of the payload that will be used, and the scientific expectations. Copyright ©2010 by the International Astronautical Federation. All rights reserved.
Abstract:
2001 SN263 is a triple-system asteroid. Although it was discovered in 2001, astronomical observations carried out by the Arecibo Observatory in 2008 revealed that it is actually a system of three bodies orbiting each other. The main central body is an irregular object with a diameter of about 2.8 km, while the other two are small objects less than 1 km across. The system has an orbital eccentricity of 0.47, with a perihelion of 1.04 AU and an aphelion of 1.99 AU, which means that it can be considered a Near-Earth Object. This interesting system was chosen as the target for the Aster mission, the first Brazilian space exploration undertaking. A small spacecraft with 150 kg of total mass and 30 kg of payload, with 110 W available for the instruments, is scheduled to be launched in 2015, and in 2018 it will approach and be placed in orbit around the triple system. The spacecraft will use electric propulsion, and its payload will include an imaging camera, a laser rangefinder, an infrared spectrometer, a mass spectrometer, and experiments to be performed on the way to the asteroid. This mission represents a great challenge for the Brazilian space program. It is being structured to allow the full engagement of Brazilian universities and technological companies in all the necessary developments to be carried out. In this paper, we present some aspects of this mission, including the transfer trajectories to be used, and details of the bus and payload subsystems that are being developed and will be used. Copyright ©2010 by the International Astronautical Federation. All rights reserved.
Abstract:
Eye-safety requirements in important applications such as LIDAR or Free Space Optical Communications make the generation of high-power, short optical pulses at 1.5 µm particularly interesting. Moreover, high repetition rates allow the error and/or the measurement time to be reduced in applications involving pulsed time-of-flight measurements, such as range finders, 3D scanners or traffic velocity controls. The Master Oscillator Power Amplifier (MOPA) architecture is an interesting source for these applications, since large changes in output power can be obtained at GHz rates with a relatively small modulation of the current in the Master Oscillator (MO). We have recently demonstrated short optical pulses (100 ps) with high peak power (2.7 W) by gain switching the MO of a monolithically integrated 1.5 µm MOPA. Although in an integrated MOPA the laser and the amplifier are ideally independent devices, compound cavity effects due to the residual reflectance at the different interfaces are often observed, leading to modal instabilities such as self-pulsations.
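As a back-of-the-envelope illustration of why high repetition rates help pulsed time-of-flight ranging, the sketch below computes range from R = c·t/2 and shows how averaging many pulse returns shrinks the random timing error roughly as 1/√N. All of the numbers (target range, timing jitter, pulse count) are assumptions chosen purely for illustration.

```python
# Illustrative time-of-flight averaging example (assumed jitter and pulse count).
import numpy as np

C = 299_792_458.0                     # speed of light, m/s

def range_from_tof(t_round_trip):
    """Range from a round-trip time-of-flight measurement: R = c * t / 2."""
    return C * t_round_trip / 2.0

true_range = 150.0                    # m (assumed target distance)
t_true = 2.0 * true_range / C
sigma_t = 100e-12                     # assumed 100 ps single-shot timing jitter
n_pulses = 10_000                     # pulses averaged, enabled by a high repetition rate

rng = np.random.default_rng(0)
measured = range_from_tof(t_true + rng.normal(0.0, sigma_t, n_pulses))
print(f"single-shot sigma ~ {range_from_tof(sigma_t) * 100:.1f} cm, "
      f"averaged error ~ {abs(measured.mean() - true_range) * 100:.2f} cm")
```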