955 results for "Pulsed laser range finder"


Relevance:

100.00%

Publisher:

Abstract:

ARGONTUBE is a liquid argon time projection chamber (LAr TPC) with a drift field generated in situ by a Greinacher voltage-multiplier circuit. We present results on the measurement of the drift-field distribution inside ARGONTUBE using straight ionization tracks generated by an intense UV laser beam. Our analysis is based on a simplified model of the charging of a multi-stage Greinacher circuit to describe the voltages on the field-cage rings.

Graphene is an emerging optical material whose ultrafast dynamics are often probed with pulsed lasers, yet the regime in which optical damage occurs is largely uncharted. Here, femtosecond laser pulses were used to induce localized damage in single-layer graphene on sapphire, and the damage was quantified with Raman spatial mapping, SEM, and AFM. The size of the damaged area correlates linearly with the optical fluence. These results demonstrate local modification of sp²-carbon bonding structures at pulse fluences as low as 14 mJ/cm², an order of magnitude below measured and theoretical ablation thresholds.

Seal of the Ordnance Department on t.p.

The conventional detection scheme for self-mixing sensors uses an integrated photodiode within the laser package to monitor the self-mixing signal. This arrangement can be simplified by obtaining the self-mixing signal directly across the laser diode itself, omitting the photodiode. This work reports a vertical-cavity surface-emitting laser (VCSEL) based self-mixing sensor that uses the laser junction voltage to obtain the self-mixing signal. We show that the same information can be obtained with only minor changes to the extraction circuitry, leading to potential savings in component cost and complexity and a significant increase in bandwidth that favours high-speed modulation. Experiments using both photocurrent and voltage detection were carried out, and the results show good agreement with theory.
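
As a rough illustration of the physics behind such sensors, the sketch below numerically evaluates the standard self-mixing (excess-phase) model; the feedback parameter C, the linewidth-enhancement factor alpha, and the fixed-point solver are illustrative assumptions, not values or methods taken from the paper:

```python
import math

def self_mixing_signal(phi0, C=0.5, alpha=4.0, iters=50):
    """Solve the excess-phase equation phi0 = phiF + C*sin(phiF + atan(alpha))
    by fixed-point iteration (valid for weak feedback, C < 1), then return the
    normalized power modulation cos(phiF) seen by either detection scheme."""
    phiF = phi0
    for _ in range(iters):
        phiF = phi0 - C * math.sin(phiF + math.atan(alpha))
    return math.cos(phiF)

# Sweep the external round-trip phase, i.e. the target moving by one wavelength
signal = [self_mixing_signal(2 * math.pi * x / 100) for x in range(100)]
```

The characteristic sawtooth-like asymmetry of this waveform is what carries the displacement direction in self-mixing interferometry, regardless of whether it is read out via the photodiode or the junction voltage.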

We demonstrate a high-fundamental-repetition-rate pulsed erbium-doped fiber laser with an all-fiber-integrated configuration. A novel scheme using a 45°-tilted fiber grating as the in-fiber polarizing element was employed to shorten the total cavity length and thus increase the fundamental repetition rate of the laser. Dissipative soliton pulses mode-locked at a fundamental repetition rate of 251.3 MHz with a pulse duration of 96.7 fs have been achieved from the compact, all-fiber ring-cavity laser. Additionally, passively Q-switched pulses were observed from this high-repetition-rate fiber laser, the first report of a Q-switched fiber laser using a tilted fiber grating.
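
For context, the fundamental repetition rate of a ring cavity is set by its optical length, f_rep = c / (n_g · L). A quick sanity check under an assumed group index of 1.47 for silica fiber (an illustrative value, not from the paper) recovers a sub-metre cavity for the reported 251.3 MHz:

```python
c = 299_792_458.0   # speed of light in vacuum, m/s
n_g = 1.47          # assumed effective group index of silica fiber (illustrative)
f_rep = 251.3e6     # reported fundamental repetition rate, Hz

# For a ring cavity, f_rep = c / (n_g * L), so the total cavity length is:
L = c / (n_g * f_rep)
print(f"cavity length ≈ {L:.2f} m")   # roughly 0.81 m
```

This shows why shortening the cavity (here via the compact in-fiber polarizer) is the direct route to a higher fundamental repetition rate.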

In geotechnical engineering, the stability of rock excavations and walls is estimated using tools that include a map of the orientations of exposed rock faces. However, measuring these orientations with conventional methods can be time consuming, sometimes dangerous, and is limited to regions of the exposed rock that are reachable by a human. This thesis introduces a 2D, simulated, quadcopter-based rock wall mapping algorithm for GPS-denied environments such as underground mines or the vicinity of high walls on the surface. The proposed algorithm employs techniques from the field of robotics known as simultaneous localization and mapping (SLAM) and is a step towards 3D rock wall mapping. Quadcopters are not only agile but can hover, which is very useful in confined spaces such as underground workings or areas near rock walls. The quadcopter requires sensors to enable self-localization and mapping in dark, confined, GPS-denied environments, but these sensors are limited by the quadcopter's payload and power restrictions. Because of these restrictions, a lightweight 2D laser scanner is proposed. As a first step towards a 3D mapping algorithm, this thesis considers a simplified scenario in which a simulated 1D laser range finder and 2D IMU are mounted on a quadcopter moving in a plane. Because the 1D laser does not provide enough information to map the 2D world from a single measurement, many measurements are combined over the trajectory of the quadcopter. Least-squares optimization (LSO) is used to optimize the estimated trajectory and rock face for all data collected over the length of a flight. Simulation results show that the mapping algorithm developed is a good first step: by combining measurements over a trajectory, the scanned rock face can be estimated using a lower-dimensional range sensor. A swathing manoeuvre is introduced as a way to promote loop closures within a short time period, thus reducing accumulated error. Some suggestions on how to improve the algorithm are also provided.
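
The core idea of combining many scalar ranges over a trajectory can be sketched as a linear least-squares problem. The setup below (a planar rock face, known poses, and NumPy's `lstsq`) is a deliberately simplified stand-in for the thesis's LSO, with all values hypothetical:

```python
import numpy as np

# Hypothetical setup: the quadcopter flies along x at known positions, with a
# 1D range finder pointing at a planar rock face y = a*x + b. Each scalar
# range alone is ambiguous, but stacking measurements over the trajectory
# gives an overdetermined linear system solvable by least squares.
rng = np.random.default_rng(0)
a_true, b_true = 0.2, 3.0
xs = np.linspace(0.0, 10.0, 50)                               # trajectory samples
ranges = a_true * xs + b_true + rng.normal(0, 0.05, xs.size)  # noisy ranges

A = np.column_stack([xs, np.ones_like(xs)])   # design matrix [x, 1]
(a_est, b_est), *_ = np.linalg.lstsq(A, ranges, rcond=None)
```

The full problem in the thesis is harder because the poses themselves are uncertain and jointly estimated, which makes the system nonlinear; this sketch only shows why aggregating measurements over a flight recovers structure a single 1D return cannot.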

For robots to operate in human environments, they must be able to make their own maps, because it is unrealistic to expect a user to enter a map into the robot's memory, existing floorplans are often incorrect, and human environments tend to change. Traditionally, robots have used sonar, infra-red, or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors, such as colour information (not available from any other sensor), better immunity to noise (compared with sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as its only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as those created with a laser range finder, the camera-based solution is significantly cheaper and more appropriate for toys and early domestic robots.

This paper describes system identification, estimation and control of translational motion and heading angle for a cost-effective open-source quadcopter, the MikroKopter. The dynamics of its built-in sensors, its roll and pitch attitude controller, and its system latencies are determined and used to design a computationally inexpensive multi-rate velocity estimator that fuses data from the built-in inertial sensors and a low-rate onboard laser range finder. Control is performed using a nested-loop structure that is also computationally inexpensive and incorporates the different sensors. Experimental results for the estimator and closed-loop positioning are presented and compared with ground truth from a motion capture system.
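
A multi-rate fusion of a fast IMU with a slow range-derived velocity can be sketched with a simple complementary update. The rates, gain, and bias values below are illustrative assumptions, not the estimator actually designed in the paper:

```python
# Hypothetical multi-rate fusion sketch: integrate a high-rate accelerometer
# between low-rate range-derived velocity updates, then blend the two with a
# complementary gain.
def fuse(v_est, accel, dt, v_laser=None, gain=0.2):
    v_est = v_est + accel * dt      # high-rate prediction from the IMU
    if v_laser is not None:         # low-rate correction when available
        v_est = v_est + gain * (v_laser - v_est)
    return v_est

# Example: constant true velocity of 1.0 m/s, accelerometer with a
# 0.1 m/s^2 bias; the slow laser-derived updates bound the IMU drift.
v = 0.0
for k in range(1000):                           # 1 kHz IMU samples
    v_laser = 1.0 if k % 100 == 0 else None     # 10 Hz laser-derived velocity
    v = fuse(v, 0.1, 0.001, v_laser)
```

The point of the sketch is the division of labour: the IMU supplies bandwidth between updates, while the slower absolute measurement keeps the integrated bias from growing without bound.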

This paper presents large, accurately calibrated and time-synchronised datasets, gathered outdoors in controlled environmental conditions, using an unmanned ground vehicle (UGV), equipped with a wide variety of sensors. It discusses how the data collection process was designed, the conditions in which these datasets have been gathered, and some possible outcomes of their exploitation, in particular for the evaluation of performance of sensors and perception algorithms for UGVs.

This document describes large, accurately calibrated and time-synchronised datasets, gathered in controlled environmental conditions using an unmanned ground vehicle equipped with a wide variety of sensors, including multiple laser scanners, a millimetre-wave radar scanner, a colour camera and an infra-red camera. Full details of the sensors are given, as well as the calibration parameters needed to locate them with respect to each other and to the platform. This report also specifies the format and content of the data and the conditions in which they were gathered. Data were collected with the vehicle in two different states: static and dynamic. The static tests consisted of sensing a fixed 'reference' terrain containing simple known objects from a motionless vehicle. For the dynamic tests, data were acquired from a moving vehicle in various, mainly rural, environments, including an open area, a semi-urban zone and a natural area with different types of vegetation. For both categories, data were gathered in controlled environmental conditions, which included the presence of dust, smoke and rain. Most of the environments involved were static, except for a few specific datasets involving a walking pedestrian. Finally, this document presents illustrations of the effects of adverse environmental conditions on sensor data, as a first step towards reliability and integrity in autonomous perceptual systems.

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seems to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
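
The sensor-to-navigation mapping described above is, at its core, a chain of homogeneous transforms: an extrinsic calibration (sensor to body) composed with the vehicle pose (body to navigation). A minimal 2D sketch, with hypothetical calibration and pose values:

```python
import numpy as np

def transform(yaw, t):
    """Homogeneous 2D transform from a yaw angle (rad) and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, t[0]],
                     [s,  c, t[1]],
                     [0,  0, 1.0]])

# A range/bearing return, expressed as a point in the sensor frame
r, bearing = 5.0, np.deg2rad(30.0)
p_sensor = np.array([r * np.cos(bearing), r * np.sin(bearing), 1.0])

# Chain of extrinsic (sensor -> body) and pose (body -> navigation)
# transforms; both sets of values here are illustrative.
T_body_sensor = transform(np.deg2rad(5.0), (0.4, 0.0))    # mounting offset
T_nav_body    = transform(np.deg2rad(90.0), (10.0, 2.0))  # vehicle pose
p_nav = T_nav_body @ T_body_sensor @ p_sensor
```

Errors in either transform (a miscalibrated mounting angle, or a pose timestamped slightly wrong) propagate directly into the mapped point, which is exactly the error budget the paper's spatial error model makes explicit.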

In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV), equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. It provides a full description of the system used for data collection and the types of environments and conditions in which these data sets have been gathered, which include the presence of airborne dust, smoke and rain.

This paper compares different state-of-the-art exploration strategies for teams of mobile robots exploring an unknown environment. The goal is to help determine the best strategy for a given multi-robot scenario and optimization target. Experiments are carried out in a 2D simulation environment with five robots, each equipped with a horizontal laser range finder. Required components such as SLAM, path planning and obstacle avoidance are included for every robot in a full-system simulation. To evaluate the different strategies, we use the time to finish exploration, the number of measurements integrated into the map, and the growth of the explored area over time. The results of extensive test runs on three environments with different characteristics show that simple strategies can perform fairly well in many situations, but specialized strategies can improve performance with regard to their targeted evaluation measure.
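
Many multi-robot exploration strategies rest on frontier detection, i.e. finding free cells that border unknown space in an occupancy grid; the strategies then differ in how they rank and assign those frontiers. A minimal sketch (the grid encoding below is an assumption, not taken from the paper):

```python
import numpy as np

# Occupancy grid encoding (illustrative): 0 = free, 1 = occupied, -1 = unknown.
# Frontier cells are free cells with at least one unknown 4-neighbour.
def frontiers(grid):
    cells = []
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            if grid[i, j] != 0:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and grid[ni, nj] == -1:
                    cells.append((i, j))
                    break
    return cells

grid = np.array([[ 0,  0, -1],
                 [ 0,  1, -1],
                 [ 0,  0,  0]])
front = frontiers(grid)   # free cells touching unknown space
```

An exploration strategy would then score each frontier cell, for example by travel cost or expected information gain, and dispatch the robots accordingly; those scoring rules are where the strategies compared in the paper diverge.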