111 results for optical sensor
at Queensland University of Technology - ePrints Archive
Abstract:
Blasting is an integral part of large-scale open cut mining that often occurs in close proximity to population centers and often results in the emission of particulate material and gases potentially hazardous to health. Current air quality monitoring methods rely on a limited number of fixed sampling locations to validate a complex fluid environment and to collect sufficient data to confirm model effectiveness. This paper describes the development of a methodology to address the need for a more precise approach capable of characterizing blasting plumes in near-real time. Integrating the system required modifying an opto-electrical dust sensor, the SHARP GP2Y10, and fitting it to a small fixed-wing aircraft and a multi-rotor copter, so that data could be streamed during flight. The paper also describes the calibration of the optical sensor against an industry-grade dust-monitoring device, the DustTrak 8520, demonstrating a high correlation between the two, with correlation coefficients (R²) greater than 0.9. Laboratory and field tests demonstrate the feasibility of coupling the sensor with the UAVs. However, further work is needed on sensor selection and calibration as well as flight planning.
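The calibration step described above can be sketched as an ordinary least-squares fit of the low-cost sensor's raw output against the reference instrument, with R² as the agreement metric. The following is a minimal illustrative sketch; the function name and the paired readings are invented for illustration, not the paper's data:

```python
# Hypothetical calibration sketch: fit reference = a * sensor + b by
# least squares and report R-squared. Sample readings are invented.
import numpy as np

def calibrate(sensor_out, reference):
    """Return (gain a, offset b, R^2) for reference ~ a * sensor_out + b."""
    a, b = np.polyfit(sensor_out, reference, 1)
    predicted = a * sensor_out + b
    ss_res = np.sum((reference - predicted) ** 2)
    ss_tot = np.sum((reference - np.mean(reference)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Paired readings: raw sensor voltage vs. reference mass concentration (mg/m^3).
volts = np.array([0.6, 0.9, 1.4, 1.9, 2.5, 3.1])
ref = np.array([0.05, 0.11, 0.21, 0.30, 0.42, 0.55])
a, b, r2 = calibrate(volts, ref)
print(f"gain = {a:.3f} mg/m^3 per V, offset = {b:.3f} mg/m^3, R^2 = {r2:.3f}")
```

A calibration of this form would typically be recomputed per sensor unit, since low-cost optical dust sensors vary considerably from unit to unit.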
Abstract:
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system. The robot navigates using solely information from the bearing sensor space. Most existing robot navigation systems require a ground frame (2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors such as laser range scanner, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. The existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process. Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering. This thesis introduces a navigation method based on reflex actions in sensor space.
The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because the sensor error is minimized, not the moving distance on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and with experiments on a real robot.
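A toy sketch of the reflex idea, under assumed simplifications (absolute compass-referenced bearings, a point robot, three landmarks, no obstacles; the landmark layout, goal, and step size are invented, and this is not the thesis implementation): the robot numerically descends the squared difference between its current bearing vector and the target bearing vector observed at the goal.

```python
# Toy reflex navigation in bearing sensor space: gradient descent on the
# squared bearing error, with no Cartesian goal ever given to the controller.
import math

LANDMARKS = [(0.0, 0.0), (10.0, 0.0), (5.0, 9.0)]  # three landmarks, no obstacles

def bearings(p):
    """Compass-referenced bearings from position p to each landmark."""
    return [math.atan2(ly - p[1], lx - p[0]) for lx, ly in LANDMARKS]

def wrap(a):
    """Shortest signed angular difference."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def sensor_error(p, target):
    return sum(wrap(b - t) ** 2 for b, t in zip(bearings(p), target))

def reflex_step(p, target, step=0.05, eps=1e-4):
    """One normalized gradient-descent step on the bearing-space error."""
    e0 = sensor_error(p, target)
    gx = (sensor_error((p[0] + eps, p[1]), target) - e0) / eps
    gy = (sensor_error((p[0], p[1] + eps), target) - e0) / eps
    n = math.hypot(gx, gy) or 1.0
    return (p[0] - step * gx / n, p[1] - step * gy / n)

goal = (7.0, 3.0)
target = bearings(goal)   # the sensor vector observed at the goal
p = (2.0, 6.0)
for _ in range(400):
    p = reflex_step(p, target)
print(p)
```

Because the descent minimises sensor-space error rather than ground distance, the resulting path is generally not the shortest one, exactly as the abstract notes.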
Abstract:
While sensor networks have now become very popular on land, the underwater environment still poses some difficult problems. Communication is one of the most difficult challenges under water. There are two options: optical and acoustic. We have designed an optical communication board that allows the Flecks to communicate optically. We have tested the resulting underwater sensor nodes in two different applications.
Abstract:
In this paper we present a novel platform for underwater sensor networks to be used for long-term monitoring of coral reefs and fisheries. The sensor network consists of static and mobile underwater sensor nodes. The nodes communicate point-to-point using a novel high-speed optical communication system integrated into the TinyOS stack, and they broadcast using an acoustic protocol integrated in the TinyOS stack. The nodes have a variety of sensing capabilities, including cameras and water temperature and pressure sensors. The mobile nodes can locate and hover above the static nodes for data muling, and they can perform network maintenance functions such as deployment, relocation, and recovery. In this paper we describe the hardware and software architecture of this underwater sensor network. We then describe the optical and acoustic networking protocols and present experimental networking results and data collected in a pool, in rivers, and in the ocean. Finally, we describe our experiments with mobility for data muling in this network.
Abstract:
Railway signaling facilitates two main functions, namely, train detection and train control, in order to maintain safe separations among the trains. Track circuits, based on simple open/closed-circuit principles, are the most commonly used means of train detection; the subsequent adoption of axle counters further allows the detection of trains under adverse track conditions. However, with electrification and power electronics traction drive systems, aggravated by the electromagnetic interference in the vicinity of the signaling system, railway engineers often find unstable or even faulty operation of track circuits and axle counting systems, which inevitably jeopardizes the safe operation of trains. A new means of train detection, completely free from electromagnetic interference, is therefore required for the modern railway signaling system. This paper presents a novel optical fiber sensor signaling system. The sensor operation, field setup, axle detection solution set, and test results of an installation in a trial system on a busy suburban railway line are given.
Abstract:
The use of metal stripes for the guiding of plasmons is a well-established technique in the infrared regime and has resulted in the development of a myriad of passive optical components and sensing devices. However, the plasmons suffer from large losses around sharp bends, making the compact design of nanoscale sensors and circuits problematic. A compact alternative is to use evanescent coupling between two sufficiently close stripes, and thus we propose a compact interferometer design based on evanescent coupling. The sensitivity of the design is compared with that achieved using a hand-held sensor based on the Kretschmann-style surface plasmon resonance technique. Modeling of the new interferometric sensor is performed for various structural parameters using the finite-difference time-domain method and COMSOL Multiphysics. The physical mechanisms behind the coupling and propagation of plasmons in this structure are explained in terms of the allowed modes in each section of the device.
Abstract:
Raman spectroscopy, when used in spatially offset mode, has become a promising tool for the identification of explosives and other hazardous substances concealed in opaque containers. The molecular fingerprinting capability of Raman spectroscopy makes it an attractive tool for the unambiguous identification of hazardous substances in the field. Additionally, minimal sample preparation is required compared with other techniques. We report a field-portable, time-resolved Raman sensor for the detection of concealed chemical hazards in opaque containers. The new sensor uses a pulsed nanosecond laser source in conjunction with an intensified CCD detector. It employs a combination of time- and space-resolved Raman spectroscopy to enhance the detection capability, and it can identify concealed hazards from a single measurement without any chemometric data treatment.
Abstract:
A highly sensitive fiber Bragg grating (FBG) strain sensor with automatic temperature compensation is demonstrated. The FBG is axially linked to a stick, and their free ends are fixed to the measured object. When the measured strain changes, the stick does not change in length, but the FBG does. When the temperature changes, the stick changes in length, pulling the FBG to realize temperature compensation. In experiments, a strain sensitivity 1.45 times that of a bare FBG is achieved, with temperature compensation of less than 0.1 nm Bragg wavelength drift over a 100 °C shift.
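The compensation principle can be illustrated with the standard first-order Bragg-shift model, Δλ = λ[(1 − p_e)ε + (α_f + ξ)ΔT]: if the stick's expansion strains the grating by ε = −kΔT with k = (α_f + ξ)/(1 − p_e), the thermal term cancels. The constants below are nominal textbook values for silica fibre, not this paper's calibration:

```python
# Back-of-the-envelope sketch of FBG temperature compensation.
# Constants are nominal silica-fibre values (assumed, not from the paper).
lam = 1550.0       # nm, Bragg wavelength
p_e = 0.22         # photo-elastic coefficient
alpha_f = 0.55e-6  # 1/K, fibre thermal expansion
xi = 6.7e-6        # 1/K, thermo-optic coefficient

def bragg_shift(strain, dT):
    """First-order Bragg wavelength shift (nm) from strain and temperature."""
    return lam * ((1 - p_e) * strain + (alpha_f + xi) * dT)

k = (alpha_f + xi) / (1 - p_e)   # compensating strain per kelvin
dT = 100.0
print(bragg_shift(0.0, dT))      # uncompensated thermal drift over 100 K
print(bragg_shift(-k * dT, dT))  # ~0 when the stick supplies strain -k*dT
```

In practice k is set by the choice of stick material and geometry, and the residual drift (here, the reported < 0.1 nm over 100 °C) reflects imperfect matching.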
Abstract:
At cryogenic temperatures, a fiber Bragg grating (FBG) temperature sensor with controllable sensitivity and variable measurement range is demonstrated using a bimetal configuration. In experiments, sensitivities of -51.2, -86.4, and -520 pm/K are achieved by varying the lengths of the metals. Measurement ranges of 293-290.5, 283-280.5, and 259-256.5 K are achieved by shortening the gap between the metals.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with a vast practical application domain including security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc., of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
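The robot-assisted homography calibration described above can be illustrated with the standard direct linear transform (DLT): each camera pairs the robot's broadcast ground positions with its own image-plane detections of the robot and solves for an image-to-ground homography. The code below is a generic sketch with synthetic correspondences, not the dissertation's implementation:

```python
# Generic DLT sketch: recover a 3x3 homography H mapping image points to
# ground-plane points from >= 4 correspondences, using synthetic data.
import numpy as np

def homography_dlt(img_pts, ground_pts):
    """Solve for H with ground ~ H @ image (homogeneous, up to scale)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, ground_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)   # null vector of A, reshaped

def to_ground(H, u, v):
    """Map an image point to ground coordinates via projective division."""
    w = H @ [u, v, 1.0]
    return w[0] / w[2], w[1] / w[2]

# Synthetic check: generate ground points from a made-up homography,
# then recover it from the correspondences alone.
H_true = np.array([[0.02, 0.001, 1.0],
                   [0.0005, 0.018, 0.5],
                   [1e-5, 2e-5, 1.0]])
img = [(120.0, 80.0), (300.0, 60.0), (250.0, 220.0),
       (420.0, 260.0), (180.0, 190.0)]
ground = [to_ground(H_true, u, v) for u, v in img]
H = homography_dlt(img, ground)
print(to_ground(H, 120.0, 80.0))  # should match the first ground point
```

With four or more correspondences in general position, the null vector of A recovers H up to scale, and the projective division in to_ground removes the scale.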
Abstract:
Near work may play an important role in the development of myopia in the younger population. The prevalence of myopia has also been found to be higher in occupations that involve substantial near work tasks, for example in microscopists and textile workers. When near work is performed, it typically involves accommodation, convergence and downward gaze. A number of previous studies have examined the effects of accommodation and convergence on changes in the optics and biometrics of the eye in primary gaze. However, little is known about the influence of accommodation on the eye in downward gaze. This thesis is primarily concerned with investigating the changes in the eye during near work in downward gaze under natural viewing conditions. To measure wavefront aberrations in downward gaze under natural viewing conditions, we modified a commercial Shack-Hartmann wavefront sensor by adding a relay lens system to allow on-axis ocular aberration measurements in primary gaze and downward gaze, with binocular fixation. Measurements with the modified wavefront sensor in primary and downward gaze were validated against a conventional aberrometer using both a model eye and 9 human subjects. We then conducted an experiment to investigate changes in ocular aberrations associated with accommodation in downward gaze over 10 mins in groups of both myopes (n = 14) and emmetropes (n = 12) using the modified Shack-Hartmann wavefront sensor. During the distance accommodation task, small but significant changes in refractive power (myopic shift) and higher order aberrations were observed in downward gaze compared to primary gaze. Accommodation caused greater changes in higher order aberrations (in particular coma and spherical aberration) in downward gaze than primary gaze, and there was evidence that the changes in certain aberrations with accommodation over time were different in downward gaze compared to primary gaze.
There were no obvious systematic differences in higher order aberrations between refractive error groups during accommodation or downward gaze for fixed pupils. However, myopes exhibited a significantly greater change in higher order aberrations (in particular spherical aberration) than emmetropes for natural pupils after 10 mins of a near task (5 D accommodation) in downward gaze. These findings indicated that ocular aberrations change from primary to downward gaze, particularly with accommodation. To understand the mechanism underlying these changes in greater detail, we then extended this work to examine the characteristics of the corneal optics, internal optics, anterior biometrics and axial length of the eye during a near task, in downward gaze, over 10 mins. Twenty young adult subjects (10 emmetropes and 10 myopes) participated in this study. To measure corneal topography and ocular biometrics in downward gaze, a rotating Scheimpflug camera and an optical biometer were inclined on a custom-built, height- and tilt-adjustable table. We found that both corneal optics and internal optics change with downward gaze, resulting in a myopic shift (~0.10 D) in the spherical power of the eye. The changes in corneal optics appear to be due to eyelid pressure on the anterior surface of the cornea, whereas the changes in the internal optics (an increase in axial length and a decrease in anterior chamber depth) may be associated with movement of the crystalline lens, under the action of gravity, and the influence of altered biomechanical forces from the extraocular muscles on the globe with downward gaze. Changes in axial length with accommodation were significantly greater in downward gaze than primary gaze (p < 0.05), indicating an increased effect of the mechanical forces from the ciliary muscle and extraocular muscles.
A subsequent study was conducted to investigate the changes in anterior biometrics, axial length and choroidal thickness in nine cardinal gaze directions under the actions of the extraocular muscles. Ocular biometry measurements were obtained from 30 young adults (10 emmetropes, 10 low myopes and 10 moderate myopes) through a rotating prism with 15° deviation, along the foveal axis, using a non-contact optical biometer in each of nine different cardinal directions of gaze, over 5 mins. There was a significant influence of gaze angle and time on axial length (both p < 0.001), with the greatest axial elongation (+18 ± 8 μm) occurring with infero-nasal gaze (p < 0.001) and a slight decrease in axial length in superior gaze (−12 ± 17 μm) compared with primary gaze (p < 0.001). There was a significant correlation between refractive error (spherical equivalent refraction) and the mean change in axial length in the infero-nasal gaze direction (Pearson's R2 = 0.71, p < 0.001). To further investigate the relative effect of gravity and extraocular muscle force on the axial length, we measured axial length in 15° and 25° downward gaze with the biometer inclined on a tilting table that allowed gaze shifts to occur with either full head turn but no eye turn (reflects the effect of gravity), or full eye turn with no head turn (reflects the effect of extraocular muscle forces). We observed a significant axial elongation in 15° and 25° downward gaze in the full eye turn condition. However, axial length did not change significantly in downward gaze over 5 mins (p > 0.05) in the full head turn condition. The elongation of the axial length in downward gaze appears to be due to the influence of the extraocular muscles, since the effect was not present when head turn was used instead of eye turn. The findings of these experiments collectively show the dynamic characteristics of the optics and biometrics of the eye in downward gaze during a near task, over time. 
There were small but significant differences between myopic and emmetropic eyes in both the optical and biomechanical changes associated with shifts of gaze direction. These differences between myopes and emmetropes could arise as a consequence of excessive eye growth associated with myopia. However, the potentially additive effects of repeated or long-lasting near work activities employing infero-nasal gaze could also act to promote elongation of the eye due to optical and/or biomechanical stimuli.