328 results for Vision, Ocular
Abstract:
This paper presents a preliminary flight-test-based characterisation of detection range versus false alarm performance for a morphological-hidden Markov model filtering approach to vision-based airborne dim-target collision detection. On the basis of compelling in-flight collision scenario data, we calculate system operating characteristic (SOC) curves that concisely illustrate the detection range versus false alarm rate design trade-offs. These preliminary SOC curves provide a more complete dim-target detection performance description than previous studies (due to the experimental difficulties involved, previous studies have been limited to very short flight data sample sets and hence have not been able to quantify false alarm behaviour). The preliminary investigation here is based on data collected from 4 controlled collision encounters and supporting non-target flight data. This study suggests head-on detection ranges of approximately 2.22 km under blue sky background conditions (1.26 km in cluttered background conditions), whilst experiencing false alarms at a rate of less than 1.7 false alarms/hour (i.e. less than once every 36 minutes). Further data collection is currently in progress.
Abstract:
Computer vision is an attractive solution for uninhabited aerial vehicle (UAV) collision avoidance, due to the low weight, size and power requirements of the hardware. A two-stage paradigm has emerged in the literature for detection and tracking of dim targets in images, comprising spatial preprocessing followed by temporal filtering. In this paper, we investigate a hidden Markov model (HMM) based temporal filtering approach. Specifically, we propose an adaptive HMM filter, in which the variance of the model parameters is refined as the quality of the target estimate improves. Filters with high variance (fat filters) are used for target acquisition, and filters with low variance (thin filters) are used for target tracking. The adaptive filter is tested in simulation and with real data (video of a collision-course aircraft). Our test results demonstrate that the adaptive filtering approach improves tracking performance and provides an estimate of target heading not available from previous HMM filtering approaches.
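As a rough illustration of the fat/thin filtering idea, the sketch below runs a forward HMM filter over a 1-D grid of candidate target positions and switches from a wide to a narrow transition kernel once the posterior concentrates. It is a minimal sketch only; the 1-D grid, kernel widths and switching threshold are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def transition_kernel(sigma):
        """Discrete, truncated Gaussian transition kernel (normalised)."""
        radius = max(1, int(3 * sigma))
        x = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def adaptive_hmm_filter(likelihoods, sigma_fat=8.0, sigma_thin=2.0, switch_at=0.2):
        """Forward HMM filter over a 1-D grid of candidate target positions.

        likelihoods: (T, N) array of per-frame, per-position measurement likelihoods
        (e.g. from a morphological pre-processing stage). A wide ("fat") kernel is
        used for acquisition, then a narrow ("thin") kernel for tracking once the
        posterior becomes concentrated.
        """
        T, N = likelihoods.shape
        posterior = np.full(N, 1.0 / N)     # uniform prior over target position
        sigma = sigma_fat
        track = []
        for t in range(T):
            # Predict: diffuse the posterior with the current transition kernel.
            predicted = np.convolve(posterior, transition_kernel(sigma), mode="same")
            # Update: multiply by the measurement likelihood and renormalise.
            posterior = predicted * likelihoods[t]
            posterior /= posterior.sum() + 1e-12
            if posterior.max() > switch_at:  # confident estimate: refine the model
                sigma = sigma_thin
            track.append(int(posterior.argmax()))
        return track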
Abstract:
The following paper proposes a novel application of Skid-to-Turn maneuvers for fixed wing Unmanned Aerial Vehicles (UAVs) inspecting locally linear infrastructure. Fixed wing UAVs, following the design of manned aircraft, traditionally employ Bank-to-Turn maneuvers to change heading and thus direction of travel. Commonly overlooked is the effect these maneuvers have on downward facing body-fixed sensors, which, as a result of bank, point away from the feature during turns. By adopting Skid-to-Turn maneuvers, the aircraft is able to change heading whilst maintaining wings-level flight, allowing body-fixed sensors to maintain a downward facing orientation. Eliminating roll also helps to improve data quality, as sensors are no longer subjected to the swinging motion induced as they pivot about an axis perpendicular to their line of sight. Traditional tracking controllers, which capture ground-based data indirectly by flying directly overhead, can also see the feature drift off center due to the steady state pitch and roll required to stay on course. An Image Based Visual Servo controller is developed to address this issue, allowing features to be tracked directly within the image plane. Performance of the proposed controller is tested against that of a Bank-to-Turn tracking controller driven by GPS-derived cross track error, in a simulation environment developed to simulate the field of view of a body-fixed camera.
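A minimal proportional visual-servo law of the kind described might look like the sketch below; the gain, sign convention and small-angle pinhole assumption are illustrative choices, not the paper's controller.

    def ibvs_heading_rate(feature_x_px, image_width_px, hfov_deg, gain=0.5):
        """Map the horizontal offset of the tracked feature from the image centre to
        a heading-rate command, so the aircraft skids (wings level) to re-centre the
        feature under a downward-facing camera. Sign depends on camera mounting."""
        deg_per_px = hfov_deg / image_width_px            # small-angle approximation
        error_deg = (feature_x_px - image_width_px / 2.0) * deg_per_px
        return gain * error_deg                           # commanded heading rate, deg/s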
Abstract:
Signal-degrading speckle is one factor that can reduce the quality of optical coherence tomography images. We demonstrate the use of a hierarchical model-based motion estimation processing scheme based on an affine-motion model to reduce speckle in optical coherence tomography imaging, by image registration and the averaging of multiple B-scans. The proposed technique is evaluated against other methods available in the literature. The results from a set of retinal images show the benefit of the proposed technique, which provides an improvement in signal-to-noise ratio of the square root of the number of averaged images, leading to clearer visual information in the averaged image. The benefits of the proposed technique are also explored in the case of ocular anterior segment imaging.
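The registration-and-averaging step can be sketched as follows, using OpenCV's ECC affine alignment as a stand-in for the hierarchical motion estimator described above (function names and parameters are illustrative):

    import cv2
    import numpy as np

    def register_and_average(bscans):
        """Align a stack of OCT B-scans to the first frame with an affine model,
        then average; averaging N aligned frames suppresses speckle and improves
        SNR by roughly sqrt(N)."""
        reference = bscans[0].astype(np.float32)
        accumulator = reference.copy()
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        for frame in bscans[1:]:
            frame = frame.astype(np.float32)
            warp = np.eye(2, 3, dtype=np.float32)         # initial affine guess
            _, warp = cv2.findTransformECC(reference, frame, warp,
                                           cv2.MOTION_AFFINE, criteria)
            aligned = cv2.warpAffine(frame, warp, reference.shape[::-1],
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            accumulator += aligned
        return accumulator / len(bscans)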
Abstract:
Purpose: To investigate the interocular symmetry of optical, biometric and biomechanical characteristics between the fellow eyes of myopic anisometropes. Methods: Thirty-four young, healthy myopic anisometropic adults (≥ 1 D spherical equivalent difference between eyes) without amblyopia or strabismus were recruited. A range of biometric and optical parameters were measured in both eyes of each subject including; axial length, ocular aberrations, intraocular pressure (IOP), corneal topography and biomechanics. Ocular sighting dominance was also measured. Results: Mean absolute spherical equivalent anisometropia was 1.70 ± 0.74 D and there was a strong correlation between the degree of anisometropia and the interocular difference in axial length (r = 0.81, p < 0.001). The more and less myopic eyes displayed a high degree of interocular symmetry for the majority of biometric, biomechanical and optical parameters measured. When the level of anisometropia exceeded 1.75 D, the more myopic eye was more likely to be the dominant sighting eye than for lower levels of anisometropia (p=0.002). Subjects with greater levels of anisometropia (> 1.75 D) also showed high levels of correlation between the dominant and non-dominant eyes in their biometric, biomechanical and optical characteristics. Conclusions: Although significantly different in axial length, anisometropic eyes display a high degree of interocular symmetry for a range of anterior eye biometrics and optical parameters. For higher levels of anisometropia, the more myopic eye tends to be the dominant sighting eye.
Abstract:
This paper describes a vision-based airborne collision avoidance system developed by the Australian Research Centre for Aerospace Automation (ARCAA) under its Dynamic Sense-and-Act (DSA) program. We outline the system architecture and the flight testing undertaken to validate the system performance under realistic collision course scenarios. The proposed system could be implemented in either manned or unmanned aircraft, and represents a step forward in the development of a “sense-and-avoid” capability equivalent to human “see-and-avoid”.
Abstract:
Magnetic Resonance Imaging was used to study changes in the crystalline lens and ciliary body with accommodation and aging. Monocular images were obtained in 15 young (19-29 years) and 15 older (60-70 years) emmetropes when viewing at far (6m) and at individual near points (14.5 to 20.9 cm) in the younger group. With accommodation, lens thickness increased (mean±95% CI: 0.33±0.06mm) by a similar magnitude to the decrease in anterior chamber depth (0.31±0.07mm) and equatorial diameter (0.32±0.04mm) with a decrease in the radius of curvature of the posterior lens surface (0.58±0.30mm). Anterior lens surface shape could not be determined due to the overlapping region with the iris. Ciliary ring diameter decreased (0.44±0.17mm) with no decrease in circumlental space or forward ciliary body movement. With aging, lens thickness increased (mean±95% CI: 0.97±0.24mm) similar in magnitude to the sum of the decrease in anterior chamber depth (0.45±0.21mm) and increase in anterior segment depth (0.52±0.23mm). Equatorial lens diameter increased (0.28±0.23mm) with no change in the posterior lens surface radius of curvature. Ciliary ring diameter decreased (0.57±0.41mm) with reduced circumlental space (0.43±0.15mm) and no forward ciliary body movement. Accommodative changes support the Helmholtz theory of accommodation including an increase in posterior lens surface curvature. Certain aspects of aging changes mimic accommodation.
Abstract:
Purpose. To create a binocular statistical eye model based on previously measured ocular biometric data. Methods. Thirty-nine parameters were determined for a group of 127 healthy subjects (37 male, 90 female; 96.8% Caucasian) with an average age of 39.9 ± 12.2 years and spherical equivalent refraction of −0.98 ± 1.77 D. These parameters described the biometry of both eyes and the subjects' age. Missing parameters were complemented by data from a previously published study. After confirmation of the Gaussian shape of their distributions, these parameters were used to calculate their mean and covariance matrices. These matrices were then used to calculate a multivariate Gaussian distribution. From this, an amount of random biometric data could be generated, which were then randomly selected to create a realistic population of random eyes. Results. All parameters had Gaussian distributions, with the exception of the parameters that describe total refraction (i.e., three parameters per eye). After these non-Gaussian parameters were omitted from the model, the generated data were found to be statistically indistinguishable from the original data for the remaining 33 parameters (TOST [two one-sided t tests]; P < 0.01). Parameters derived from the generated data were also significantly indistinguishable from those calculated with the original data (P > 0.05). The only exception to this was the lens refractive index, for which the generated data had a significantly larger SD. Conclusions. A statistical eye model can describe the biometric variations found in a population and is a useful addition to the classic eye models.
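The generation step amounts to fitting and then sampling a multivariate Gaussian over the biometric parameters; a minimal sketch (the array layout and names are illustrative assumptions):

    import numpy as np

    def synthesise_eyes(biometry, n_samples=1000, seed=0):
        """Fit a multivariate Gaussian to measured biometric parameters and draw
        synthetic 'random eyes' from it.

        biometry: (n_subjects, n_parameters) array, one row per subject, columns
        holding the biometric parameters of both eyes plus age."""
        mean = biometry.mean(axis=0)
        cov = np.cov(biometry, rowvar=False)     # parameter covariance matrix
        rng = np.random.default_rng(seed)
        return rng.multivariate_normal(mean, cov, size=n_samples)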
Abstract:
This article, published in ON LINE Opinion on 26 October 2006, discusses the broad ranging amendments to the Copyright Act which (in part) implement obligations under the Australia-US Free Trade Agreement (AUSFTA) which were introduced into parliament on October 19, 2006. It covers issues relating to the criminalisation of copyright infringement, user rights and liabilities, and Technological Protection Measures (TPMs).
Abstract:
The Toolbox, combined with MATLAB® and a modern workstation computer, is a useful and convenient environment for the investigation of machine vision algorithms. For modest image sizes the processing rate can be sufficiently ``real-time'' to allow for closed-loop control. Focus-of-attention methods such as dynamic windowing (not provided) can be used to increase the processing rate. With input from a firewire or web camera (support provided) and output to a robot (not provided) it would be possible to implement a visual servo system entirely in MATLAB. The Toolbox provides many functions that are useful in machine vision and vision-based control, and is useful for photometry, photogrammetry and colorimetry. It includes over 100 functions spanning operations such as image file reading and writing, acquisition, display, filtering, blob, point and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration and color space conversion.
Abstract:
This paper presents a survey of previously presented vision-based aircraft detection flight tests, and then presents new flight test results examining the impact of camera field-of-view choice on the detection range and false alarm rate characteristics of a vision-based aircraft detection technique. Using data collected from approaching aircraft, we examine the impact of camera field-of-view choice and confirm that, when aiming for similar levels of detection confidence, an improvement in detection range can be obtained by choosing a smaller effective field-of-view (in terms of degrees per pixel).
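The effect of field-of-view choice on angular resolution, and hence on a purely geometric notion of detection range, can be illustrated with a small-angle calculation; the pixel threshold and target span below are illustrative assumptions, not figures from the study.

    import math

    def degrees_per_pixel(hfov_deg, image_width_px):
        """Effective angular resolution: horizontal field of view divided by pixel count."""
        return hfov_deg / image_width_px

    def geometric_detection_range(target_span_m, hfov_deg, image_width_px, pixels_needed=3):
        """Range at which a target of the given span subtends `pixels_needed` pixels,
        under a pinhole camera and small-angle approximation."""
        angle_rad = math.radians(pixels_needed * degrees_per_pixel(hfov_deg, image_width_px))
        return target_span_m / angle_rad

    # Halving the field of view on the same sensor doubles this nominal range:
    print(geometric_detection_range(10.0, 60.0, 1024))   # ~3.26 km
    print(geometric_detection_range(10.0, 30.0, 1024))   # ~6.52 km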
Abstract:
The integration of unmanned aircraft into civil airspace is a complex issue. One key question is whether unmanned aircraft can operate just as safely as their manned counterparts. The absence of a human pilot in unmanned aircraft points to an obvious deficiency: the lack of an inherent see-and-avoid capability. To date, regulators have mandated that an “equivalent level of safety” be demonstrated before UAVs are permitted to routinely operate in civil airspace. This chapter describes the techniques, methods, and hardware integrations of a “sense-and-avoid” system designed to address the lack of a see-and-avoid capability in UAVs.
Abstract:
The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts in the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown change detection algorithm based on a relative entropy based HMM parameter estimator. Our proposed approach is able to overcome the lack of knowledge of post-change parameters, and is shown to have performance similar to that of the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values) when examined, on both simulated and real data, in a vision-based aircraft manoeuvre detection problem.
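For reference, the CUSUM baseline mentioned above (which, unlike the proposed relative-entropy approach, needs the post-change parameters) can be sketched as follows; the Gaussian example at the end is purely illustrative.

    import numpy as np
    from scipy.stats import norm

    def cusum(observations, logpdf_pre, logpdf_post, threshold):
        """Classical CUSUM change detector: accumulate the post/pre log-likelihood
        ratio, clamp at zero, and raise an alarm when the statistic crosses the
        threshold. Returns the alarm index, or None if no change is declared."""
        stat = 0.0
        for k, y in enumerate(observations):
            stat = max(0.0, stat + logpdf_post(y) - logpdf_pre(y))
            if stat > threshold:
                return k
        return None

    # Illustrative use: detect a shift in the mean of Gaussian observations.
    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
    print(cusum(y, norm(0, 1).logpdf, norm(1, 1).logpdf, threshold=10.0))  # typically shortly after 200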
Abstract:
Enterprise Systems (ES) have emerged as possibly the most important and challenging development in the corporate use of information technology in the last decade. Organizations have invested heavily in these large, integrated application software suites expecting improvements in business processes, management of expenditure, customer service, and, more generally, competitiveness and improved access to better information/knowledge (i.e., business intelligence and analytics). Forrester survey data consistently shows that investment in ES and enterprise applications in general remains the top IT spending priority, with the ES market estimated at $38 billion and predicted to grow at a steady rate of 6.9%, reaching $50 billion by 2012 (Wang & Hamerman, 2008). Yet, organizations have failed to realize all the anticipated benefits. One of the key reasons is the inability of employees to properly utilize the capabilities of the enterprise systems to complete their work and extract information critical to decision making. In response, universities (tertiary institutes) have developed academic programs aimed at addressing the skill gaps. In parallel with the proliferation of ES, there has been growing recognition of the importance of teaching Enterprise Systems at tertiary education institutes. Many academic papers have discussed the important role of Enterprise System curricula at tertiary education institutes (Ask, 2008; Hawking, 2004; Stewart, 2001), where the teaching philosophies, teaching approaches and challenges in Enterprise Systems education were discussed. Following the global trends, tertiary institutes in the Pacific-Asian region commenced introducing Enterprise System curricula in the late 1990s with a range of subjects (a subject represents a single unit, rather than a collection of units, which we refer to as a course) in faculties, schools and departments of Information Technology, Business and, in some cases, Engineering. Many tertiary institutes commenced their initial subject offerings around four salient concepts of Enterprise Systems: (1) Enterprise Systems implementations, (2) introductions to core modules of Enterprise Systems, (3) application customization using a programming language (e.g. ABAP) and (4) systems administration. While universities have come a long way in developing curricula in the enterprise system area, many obstacles remain: the high cost of technology, a shortage of qualified teaching staff and a lack of teaching materials.
Abstract:
For many years, computer vision has lured researchers with promises of a low-cost, passive, lightweight and information-rich sensor suitable for navigation purposes. The prime difficulty in vision-based navigation is that the navigation solution will continually drift with time unless external information is available, whether it be cues from the appearance of the scene, a map of features (whether built online or known a priori), or from an externally-referenced sensor. It is not merely position that is of interest in the navigation problem. Attitude (i.e. the angular orientation of a body with respect to a reference frame) is integral to a vision-based navigation solution and is often of interest in its own right (e.g. flight control). This thesis examines vision-based attitude estimation in an aerospace environment, and two methods are proposed for constraining drift in the attitude solution: one through a novel integration of optical flow and the detection of the sky horizon, and the other through a loosely-coupled integration of Visual Odometry and GPS position measurements. In the first method, roll angle, pitch angle and the three aircraft body rates are recovered through a novel method of tracking the horizon over time and integrating the horizon-derived attitude information with optical flow. An image processing front-end is used to select several candidate lines in an image that may or may not correspond to the true horizon, and the optical flow is calculated for each candidate line. Using an Extended Kalman Filter (EKF), the previously estimated aircraft state is propagated using a motion model, and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon in the image. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement to the EKF. To evaluate the accuracy of the algorithm, two flights were conducted, one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions and the other in a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42° and 0.71° respectively when compared with a truth attitude source. The Cessna 172 flight resulted in pitch and roll error standard deviations of 1.79° and 1.75° respectively. In the second method for estimating attitude, a novel integrated GPS/Visual Odometry (GPS/VO) navigation filter is proposed, using a structure similar to a classic loosely-coupled GPS/INS error-state navigation filter. Under such an arrangement, the error dynamics of the system are derived and a Kalman Filter is developed for estimating the errors in position and attitude. Through similar analysis to the GPS/INS problem, it is shown that the proposed filter is capable of recovering the complete attitude (i.e. pitch, roll and yaw) of the platform when subjected to acceleration not parallel to velocity, for both the monocular and stereo variants of the filter. Furthermore, it is shown that under general straight-line motion (e.g. constant velocity), only the component of attitude in the direction of motion is unobservable. Numerical simulations are performed to demonstrate the observability properties of the GPS/VO filter in both the monocular and stereo camera configurations.
Furthermore, the proposed filter is tested on imagery collected using a Cessna 172 to demonstrate the observability properties on real-world data. The proposed GPS/VO filter does not require additional restrictions or assumptions such as platform-specific dynamics, map-matching, feature-tracking, visual loop-closing, a gravity vector or additional sensors such as an IMU or magnetic compass. Since no platform-specific dynamics are required, the proposed filter is not limited to the aerospace domain and has the potential to be deployed on other platforms such as ground robots or mobile phones.
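As a very simplified illustration of the horizon-derived attitude cue used in the first method (ignoring camera calibration and roll/pitch coupling; the small-angle pinhole model below is an assumption, not the thesis' formulation):

    import math

    def attitude_from_horizon(x1, y1, x2, y2, image_h_px, vfov_deg):
        """Approximate roll and pitch from a detected horizon line.

        (x1, y1)-(x2, y2): horizon endpoints in pixels, origin at top-left.
        Roll is the bank angle of the line; pitch comes from the vertical offset of
        the line's midpoint from the image centre (horizon below centre -> nose up)."""
        roll_deg = math.degrees(math.atan2(y2 - y1, x2 - x1))
        deg_per_px = vfov_deg / image_h_px               # small-angle approximation
        mid_y = 0.5 * (y1 + y2)
        pitch_deg = (mid_y - image_h_px / 2.0) * deg_per_px
        return roll_deg, pitch_deg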