414 results for Near-vision impairment
Abstract:
Inspection of solder joints has been a critical process in the electronics manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. The solder joint inspection problem is more challenging than many other visual inspections because of the variability in the appearance of solder joints. Although many research works and various techniques have been developed to classify defects in solder joints, these methods rely on complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is selecting the right classification method. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry requirements. This dissertation aims to provide a solution that overcomes some of the limitations of current inspection techniques. This research proposes a two-stage automatic solder joint classification system. The “front-end” inspection stage includes illumination normalisation, localisation and segmentation. The illumination normalisation approach effectively and efficiently eliminates the effect of uneven illumination while preserving the properties of the processed image. The “back-end” inspection stage classifies solder joints using Log Gabor filters and classifier fusion. Five levels of solder quality, defined with respect to the amount of solder paste, are considered. The Log Gabor filter is shown to achieve high recognition rates and to be resistant to misalignment, and further testing demonstrates its advantage over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed for improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates. The proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera; the choice of suitable features compensates for the use of a simple illumination system. The new system proposed in this research can be incorporated in the development of an automated, non-contact, non-destructive and low-cost solder joint quality inspection system.
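As a rough illustration of the “back-end” idea, the sketch below (a minimal example, not the dissertation's implementation; the filter parameters and the fusion weight are assumptions) extracts Log Gabor magnitude features from a grayscale solder-joint image and fuses two classifiers' normalised scores by a weighted sum:

    # Illustrative sketch only: Log Gabor feature extraction and score-level fusion.
    import numpy as np

    def log_gabor_features(img, f0=0.1, sigma_ratio=0.55):
        """Radial Log Gabor filtering in the frequency domain; returns the
        magnitude response flattened into a feature vector."""
        rows, cols = img.shape
        u = np.fft.fftfreq(cols)
        v = np.fft.fftfreq(rows)
        radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
        radius[0, 0] = 1.0                      # avoid log(0) at the DC term
        lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
        lg[0, 0] = 0.0                          # zero DC response
        response = np.fft.ifft2(np.fft.fft2(img) * lg)
        return np.abs(response).ravel()

    def fuse_scores(score_a, score_b, w=0.6):
        """Weighted-sum fusion of two normalised classifier scores (w is hypothetical)."""
        return w * score_a + (1 - w) * score_b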
Abstract:
Mid-infrared (MIR) and near-infrared (NIR) spectroscopy have been compared and evaluated for differentiating kaolinite, coal-bearing kaolinite and halloysite. Kaolinite, coal-bearing kaolinite and halloysite are three relatively abundant minerals of the kaolin group, especially in China. In the MIR spectra, differences between kaolinite and halloysite appear in the 3000-3600 cm⁻¹ region, but kaolinite and halloysite cannot be clearly differentiated there, let alone kaolinite and coal-bearing kaolinite. NIR, together with MIR, provides sufficient evidence to differentiate kaolinite from halloysite, and in particular kaolinite from coal-bearing kaolinite. There are obvious differences between kaolinite and halloysite across the whole range of their spectra, and some differences are also apparent between kaolinite and coal-bearing kaolinite. Therefore, the reproducibility of measurement, the signal-to-noise ratio and the richness of qualitative information should all be considered when selecting a spectroscopic method for mineral analysis.
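A hypothetical sketch of the kind of comparison described, assuming only a wavenumber axis and two absorbance arrays (none of which come from the study): it quantifies how strongly two spectra differ inside the 3000-3600 cm⁻¹ OH-stretching region and estimates a crude signal-to-noise ratio for each:

    # Illustrative sketch only: band-region difference and a simple SNR estimate.
    import numpy as np

    def band_difference(wavenumbers, spec_a, spec_b, lo=3000.0, hi=3600.0):
        """Mean absolute difference between two spectra inside a wavenumber band."""
        mask = (wavenumbers >= lo) & (wavenumbers <= hi)
        return np.mean(np.abs(spec_a[mask] - spec_b[mask]))

    def simple_snr(spectrum, noise_window=50):
        """Peak signal over the standard deviation of a presumed flat tail region."""
        return spectrum.max() / spectrum[-noise_window:].std()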
Abstract:
Objective: To determine whether bifocal and prismatic bifocal spectacles can control myopia in children with high rates of myopic progression. Methods: This was a randomized controlled clinical trial. One hundred thirty-five myopic Chinese Canadian children (73 girls and 62 boys; myopia of at least 1.00 diopter [D]) with myopic progression of at least 0.50 D in the preceding year were randomly assigned to 1 of 3 treatments: (1) single-vision lenses (n = 41), (2) +1.50-D executive bifocals (n = 48), or (3) +1.50-D executive bifocals with a 3-prism-diopter base-in prism in the near segment of each lens (n = 46). Main Outcome Measures: Myopic progression measured by an automated refractor under cycloplegia and (secondarily) increase in axial length measured by ultrasonography at 6-month intervals for 24 months. Only data from the right eye were used. Results: Of the 135 children (mean age, 10.29 years [SE, 0.15 years]; mean myopia, –3.08 D [SE, 0.10 D]), 131 (97%) completed the trial after 24 months. Myopic progression averaged –1.55 D (SE, 0.12 D) for those who wore single-vision lenses, –0.96 D (SE, 0.09 D) for those who wore bifocals, and –0.70 D (SE, 0.10 D) for those who wore prismatic bifocals. Axial length increased by an average of 0.62 mm (SE, 0.04 mm), 0.41 mm (SE, 0.04 mm), and 0.41 mm (SE, 0.05 mm), respectively. The treatment effect of bifocals (0.59 D) and prismatic bifocals (0.85 D) was significant (P < .001), and both bifocal groups had less axial elongation (by 0.21 mm) than the single-vision lens group (P < .001). Conclusions: Bifocal lenses can moderately slow myopic progression over 24 months in children with high rates of progression.
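The reported treatment effects and the axial-length difference follow directly from the group means; as a check (our notation, not the authors'):

    \mathrm{effect}_{\mathrm{bifocal}} = (-0.96) - (-1.55) = 0.59\,\mathrm{D}
    \mathrm{effect}_{\mathrm{prismatic}} = (-0.70) - (-1.55) = 0.85\,\mathrm{D}
    \Delta(\text{axial length}) = 0.62 - 0.41 = 0.21\,\mathrm{mm}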
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks, including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. Compared to traditional narrow-field-of-view perspective cameras, more accurate estimates of camera egomotion can be obtained from wide-angle images. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one reason for the increasing popularity of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions of a scene from very different viewpoints, which makes them well suited to visual place recognition. However, the ability to estimate camera egomotion and recognise the same scene in two different images depends on reliably detecting and describing the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images, and applying them directly to wide-angle images is problematic because no account is made for the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both detectors reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. Because the image captured by any central-projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each variant must find the scale-space representation of an image on the sphere, and they differ in the approach used to do so. Extensive experiments using real and synthetically generated wide-angle images validate the two new keypoint detectors and the method of keypoint description. The better of the two detectors is then applied to vision-based localisation tasks, including visual odometry and visual place recognition, using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed that attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
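A minimal sketch of the mapping this approach relies on, assuming (purely for illustration) an equidistant fisheye model with focal length f and principal point (cx, cy): back-projecting wide-angle pixel coordinates onto the unit sphere so that keypoint detection can operate in a distortion-invariant domain:

    # Illustrative sketch only: pixel-to-sphere mapping under an assumed equidistant model.
    import numpy as np

    def pixels_to_sphere(points_uv, f, cx, cy):
        """Map Nx2 pixel coordinates to Nx3 unit vectors on the sphere."""
        x = points_uv[:, 0] - cx
        y = points_uv[:, 1] - cy
        r = np.hypot(x, y)
        theta = r / f                          # equidistant model: r = f * theta
        phi = np.arctan2(y, x)
        return np.column_stack((np.sin(theta) * np.cos(phi),
                                np.sin(theta) * np.sin(phi),
                                np.cos(theta)))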
Abstract:
Tilted disc syndrome can cause visual field defects due to an optic disc anomaly. Recent electrophysiological findings demonstrate reduced central outer retinal function in eyes with ophthalmoscopically normal maculae. We measured macular sensitivity with a microperimeter and performed psychophysical assessment of mesopic rod and cone luminance temporal sensitivity (critical fusion frequency) in a 52-year-old male patient with tilted disc syndrome and ophthalmoscopically normal maculae. We found a marked reduction of sensitivity in the central 20 degrees and reduced rod- and cone-mediated mesopic visual function. Our findings extend previous electrophysiological data suggesting outer retinal involvement of cone pathways and present a case of rod and cone impairment mediated via the magnocellular pathway in uncomplicated tilted disc syndrome.
Abstract:
This paper proposes a novel application of Skid-to-Turn maneuvers for fixed-wing Unmanned Aerial Vehicles (UAVs) inspecting locally linear infrastructure. Fixed-wing UAVs, following the design of manned aircraft, commonly employ Bank-to-Turn maneuvers to change heading and thus direction of travel. Whilst effective, banking an aircraft during the inspection of ground-based features hinders data collection, with body-fixed sensors angled away from the direction of turn and a panning motion induced through roll rate that can reduce data quality. By adopting Skid-to-Turn maneuvers, the aircraft can change heading whilst maintaining wings-level flight, allowing body-fixed sensors to maintain a downward-facing orientation. An Image-Based Visual Servo controller is developed to directly control the position of features as captured by onboard inspection sensors. This improves on the indirect approach taken by other tracking controllers, where a course over ground directly above the feature is assumed to capture it centred in the field of view. The performance of the proposed controller is compared against that of a Bank-to-Turn tracking controller driven by GPS-derived cross-track error, in a simulation environment developed to replicate the field of view of a body-fixed camera.
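As a hedged sketch of the image-based servo idea (gains, limits and variable names are illustrative, not the paper's controller): drive the tracked feature's lateral pixel offset toward the image centre with a proportional yaw-rate command while roll is held at zero for wings-level, Skid-to-Turn flight:

    # Illustrative sketch only: proportional yaw-rate command from an image feature error.
    def skid_to_turn_yaw_rate(feature_u, image_width, k_p=0.8, max_rate=0.3):
        """Yaw-rate command (rad/s) from the normalised lateral feature error."""
        error = (feature_u - image_width / 2.0) / (image_width / 2.0)  # in [-1, 1]
        rate = k_p * error
        return max(-max_rate, min(max_rate, rate))

    # With wings held level, the roll command stays at zero so the body-fixed
    # camera keeps pointing straight down at the linear feature.
    roll_command = 0.0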
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets because of the relatively low cost, size, weight, and power requirements of the sensors involved. This paper describes the development of detection algorithms and the evaluation of a real-time, flight-ready hardware implementation of a vision-based collision detection system suitable for fixed-wing small/medium-size UAS. In particular, this paper demonstrates the use of a Hidden Markov filter to track and estimate the elevation (β) and bearing (α) of the target, compares several candidate graphics processing hardware choices, and proposes an image-based visual servoing approach to achieve collision avoidance.
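A simplified sketch (an assumption-laden stand-in, not the paper's filter) of a discrete hidden-Markov-style update that tracks a target bearing over a grid of candidate angles: each frame, the belief is diffused by a transition kernel and re-weighted by image-derived detection likelihoods:

    # Illustrative sketch only: one predict/update cycle over discretised bearing bins.
    import numpy as np

    def hmm_bearing_step(belief, likelihood, spread=1):
        """Bayes-style predict/update for a 1D grid of bearing hypotheses."""
        # Predict: allow the target to drift by up to `spread` bins per frame.
        kernel = np.ones(2 * spread + 1) / (2 * spread + 1)
        predicted = np.convolve(belief, kernel, mode="same")
        # Update: weight by the per-bin detection likelihood and renormalise.
        posterior = predicted * likelihood
        return posterior / posterior.sum()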