18 results for VISUAL INSPECTION METHODS
in the Cambridge University Engineering Department Publications Database
Abstract:
Liquid crystal on silicon (LCOS) is one of the most exciting technologies, combining the optical modulation characteristics of liquid crystals with the power and compactness of a silicon backplane. The objective of our work is to improve cell assembly and inspection methods by introducing new equipment for automated assembly and by using an optical inspection microscope. A Suss-MicroTec Universal device bonder is used for precision assembly and device packaging, and an Olympus BX51 high-resolution microscope is employed for device inspection. © 2009 Optical Society of America.
Abstract:
Large concrete structures need to be inspected in order to assess their current physical and functional state, to predict future conditions, to support investment planning and decision making, and to allocate limited maintenance and rehabilitation resources. Current procedures in condition and safety assessment of large concrete structures are performed manually leading to subjective and unreliable results, costly and time-consuming data collection, and safety issues. To address these limitations, automated machine vision-based inspection procedures have increasingly been proposed by the research community. This paper presents current achievements and open challenges in vision-based inspection of large concrete structures. First, the general concept of Building Information Modeling is introduced. Then, vision-based 3D reconstruction and as-built spatial modeling of concrete civil infrastructure are presented. Following that, the focus is set on structural member recognition as well as on concrete damage detection and assessment exemplified for concrete columns. Although some challenges are still under investigation, it can be concluded that vision-based inspection methods have significantly improved over the last 10 years, and now, as-built spatial modeling as well as damage detection and assessment of large concrete structures have the potential to be fully automated.
Abstract:
We have constructed plasmids to be used for in vitro signature-tagged mutagenesis (STM) of Campylobacter jejuni and used these to generate STM libraries in three different strains. Statistical analysis of the transposon insertion sites in the C. jejuni NCTC 11168 chromosome and the plasmids of strain 81-176 indicated that their distribution was not uniform. Visual inspection of the distribution suggested that the deviation from uniformity was not due to preferential integration of the transposon into a limited number of hot spots but rather to a bias towards insertions around the origin. We screened pools of mutants from the STM libraries for their ability to colonize the ceca of 2-week-old chickens harboring a standardized gut flora. We observed high-frequency random loss of colonization-proficient mutants. When cohoused birds were individually inoculated with different tagged mutants, random loss of colonization-proficient mutants was similarly observed, as was extensive bird-to-bird transmission of mutants. This indicates that the nature of Campylobacter colonization in chickens is complex and dynamic, and we hypothesize that bottlenecks in the colonization process and between-bird transmission account for these observations.
Abstract:
Aside from cracks, the impact of other surface defects, such as air pockets and discoloration, can be detrimental to the quality of concrete in terms of strength, appearance and durability. For this reason, local and national codes provide standards for quantifying the quality impact of these concrete surface defects, and owners plan for regular visual inspections to monitor surface conditions. However, manual visual inspection of concrete surfaces is a qualitative (and subjective) process with often unreliable results due to its reliance on inspectors’ own criteria and experience. It is also labor-intensive and time-consuming. This paper presents a novel, automated concrete surface defect detection and assessment approach that addresses these issues by automatically quantifying the extent of surface deterioration. According to this approach, images of the surface shot from a certain angle and distance can be used to automatically detect the number and size of surface air pockets and the degree of surface discoloration. The proposed method uses histogram equalization and filtering to extract such defects and identify their properties (e.g. size, shape, location). These properties are used to quantify the degree of impact on the concrete surface quality and provide a numerical tool to help inspectors accurately evaluate concrete surfaces. The method has been implemented in C++ and results that validate its performance are presented.
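The defect-extraction step above begins with histogram equalization. As an illustration only (the paper's implementation was in C++ and the abstract does not give its exact pipeline), the standard CDF-based equalization it refers to can be sketched in Python; the function name is mine:

```python
def equalize_histogram(pixels, levels=256):
    """Remap grayscale values so their cumulative distribution is ~linear.

    `pixels` is a flat list of intensities in [0, levels). Returns the
    remapped list, using the classic CDF-based equalization formula.
    """
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    denom = n - cdf_min
    if denom == 0:  # constant image: nothing to equalize
        return list(pixels)
    return [round((cdf[p] - cdf_min) / denom * (levels - 1)) for p in pixels]
```

On a low-contrast patch such as `[52, 52, 52, 60, 60, 180]`, the mapping stretches the occupied gray levels across the full range, which makes small air pockets and discoloration stand out for the subsequent filtering stage.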
Abstract:
There has recently been considerable research published on the applicability of monitoring systems for improving civil infrastructure management decisions. Less research has been published on the challenges in interpreting the collected data to provide useful information for engineering decision makers. This paper describes some installed monitoring systems on the Hammersmith Flyover, a major bridge located in central London (United Kingdom). The original goals of the deployments were to evaluate the performance of systems for monitoring prestressing tendon wire breaks and to assess the performance of the bearings supporting the bridge piers, because visual inspections had indicated evidence of deterioration in both. This paper aims to show that value can be derived from detailed analysis of measurements from a number of different sensors, including acoustic emission monitors and strain, temperature and displacement gauges. Two structural monitoring systems are described: a wired system installed by a commercial contractor on behalf of the client, and a research wireless deployment installed by the University of Cambridge. Careful interpretation of the displacement and temperature gauge data enabled bearings that were not functioning as designed to be identified. The acoustic emission monitoring indicated locations at which rapid deterioration was likely to be occurring; however, it was not possible to verify these results using any of the other sensors installed, and hence the only method for confirming these results was visual inspection. Recommendations for future bridge monitoring projects are made in light of the lessons learned from this monitoring case study. © 2014. This work is made available under the terms of the Creative Commons Attribution 4.0 International license.
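The abstract does not describe how the displacement and temperature data were interpreted. One plausible sketch of the underlying reasoning (a free-sliding expansion bearing should show displacement that tracks temperature, while a seized bearing barely moves) is a least-squares slope check. All function names, thresholds and numbers here are hypothetical, not values from the Hammersmith Flyover study:

```python
def thermal_slope(temps, disps):
    """Least-squares slope of displacement vs. temperature (e.g. mm per deg C)."""
    n = len(temps)
    mt = sum(temps) / n
    md = sum(disps) / n
    num = sum((t - mt) * (d - md) for t, d in zip(temps, disps))
    den = sum((t - mt) ** 2 for t in temps)
    return num / den

def bearing_seized(temps, disps, expected_slope, tolerance=0.5):
    """Flag a bearing whose measured thermal response falls well below
    the design expectation. `expected_slope` and `tolerance` are
    hypothetical engineering inputs chosen for illustration."""
    return abs(thermal_slope(temps, disps)) < abs(expected_slope) * tolerance
```

A bearing whose gauge shows almost no displacement over a 30-degree temperature swing would be flagged, while one tracking the expected thermal expansion would not.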
Abstract:
The safety of post-earthquake structures is evaluated manually by inspecting the visible damage inflicted on structural elements. This process is time-consuming and costly. In order to automate this type of assessment, several crack detection methods have been created. However, they focus only on locating crack points. The next step, retrieving useful properties (e.g. crack width, length, and orientation) from the crack points, has not yet been adequately investigated. This paper presents a novel method of retrieving crack properties. In the method, crack points are first located through state-of-the-art crack detection techniques. Then, the skeleton configurations of the points are identified using image thinning. The configurations are integrated into the distance field of the crack points, calculated through a distance transform. This way, crack width, length, and orientation can be automatically retrieved. The method was implemented using Microsoft Visual Studio and its effectiveness was tested on real crack images collected from Haiti.
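The width-retrieval idea described above (reading crack thickness off the distance field along the skeleton) can be sketched with a simple two-pass city-block distance transform. This is an illustrative simplification, not the paper's implementation, and the function names are mine:

```python
def distance_transform(mask):
    """Two-pass city-block (L1) distance transform.

    `mask` is a list of rows of 0/1; returns, per pixel, the distance to
    the nearest background (0) pixel, with 0 for background itself.
    """
    rows, cols = len(mask), len(mask[0])
    INF = rows + cols
    d = [[0 if mask[r][c] == 0 else INF for c in range(cols)] for r in range(rows)]
    for r in range(rows):                 # forward pass: top-left to bottom-right
        for c in range(cols):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    for r in range(rows - 1, -1, -1):     # backward pass: bottom-right to top-left
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < cols - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d

def crack_width(mask):
    """Width estimate in pixels: the distance field peaks on the medial
    axis (skeleton), so the local thickness is twice the peak minus one."""
    d = distance_transform(mask)
    peak = max(max(row) for row in d)
    return max(0, 2 * peak - 1)
```

For a crack mask three pixels wide, the field peaks at 2 on the centerline, giving a width of 3 pixels; a calibrated pixel size would convert this to millimetres.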
Abstract:
The commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure’s geometric data are not completely automated because of the necessary manual pre- and/or post-processing work. The required amount of human intervention and, in some cases, the high equipment costs associated with these methods impede their adoption by the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an alternative and inexpensive solution, for producing a sparse Euclidean 3D point cloud of an infrastructure scene utilizing two video streams captured by a set of two calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames. The 3D coordinates of the matched feature points are then calculated via triangulation. The detected SURF features in two successive video frames are automatically matched and the RANSAC algorithm is used to discard mismatches. The quaternion motion estimation method is then used along with bundle adjustment optimization to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distances of randomly selected feature points with their corresponding tape measurements.
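The triangulation step can be illustrated in the rectified special case, where depth follows directly from disparity (Z = fB/d). The paper itself triangulates from two calibrated cameras with SURF matches; this sketch assumes an already-rectified pair, and all parameter values below are hypothetical:

```python
def triangulate_rectified(xl, xr, y, f, baseline, cx=0.0, cy=0.0):
    """Back-project one matched feature from a rectified stereo pair.

    xl, xr: the feature's x-coordinate in the left/right image (pixels);
    y: its common row; f: focal length in pixels; baseline: camera
    separation in metres; (cx, cy): principal point. Returns (X, Y, Z)
    in metres in the left-camera frame.
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * baseline / disparity          # depth from disparity
    X = (xl - cx) * Z / f                 # back-project the image coordinates
    Y = (y - cy) * Z / f
    return X, Y, Z
```

With a 700-pixel focal length and a 12 cm baseline, a 70-pixel disparity places the feature 1.2 m from the cameras; doing this for every inlier match yields the sparse point cloud that the registration stage then stitches together.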
Abstract:
Several research studies have recently been initiated to investigate the use of construction site images for automated infrastructure inspection, progress monitoring, etc. In these studies, it is always necessary to extract material regions (concrete or steel) from the images. Existing methods make use of materials' characteristic color/texture ranges for material information retrieval, but they do not sufficiently discuss how to find appropriate ranges. As a result, users have to define them by themselves, which is difficult for those without a sufficient image processing background. This paper presents a novel method of identifying concrete material regions using machine learning techniques. Under the method, each construction site image is first divided into regions through image segmentation. Then, the visual features of each region are calculated and classified with a pre-trained classifier. The output value determines whether the region is composed of concrete or not. The method was implemented using C++ and tested on hundreds of construction site images. The results were compared with manual classifications to indicate the method's validity.
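The abstract does not say which visual features or classifier were used. A minimal stand-in, purely for illustration, is a nearest-centroid classifier over two toy region features (mean intensity and mean saturation, exploiting concrete's tendency toward low-saturation gray); every name and number here is my own:

```python
def region_features(rgb_pixels):
    """Toy visual features for a segmented region: mean intensity and a
    crude mean saturation (max channel minus min channel).
    `rgb_pixels` is a list of (r, g, b) tuples in [0, 255]."""
    n = len(rgb_pixels)
    mean_intensity = sum((r + g + b) / 3 for r, g, b in rgb_pixels) / n
    mean_saturation = sum(max(p) - min(p) for p in rgb_pixels) / n
    return mean_intensity, mean_saturation

def train_centroids(labeled_regions):
    """Per-class mean feature vector: a stand-in for the paper's
    pre-trained classifier, which the abstract does not specify."""
    sums, counts = {}, {}
    for (fx, fy), label in labeled_regions:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + fx, sy + fy)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl]) for lbl, (sx, sy) in sums.items()}

def classify(features, centroids):
    """Assign a region to the class with the nearest feature centroid."""
    fx, fy = features
    return min(centroids,
               key=lambda l: (fx - centroids[l][0]) ** 2 + (fy - centroids[l][1]) ** 2)
```

In use, each segmented region's features would be computed with `region_features` and passed to `classify`; the real system would substitute richer texture features and a properly trained classifier.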
Abstract:
Manually inspecting concrete surface defects (e.g., cracks and air pockets) is not always reliable. It is also labor-intensive. In order to overcome these limitations, automated inspection using image processing techniques has been proposed. However, current work can only detect defects in an image, without the ability to evaluate them. This paper presents a novel approach for automatically assessing the impact of two common surface defects (i.e., air pockets and discoloration). These two defects are first located using the developed detection methods. Their attributes, such as the number of air pockets and the area of discoloration regions, are then retrieved to calculate the defects' visual impact ratios (VIRs). The appropriate threshold values for these VIRs are selected through a manual rating survey. This way, for a given concrete surface image, its quality in terms of air pockets and discoloration can be automatically measured by judging whether their VIRs are below the threshold values or not. The method presented in this paper was implemented in C++ and tested on a database of concrete surface images to validate its performance.
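Taking VIR as a simple defective-area ratio (one plausible reading; the abstract does not give the exact formula), the pass/fail judgment against survey-derived thresholds can be sketched as follows, with all names and threshold values hypothetical:

```python
def visual_impact_ratio(defect_mask):
    """VIR as the fraction of surface pixels flagged as defective.
    `defect_mask` is a list of rows of 0/1 from a detection method."""
    total = sum(len(row) for row in defect_mask)
    defective = sum(sum(row) for row in defect_mask)
    return defective / total

def surface_acceptable(air_mask, discoloration_mask, air_threshold, disc_threshold):
    """Accept the surface only if both defect VIRs fall at or below their
    thresholds (which the paper derived from a manual rating survey)."""
    return (visual_impact_ratio(air_mask) <= air_threshold and
            visual_impact_ratio(discoloration_mask) <= disc_threshold)
```

A surface whose air-pocket VIR exceeds its threshold fails regardless of how clean its discoloration score is, mirroring the per-defect judgment the abstract describes.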
Abstract:
Air pockets, one kind of concrete surface defect, are often created on formed concrete surfaces during concrete construction. Their existence undermines the desired appearance and visual uniformity of architectural concrete. Therefore, measuring the impact of air pockets on the concrete surface is vital in assessing the quality of architectural concrete. Traditionally, such measurements are mainly based on in-situ manual inspections, the results of which are subjective and heavily dependent on the inspectors’ own criteria and experience. Often, inspectors may make different assessments even when inspecting the same concrete surface. In addition, the need for experienced inspectors costs owners or general contractors more in inspection fees. To alleviate these problems, this paper presents a methodology that can measure air pockets quantitatively and automatically. In order to achieve this goal, a high-contrast, scaled image of a concrete surface is acquired from a fixed distance range, and then a spot filter is used to accurately detect air pockets with the help of an image pyramid. The properties of the air pockets (their number, size, and occupation area) are subsequently calculated. These properties are used to quantify the impact of air pockets on the architectural concrete surface. The methodology is implemented in a C++-based prototype and tested on a database of concrete surface images. Comparisons with manual tests validated its measuring accuracy. As a result, the methodology presented in this paper can increase the reliability of concrete surface quality assessment.
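The property-calculation step (number and size of detected air pockets) amounts to connected-component labeling of the detection mask. A sketch of that step, with the spot filter and image pyramid omitted and the function name my own:

```python
def count_air_pockets(mask):
    """Label 4-connected components in a binary detection mask.

    Returns (count, sizes), where `sizes` lists each pocket's pixel area
    in scan order. Summing `sizes` gives the total occupation area.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one pocket with an explicit stack
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(area)
    return len(sizes), sizes
```

Because the input image is scaled and shot from a fixed distance range, the pixel areas returned here could be converted directly to physical areas when quantifying surface impact.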
Abstract:
Manually inspecting bridges is a time-consuming and costly task. There are over 600,000 bridges in the US, and not all of them can be inspected and maintained within the specified time frame, as some state DOTs cannot afford the necessary costs and manpower. This paper presents a novel method that can detect bridge concrete columns from visual data for the purpose of eventually creating an automated bridge condition assessment system. The method employs SIFT feature detection and matching to find overlapping areas among images. Affine transformation matrices are then calculated to combine images containing different segments of one column into a single image. Following that, the bridge columns are detected by identifying the boundaries in the stitched image and classifying the material within each boundary. Preliminary test results using real bridge images indicate that most columns in stitched images can be correctly detected, demonstrating the viability of this research.
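The stitching step applies the computed affine transformation to image coordinates. A minimal sketch of that mapping (the matrix itself would come from the SIFT correspondences, which are not shown here, and the function name is mine):

```python
def apply_affine(matrix, points):
    """Map image points through a 2x3 affine matrix [[a, b, tx], [c, d, ty]].

    Each (x, y) maps to (a*x + b*y + tx, c*x + d*y + ty); this is the warp
    used to bring one column segment into the coordinate frame of another
    before the images are merged.
    """
    (a, b, tx), (c, d, ty) = matrix
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]
```

For a pure translation of (10, -5) pixels, the point (2, 3) lands at (12, -2); composing such warps segment by segment assembles the full-height column image in which boundaries are then detected.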