92 results for SEM image analysis
Abstract:
Accurate road lane information is crucial for advanced vehicle navigation and safety applications. With the increasing availability of very high resolution (VHR) imagery from digital airborne sources, automatic extraction of road details from aerial images would greatly facilitate data acquisition and significantly reduce the cost of data collection and updates. In this paper, we propose an effective approach to detect road lanes from aerial images using image analysis procedures. The algorithm starts by constructing the digital surface model (DSM) and true orthophotos from the stereo images. Next, a maximum likelihood clustering algorithm is used to separate roads from other ground objects. After detection of the road surface, the road traffic and lane lines are further detected using texture enhancement and morphological operations. Finally, the generated road network is evaluated to test the performance of the proposed approach, using datasets provided by the Queensland Department of Main Roads. The experimental results demonstrate the effectiveness of our approach.
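The maximum likelihood clustering step can be illustrated with a minimal sketch: assuming a single-band Gaussian intensity model per class, each pixel is assigned to the class whose model gives it the highest likelihood. The class statistics and pixel values below are invented for illustration, not taken from the paper.

```python
import math

def gaussian_loglik(x, mean, std):
    """Log-likelihood of intensity x under a 1-D Gaussian class model."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

def ml_classify(pixel, classes):
    """Assign a pixel to the class with the maximum likelihood."""
    return max(classes, key=lambda c: gaussian_loglik(pixel, *classes[c]))

# Hypothetical per-class intensity statistics: (mean, std)
classes = {"road": (120.0, 10.0), "vegetation": (60.0, 15.0), "building": (200.0, 20.0)}

labels = [ml_classify(p, classes) for p in [118, 55, 210, 130]]
print(labels)  # ['road', 'vegetation', 'building', 'road']
```

In practice the model would use multi-band statistics estimated from training samples rather than hand-set scalars.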
Abstract:
By incorporating ferrocene into the hydrophobic membrane of PEG-b-PCL polymersome nanoparticles, it is possible to selectively visualize their core using transmission electron microscopy (TEM). Two sizes of ferrocene-loaded polymersomes, with mean hydrodynamic diameters of approximately 40 and 90 nm, were prepared. Image analysis of TEM images of these polymersomes found that the mean diameter of the core was 4–5 times smaller than the mean hydrodynamic diameter. The values obtained also allow the surface diameter and internal volume of the core to be calculated.
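Assuming the core is approximately spherical, its surface area and internal volume follow directly from the measured core diameter. The diameters below are hypothetical values in the 4–5 times smaller range the abstract describes, not measurements from the study.

```python
import math

def sphere_surface_area(d):
    """Surface area of a sphere of diameter d: pi * d**2."""
    return math.pi * d ** 2

def sphere_volume(d):
    """Internal volume of a sphere of diameter d: pi * d**3 / 6."""
    return math.pi * d ** 3 / 6

# Hypothetical core diameters (nm), roughly 4-5x smaller than the
# 40 and 90 nm hydrodynamic diameters reported in the abstract
for d in (9.0, 20.0):
    print(f"d = {d} nm: area = {sphere_surface_area(d):.1f} nm^2, "
          f"volume = {sphere_volume(d):.1f} nm^3")
```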
Abstract:
A presentation about immersive visualised simulation systems, image analysis, and GPGPU technology.
Abstract:
A good object representation, or object descriptor, is one of the key issues in object-based image analysis. To effectively fuse color and texture into a unified descriptor at the object level, this paper presents a novel method for feature fusion. A color histogram and uniform local binary patterns are extracted from arbitrarily shaped image objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships between the extracted color and texture features. A maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatic selection of the optimal feature set from the fused features. The proposed method is evaluated using an SVM as the benchmark classifier and is applied to object-based vegetation species classification using high-spatial-resolution aerial imagery. Experimental results demonstrate that a great improvement can be achieved by using the proposed feature fusion method.
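A minimal sketch of the fusion front end, assuming each descriptor is L1-normalized before concatenation; the kernel PCA and intrinsic-dimensionality steps are omitted, and the histogram values are invented for illustration.

```python
def l1_normalize(hist):
    """Scale a histogram so its bins sum to 1 (guards against empty input)."""
    total = sum(hist)
    return [h / total for h in hist] if total else hist

def fuse_features(color_hist, lbp_hist):
    """Normalize each descriptor separately, then concatenate
    into a single object-level feature vector."""
    return l1_normalize(color_hist) + l1_normalize(lbp_hist)

# Hypothetical histograms for one image object
color_hist = [12, 30, 8, 50]   # e.g. a 4-bin color histogram
lbp_hist = [5, 5, 10]          # e.g. a 3-bin uniform-LBP histogram

fused = fuse_features(color_hist, lbp_hist)
print(len(fused), round(sum(fused), 6))  # 7 2.0
```

Normalizing the two descriptors separately keeps either one from dominating the fused vector before the nonlinear dimensionality reduction is applied.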
Abstract:
Purpose: To analyze the repeatability of measuring nerve fiber length (NFL) from images of the human corneal subbasal nerve plexus using semiautomated software. Methods: Images were captured from the corneas of 50 subjects with type 2 diabetes mellitus showing varying severity of neuropathy, using the Heidelberg Retina Tomograph 3 with Rostock Corneal Module. Semiautomated nerve analysis software was used independently by two observers to determine NFL from images of the subbasal nerve plexus. This procedure was undertaken on two occasions, 3 days apart. Results: The intraclass correlation coefficient values were 0.95 (95% confidence interval: 0.92–0.97) for individual subjects and 0.95 (95% confidence interval: 0.74–1.00) between observers. Bland-Altman plots of the NFL values indicated a reduced spread of data at lower NFL values. The overall spread of data was less for (a) the observer who was more experienced at analyzing nerve fiber images and (b) the second measurement occasion. Conclusions: Semiautomated measurement of NFL in the subbasal nerve fiber layer is highly repeatable. Repeatability can be enhanced by using more experienced observers. It may be possible to markedly improve repeatability when measuring this anatomic structure using fully automated image analysis software.
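The Bland-Altman analysis mentioned above can be sketched as follows; the paired measurements are hypothetical, and the bias ± 1.96·SD limits of agreement are the conventional 95% bounds rather than values from the study.

```python
import statistics

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement
    between two raters' paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired NFL measurements from two observers
obs1 = [14.2, 10.5, 18.1, 7.9, 12.3]
obs2 = [13.8, 10.9, 17.5, 8.2, 12.0]

bias, (lo, hi) = bland_altman(obs1, obs2)
print(round(bias, 3), round(lo, 2), round(hi, 2))
```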
Abstract:
Dry eye syndrome is one of the most commonly reported eye health conditions. Dynamic-area high-speed videokeratoscopy (DA-HSV) represents a promising alternative to more invasive clinical methods for the assessment of tear film surface quality (TFSQ), particularly as Placido-disk videokeratoscopy is both relatively inexpensive and widely used for corneal topography assessment. Hence, improving this technique to diagnose dry eye is of clinical significance and is the aim of this work. First, a novel ray-tracing model is proposed that simulates the formation of a Placido image. This model shows the relationship between tear film topography changes and the resulting Placido image, and serves as a benchmark for assessing indicators of the rings' regularity. Further, a novel block-feature TFSQ indicator is proposed for detecting dry eye from a series of DA-HSV measurements. The new indicator, evaluated on data from a retrospective clinical study of 22 normal and 12 dry eyes, showed a substantial improvement in the ability of the technique to discriminate dry eye from normal tear film subjects. The best discrimination was obtained under suppressed blinking conditions. In conclusion, this work highlights the potential of DA-HSV as a clinical tool to diagnose dry eye syndrome.
Abstract:
An application of image processing techniques to the recognition of hand-drawn circuit diagrams is presented. The scanned image of a diagram is pre-processed to remove noise and converted to bilevel. Morphological operations are applied to obtain a clean, connected representation using thinned lines. The diagram comprises nodes, connections, and components. Nodes and components are segmented using appropriate thresholds on a spatially varying object pixel density. Connection paths are traced using a pixel stack. Nodes are classified using syntactic analysis. Components are classified using a combination of invariant moments, scalar pixel-distribution features, and vector relationships between straight lines in polygonal representations. A node recognition accuracy of 82% and a component recognition accuracy of 86% were achieved on a database comprising 107 nodes and 449 components. This recogniser can be used for layout "beautification" or to generate input code for circuit analysis and simulation packages.
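Connection tracing with a pixel stack can be sketched as an iterative flood fill over the bilevel image. The grid below is a hypothetical thinned connection, and 4-connectivity is an assumption; the paper's exact tracing rules may differ.

```python
def trace_connection(grid, start):
    """Trace a connected run of object pixels (value 1) using an
    explicit pixel stack, following 4-connectivity."""
    rows, cols = len(grid), len(grid[0])
    stack, path, seen = [start], [], {start}
    while stack:
        r, c = stack.pop()
        path.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 1 and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return path

# A thinned connection with a corner (hypothetical bilevel image)
grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
]
print(len(trace_connection(grid, (1, 0))))  # 4 pixels on the path
```

An explicit stack avoids the recursion-depth limits a recursive flood fill would hit on long connection paths.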
Abstract:
Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
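For a lattice small enough to enumerate, the normalizing constant that the reduced dependence approximation targets can be computed exactly by brute force. The sketch below uses a binary autologistic model with field parameter alpha and association parameter beta on nearest-neighbour pairs; the parameter values are illustrative, and this enumeration is feasible only for tiny grids.

```python
import itertools
import math

def log_normalizing_constant(rows, cols, alpha, beta):
    """Brute-force log normalizing constant of a binary {0,1} autologistic
    model on a rows x cols lattice (2**(rows*cols) configurations)."""
    total = 0.0
    for y in itertools.product((0, 1), repeat=rows * cols):
        field = [y[r * cols:(r + 1) * cols] for r in range(rows)]
        energy = alpha * sum(y)
        for r in range(rows):
            for c in range(cols):
                if c + 1 < cols:   # horizontal neighbour pair
                    energy += beta * field[r][c] * field[r][c + 1]
                if r + 1 < rows:   # vertical neighbour pair
                    energy += beta * field[r][c] * field[r + 1][c]
        total += math.exp(energy)
    return math.log(total)

print(round(log_normalizing_constant(3, 3, 0.0, 0.5), 4))
```

Exact values like this are useful for checking an approximation such as the forward-recursion scheme on small test lattices before applying it at scale.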
Abstract:
Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework of road information modeling has been proposed for rural and urban scenarios respectively, and an integrated system has been developed to deal with road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics in different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate the road information generation in high resolution images. The histogram thresholding method is then chosen to classify road details in high resolution images, where color space transformation is used for data preparation. After the road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while constraining other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced by LiDAR data can also be combined to obtain the 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR.
Object-oriented image analysis methods are employed to perform feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. Then the support vector machine (SVM) algorithm is further applied on the MS segmented image to extract road objects. Road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is performed using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the datasets of Bundaberg, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information for both datasets has been carried out. The experiments and the evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
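The nDSM filtering step can be sketched as follows: subtract the LiDAR-derived DTM from the DSM to get height above ground, then mask out cells above a height threshold. The elevation grids and the 0.5 m threshold below are invented for illustration.

```python
def compute_ndsm(dsm, dtm):
    """Normalized DSM: height above ground at each cell (DSM minus DTM)."""
    return [[s - t for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

def ground_mask(ndsm, max_height=0.5):
    """Keep only near-ground cells; taller cells (buildings,
    vehicles, trees) are masked out."""
    return [[h <= max_height for h in row] for row in ndsm]

# Hypothetical 2x3 elevation grids in meters
dsm = [[10.2, 14.8, 10.1],
       [10.3, 12.5, 10.2]]
dtm = [[10.0, 10.0, 10.0],
       [10.1, 10.1, 10.1]]

mask = ground_mask(compute_ndsm(dsm, dtm))
print(mask)  # [[True, False, True], [True, False, True]]
```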
Abstract:
Background: Adolescent idiopathic scoliosis is a complex three-dimensional deformity, involving a lateral deformity in the coronal plane and axial rotation of the vertebrae in the transverse plane. Gravitational loading plays an important biomechanical role in governing the coronal deformity; however, less is known about how it influences the axial deformity. This study investigates the change in three-dimensional deformity of a series of scoliosis patients due to compressive axial loading. Methods: Magnetic resonance imaging scans were obtained, and the coronal deformity (measured using the coronal Cobb angle) and axial rotations were measured for a group of 18 scoliosis patients (mean major Cobb angle 43.4°). Each patient was scanned in an unloaded and a loaded condition, with compressive loads equivalent to 50% of body mass applied using a custom-developed compressive device. Findings: The mean increase in major Cobb angle due to compressive loading was 7.4° (SD 3.5°). The most axially rotated vertebra was observed at the apex of the structural curve, and the largest average intravertebral rotations were observed toward the limits of the coronal deformity. A level-wise comparison showed no significant difference between the average loaded and unloaded vertebral axial rotations (intra-observer error = 2.56°) or intravertebral rotations at each spinal level. Interpretation: This study suggests that the biomechanical effects of axial loading primarily influence the coronal deformity, with no significant change in vertebral axial rotation or intravertebral rotation observed between the unloaded and loaded conditions. However, the magnitude of changes in vertebral rotation with compressive loading may have been too small to detect given the resolution of the current technique.
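The coronal Cobb angle is the angle between the most-tilted upper and lower endplate lines. A minimal sketch, assuming each endplate is summarized by a 2-D direction vector in the coronal plane (the 20° tilts below are illustrative, not study data):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle (degrees) between upper and lower endplate lines,
    each given as a 2-D direction vector in the coronal plane."""
    (ux, uy), (lx, ly) = upper_endplate, lower_endplate
    dot = ux * lx + uy * ly
    norm = math.hypot(ux, uy) * math.hypot(lx, ly)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return min(ang, 180.0 - ang)  # angle between lines, not vectors

# Hypothetical endplates tilted +20 and -20 degrees from horizontal
up = (math.cos(math.radians(20)), math.sin(math.radians(20)))
lo = (math.cos(math.radians(-20)), math.sin(math.radians(-20)))
print(round(cobb_angle(up, lo), 1))  # 40.0
```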
Abstract:
Argon ions were implanted on titanium discs to study their effect on bone cell adhesion and proliferation. Polished titanium discs were prepared and implanted with argon ions at different doses. Afterwards, the samples were sterilized using UV light, inoculated with human bone cells, and incubated. Once fixed and rinsed, image analysis was used to quantify the number of cells attached to the titanium discs. Cell proliferation tests were also conducted after a period of 120 hours. Cell adhesion was seen to be higher on the ion-implanted surfaces. SEM analysis showed that the attached cells spread more on the ion-implanted surfaces. The number of cells attached was higher on implanted surfaces, and they tended to occupy wider areas with healthier cells.
Abstract:
The mining environment, being complex, irregular, and time varying, presents a challenging prospect for stereo vision. The objective is to produce a stereo vision sensor suited to close-range scenes consisting primarily of rocks. This sensor should be able to produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this investigation. A number of area-based matching metrics have been implemented, including the SAD, SSD, NCC, and their zero-meaned versions. The NCC and the zero-meaned SAD and SSD were found to produce the disparity maps with the highest proportion of valid matches. The plain SAD and SSD were the least computationally expensive, since all their operations take place in integer arithmetic; however, they were extremely sensitive to radiometric distortion. Non-parametric matching techniques, in particular the rank and census transforms, have also been investigated. The rank and census transforms were found to be robust with respect to radiometric distortion, as well as able to produce disparity maps with a high proportion of valid matches. An additional advantage of both the rank and the census transform is their amenability to fast hardware implementation.
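The radiometric sensitivity contrast between the plain and zero-meaned metrics is easy to demonstrate: with two windows that differ only by an additive brightness offset (hypothetical values below), SAD is large while zero-mean SAD is exactly zero.

```python
def sad(left, right):
    """Sum of absolute differences between two matching windows
    (lower = better match)."""
    return sum(abs(a - b) for a, b in zip(left, right))

def zsad(left, right):
    """Zero-mean SAD: subtract each window's mean first, which makes
    the metric insensitive to an additive radiometric offset."""
    ml = sum(left) / len(left)
    mr = sum(right) / len(right)
    return sum(abs((a - ml) - (b - mr)) for a, b in zip(left, right))

# Hypothetical windows: the right window is the left plus a constant offset of 10
left = [50, 60, 55, 70]
right = [60, 70, 65, 80]

print(sad(left, right), zsad(left, right))  # 40 0.0
```

This is why the zero-meaned variants held up better under radiometric distortion, at the cost of leaving pure integer arithmetic.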
Abstract:
The application of epoxy embedding and microtomy to individual chondritic interplanetary dust particles (IDPs) (Bradley and Brownlee, 1986a) provides not only higher precision in thin-film elemental analyses (Bradley and Brownlee, 1986b), but also allows a wealth of other important techniques for the micro-characterization of these primitive extraterrestrial materials. For example, individual sections (e.g. 100 nm thick), or a series of sections, can be examined using image analysis techniques which utilize either transmitted or scanned secondary electron images, or alternatively, secondary X-ray spectra collected concurrently from a given region of the sample. Individual particles, or groups of particles with similar image characteristics, can then be rapidly identified using conventional grey-scale/particle-recognition techniques for each microtomed section of IDP. This type of image analysis provides a suitable method for determining particle size and shape distribution, as well as porosity, throughout the aggregate.
Abstract:
The Clay Minerals Society Source Clay kaolinites, Georgia KGa-1 and KGa-2, have been subjected to particle size determinations by 1) conventional sedimentation methods, 2) electron microscopy and image analysis, and 3) laser scattering using improved algorithms for the interaction of light with small particles. Particle shape, size distribution, and crystallinity vary considerably for each kaolinite. Replicate analyses of separated size fractions showed that in the <2 µm range, the sedimentation/centrifugation method of Tanner and Jackson (1947) is reproducible for different kaolinite types and that the calculated size ranges are in reasonable agreement with the size bins estimated from laser scattering. Particle sizes determined by laser scattering must be calculated using Mie theory when the dominant particle size is less than ∼5 µm. Based on this study of two well-known and structurally different kaolinites, laser scattering, with improved data reduction algorithms that include Mie theory, should be considered an internally consistent and rapid technique for clay particle sizing.
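Sedimentation sizing rests on Stokes' law, which relates settling velocity to the square of particle diameter. A sketch with typical kaolinite and water parameters (the density and viscosity values are standard textbook assumptions, not measurements from this study):

```python
import math

def stokes_settling_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes-law settling velocity (m/s) of a sphere of diameter d (m):
    v = g * d**2 * (rho_p - rho_f) / (18 * mu)."""
    return g * d ** 2 * (rho_p - rho_f) / (18 * mu)

def stokes_diameter(v, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Equivalent spherical diameter (m) recovered from a settling velocity,
    i.e. the inversion that sedimentation sizing methods perform."""
    return math.sqrt(18 * mu * v / (g * (rho_p - rho_f)))

d = 2.0e-6  # a 2 micron kaolinite particle (illustrative)
v = stokes_settling_velocity(d)
print(f"{v:.3e} m/s")
```

Laser scattering avoids this spherical-particle assumption, which is one reason Mie theory matters for the platy, sub-5 µm kaolinite particles discussed above.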