958 results for Edge Detection
Abstract:
Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness and line-likeness. In this dissertation, a class of textures known as particulate textures is defined; these are predominantly coarse or blob-like. The set of features that characterises particulate textures differs from that which characterises classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified, and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to the more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for the computation of particulate shape is proposed. A granulometric approach to particle size estimation based on edge detection is developed, which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis.
Results from the application of the developed algorithm to road surface macrotexture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination R² exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
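The R² figure above is a straightforward correlation between an image-derived coarseness estimate and the profilometer's SMTD values. A minimal numpy sketch, using a crude edge-density coarseness proxy and a hypothetical `r_squared` helper (illustrative only, not the thesis's granulometric algorithm):

```python
import numpy as np

def edge_density(img, thresh=0.2):
    """Fraction of pixels whose gradient magnitude exceeds `thresh`.

    A crude coarseness proxy: coarser (larger-blob) textures produce
    fewer edge pixels per unit area than fine textures.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return float((mag > thresh).mean())

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r
```

In the study, the per-segment image coarseness values would play the role of `x` and the laser-profilometer SMTD readings the role of `y`.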
Abstract:
Recent algorithms for monocular motion capture (MoCap) estimate weak-perspective camera matrices between images using a small subset of approximately-rigid points on the human body (i.e. the torso and hip). A problem with this approach, however, is that these points are often close to coplanar, causing canonical linear factorisation algorithms for rigid structure from motion (SFM) to become extremely sensitive to noise. In this paper, we propose an alternative solution to weak-perspective SFM based on a convex relaxation of graph rigidity. We demonstrate the success of our algorithm on both synthetic and real-world data, allowing for much improved solutions to markerless MoCap problems on human bodies. Finally, we propose an approach to resolve the two-fold ambiguity over bone direction using a k-nearest-neighbour kernel density estimator.
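The "canonical linear factorisation" referred to above is the Tomasi-Kanade-style affine factorisation; it is this step that degrades when the chosen points are near-coplanar, because the centred measurement matrix becomes nearly rank-deficient. A minimal numpy sketch of that baseline (not of the paper's convex relaxation; the metric upgrade is omitted):

```python
import numpy as np

def rigid_factorization(W):
    """Tomasi-Kanade-style affine factorization sketch.

    W : (2F, N) measurement matrix stacking x/y image coordinates of N
    points over F frames. Returns a rank-3 motion factor (2F, 3) and
    shape factor (3, N) of the centred measurements.
    """
    Wc = W - W.mean(axis=1, keepdims=True)   # remove per-row translation
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # shape factor
    return M, S
```

With near-coplanar points the third singular value in `s` sinks toward the noise floor, which is exactly the sensitivity the paper sets out to avoid.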
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the above-60-year-old age group. Thus, despite the fact that half of the seriously injured population comes from the age group of 30 years and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI imaging. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs.
One of the artefacts observed is the step artefact, believed to arise from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated intensity thresholding and Canny edge detection as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmenting them using the multilevel threshold method. A surface geometric comparison was conducted between CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone.
In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an aligning method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited a 0.23 mm average deviation, compared with the 0.18 mm average deviation of CT-based models. The differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
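The abstract does not name its threshold selection method; Otsu's method is the standard automatic choice for intensity thresholding, and a minimal numpy implementation of it, offered only as an illustrative sketch, is:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, bin_edges = np.histogram(img.ravel(), bins=nbins)
    p = hist / hist.sum()                     # intensity probabilities
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))      # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean (in bin units)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)                 # best split bin
    return bin_edges[k + 1]                   # upper edge of that bin
```

For multilevel thresholding as described above, this selection would be run separately on the proximal, diaphyseal and distal regions rather than once on the whole bone.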
Abstract:
This work aims at developing a planetary rover capable of acting as an assistant astrobiologist: making a preliminary analysis of the collected visual images that will help to make better use of scientists' time by pointing out the most interesting pieces of data. This paper focuses on the problem of detecting and recognising particular types of stromatolites. Inspired by the processes actual astrobiologists go through in the field when identifying stromatolites, the processes we investigate focus on recognising characteristics associated with biogenicity. The extraction of these characteristics is based on the analysis of geometrical structure, enhanced by passing the images of stromatolites through an edge-detection filter and taking the Fourier transform of the result, revealing typical spatial frequency patterns. The proposed analysis is performed on both simulated images of stromatolite structures and images of real stromatolites taken in the field by astrobiologists.
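The edge-filter-then-Fourier step can be sketched in a few lines of numpy; gradient magnitude stands in here for whatever edge-detection filter the rover pipeline actually uses:

```python
import numpy as np

def edge_spectrum(img):
    """Gradient-magnitude 'edge' image and its centred 2-D FFT magnitude.

    Periodic (e.g. laminated) structures concentrate energy at discrete
    spatial frequencies, which is the kind of cue used for stromatolites.
    """
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(edges)))
    return edges, spectrum
```

For a laminated structure the spectrum shows isolated peaks at the lamination frequency instead of a smooth decay, which is the signature being searched for.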
Abstract:
Background: Standard methods for quantifying IncuCyte ZOOM™ assays involve measurements that quantify how rapidly the initially-vacant area becomes re-colonised with cells as a function of time. Unfortunately, these measurements give no insight into the details of the cellular-level mechanisms acting to close the initially-vacant area. We provide an alternative method enabling us to quantify the roles of cell motility and cell proliferation separately. To achieve this we calibrate standard data available from IncuCyte ZOOM™ images to the solution of the Fisher-Kolmogorov model. Results: The Fisher-Kolmogorov model is a reaction-diffusion equation that has been used to describe collective cell spreading driven by cell migration, characterised by a cell diffusivity, D, and carrying-capacity-limited proliferation with proliferation rate, λ, and carrying capacity density, K. By analysing temporal changes in cell density in several subregions located well behind the initial position of the leading edge, we estimate λ and K. Given these estimates, we then apply automatic leading edge detection algorithms to the images produced by the IncuCyte ZOOM™ assay and match these data with a numerical solution of the Fisher-Kolmogorov equation to provide an estimate of D. We demonstrate this method by applying it to interpret a suite of IncuCyte ZOOM™ assays using PC-3 prostate cancer cells and obtain estimates of D, λ and K. Comparing estimates of D, λ and K for a control assay with estimates for assays where epidermal growth factor (EGF) is applied in varying concentrations confirms that EGF enhances the rate of scratch closure and that this stimulation is driven by an increase in D and λ, whereas K is relatively unaffected by EGF. Conclusions: Our approach for estimating D, λ and K from an IncuCyte ZOOM™ assay provides more detail about cellular-level behaviour than standard methods for analysing these assays.
In particular, our approach can be used to quantify the balance of cell migration and cell proliferation and, as we demonstrate, allows us to quantify how the addition of growth factors affects these processes individually.
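Matching the leading-edge data requires a numerical solution of the Fisher-Kolmogorov equation. A minimal one-dimensional explicit finite-difference sketch (grid spacing, time step and no-flux boundaries are assumptions; the paper's solver may differ):

```python
import numpy as np

def fisher_kolmogorov(u0, D, lam, K, dx, dt, steps):
    """Explicit solution of u_t = D u_xx + lam * u * (1 - u / K).

    No-flux (reflecting) boundaries; stable for dt <= dx**2 / (2 * D).
    """
    u = u0.astype(float).copy()
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2 * (u[1] - u[0]) / dx**2        # reflecting boundary
        lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
        u += dt * (D * lap + lam * u * (1 - u / K))
    return u
```

Calibration then amounts to choosing D so that the position of the simulated front tracks the detected leading edge, with λ and K fixed by the well-behind-the-front density data.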
Abstract:
For active contour modeling (ACM), we propose a novel self-organizing map (SOM)-based approach, called the batch-SOM (BSOM), that attempts to integrate the advantages of SOM- and snake-based ACMs in order to extract the desired contours from images. We employ feature points, in the form of an edge map (as obtained from a standard edge-detection operation), to guide the contour (as in the case of SOM-based ACMs), along with the gradient and intensity variations in a local region to ensure that the contour does not "leak" through the object boundary in the case of faulty feature points (weak or broken edges). In contrast with the snake-based ACMs, however, we do not use an explicit energy functional (based on gradient or intensity) for controlling the contour movement. We extend the BSOM to handle extraction of contours of multiple objects by splitting a single contour into as many subcontours as there are objects in the image. The BSOM and its extended version are tested on synthetic binary and gray-level images with both single and multiple objects. We also demonstrate the efficacy of the BSOM on images of objects having both convex and nonconvex boundaries. The results demonstrate the superiority of the BSOM over other approaches. Finally, we analyze the limitations of the BSOM.
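A single batch update of a SOM-driven closed contour can be sketched as below. This is an illustrative toy, assuming a Gaussian neighbourhood kernel along the circular chain of nodes; it is not the authors' BSOM, which additionally uses local gradient and intensity cues to stop leakage:

```python
import numpy as np

def bsom_step(nodes, edge_pts, sigma=1.0):
    """One batch-SOM update of a closed contour.

    Each edge point is assigned to its nearest node (winner); every node
    then moves to the neighbourhood-weighted mean of the points whose
    winners are near it along the (circular) chain.
    """
    n = len(nodes)
    d2 = ((edge_pts[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    win = d2.argmin(axis=1)                        # winner index per point
    idx = np.arange(n)
    ring = np.minimum(np.abs(idx[None, :] - win[:, None]),
                      n - np.abs(idx[None, :] - win[:, None]))
    h = np.exp(-ring**2 / (2 * sigma**2))          # neighbourhood kernel
    w = h.sum(axis=0, keepdims=True).T             # (n, 1) total weight
    return (h.T @ edge_pts) / np.maximum(w, 1e-12)
```

Iterating this step pulls an initial contour toward the edge map while the neighbourhood kernel keeps the chain smooth.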
Abstract:
Text segmentation and localization algorithms are proposed for the born-digital image dataset. Binarization and edge detection are carried out separately on the three colour planes of the image. Connected components (CCs) obtained from the binarized image are thresholded based on their area and aspect ratio. CCs which contain sufficient edge pixels are retained. A novel approach is presented in which the text components are represented as nodes of a graph, each node corresponding to the centroid of an individual CC. Long edges are broken from the minimum spanning tree of the graph. Pairwise height ratio is also used to remove likely non-text components. A new minimum spanning tree is created from the remaining nodes. Horizontal grouping is performed on the CCs to generate bounding boxes of text strings. Overlapping bounding boxes are removed using an overlap area threshold. Non-overlapping and minimally overlapping bounding boxes are used for text segmentation. Vertical splitting is applied to generate bounding boxes at the word level. The proposed method is applied to all the images of the test dataset, and values of precision, recall and H-mean are obtained using different approaches.
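The minimum-spanning-tree grouping step can be sketched with SciPy (assumed available): build the MST over CC centroids, cut edges longer than a threshold, and read off connected components as candidate text groups.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def group_by_mst(centroids, max_edge):
    """Cluster CC centroids: build an MST over them, break edges longer
    than `max_edge`, and return a cluster label per centroid."""
    dist = squareform(pdist(centroids))
    mst = minimum_spanning_tree(csr_matrix(dist)).toarray()
    mst[mst > max_edge] = 0                 # break long edges
    n_comp, labels = connected_components(csr_matrix(mst), directed=False)
    return labels
```

In the full method the height-ratio filter would additionally remove likely non-text nodes before the second MST is built.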
Abstract:
373 pages: illustrations, graphs, photographs, tables
Abstract:
An ordered gray-scale erosion is suggested, following the definition of the hit-miss transform. Instead of using three operations, two images and two structuring elements, the developed operation requires only one operation and one structuring element, but with three gray-scale levels. Therefore, a union of ordered gray-scale erosions with different structuring elements can constitute a simple image algebra to program any combined image processing function. An optical parallel ordered gray-scale erosion processor is developed based on incoherent correlation in a single channel. Experimental results are also given for edge detection and pattern recognition. (C) 1998 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(98)00306-7].
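For reference, the conventional functional gray-scale erosion that the ordered erosion is designed to simplify can be written directly from its definition; a numpy sketch (not the paper's ordered, three-level variant):

```python
import numpy as np

def gray_erosion(f, b):
    """Functional gray-scale erosion: (f o- b)(x) = min_s f(x + s) - b(s).

    `f` is padded with +inf so border outputs take the interior minimum.
    """
    kh, kw = b.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f.astype(float), ((ph, ph), (pw, pw)),
                constant_values=np.inf)
    out = np.full(f.shape, np.inf)
    for i in range(kh):
        for j in range(kw):
            shifted = fp[i:i + f.shape[0], j:j + f.shape[1]]
            out = np.minimum(out, shifted - b[i, j])
    return out
```

With a flat (all-zero) structuring element this reduces to a plain minimum filter, the building block the optical processor evaluates in parallel.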
Abstract:
An optoelectronic implementation based on optical neighborhood operations and electronic nonlinear feedback is proposed to perform morphological image processing such as erosion, dilation, opening, closing and edge detection. Results of a numerical simulation are given and experimentally verified.
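Of the listed operations, edge detection is typically realised as the morphological gradient (dilation minus erosion). A minimal numpy sketch with a flat square structuring element (the optoelectronic system itself is not modelled here):

```python
import numpy as np

def _slide(f, size, reduce_fn, pad_val):
    """Apply `reduce_fn` over a size x size sliding window."""
    p = size // 2
    fp = np.pad(f.astype(float), p, constant_values=pad_val)
    stack = [fp[i:i + f.shape[0], j:j + f.shape[1]]
             for i in range(size) for j in range(size)]
    return reduce_fn(np.stack(stack), axis=0)

def dilate(f, size=3):
    return _slide(f, size, np.max, -np.inf)

def erode(f, size=3):
    return _slide(f, size, np.min, np.inf)

def morph_gradient(f, size=3):
    """Morphological edge detector: dilation minus erosion peaks at
    intensity transitions and is ~0 in flat regions."""
    return dilate(f, size) - erode(f, size)
```

Opening and closing follow from the same two primitives as `dilate(erode(f))` and `erode(dilate(f))` respectively.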
Abstract:
This thesis presents a proposal for the development of a computational tool for metrology with computed microtomography that can be deployed on conventional microtomography systems. The study focuses on the different edge detection techniques used in digital image processing. To assess the feasibility of developing the tool, Matlab 2010a was chosen. The proposed computational tool is capable of measuring circular and rectangular objects. The measurements can be horizontal or circular, and it is possible to take several measurements of a single image, one measurement of several images, or several measurements of several images. The digital image processing techniques implemented are: global thresholding, with the threshold chosen manually from the image histogram or automatically by Otsu's method; the spatial-domain high-pass filters Sobel, Prewitt, Roberts, LoG and Canny; and measurement between the outermost peaks of the first and second derivatives of the image. The results were validated by comparison with tests performed by the Mechanical Testing and Metrology Laboratory (LEMec) of the Instituto Politécnico do Rio de Janeiro (IPRJ), Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ, and by the Serviço Nacional da Indústria of Nova Friburgo (SENAI/NF). The results obtained with the computational tool were equivalent to those obtained with the measuring instruments used, demonstrating the feasibility of applying the computational tool to metrology.
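The measurement between the outermost derivative peaks can be illustrated on a 1-D intensity profile across an object; the sketch below uses only the first derivative and a hypothetical helper name:

```python
import numpy as np

def edge_width(profile):
    """Distance (in pixels) between the outermost extrema of the first
    derivative of a 1-D intensity profile across a bright object.

    The steepest rise marks the left edge, the steepest fall the right.
    """
    d = np.gradient(profile.astype(float))
    rising = int(np.argmax(d))    # left edge: steepest increase
    falling = int(np.argmin(d))   # right edge: steepest decrease
    return abs(falling - rising)
```

In the tool this measurement would be taken along a horizontal or circular sampling line through the microtomography slice.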
Abstract:
The location of a flame front is often taken as the point of maximum OH gradient. Planar laser-induced fluorescence of OH can be used to obtain the flame front by extracting the points of maximum gradient. This operation is typically performed using an edge detection algorithm. The choice of operating parameters a priori poses significant problems of robustness when handling images with a range of signal-to-noise ratios. A statistical method of parameter selection originating in the image processing literature is detailed, and its merit for this application is demonstrated. A reduced search space method is proposed to decrease computational cost and render the technique viable for large data sets. This gives nearly identical output to the full method. These methods demonstrate substantial decreases in data rejection compared to the use of a priori parameters. These methods are viable for any application where maximum gradient contours must be accurately extracted from images of species or temperature, even at very low signal-to-noise ratios.
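Extracting a maximum-gradient contour can be sketched row by row in numpy; this is a bare-bones stand-in for the statistically tuned edge detector the paper describes:

```python
import numpy as np

def max_gradient_front(field):
    """Return, for each row of a 2-D scalar field (e.g. OH fluorescence),
    the column of maximum gradient magnitude: a simple
    maximum-gradient flame-front contour."""
    gy, gx = np.gradient(field.astype(float))
    mag = np.hypot(gx, gy)
    return mag.argmax(axis=1)
```

On noisy images this naive per-row maximum is exactly where a priori parameter choices fail, which motivates the statistical parameter selection the paper advocates.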
Abstract:
The automated detection of structural elements (e.g., columns and beams) from visual data can be used to facilitate many construction and maintenance applications. Research in this area is still at an early stage. Existing methods rely solely on color and texture information, which makes them unable to identify individual structural elements when the elements are connected to one another and made of the same material. This paper presents a novel method of automated concrete column detection from visual data. The method overcomes this limitation by combining columns' boundary information with their color and texture cues. It starts by recognizing long vertical lines in an image/video frame through edge detection and the Hough transform. The bounding rectangle for each pair of lines is then constructed. When the rectangle resembles the shape of a column, and the color and texture contained between the pair of lines match one of the concrete samples in the knowledge base, a concrete column surface is assumed to be located. In this way, each concrete column in images/videos is detected. The method was tested using real images/videos, and the results are compared with manual detection to indicate the method's validity.
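The paper finds long vertical lines with edge detection plus a Hough transform. As an illustration of the idea only (not the paper's method), near-vertical lines in a binary edge map can be located by a much simpler column-count heuristic:

```python
import numpy as np

def vertical_line_columns(edges, min_frac=0.8):
    """Columns where at least `min_frac` of the rows contain an edge
    pixel: a simplified stand-in for a Hough search restricted to
    exactly vertical lines."""
    frac = edges.astype(bool).mean(axis=0)
    return np.flatnonzero(frac >= min_frac)
```

A real Hough transform additionally handles slightly tilted lines and broken edges, which is why the paper uses it.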
Abstract:
This paper introduces the theory of the wavelet transform, discusses the criteria for selecting basic wavelet functions and wavelet transform algorithms, and analyses how the wavelet transform can be combined with artificial intelligence and other methods. Through application examples of the wavelet transform in transient signal analysis, image edge detection, image denoising, pattern recognition, data compression and fractal signal analysis, the advantages of the wavelet transform in processing non-stationary signals and complex images are discussed. Finally, the development of wavelet transform theory and its application prospects are described.
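The simplest concrete instance of the wavelet transform discussed above is the one-level Haar decomposition, which splits a signal into coarse averages and edge-like details; a minimal numpy sketch:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform: averages
    (approximation) and differences (detail) of adjacent sample pairs."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass / approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass / detail
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_step."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

The detail coefficients `d` are large only at abrupt changes, which is why wavelets serve for edge detection and denoising alike; recursing `haar_step` on `a` yields a multi-level decomposition.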
Abstract:
Existing weld-seam coordinate extraction methods have limited accuracy and struggle to meet the high-speed, high-precision requirements of vision-guided robotic laser welding. To address this, an algorithm based on Zernike orthogonal moments is proposed for obtaining the position coordinates of curved weld seams. The algorithm first identifies the seam edges using a Zernike edge-detection algorithm, then extracts the centreline of the seam, and finally computes the sub-pixel coordinates of that centreline. Experiments verify the feasibility of the algorithm.
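The centreline-extraction step can be illustrated with a much simpler stand-in: a per-row intensity-weighted centroid already yields sub-pixel coordinates, though without the accuracy of Zernike-moment edge localisation:

```python
import numpy as np

def centerline(img):
    """Sub-pixel seam centre per image row, computed as the
    intensity-weighted centroid of that row (a simple stand-in for
    Zernike-moment sub-pixel edge localisation)."""
    img = img.astype(float)
    cols = np.arange(img.shape[1])
    w = img.sum(axis=1)                        # total row intensity
    return (img * cols).sum(axis=1) / np.maximum(w, 1e-12)
```

Here the image is assumed to contain a bright seam stripe on a dark background; the returned array gives one fractional column coordinate per row.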