992 results for Edge detector


Relevance:

100.00%

Publisher:

Abstract:

[EN] Aortic dissection is a disease that can be fatal even with correct treatment. It consists of a rupture of a layer of the aortic artery wall, which allows blood to flow inside this rupture, called a dissection. The aim of this paper is to contribute to its diagnosis by detecting the dissection edges inside the aorta. A subpixel-accuracy edge detector based on the hypothesis of the partial volume effect is used, where the intensity of an edge pixel is the sum of the contributions of each color weighted by its relative area inside the pixel. The method uses a floating window centred on the edge pixel and computes the edge features. The accuracy of our method is evaluated on synthetic images of different thicknesses and noise levels, obtaining an edge detection with a maximal mean error lower than 16 percent of a pixel.
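
For illustration, a minimal sketch of the partial-volume idea in one dimension, assuming the plateau intensities on either side of the edge can be estimated from nearby pixels; the paper's floating-window detector also recovers orientation and other edge features, which are omitted here.

    import numpy as np

    def subpixel_offset(profile, edge_index, window=3):
        """Estimate where an edge crosses the pixel at edge_index in a 1-D profile.

        Assumes intensities A (left) and B (right) on either side of the edge;
        the edge pixel's value is the area-weighted mix I = f*A + (1-f)*B,
        so the fraction f of the pixel lying on side A is (I - B) / (A - B).
        """
        A = np.mean(profile[max(0, edge_index - window):edge_index])   # left plateau
        B = np.mean(profile[edge_index + 1:edge_index + 1 + window])   # right plateau
        I = profile[edge_index]
        f = (I - B) / (A - B)
        # Edge position in pixel units, measured from the pixel's left border.
        return edge_index - 0.5 + f

    profile = np.array([10, 10, 10, 10, 7.2, 2, 2, 2, 2], dtype=float)
    print(subpixel_offset(profile, 4))  # the edge lies partway through pixel 4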

Relevance:

70.00%

Publisher:

Abstract:

We propose to employ bilateral filters to solve the problem of edge detection. The proposed methodology presents an efficient and noise-robust method for detecting edges. Classical bilateral filters smooth images without distorting edges. In this paper, we modify the bilateral filter to perform edge detection, which is the opposite of bilateral smoothing. The Gaussian domain kernel of the bilateral filter is replaced with an edge detection mask, and the Gaussian range kernel is replaced with an inverted Gaussian kernel. The modified range kernel serves to emphasize dissimilar regions. The resulting approach effectively adapts the detection mask according to the pixel intensity differences. The results of the proposed algorithm are compared with those of standard edge detection masks. Comparisons of the bilateral edge detector with the Canny edge detection algorithm, both after non-maximal suppression, are also provided. The results of our technique are observed to be better and more noise-robust than those offered by methods employing masks alone, and are also comparable to the results from the Canny edge detector, outperforming it in certain cases.
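
A minimal sketch of the general idea, assuming Sobel masks as the domain kernel and an inverted Gaussian as the range kernel; the exact combination rule and the sigma_r parameter are assumptions, not the paper's implementation.

    import numpy as np

    def bilateral_edge_response(img, sigma_r=25.0):
        img = img.astype(float)
        # Sobel masks serve as the domain (spatial) kernels.
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        H, W = img.shape
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                patch = img[y - 1:y + 2, x - 1:x + 2]
                # Inverted Gaussian range kernel: near 0 for similar pixels,
                # approaching 1 for very dissimilar pixels.
                diff = patch - img[y, x]
                w = 1.0 - np.exp(-(diff ** 2) / (2.0 * sigma_r ** 2))
                gx[y, x] = np.sum(kx * w * patch)
                gy[y, x] = np.sum(ky * w * patch)
        return np.hypot(gx, gy)

In flat regions the inverted range kernel suppresses the mask response, while across strong intensity differences the full mask response is retained, which is how the detection mask adapts to pixel intensity differences.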

Relevance:

70.00%

Publisher:

Abstract:

A desirable property of any edge detector is that it be a projection in the mathematical sense, that is, that when it is applied to its own output it produces no further change. This report examines the behaviour of some conventional and some new operators when applied to line-drawings. The Marr-Hildreth and some gradient operators are among the conventional operators examined. A class of energy feature detectors is also explored. It is shown that the energy feature detector is a true projection and does not proliferate edges when applied to a line-drawing, whereas several of the conventional operators do.
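
Stated as a formula, the projection property is the standard idempotence condition (a generic restatement, not taken verbatim from the report): for an edge operator E acting on an image I,

    \[
      E\bigl(E(I)\bigr) = E(I) \qquad \text{for every input image } I,
    \]

i.e. re-applying the operator to its own edge map creates no new edges.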

Relevance:

70.00%

Publisher:

Abstract:

In this paper we consider two methods for automatically determining threshold values for edge maps. Rather than using statistical methods, both are based on the figural properties of the edges. First, we investigate applying an edge evaluation measure based on edge continuity and edge thinness to determine the threshold on edge strength. However, this technique is not valid when applied to edge detector outputs that are one pixel wide. In that case, we use a measure based on work by Lowe for assessing edges, computed from the length and average strength of complete linked edge lists.
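
A minimal sketch of the Lowe-style measure described above, assuming edges have already been linked into lists of (x, y, strength) points; the salience is the list length times its average strength, and the threshold value is an illustrative assumption.

    def edge_list_salience(edge_list):
        """edge_list: sequence of (x, y, strength) tuples for one linked contour."""
        length = len(edge_list)
        avg_strength = sum(s for _, _, s in edge_list) / length
        return length * avg_strength

    def threshold_edge_lists(edge_lists, salience_threshold):
        """Keep only the linked edge lists whose salience exceeds the threshold."""
        return [e for e in edge_lists if edge_list_salience(e) >= salience_threshold]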

Relevance:

70.00%

Publisher:

Abstract:

The purpose of this paper is to introduce a new approach for edge detection in grey-shaded images. The proposed approach is based on fuzzy number theory. The idea is to deal with the uncertainties in the grey shades making up the image and thus calculate how well each pixel fits a homogeneous region around it. Pixels that do not belong to the region are then classified as border pixels. The results have shown that the technique is simple and computationally efficient, and it compares well with both traditional border detectors and fuzzy edge detectors. Copyright © 2009, Inderscience Publishers.
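
A minimal sketch of the general idea only, assuming a triangular fuzzy membership around the local neighbourhood mean; the spread parameter and the decision rule are assumptions, not the paper's formulation.

    import numpy as np

    def fuzzy_edge_map(img, spread=20.0, cutoff=0.5):
        img = img.astype(float)
        H, W = img.shape
        edges = np.zeros((H, W), dtype=bool)
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                neigh = img[y - 1:y + 2, x - 1:x + 2]
                mean = (neigh.sum() - img[y, x]) / 8.0      # mean of the 8 neighbours
                # Triangular membership: 1 at the mean, 0 beyond +/- spread.
                membership = max(0.0, 1.0 - abs(img[y, x] - mean) / spread)
                edges[y, x] = membership < cutoff            # poor fit => border pixel
        return edges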

Relevance:

60.00%

Publisher:

Abstract:

Road surface macrotexture is identified as one of the factors contributing to the surface's skid resistance. Existing methods of quantifying surface macrotexture, such as the sand patch test and the laser profilometer test, are either expensive or intrusive, requiring traffic control. High-resolution cameras have made it possible to acquire good-quality images of roads for the automated analysis of texture depth. In this paper, a granulometric method based on image processing is proposed to estimate the road surface's texture coarseness distribution from its edge profiles. More than 1300 images were acquired from two different sites, extending to a total of 2.96 km. The images were acquired using camera orientations of 60 and 90 degrees. The road surface is modeled as a texture of particles, and the size distribution of these particles is obtained from chord lengths across edge boundaries. The mean size from each distribution is compared with the sensor-measured texture depth obtained using a laser profilometer. By tuning the edge detector parameters, a coefficient of determination of up to R² = 0.94 between the proposed method and the laser profilometer method was obtained. The high correlation is also confirmed by robust calibration parameters, which allow the method to be used on unseen data once it has been calibrated on road surface data with similar surface characteristics and under similar imaging conditions.
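
A minimal sketch of the chord-length step, assuming a binary edge map as input: runs of pixels between successive edge crossings along image rows are collected as chord lengths and summarised. The edge detector tuning and the calibration against laser-profilometer texture depth are omitted.

    import numpy as np

    def chord_length_distribution(edge_map):
        """edge_map: 2-D boolean array, True where the edge detector fired."""
        chords = []
        for row in edge_map:
            edge_cols = np.flatnonzero(row)
            if edge_cols.size >= 2:
                chords.extend(np.diff(edge_cols))   # gaps between consecutive edges
        return np.asarray(chords)

    def mean_chord_length(edge_map):
        chords = chord_length_distribution(edge_map)
        return float(chords.mean()) if chords.size else 0.0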

Relevance:

60.00%

Publisher:

Abstract:

Purpose: Two diodes which do not require correction factors for small-field relative output measurements are designed and validated using an experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which cancelled out the increase in response of the diodes in small fields relative to standard field sizes.

Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable “air cap”. A set of output ratios ($OR_{\text{Det}}^{f_{\text{clin}}}$) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to $OR_{\text{Det}}^{f_{\text{clin}}}$ measured using an IBA stereotactic field diode (SFD). $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is “correction-free” in small-field relative dosimetry. In addition, the feasibility of experimentally transferring $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ values from the SFD to unknown diodes was tested by comparing the experimentally transferred values for the unmodified PTWe and EDGEe diodes to Monte Carlo simulated values.

Results: 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWe_air) produced output factors equivalent to those in water at all field sizes (5–50 mm). The optimal air thickness required for the EDGEe diode was found to be 0.6 mm. The modified diode (EDGEe_air) produced output factors equivalent to those in water, except at field sizes of 8 and 10 mm, where it measured approximately 2% greater than the relative dose to water. The experimentally determined $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ for both the PTWe and the EDGEe diodes (without air) matched Monte Carlo simulated results, showing that it is feasible to transfer $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ from one commercially available detector to another using experimental methods and the recommended experimental setup.

Conclusions: It is possible to create a diode which does not require corrections for small-field output factor measurements. This has been performed and verified experimentally. The ability of a detector to be “correction-free” depends strongly on its design and composition. A non-water-equivalent detector can only be “correction-free” if competing perturbations of the beam cancel out at all field sizes. This should not be confused with true water equivalency of a detector.
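
A minimal sketch of the cross-calibration ("transfer") step described above, assuming the standard relation that the field output factor equals the measured output ratio multiplied by the detector's correction factor; variable names and the example numbers are illustrative, not the paper's data.

    def transfer_correction_factor(k_ref, or_ref, or_unknown):
        """Correction factor for the unknown detector at one field size.

        k_ref      : known correction factor of the reference detector (e.g. SFD)
        or_ref     : output ratio measured with the reference detector
        or_unknown : output ratio measured with the detector under test
        """
        field_output_factor = k_ref * or_ref          # detector-independent quantity
        return field_output_factor / or_unknown

    # Example: at one small field, if the SFD needs k = 0.95 and reads an output
    # ratio of 0.70 while the detector under test reads 0.69, then:
    print(transfer_correction_factor(0.95, 0.70, 0.69))  # ~0.964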

Relevance:

60.00%

Publisher:

Abstract:

The classical Canny edge detection algorithm is analysed in depth. To address its limited ability to determine its parameters autonomously, a new adaptive edge extraction method based on Otsu's method and statistical theory is proposed: a set of parameters is statistically optimized so that the globally optimal edge detection parameters are determined adaptively. Experimental results show that the proposed adaptive edge extraction method for targets in unstructured environments can effectively suppress noise, adaptively determine the optimal edge extraction parameters, and improve edge localization accuracy. Finally, experiments show that the proposed method achieves high edge detection performance in lunar exploration applications where environmental information is unknown.
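
A minimal sketch of the general idea (deriving the Canny hysteresis thresholds from Otsu's method instead of fixing them by hand); this is a common adaptive scheme and only an assumption about the paper's exact parameter-selection procedure, which also involves statistical optimization of further parameters.

    import cv2

    def adaptive_canny(gray_image):
        """gray_image: 8-bit single-channel image."""
        # Otsu's method gives a global threshold separating the two intensity
        # classes; use it as the high threshold and half of it as the low one.
        otsu_threshold, _ = cv2.threshold(gray_image, 0, 255,
                                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.Canny(gray_image, 0.5 * otsu_threshold, otsu_threshold)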

Relevance:

60.00%

Publisher:

Abstract:

This thesis examines a complete design framework for a real-time, autonomous system with specialized VLSI hardware for computing 3-D camera motion. In the proposed architecture, the first step is to determine point correspondences between two images. Two processors, a CCD array edge detector and a mixed analog/digital binary block correlator, are proposed for this task. The report is divided into three parts. Part I covers the algorithmic analysis; part II describes the design and test of a 32×32 CCD edge detector fabricated through MOSIS; and part III compares the design of the mixed analog/digital correlator to a fully digital implementation.
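
For context, a minimal sketch of the operation a binary block correlator accelerates: matching a binarised (e.g. edge-thresholded) template block against a search window of the second image by counting agreeing bits. This only illustrates the computation, not the chip design itself, and the function names are assumptions.

    import numpy as np

    def binary_block_match(template, search_window):
        """template: 2-D bool array; search_window: larger 2-D bool array."""
        th, tw = template.shape
        sh, sw = search_window.shape
        best_score, best_pos = -1, (0, 0)
        for y in range(sh - th + 1):
            for x in range(sw - tw + 1):
                block = search_window[y:y + th, x:x + tw]
                score = np.sum(block == template)   # number of agreeing bits
                if score > best_score:
                    best_score, best_pos = score, (y, x)
        return best_pos, best_score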

Relevance:

60.00%

Publisher:

Abstract:

Biometrics has become important in security applications. In comparison with many other biometric features, iris recognition has very high recognition accuracy because it relies on the iris, which remains stable throughout human life, and the probability of finding two identical irises is close to zero. The identification system consists of several stages, including the segmentation stage, which is the most critical one. Current segmentation methods still have limitations in localizing the iris because they assume a circular pupil shape. In this research, Daugman's method is used to investigate the segmentation techniques. Eyelid detection is another step included in this study as part of the segmentation stage, to localize the iris accurately and remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features in the iris pattern. The Hamming distance is used for the comparison of iris templates in the recognition stage. The dataset used for the study is the UBIRIS database. A comparative study of different edge detector operators is performed. It is observed that the Canny operator is best suited to extract most of the edges needed to generate the iris code for comparison. A recognition rate of 89% and a rejection rate of 95% are achieved.
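
A minimal sketch of the matching step described above: comparing two binary iris codes with a Hamming distance. The optional occlusion masks and the decision threshold mentioned in the comment are illustrative assumptions.

    import numpy as np

    def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
        """code_*: 1-D boolean iris codes; mask_*: True where bits are usable."""
        if mask_a is None:
            mask_a = np.ones_like(code_a, dtype=bool)
        if mask_b is None:
            mask_b = np.ones_like(code_b, dtype=bool)
        usable = mask_a & mask_b
        disagreements = np.logical_xor(code_a, code_b) & usable
        return disagreements.sum() / usable.sum()

    # Codes from the same iris give a small distance; unrelated codes hover
    # around 0.5, so a threshold (e.g. ~0.32) separates match from non-match.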

Relevance:

60.00%

Publisher:

Abstract:

Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach is conducted in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated using a hybrid spectral classifier combining the Gaussian Maximum Likelihood algorithm (GML) and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map using a region-growing algorithm, with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. This software package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as the test site. High-resolution IRS-1C imagery was used as the principal input data.
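
A minimal sketch of the final fusion step (step five), assuming the smoothed thematic map and the thinned contour map as inputs: regions are grown without crossing contour pixels or changing thematic class, so labels stay homogeneous inside edge-bounded regions. Data layouts and the growth rule are simplifying assumptions, not the package's implementation.

    from collections import deque
    import numpy as np

    def contour_constrained_regions(thematic, contour):
        """thematic: 2-D int class map; contour: 2-D bool map of boundary pixels."""
        H, W = thematic.shape
        region_id = -np.ones((H, W), dtype=int)
        next_id = 0
        for sy in range(H):
            for sx in range(W):
                if contour[sy, sx] or region_id[sy, sx] != -1:
                    continue
                # Breadth-first growth from the seed, stopping at contour pixels
                # and at pixels of a different thematic class.
                queue = deque([(sy, sx)])
                region_id[sy, sx] = next_id
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and region_id[ny, nx] == -1
                                and not contour[ny, nx]
                                and thematic[ny, nx] == thematic[sy, sx]):
                            region_id[ny, nx] = next_id
                            queue.append((ny, nx))
                next_id += 1
        return region_id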

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a kernel density correlation based non-rigid point set matching method and shows its application in statistical-model-based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated x-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field such that the floating point set is moved onto the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for robust point set matching. Incorporating this non-rigid point set matching method into a statistical-model-based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the x-ray radiograph by an edge detector. Our experiment conducted on datasets of two patients and six cadavers demonstrates a mean reconstruction error of 1.9 mm.
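
A minimal sketch of a correlation measure between two Gaussian kernel density estimates of point sets: for isotropic Gaussians, the correlation integral reduces to a sum of Gaussians over all point pairs. The bandwidth choice and the omission of the deformation and smoothness regularisation terms are assumptions, not the paper's full formulation.

    import numpy as np

    def kde_correlation(reference_pts, floating_pts, sigma=1.0):
        """reference_pts, floating_pts: arrays of shape (N, d) and (M, d)."""
        diff = reference_pts[:, None, :] - floating_pts[None, :, :]
        sq_dist = np.sum(diff ** 2, axis=-1)
        # The integral of the product of two Gaussian KDEs is itself a sum of
        # Gaussians with combined variance 2 * sigma**2 (up to a constant factor).
        return np.sum(np.exp(-sq_dist / (4.0 * sigma ** 2)))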

Relevance:

60.00%

Publisher:

Abstract:

In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the output of an edge detector. After that, the position of the nodes, their relation to their neighbors, and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented: the winning neuron is computed and recognition is performed using a similarity function that takes into account both the geometric and the texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
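
A minimal sketch of a similarity function that mixes the geometric and texture (Gabor-jet) information of two facial graphs, in the spirit of the approach described above; the equal weighting and the cosine jet similarity are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def graph_similarity(nodes_a, jets_a, nodes_b, jets_b, alpha=0.5):
        """nodes_*: (N, 2) fiducial-point coordinates; jets_*: (N, K) Gabor-jet magnitudes."""
        # Texture term: mean cosine similarity between corresponding Gabor jets.
        num = np.sum(jets_a * jets_b, axis=1)
        den = np.linalg.norm(jets_a, axis=1) * np.linalg.norm(jets_b, axis=1)
        texture_sim = np.mean(num / den)
        # Geometry term: penalise displacement of corresponding nodes.
        geom_dist = np.mean(np.linalg.norm(nodes_a - nodes_b, axis=1))
        geometry_sim = 1.0 / (1.0 + geom_dist)
        return alpha * texture_sim + (1.0 - alpha) * geometry_sim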