303 results for histogram



Abstract:

In this thesis, a new algorithm is proposed to segment the foreground of a fingerprint image. The algorithm uses three features: mean, variance and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to improve the quality of the image, and a post-processing step is implemented to counter undesirable effects in the segmented image. Fingerprint recognition is one of the oldest biometric techniques. Everyone has a unique and unchangeable fingerprint, and based on this uniqueness and distinctness, fingerprint identification has been used in many applications for a long time. A fingerprint image is a pattern consisting of two regions, foreground and background. The foreground contains all the information needed by automatic fingerprint recognition systems, whereas the background is a noisy region that contributes to the extraction of false minutiae. To avoid extracting false minutiae, several steps such as preprocessing and enhancement should be followed. One of these steps is the transformation of the fingerprint image from a gray-scale image to a black-and-white image, called segmentation or binarization. The aim of fingerprint segmentation is to separate the foreground from the background; due to the nature of fingerprint images, this is an important and challenging task. The proposed algorithm is applied to the FVC2000 database. Manual examination by human experts shows that the proposed algorithm provides efficient segmentation results, which are demonstrated in diverse experiments.
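
A minimal sketch of the block-wise, feature-based idea described above (the block size and variance threshold are illustrative assumptions; the thesis additionally uses coherence, a rule system, split-and-merge and a modified Otsu step):

```python
import numpy as np

def segment_fingerprint(img, block=16, var_thresh=0.01):
    """Label each block as foreground if its gray-level variance is high.

    img is a 2-D array scaled to [0, 1]; ridge regions show high local
    variance, while the smooth background does not.  The threshold here is
    illustrative, not a value from the thesis.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            if patch.var() > var_thresh:      # simple rule on one feature
                mask[y:y + block, x:x + block] = True
    return mask
```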


Abstract:

Histograms have been used for shape representation and retrieval. In this paper, the traditional technique is modified to capture additional information. We compare the performance of the proposed method with the traditional method through experiments on a database of shapes. The results show that the proposed enhancement to the histogram-based method improves retrieval effectiveness significantly.
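
The abstract does not specify the histogram descriptor, so the sketch below uses a common stand-in, a centroid-distance histogram compared with the chi-square distance, purely to illustrate histogram-based shape matching:

```python
import numpy as np

def centroid_distance_histogram(points, bins=32):
    # points: (N, 2) boundary coordinates of a shape
    c = points.mean(axis=0)
    d = np.linalg.norm(points - c, axis=1)
    d /= d.max() + 1e-12                     # scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0, 1))
    return hist / (hist.sum() + 1e-12)

def chi_square(h1, h2):
    # small distance = similar shapes
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))
```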


Abstract:

There exists an enormous gap between low-level visual features and high-level semantic information, and the accuracy of content-based image classification and retrieval depends greatly on the description of low-level visual features. Taking this into consideration, a novel texture and edge descriptor is proposed in this paper, which can be represented as a histogram. Furthermore, by incorporating the color, texture and edge histograms seamlessly, the images are grouped into semantic classes using a support vector machine (SVM). Experimental results show that the combined descriptor is more discriminative than other feature descriptors such as Gabor texture.
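
A hedged sketch of the general pipeline (the exact color/texture/edge descriptor is not given in the abstract; a hue histogram and a gradient-magnitude histogram are used here as stand-ins, with scikit-learn's SVC for classification):

```python
import numpy as np
from sklearn.svm import SVC

def combined_histogram(gray, hue, bins=32):
    """Concatenate a color (hue) histogram with a gradient-magnitude
    histogram as a simple stand-in for a joint color/texture/edge
    descriptor.  hue is assumed to be in the OpenCV range [0, 180)."""
    color_h, _ = np.histogram(hue, bins=bins, range=(0, 180))
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                    # crude edge strength
    edge_h, _ = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-12))
    feat = np.concatenate([color_h, edge_h]).astype(float)
    return feat / (feat.sum() + 1e-12)

def train_classifier(features, labels):
    # features: (n_images, 2 * bins) array, labels: semantic class ids
    return SVC(kernel='rbf').fit(features, labels)
```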


Abstract:

An image fusion system accepts two source images and produces a 'better' fused image. What counts as 'better' differs from one context to another: in some contexts it means holding more information, in others it means yielding more accurate results or readings. In general, images hold more than just color values; the histogram distribution, the dynamic range of colors and the color map are all as valuable as the color values in presenting the pictorial information of the image. This paper studies the problems of fusing images from different domains and proposes a method to extend fusion algorithms to fuse the image properties that define the interpretation of the captured images as well.
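
Purely as an illustration of fusing an image property alongside pixel values (the paper's own property-fusion rules are not reproduced; the averaging rule and the dynamic-range property below are assumptions):

```python
import numpy as np

def fuse_with_properties(a, b):
    """Average-fuse two registered grayscale images and also fuse one simple
    image property (the dynamic range), as an illustration only."""
    a = a.astype(float)
    b = b.astype(float)
    fused = 0.5 * (a + b)                       # naive pixel-level fusion
    fused_range = (min(a.min(), b.min()), max(a.max(), b.max()))
    # rescale the fused pixels into the fused dynamic range
    fused = np.interp(fused, (fused.min(), fused.max()), fused_range)
    return fused, fused_range
```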


Abstract:

This work focuses on measuring adhesion forces on both untreated and atmospheric helium plasma treated single jute fibre surfaces using scanning probe microscopy (SPM). The measurements were conducted on surfaces aged for one week, three weeks and six weeks, using a standard silicon nitride tip in force-volume (f-v) mode. Up to 256 adhesion data points were collected from various locations on the surface of the studied fibres using in-house developed software, and the resulting data were statistically analysed by the histogram method. Results obtained from this analysis were found to be very consistent, with small statistical variation. The work of adhesion, Wa, was calculated from the measured adhesion force using the Johnson–Kendall–Roberts (JKR) and Derjaguin–Muller–Toporov (DMT) models. Increases in both adhesion force and work of adhesion were observed on jute fibres after certain levels of atmospheric plasma treatment and ageing time.
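
For reference, the conversion from a measured pull-off force to work of adhesion under the JKR and DMT contact models can be sketched as follows (the tip radius and force in the example are placeholders, not the paper's measurements):

```python
import numpy as np

def work_of_adhesion(pull_off_force_nN, tip_radius_nm):
    """Convert an SPM pull-off force to work of adhesion Wa (J/m^2).

    JKR: F = (3/2) * pi * R * Wa   ->  Wa = 2F / (3 * pi * R)
    DMT: F = 2 * pi * R * Wa       ->  Wa = F / (2 * pi * R)
    """
    F = pull_off_force_nN * 1e-9      # N
    R = tip_radius_nm * 1e-9          # m
    wa_jkr = 2 * F / (3 * np.pi * R)
    wa_dmt = F / (2 * np.pi * R)
    return wa_jkr, wa_dmt

# Placeholder example: a 10 nN pull-off force with a 20 nm tip radius
# print(work_of_adhesion(10.0, 20.0))
```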


Abstract:

We present a comparative evaluation of state-of-the-art algorithms for detecting pedestrians in low frame rate and low resolution footage acquired by mobile sensors. Four approaches are compared: a) the Histogram of Oriented Gradient (HoG) approach [1]; b) a new histogram feature formed by the weighted sum of both the gradient magnitude and the filter responses from a set of elongated Gaussian filters [2] corresponding to the quantised orientation, called the Histogram of Oriented Gradient Banks (HoGB) approach; c) the codebook-based HoG feature with a branch-and-bound (efficient subwindow search) algorithm [3]; and d) the codebook-based HoGB approach. Results show that the HoG-based detector achieves the highest true positive detection rate, the HoGB approach has the lowest false positive rate whilst maintaining a true positive rate comparable to HoG, and the codebook approaches allow computationally efficient detection.
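
As a rough illustration of the baseline feature in a), a HoG descriptor for a detection window can be computed with scikit-image (the parameter values follow the common pedestrian-detection setup and are not taken from the paper; the HoGB filter-bank variant is not reproduced):

```python
from skimage.feature import hog

def hog_descriptor(window):
    """Compute a HoG descriptor for a detection window (e.g. 128x64 pixels,
    grayscale)."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# A linear classifier trained on such descriptors is then slid over the
# image, or combined with a codebook and branch-and-bound search for speed.
```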


Abstract:

In this paper, we present a real-time obstacle detection system that improves the mobility of the visually impaired using a handheld smartphone. Although there are many existing assistive systems for the visually impaired, none is simultaneously low-cost, ultra-portable, non-intrusive and able to detect low-height objects on the floor. This paper proposes a system that detects any object attached to the floor regardless of its height. Unlike some existing systems that use only histogram or edge information, the proposed system combines both cues and thereby overcomes some of their limitations. Obstacles on the floor in front of the user can be reliably detected in real time by the proposed system implemented on a smartphone. The system has been tested on different types of floor, and a field trial with five blind participants has been conducted. The experimental results demonstrate its reliability in comparison with existing systems.
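
An illustrative way to combine a histogram cue with an edge cue using OpenCV (the reference floor patch, thresholds and decision rule are assumptions, not the paper's implementation):

```python
import cv2
import numpy as np

def detect_floor_obstacles(frame_bgr, floor_patch_bgr, area_thresh=0.05):
    """Flag pixels that deviate from a floor color model *and* contain edges.

    A hue histogram of a clear-floor reference patch is back-projected onto
    the frame; low back-projection plus Canny edges marks obstacle pixels.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    floor_hsv = cv2.cvtColor(floor_patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([floor_hsv], [0], None, [30], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    not_floor = back < 50                                   # histogram cue
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160) > 0                    # edge cue
    obstacle = np.logical_and(not_floor, edges)             # combine cues
    return obstacle.mean() > area_thresh, obstacle
```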


Abstract:

Textural image classification technologies have been extensively explored and widely applied in many areas. It is advantageous to combine both the occurrence and the spatial distribution of local patterns to describe a texture. However, most existing state-of-the-art approaches for textural image classification employ only the occurrence histogram of local patterns to describe textures, without considering their co-occurrence information, and they are usually very time-consuming because of the vector quantization involved. Moreover, those feature extraction paradigms operate at a single scale. In this paper we propose a novel multi-scale local pattern co-occurrence matrix (MS_LPCM) descriptor to characterize textural images through four major steps. Firstly, Gaussian filtering pyramid preprocessing is employed to obtain multi-scale images; secondly, a local binary pattern (LBP) operator is applied to each textural image to create an LBP image; thirdly, the gray-level co-occurrence matrix (GLCM) is utilized to extract a local pattern co-occurrence matrix (LPCM) from the LBP images as the features; finally, all LPCM features from the same textural image at different scales are concatenated into the final feature vector for classification. Experimental results on three benchmark databases show higher classification accuracy and lower computational cost compared with other state-of-the-art algorithms.
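
A compact sketch of the four-step pipeline above using scikit-image (three pyramid levels, 8-neighbour uniform LBP and a distance-1, 0-degree GLCM are assumed settings, not necessarily the paper's; graycomatrix is spelled greycomatrix in older scikit-image releases):

```python
import numpy as np
from skimage.transform import pyramid_gaussian
from skimage.feature import local_binary_pattern, graycomatrix

def ms_lpcm_features(image, levels=3, P=8, R=1):
    """Gaussian pyramid -> LBP image per scale -> co-occurrence matrix of
    LBP codes -> concatenated feature vector."""
    feats = []
    for scale in pyramid_gaussian(image, max_layer=levels - 1):
        lbp = local_binary_pattern((scale * 255).astype(np.uint8),
                                   P, R, method='uniform').astype(np.uint8)
        glcm = graycomatrix(lbp, distances=[1], angles=[0],
                            levels=P + 2, symmetric=True, normed=True)
        feats.append(glcm.ravel())
    return np.concatenate(feats)
```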


Abstract:

Multisource image fusion is usually achieved by repeatedly fusing source images in pairs. However, there is no guarantee on the delivered quality, given the amount of information to be squeezed into the same spatial extent. This paper presents a fusion capacity measure and examines the limit beyond which fusing more images adds no further information. The fusion capacity index employs Mutual Information (MI) to measure how far the histogram of the examined image is from the uniformly distributed histogram of a saturated image.
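
A rough proxy for the idea, using normalized histogram entropy relative to the uniform histogram of a saturated image (the paper's MI-based index is not reproduced here):

```python
import numpy as np

def capacity_proxy(img_u8, bins=256):
    """Return a value in [0, 1]: how close the image histogram is to the
    uniform histogram of a fully saturated image (1 = no room left)."""
    hist, _ = np.histogram(img_u8, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return entropy / np.log2(bins)    # normalized against the uniform case
```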


Abstract:

The problem of 3D object recognition is of immense practical importance, with the last decade witnessing a number of breakthroughs in the state of the art. Most previous work has focused on matching textured objects using local appearance descriptors extracted around salient image points. The recently proposed bag-of-boundaries method was the first to address directly the problem of matching smooth objects using boundary features. However, no previous work has attempted a holistic treatment of the problem by jointly using textural and shape features, which is what we describe herein. Owing to the complementarity of the two modalities, we fuse the corresponding matching scores and learn their relative weighting in a data-specific manner by optimizing discriminative performance on synthetically distorted data. For the textural description of an object we adopt a representation in the form of a histogram of SIFT-based visual words; similarly, the apparent shape of an object is represented by a histogram of discretized features capturing local shape. On a large public database of a diverse set of objects, the proposed method is shown to significantly outperform both purely textural and purely shape-based approaches for matching across viewpoint variation.
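
The score-level fusion and weight learning can be sketched as follows (a simple grid search on validation data stands in for the paper's discriminative optimization):

```python
import numpy as np

def fused_score(texture_score, shape_score, w):
    # convex combination of the two matching scores
    return w * texture_score + (1.0 - w) * shape_score

def learn_weight(texture_scores, shape_scores, labels,
                 grid=np.linspace(0.0, 1.0, 101)):
    """Pick the weight that maximizes rank-1 accuracy on (synthetically
    distorted) validation data.  scores: (n_queries, n_gallery) arrays,
    labels: index of the correct gallery entry for each query."""
    best_w, best_acc = 0.5, -1.0
    for w in grid:
        fused = fused_score(texture_scores, shape_scores, w)
        acc = np.mean(fused.argmax(axis=1) == labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w
```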


Abstract:

The objective of this work is to recognize faces using video sequences both for training and novel input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. There are three major areas of novelty: (i) illumination generalization is achieved by combining coarse histogram correction with fine illumination manifold-based normalization; (ii) pose robustness is achieved by decomposing each appearance manifold into semantic Gaussian pose clusters, comparing the corresponding clusters and fusing the results using an RBF network; (iii) a fully automatic recognition system based on the proposed method is described and extensively evaluated on 600 head motion video sequences with extreme illumination, pose and motion pattern variation. On this challenging data set our system consistently demonstrated a very high recognition rate (95% on average), significantly outperforming state-of-the-art methods from the literature.
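
The coarse histogram correction in (i) can be illustrated with standard histogram equalization or histogram matching from scikit-image (the fine, illumination-manifold-based normalization and the RBF-network fusion are not sketched):

```python
from skimage import exposure

def coarse_histogram_correction(face, reference=None):
    """Equalize a grayscale face image's histogram, or match it to a
    reference face, as a crude illumination-compensation step."""
    if reference is None:
        return exposure.equalize_hist(face)
    return exposure.match_histograms(face, reference)
```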


Abstract:

Segmentation is the process of extraction of objects from an image. This paper proposes a new algorithm to construct intuitionistic fuzzy set (IFS) from multiple fuzzy sets as an application to image segmentation. Hesitation degree in IFS is formulated as the degree of ignorance (due to the lack of knowledge) to determine whether the chosen membership function is best for image segmentation. By minimizing entropy of IFS generated from various fuzzy sets, an image is thresholded. Experimental results are provided to show the effectiveness of the proposed method.
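
A simplified sketch of entropy-minimizing threshold selection (the membership function below is a common distance-from-class-mean choice, and the paper's intuitionistic construction with an explicit hesitation degree is not modelled):

```python
import numpy as np

def fuzzy_threshold(img_u8):
    """Pick the gray level that minimizes a fuzzy entropy of the
    thresholded image."""
    g = img_u8.astype(float).ravel()
    best_t, best_e = 0, np.inf
    for t in range(1, 255):
        lo, hi = g[g <= t], g[g > t]
        if lo.size == 0 or hi.size == 0:
            continue
        # membership: closeness of each pixel to its class mean
        mu = np.where(g <= t,
                      1.0 / (1.0 + np.abs(g - lo.mean()) / 255.0),
                      1.0 / (1.0 + np.abs(g - hi.mean()) / 255.0))
        # Shannon-style fuzzy entropy of the membership values
        e = -np.mean(mu * np.log(mu + 1e-12)
                     + (1 - mu) * np.log(1 - mu + 1e-12))
        if e < best_e:
            best_t, best_e = t, e
    return best_t
```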


Abstract:

This research presents a novel rank-based image watermarking method, together with improved moment-based and histogram-based image watermarking methods. A high-frequency component modification step is also proposed to compensate for the side effect of the commonly used Gaussian pre-filtering. The proposed methods outperform the latest image watermarking methods.


Abstract:

Lung segmentation in thoracic computed tomography (CT) scans is an important preprocessing step for computer-aided diagnosis (CAD) of lung diseases. This paper focuses on segmentation of the lung field in thoracic CT images. Traditional lung segmentation is based on gray-level thresholding techniques, which often require setting a threshold and are sensitive to image contrast. In this paper, we present a fully automated method for robust and accurate lung segmentation, which includes an enhanced thresholding algorithm and a refinement scheme based on a texture-aware active contour model. In our thresholding algorithm, a histogram-based image stretch is performed in advance to uniformly increase the contrast between areas with low Hounsfield unit (HU) values and areas with high HU values in all CT images. This stretch step enables the subsequent threshold-free segmentation, namely the Otsu algorithm with contour analysis. However, like any threshold-based segmentation, it suffers from common issues such as holes, noise and inaccurate segmentation boundaries, which would cause problems in subsequent CAD for lung disease detection. To solve these problems, a refinement technique is proposed that captures vessel structures and lung boundaries and then smooths variations via a texture-aware active contour model. Experiments on 2,342 diagnostic CT images demonstrate the effectiveness of the proposed method, and performance comparison with existing methods shows its advantages.
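
A sketch of the coarse stage, a histogram-based stretch followed by Otsu thresholding (the percentile limits are assumptions and the texture-aware active-contour refinement is omitted):

```python
import numpy as np
from skimage.filters import threshold_otsu
from scipy.ndimage import binary_fill_holes

def coarse_lung_mask(ct_slice_hu, p_lo=1, p_hi=99):
    """Percentile-based histogram stretch, then Otsu; lung parenchyma is
    the low-HU class.  Holes are filled as a simple cleanup step."""
    lo, hi = np.percentile(ct_slice_hu, [p_lo, p_hi])
    stretched = np.clip((ct_slice_hu - lo) / (hi - lo + 1e-12), 0, 1)
    t = threshold_otsu(stretched)
    lungs = stretched < t
    return binary_fill_holes(lungs)
```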


Abstract:

The need to estimate a particular quantile of a distribution is an important problem that frequently arises in many computer vision and signal processing applications. For example, our work was motivated by the requirements of many semiautomatic surveillance analytics systems that detect abnormalities in closed-circuit television footage using statistical models of low-level motion features. In this paper, we specifically address the problem of estimating the running quantile of a data stream when the memory for storing observations is limited. We make several major contributions: 1) we highlight the limitations of approaches previously described in the literature that make them unsuitable for nonstationary streams; 2) we describe a novel principle for the utilization of the available storage space; 3) we introduce two novel algorithms that exploit the proposed principle in different ways; and 4) we present a comprehensive evaluation and analysis of the proposed algorithms and the existing methods in the literature on both synthetic data sets and three large real-world streams acquired in the course of operation of an existing commercial surveillance system. Our findings convincingly demonstrate that both of the proposed methods are highly successful and vastly outperform the existing alternatives. We show that the better of the two algorithms (the data-aligned histogram) exhibits far superior performance in comparison with the previously described methods, achieving more than 10 times lower estimation errors on real-world data, even when its available working memory is an order of magnitude smaller.
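
A deliberately simplified illustration of histogram-based running-quantile estimation in fixed memory (unlike the paper's data-aligned histogram, this sketch assumes the value range is known in advance and the bin layout is static, so it does not handle nonstationary streams):

```python
import numpy as np

class StreamingQuantile:
    """Fixed-memory running-quantile estimate from a bounded histogram."""

    def __init__(self, lo, hi, bins=256):
        self.edges = np.linspace(lo, hi, bins + 1)
        self.counts = np.zeros(bins, dtype=np.int64)

    def update(self, x):
        # bin the new observation; out-of-range values go to the end bins
        i = np.clip(np.searchsorted(self.edges, x, side='right') - 1,
                    0, len(self.counts) - 1)
        self.counts[i] += 1

    def quantile(self, q):
        cdf = np.cumsum(self.counts) / max(self.counts.sum(), 1)
        i = min(int(np.searchsorted(cdf, q)), len(self.counts) - 1)
        return 0.5 * (self.edges[i] + self.edges[i + 1])  # bin centre
```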