13 results for Quantization

in Deakin Research Online - Australia


Relevance:

20.00%

Publisher:

Abstract:

In this paper, a two-stage algorithm for vector quantization based on a self-organizing map (SOM) neural network is proposed. First, a conventional SOM is modified to deal with dead codebook vectors during learning and is used to obtain the codebook distribution structure for a given set of input data. Next, sub-blocks are classified according to this distribution structure using a priori criteria. Then, the conventional LBG algorithm is applied to these sub-blocks for data classification, with initial values obtained via the SOM. Finally, extensive simulations demonstrate that the proposed two-stage algorithm is very effective.
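A minimal sketch of the two-stage idea: a small 1-D SOM places an initial codebook, and LBG (k-means style) iterations then refine it. The function names, neighbourhood schedule, and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def som_codebook(data, n_codes, epochs=20, lr=0.5, seed=0):
    """Train a small 1-D SOM to obtain an initial codebook distribution."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    for t in range(epochs):
        sigma = max(n_codes / 2.0 * (1 - t / epochs), 0.5)    # shrinking neighbourhood
        alpha = lr * (1 - t / epochs)                          # decaying learning rate
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(codes - x, axis=1)) # best matching unit
            d = np.abs(np.arange(n_codes) - bmu)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))           # neighbourhood weights
            codes += alpha * h[:, None] * (x - codes)
    return codes

def lbg_refine(data, codes, iters=10):
    """Refine the SOM-initialised codebook with LBG (k-means style) iterations."""
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(data[:, None, :] - codes[None, :, :], axis=2), axis=1)
        for k in range(len(codes)):
            members = data[labels == k]
            if len(members):              # keep the old vector if a cell is empty (dead)
                codes[k] = members.mean(axis=0)
    return codes, labels

# toy 2-D data: three clusters quantized with 4 code vectors
data = np.vstack([np.random.randn(100, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
codes, labels = lbg_refine(data, som_codebook(data, 4))
print(codes)
```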

Relevance:

20.00%

Publisher:

Abstract:

Identification of unnatural control chart patterns (CCPs) from manufacturing process measurements is a critical task in quality control, as these patterns indicate that the manufacturing process is out of control. Recently, there have been numerous efforts to develop pattern recognition and classification methods based on artificial neural networks to automatically recognize unnatural patterns. Most of them assume that a single type of unnatural pattern exists in the process data. Due to this restrictive assumption, severe performance degradation is observed in these methods when concurrent unnatural CCPs are present in the process data. To address this problem, this paper proposes a novel approach based on singular spectrum analysis (SSA) and a learning vector quantization network to identify concurrent CCPs. The main advantage of the proposed method is that it can be applied to the identification of concurrent CCPs in univariate manufacturing processes. Moreover, there are no permutation or scaling ambiguities in the CCPs recovered by the SSA. These desirable features make the proposed algorithm an attractive alternative for the identification of concurrent CCPs. Computer simulations and a real application to an aluminium smelting process confirm the superior performance of the proposed algorithm on sets of typical concurrent CCPs.
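A minimal sketch of the SSA decomposition step that separates a process signal into additive components before classification; the window length, toy signal, and function names are illustrative, and the learning vector quantization stage is omitted.

```python
import numpy as np

def ssa_components(x, window):
    """Decompose a 1-D series into additive SSA components."""
    n = len(x)
    k = n - window + 1
    # trajectory (Hankel) matrix: each column is a lagged window of the series
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])          # rank-one elementary matrix
        # diagonal averaging (Hankelisation) maps Xi back to a series
        comp = np.array([np.mean(Xi[::-1, :].diagonal(j - window + 1))
                         for j in range(n)])
        comps.append(comp)
    return np.array(comps)

# toy process signal: trend + cyclic pattern + noise
t = np.arange(200)
x = 0.02 * t + np.sin(2 * np.pi * t / 25) + 0.3 * np.random.randn(200)
comps = ssa_components(x, window=40)
print(comps.shape)                          # (40, 200); leading components carry trend/cycle
print(np.allclose(comps.sum(axis=0), x))    # components sum back to the original series
```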

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel adaptive safe-band for quantization-based audio watermarking methods, aiming to improve robustness. A considerable number of audio watermarking methods have been developed using quantization-based techniques. These techniques are generally vulnerable to signal processing attacks. For conventional quantization-based techniques, robustness can be marginally improved by choosing larger step sizes, at the cost of significant perceptual quality degradation. We first introduce a fixed-size safe-band between two quantization steps to improve robustness. This safe-band acts as a buffer to withstand certain types of attacks. We then further improve robustness by adaptively changing the size of the safe-band based on the audio signal feature used for watermarking. Compared with the conventional quantization-based method and the fixed-size safe-band method, the proposed adaptive safe-band quantization method is more robust to attacks. The effectiveness of the proposed technique is demonstrated by simulation results. © 2014 IEEE.
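A minimal sketch of the plain quantization index modulation (QIM) embedding that safe-band methods build on; the step size, feature values, and attack are illustrative, and the adaptive safe-band rule itself is not reproduced here.

```python
import numpy as np

DELTA = 0.5  # quantization step (illustrative)

def qim_embed(x, bits, delta=DELTA):
    """Quantization index modulation: bit b moves each sample onto lattice b."""
    dither = bits * delta / 2.0          # lattice 0 at k*delta, lattice 1 at k*delta + delta/2
    return delta * np.round((x - dither) / delta) + dither

def qim_decode(y, delta=DELTA):
    """Recover bits by checking which lattice each sample is closest to."""
    d0 = np.abs(y - delta * np.round(y / delta))
    d1 = np.abs(y - (delta * np.round((y - delta / 2) / delta) + delta / 2))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
host = rng.normal(size=64)                           # stand-in for audio feature values
bits = rng.integers(0, 2, size=64)
marked = qim_embed(host, bits)
attacked = marked + rng.normal(scale=0.05, size=64)  # mild additive-noise "attack"
print((qim_decode(attacked) == bits).mean())         # bit recovery rate, close to 1 for small noise
```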

Relevance:

10.00%

Publisher:

Abstract:

Two corner detectors are presented, one of which works by testing the similarity of image patches along the contour direction to detect curves in the image contour, and the other of which uses direct estimation of image curvature along the contour direction. The operators are fast, robust to noise, and self-thresholding. An interpretation of the Kitchen-Rosenfeld corner operator is presented which shows that this operator can also be viewed as the second derivative of the image function along the edge direction.
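A minimal sketch of the Kitchen-Rosenfeld cornerness measure mentioned in the abstract (level-curve curvature weighted by gradient magnitude, equivalently a second derivative along the edge direction); the toy image and the way the strongest responses are reported are illustrative.

```python
import numpy as np

def kitchen_rosenfeld(img, eps=1e-8):
    """Kitchen-Rosenfeld cornerness: change of gradient direction along the edge,
    i.e. level-curve curvature weighted by the gradient magnitude."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixy, Ixx = np.gradient(Ix)
    Iyy, Iyx = np.gradient(Iy)
    num = Ixx * Iy ** 2 - 2 * Ixy * Ix * Iy + Iyy * Ix ** 2
    return num / (Ix ** 2 + Iy ** 2 + eps)

# toy image: a bright square; responses concentrate near its four corners
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
k = np.abs(kitchen_rosenfeld(img))
ys, xs = np.unravel_index(np.argsort(k.ravel())[-4:], k.shape)
print(list(zip(ys.tolist(), xs.tolist())))   # locations of the strongest responses
```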

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a hybrid neural classifier combining the auto-encoder neural network and the Lattice Vector Quantization (LVQ) model is described. The auto-encoder network is used for dimensionality reduction by projecting high-dimensional data into the 2-D space. The LVQ model is used for data visualization by forming and adapting the granularity of a data map. The mapped data are employed to predict the target classes of new data samples. To improve classification accuracy, a majority voting scheme is adopted by the hybrid classifier. To demonstrate the applicability of the hybrid classifier, a series of experiments using simulated and real fault data from induction motors is conducted. The results show that the hybrid classifier outperforms the Multi-Layer Perceptron neural network and produces very good classification accuracy rates for various fault conditions of induction motors.
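A minimal sketch of the pipeline, with a linear (PCA) projection standing in for the auto-encoder and a basic LVQ1 learner for the prototype-based classification stage; the majority-voting step is omitted, and all names, parameters, and the toy "fault" data are illustrative.

```python
import numpy as np

def project_2d(X):
    """Stand-in for the auto-encoder: a linear (PCA) projection to 2-D."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def lvq1_train(X, y, codes_per_class=3, lr=0.05, epochs=30, seed=0):
    """LVQ1: prototypes move toward same-class samples and away from other classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.vstack([X[y == c][rng.choice((y == c).sum(), codes_per_class, replace=False)]
                        for c in classes])
    labels = np.repeat(classes, codes_per_class)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))
            sign = 1.0 if labels[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, labels

def lvq_predict(protos, labels, X):
    return labels[np.argmin(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)]

# toy two-class "fault" data in 10-D, projected to 2-D, then classified by LVQ
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (80, 10)), rng.normal(2, 1, (80, 10))])
y = np.repeat([0, 1], 80)
Z = project_2d(X)
protos, plabels = lvq1_train(Z, y)
print((lvq_predict(protos, plabels, Z) == y).mean())   # training accuracy of the sketch
```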

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a new image segmentation approach that integrates color and texture features using the fuzzy c-means clustering algorithm is described. To demonstrate the applicability of the proposed approach to satellite image retrieval, an interactive region-based image query system is designed and developed. A database comprising 400 multispectral satellite images is used to evaluate the performance of the system. The results are analyzed and discussed, and a performance comparison with other methods is included. The outcomes reveal that the proposed approach is able to improve the quality of the segmentation results as well as the retrieval performance.
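A minimal sketch of fuzzy c-means clustering applied to per-pixel feature vectors (for instance, colour channels plus a texture measure); the feature construction, cluster count, and toy data are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Fuzzy c-means: soft memberships U and centres V minimising
    sum_ik u_ik^m ||x_i - v_k||^2, used here to group pixel feature vectors."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]      # membership-weighted cluster centres
        d = np.linalg.norm(X[:, None] - V[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, V

# toy pixel features: rows of [R, G, B, texture-energy] from two image regions
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.2, 0.05, (200, 4)), rng.normal(0.8, 0.05, (200, 4))])
U, V = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)                             # hard segmentation from soft memberships
print(V.round(2), np.bincount(labels))
```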

Relevance:

10.00%

Publisher:

Abstract:

Textural image classification technologies have been extensively explored and widely applied in many areas. It is advantageous to combine both the occurrence and the spatial distribution of local patterns to describe a texture. However, most existing state-of-the-art approaches for textural image classification employ only the occurrence histogram of local patterns to describe textures, without considering their co-occurrence information. They are also usually very time-consuming because of the vector quantization involved. Moreover, those feature extraction paradigms are implemented at a single scale. In this paper we propose a novel multi-scale local pattern co-occurrence matrix (MS_LPCM) descriptor to characterize textural images through four major steps. Firstly, Gaussian filtering pyramid preprocessing is employed to obtain multi-scale images; secondly, a local binary pattern (LBP) operator is applied to each textural image to create an LBP image; thirdly, the gray-level co-occurrence matrix (GLCM) is utilized to extract local pattern co-occurrence matrix (LPCM) features from the LBP images; finally, all LPCM features from the same textural image at different scales are concatenated as the final feature vector for classification. Experimental results on three benchmark databases show higher classification accuracy and lower computing cost compared with other state-of-the-art algorithms.
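A minimal sketch of the LPCM idea: compute an LBP image, build a co-occurrence matrix of the LBP codes, and concatenate the result over scales. Plain decimation stands in for the Gaussian pyramid, and the offsets, scales, and synthetic texture are illustrative.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    code = np.zeros_like(centre, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((neigh >= centre).astype(np.int32) << bit)
    return code

def cooccurrence(codes, dy=0, dx=1, levels=256):
    """Co-occurrence matrix of LBP codes for one spatial offset (GLCM on the LBP image)."""
    a = codes[max(0, -dy):codes.shape[0] - max(0, dy), max(0, -dx):codes.shape[1] - max(0, dx)]
    b = codes[max(0, dy):codes.shape[0] + min(0, dy), max(0, dx):codes.shape[1] + min(0, dx)]
    mat = np.zeros((levels, levels))
    np.add.at(mat, (a.ravel(), b.ravel()), 1)
    return mat / mat.sum()                       # normalised joint histogram of code pairs

def ms_lpcm_feature(img, scales=(1, 2, 4)):
    """Concatenate LPCM features from progressively downsampled images."""
    feats = []
    for s in scales:
        small = img[::s, ::s].astype(float)      # stand-in for Gaussian pyramid levels
        feats.append(cooccurrence(lbp_image(small)).ravel())
    return np.concatenate(feats)

img = (np.indices((64, 64)).sum(axis=0) % 8) * 32.0   # simple synthetic texture
print(ms_lpcm_feature(img).shape)                     # (3 * 256 * 256,) feature vector
```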

Relevance:

10.00%

Publisher:

Abstract:

Our aim in this paper is to robustly match frontal faces in the presence of extreme illumination changes, using only a single training image per person and a single probe image. In the illumination conditions we consider, which include those with the dominant light source placed behind and to the side of the user, directly above and pointing downwards, or below and pointing upwards, this is a highly challenging problem. The presence of sharp cast shadows, large poorly illuminated regions of the face, quantum and quantization noise, and other nuisance effects makes it difficult to extract a sufficiently discriminative yet robust representation. We introduce a representation based on image gradient directions near robust edges which correspond to characteristic facial features. Robust edges are extracted using a cascade of processing steps, each of which seeks to harness further discriminative information or normalize for a particular source of extra-personal appearance variability. The proposed representation was evaluated on the extremely difficult YaleB data set. Unlike most previous work, we include all available illuminations, train using a single image per person, and match against a single probe image. In this challenging evaluation setup, the proposed gradient edge map achieved a 0.8% error rate, demonstrating nearly perfect receiver operating characteristic curve behaviour. This is by far the best performance reported in the literature for this setup, with the best previously proposed methods attaining error rates of approximately 6-7%.
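A minimal sketch of comparing gradient directions only near strong edges; a simple gradient-magnitude quantile stands in for the paper's cascade of edge-extraction and normalization steps, and the images and threshold are illustrative.

```python
import numpy as np

def gradient_direction_map(img, edge_quantile=0.9):
    """Gradient directions retained only near strong (robust) edges; elsewhere masked."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    mask = mag >= np.quantile(mag, edge_quantile)   # simple stand-in for the edge cascade
    return theta, mask

def direction_similarity(img_a, img_b):
    """Mean cosine of gradient-direction differences over shared edge pixels."""
    ta, ma = gradient_direction_map(img_a)
    tb, mb = gradient_direction_map(img_b)
    shared = ma & mb
    if not shared.any():
        return 0.0
    return float(np.mean(np.cos(ta[shared] - tb[shared])))

rng = np.random.default_rng(4)
face = rng.random((64, 64))
dark = face * 0.3 + 0.05 * rng.random((64, 64))     # crude illumination/noise change
other = rng.random((64, 64))
# the first score (same "face" under changed lighting) should be clearly higher
print(direction_similarity(face, dark), direction_similarity(face, other))
```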

Relevance:

10.00%

Publisher:

Abstract:

Healthcare plays an important role in promoting the general health and well-being of people around the world. The difficulty in healthcare data classification arises from the uncertainty and the high-dimensional nature of the medical data collected. This paper proposes an integration of the fuzzy standard additive model (SAM) with a genetic algorithm (GA), called GSAM, to deal with these uncertainty and computational challenges. The GSAM learning process comprises three consecutive steps: rule initialization by unsupervised learning using adaptive vector quantization clustering, evolutionary rule optimization by the GA, and parameter tuning by gradient-descent supervised learning. Wavelet transformation is employed to extract discriminative features for high-dimensional datasets. GSAM becomes highly capable when deployed with a small number of wavelet features, as its computational burden is considerably reduced. The proposed method is evaluated using two frequently used medical datasets: the Wisconsin breast cancer and Cleveland heart disease datasets from the UCI Repository for machine learning. Experiments are organized with five-fold cross-validation, and the performance of the classification techniques is measured by a number of important metrics: accuracy, F-measure, mutual information, and area under the receiver operating characteristic curve. Results demonstrate the superiority of GSAM over other machine learning methods, including the probabilistic neural network, support vector machine, fuzzy ARTMAP, and adaptive neuro-fuzzy inference system. The proposed approach is thus helpful as a decision support system for medical practitioners in healthcare practice.
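A minimal sketch of one common reading of adaptive vector quantization clustering, the unsupervised step used here to seed the fuzzy rules: a sample either updates its nearest centre or, if too far from all of them, spawns a new cluster. The vigilance threshold, learning rate, and toy data are illustrative, and the GA and gradient-descent stages are omitted.

```python
import numpy as np

def adaptive_vq(X, vigilance=1.0, lr=0.1):
    """Online adaptive vector quantization: each discovered centre would seed one rule."""
    centres = [X[0].astype(float)]
    for x in X[1:]:
        d = [np.linalg.norm(x - c) for c in centres]
        j = int(np.argmin(d))
        if d[j] <= vigilance:
            centres[j] += lr * (x - centres[j])     # move the winning centre toward the sample
        else:
            centres.append(x.astype(float))         # new cluster -> new initial fuzzy rule
    return np.array(centres)

# toy 2-D "patient feature" data drawn from three groups
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.2, (100, 2)) for c in ([0, 0], [3, 0], [0, 3])])
centres = adaptive_vq(X[rng.permutation(len(X))], vigilance=1.0)
print(len(centres), centres.round(2))   # a handful of centres, roughly one per group
```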

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces a new multi-output interval type-2 fuzzy logic system (MOIT2FLS) that is automatically constructed using an unsupervised data clustering method and trained using a heuristic genetic algorithm for protein secondary structure classification. Three structure classes are distinguished: helix, strand (sheet), and coil, which correspond to the three outputs of the MOIT2FLS. Quantitative properties of amino acids are used to characterize the twenty amino acids rather than the widely used, computationally expensive binary encoding scheme. Amino acid sequences are parsed into learnable patterns using a local moving window strategy. Three clustering tasks are performed using the adaptive vector quantization method to derive an equal number of initial rules for each type of secondary structure. A genetic algorithm is applied to optimally adjust the parameters of the MOIT2FLS with the purpose of maximizing the Q3 measure. Comprehensive experimental results demonstrate the strong superiority of the proposed approach over traditional methods, including the Chou-Fasman method, the Garnier-Osguthorpe-Robson method, and artificial neural network models.
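A minimal sketch of parsing a residue sequence into learnable patterns with a local moving window over quantitative amino-acid properties; the property values, window width, sequence, and structure labels below are placeholders (not the paper's scales or data), and the type-2 fuzzy system itself is omitted.

```python
import numpy as np

# Illustrative quantitative properties per residue (hydrophobicity-like, size-like);
# placeholder numbers, not the scales used in the paper.
PROPS = {
    'A': [0.62, 0.29], 'R': [-2.53, 0.83], 'N': [-0.78, 0.48], 'D': [-0.90, 0.45],
    'C': [0.29, 0.38], 'Q': [-0.85, 0.56], 'E': [-0.74, 0.54], 'G': [0.48, 0.00],
    'H': [-0.40, 0.66], 'I': [1.38, 0.64], 'L': [1.06, 0.64], 'K': [-1.50, 0.73],
    'M': [0.64, 0.62], 'F': [1.19, 0.77], 'P': [0.12, 0.36], 'S': [-0.18, 0.36],
    'T': [-0.05, 0.45], 'W': [0.81, 0.89], 'Y': [0.26, 0.80], 'V': [1.08, 0.56],
}

def window_patterns(sequence, labels, width=7):
    """Parse a residue sequence into fixed-width patterns: each pattern stacks the
    property vectors of a window and takes the centre residue's structure label."""
    half = width // 2
    X, y = [], []
    for i in range(half, len(sequence) - half):
        window = sequence[i - half:i + half + 1]
        X.append(np.concatenate([PROPS[aa] for aa in window]))
        y.append(labels[i])          # H = helix, E = strand, C = coil
    return np.array(X), np.array(y)

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
lab = "CCHHHHHHHHCCEEEECCCHHHHHHHHHHCCCC"   # made-up labels for illustration
X, y = window_patterns(seq, lab)
print(X.shape, y[:5])                       # patterns of 7 residues x 2 properties each
```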

Relevance:

10.00%

Publisher:

Abstract:

A new multi-output interval type-2 fuzzy logic system (MOIT2FLS) is introduced for protein secondary structure prediction in this paper. The three outputs of the MOIT2FLS correspond to the three structure classes: helix, strand (sheet), and coil. Quantitative properties of amino acids are employed to characterize the twenty amino acids rather than the widely used, computationally expensive binary encoding scheme. Three clustering tasks are performed using the adaptive vector quantization method to construct an equal number of initial rules for each type of secondary structure. A genetic algorithm is applied to optimally adjust the parameters of the MOIT2FLS. The genetic fitness function is designed based on the Q3 measure. Experimental results demonstrate the advantage of the proposed approach over traditional methods, namely the Chou-Fasman method, the Garnier-Osguthorpe-Robson method, and artificial neural network models.
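A minimal sketch of the Q3 measure on which the genetic fitness function is based: the fraction of residues whose predicted class matches the observed secondary structure. The example sequences are illustrative.

```python
import numpy as np

def q3_score(true_states, predicted_states):
    """Q3: fraction of residues whose predicted class (helix H, strand E, coil C)
    matches the observed secondary structure; usable directly as a GA fitness."""
    true_arr = np.array(list(true_states))
    pred_arr = np.array(list(predicted_states))
    assert len(true_arr) == len(pred_arr)
    return float((true_arr == pred_arr).mean())

observed  = "CCHHHHHHEEEECCCHHHHCC"
predicted = "CCHHHHHCEEEECCCHHHHCC"    # one residue misclassified
print(q3_score(observed, predicted))   # 20/21, about 0.952
```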

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a novel rank-based method for image watermarking. In the watermark embedding process, the host image is divided into blocks, followed by the 2-D discrete cosine transform (DCT). For each image block, a secret key is employed to randomly select a set of DCT coefficients suitable for watermark embedding. Watermark bits are inserted into an image block by modifying the set of DCT coefficients using a rank-based embedding rule. In the watermark detection process, the corresponding detection matrices are formed from the received image using the secret key. Afterward, the watermark bits are extracted by checking the ranks of the detection matrices. Since the proposed watermarking method uses only two DCT coefficients to hide one watermark bit, it can achieve very high embedding capacity. Moreover, our method is free of host signal interference. This desirable feature and the use of an error buffer during watermark embedding result in high robustness against attacks. Theoretical analysis and experimental results demonstrate the effectiveness of the proposed method.
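A minimal sketch of rank-based embedding in one 8x8 block: two key-selected DCT coefficients are reordered to carry a bit, with a buffer widening their gap for robustness. The coefficient positions, buffer size, and attack are illustrative, and the detection-matrix formulation is simplified to a direct comparison.

```python
import numpy as np
from scipy.fft import dctn, idctn

BUFFER = 8.0   # error buffer enforced between the two selected coefficients (illustrative)

def embed_bit(block, bit, positions, buf=BUFFER):
    """Embed one bit in an 8x8 block by imposing an order (rank) on two DCT
    coefficients chosen by the secret key, separated by an extra buffer."""
    C = dctn(block.astype(float), norm='ortho')
    (r1, c1), (r2, c2) = positions
    a, b = C[r1, c1], C[r2, c2]
    hi, lo = max(a, b) + buf / 2, min(a, b) - buf / 2
    # bit 1 -> first coefficient larger, bit 0 -> second coefficient larger
    C[r1, c1], C[r2, c2] = (hi, lo) if bit == 1 else (lo, hi)
    return idctn(C, norm='ortho')

def detect_bit(block, positions):
    C = dctn(block.astype(float), norm='ortho')
    (r1, c1), (r2, c2) = positions
    return int(C[r1, c1] > C[r2, c2])

rng = np.random.default_rng(7)
block = rng.integers(0, 256, (8, 8)).astype(float)       # stand-in for one image block
positions = [(2, 3), (3, 2)]                              # "key-selected" mid-band coefficients
for bit in (0, 1):
    marked = embed_bit(block, bit, positions)
    noisy = marked + rng.normal(scale=1.0, size=(8, 8))   # mild additive-noise attack
    print(bit, detect_bit(noisy, positions))              # bits survive the mild attack
```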

Relevance:

10.00%

Publisher:

Abstract:

High embedding capacity is desired in digital image watermarking. In this paper, we propose a novel rank-based image watermarking method to achieve high embedding capacity. We first divide the host image into blocks. Then the 2-D discrete cosine transform (DCT) and zigzag scanning are used to construct the coefficient sets with a secret key. After that, the DCT coefficient sets are modified using a rank-based embedding strategy to insert the watermark bits. A buffer is also introduced during the embedding phase to enhance robustness. At the decoding step, the watermark bits are extracted by checking the ranks of the detection matrices. The proposed method is host signal interference (HSI) free, invariant to amplitude scaling and constant luminance change, and robust against other common signal processing attacks. Experimental results demonstrate the effectiveness of the proposed method.
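A minimal sketch of the zigzag scan used to assemble DCT coefficient sets; the set size and the skipping of low-frequency coefficients are illustrative, and the key-based selection and rank-based embedding (sketched in the previous entry) are omitted.

```python
import numpy as np

def zigzag_indices(n=8):
    """Zigzag scan order for an n x n DCT block, so coefficients of similar
    frequency are visited together when building the coefficient sets."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def coefficient_sets(dct_block, set_size=4, skip=1):
    """Group zigzag-ordered AC coefficients into fixed-size sets (the DC term and
    the first `skip` AC terms are left out); a secret key could permute these sets."""
    zz = [dct_block[r, c] for r, c in zigzag_indices(dct_block.shape[0])][1 + skip:]
    usable = len(zz) - len(zz) % set_size
    return np.array(zz[:usable]).reshape(-1, set_size)

block = np.arange(64, dtype=float).reshape(8, 8)     # stand-in for a DCT-transformed block
sets = coefficient_sets(block)
print(zigzag_indices()[:6])    # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
print(sets.shape)              # (15, 4) coefficient sets
```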