16 results for BRST Quantization

in Deakin Research Online - Australia


Relevance: 20.00%

Abstract:

In this paper, a two-stage algorithm for vector quantization based on a self-organizing map (SOM) neural network is proposed. First, a conventional self-organizing map is modified to deal with dead codewords in the learning process, and is then used to obtain the codebook distribution structure for a given set of input data. Next, sub-blocks are classified based on this structure distribution using a priori criteria. Then, the conventional LBG algorithm is applied to these sub-blocks for data classification, with initial values obtained via the SOM. Finally, extensive simulations illustrate that the proposed two-stage algorithm is very effective.
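
As a rough sketch of the second stage, the LBG refinement with a guard against empty (dead) partitions can be written as below; the random initial codebook stands in for the SOM-derived structure, and the data, codebook size, and iteration count are illustrative assumptions:

```python
import numpy as np

def lbg(data, codebook, iters=20):
    """LBG (k-means style) refinement of a codebook, keeping any codeword
    whose partition is empty instead of letting it go "dead"."""
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)            # nearest-codeword assignment
        for k in range(len(codebook)):
            members = data[labels == k]
            if len(members):                 # empty partition -> keep old codeword
                codebook[k] = members.mean(axis=0)
    return codebook, labels

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))
# Stage-1 stand-in: a random initial codebook (the paper derives this via a SOM).
init = data[rng.choice(len(data), 4, replace=False)].copy()
codebook, labels = lbg(data, init)
print(codebook.shape)                        # (4, 2)
```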

Relevance: 20.00%

Abstract:

Identification of unnatural control chart patterns (CCPs) from manufacturing process measurements is a critical task in quality control, as these patterns indicate that the manufacturing process is out of control. Recently, there have been numerous efforts to develop pattern recognition and classification methods based on artificial neural networks to automatically recognize unnatural patterns. Most of them assume that a single type of unnatural pattern exists in the process data. Due to this restrictive assumption, severe performance degradation is observed in these methods when concurrent unnatural CCPs are present in the process data. To address this problem, this paper proposes a novel approach based on singular spectrum analysis (SSA) and a learning vector quantization network to identify concurrent CCPs. The main advantage of the proposed method is that it can be applied to the identification of concurrent CCPs in univariate manufacturing processes. Moreover, there are no permutation or scaling ambiguities in the CCPs recovered by the SSA. These desirable features make the proposed algorithm an attractive alternative for the identification of concurrent CCPs. Computer simulations and a real application to aluminium smelting processes confirm the superior performance of the proposed algorithm on sets of typical concurrent CCPs.
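
The SSA step can be illustrated with a minimal singular spectrum analysis that splits a univariate series into additive components via a Hankel trajectory matrix and SVD; the window length and the toy trend-plus-cycle series are assumptions, not the paper's settings:

```python
import numpy as np

def ssa_components(x, window):
    """Basic SSA: embed the series in a Hankel trajectory matrix, take the
    SVD, and diagonally average each rank-one term back into a series."""
    n, k = len(x), len(x) - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Diagonal (anti-diagonal) averaging recovers a series from Xi.
        comps.append(np.array([Xi[::-1].diagonal(j - window + 1).mean()
                               for j in range(n)]))
    return np.array(comps)

t = np.arange(100.0)
x = 0.05 * t + np.sin(0.3 * t)               # trend mixed with a cyclic pattern
comps = ssa_components(x, window=20)
print(np.allclose(comps.sum(axis=0), x))     # True: components add back to x
```

Grouping subsets of the components then separates the superimposed patterns, which is what allows concurrent CCPs to be recovered without permutation or scaling ambiguity.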

Relevance: 20.00%

Abstract:

This paper presents a novel adaptive safe-band for quantization-based audio watermarking methods, aiming to improve robustness. A considerable number of audio watermarking methods have been developed using quantization-based techniques, which are generally vulnerable to signal processing attacks. For these conventional quantization-based techniques, robustness can be marginally improved by choosing larger step sizes, at the cost of significant perceptual quality degradation. We first introduce a fixed-size safe-band between two quantization steps to improve robustness. This safe-band acts as a buffer to withstand certain types of attacks. We then further improve robustness by adaptively changing the size of the safe-band based on the audio signal feature used for watermarking. Compared with the conventional quantization-based method and the fixed-size safe-band method, the proposed adaptive safe-band quantization method is more robust to attacks. The effectiveness of the proposed technique is demonstrated by simulation results. © 2014 IEEE.
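
A minimal sketch of the safe-band idea under one plausible reading: every quantization cell is extended by a buffer band, so embedded samples sit further from the decision boundary than in plain quantization index modulation. The step size, band width, and cell geometry are illustrative assumptions, and the band here is fixed rather than adaptive:

```python
import numpy as np

DELTA = 1.0            # quantization step (assumed)
SAFE = 0.25            # safe-band width (assumed; the paper adapts this)
PERIOD = DELTA + SAFE  # one quantization cell plus its safe-band buffer

def embed(x, bit):
    """Embed one bit by snapping x to the bit-0 or bit-1 slot of its cell.
    The SAFE-wide gap appended to each cell keeps both slots away from the
    cell boundary, so mild attacks are less likely to flip the bit."""
    base = PERIOD * np.floor(x / PERIOD)
    return base + (DELTA / 4 if bit == 0 else 3 * DELTA / 4)

def detect(y):
    """Decode by which half of the quantization cell the sample falls in."""
    frac = y - PERIOD * np.floor(y / PERIOD)
    return 0 if frac < DELTA / 2 else 1

print(detect(embed(3.37, 1)))   # 1: the embedded bit decodes correctly
```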

Relevance: 10.00%

Abstract:

Two corner detectors are presented: one works by testing the similarity of image patches along the contour direction to detect curves in the image contour, and the other uses direct estimation of image curvature along the contour direction. The operators are fast, robust to noise, and self-thresholding. An interpretation of the Kitchen-Rosenfeld corner operator is presented which shows that this operator can also be viewed as the second derivative of the image function along the edge direction.
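
The Kitchen-Rosenfeld interpretation can be written out directly: the cornerness is the second derivative of the image along the edge (level-curve) direction, assembled from first- and second-order partials (the finite differences via `np.gradient` are an illustrative choice):

```python
import numpy as np

def kitchen_rosenfeld(img):
    """Kitchen-Rosenfeld cornerness: the second derivative of the image
    along the edge direction (perpendicular to the gradient)."""
    Iy, Ix = np.gradient(img.astype(float))      # first-order partials
    Ixy, Ixx = np.gradient(Ix)                   # second-order partials
    Iyy, _ = np.gradient(Iy)
    num = Ixx * Iy**2 - 2.0 * Ixy * Ix * Iy + Iyy * Ix**2
    den = Ix**2 + Iy**2
    # Flat regions (zero gradient) get zero response rather than 0/0.
    return np.where(den > 1e-12, num / np.maximum(den, 1e-12), 0.0)
```

The response is large where the edge direction turns sharply, i.e. at corners, and zero on straight edges and flat regions.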

Relevance: 10.00%

Abstract:

In this paper, a hybrid neural classifier combining the auto-encoder neural network and the Lattice Vector Quantization (LVQ) model is described. The auto-encoder network is used for dimensionality reduction by projecting high dimensional data into the 2D space. The LVQ model is used for data visualization by forming and adapting the granularity of a data map. The mapped data are employed to predict the target classes of new data samples. To improve classification accuracy, a majority voting scheme is adopted by the hybrid classifier. To demonstrate the applicability of the hybrid classifier, a series of experiments using simulated and real fault data from induction motors is conducted. The results show that the hybrid classifier is able to outperform the Multi-Layer Perceptron neural network, and to produce very good classification accuracy rates for various fault conditions of induction motors.

Relevance: 10.00%

Abstract:

In this paper, a new image segmentation approach that integrates color and texture features using the fuzzy c-means clustering algorithm is described. To demonstrate the applicability of the proposed approach to satellite image retrieval, an interactive region-based image query system is designed and developed. A database comprising 400 multispectral satellite images is used to evaluate the performance of the system. The results are analyzed and discussed, and a performance comparison with other methods is included. The outcomes reveal that the proposed approach is able to improve the quality of the segmentation results as well as the retrieval performance.
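
A minimal fuzzy c-means sketch on toy 2-D points; the colour and texture feature construction is omitted, and the cluster count, fuzzifier `m`, seeding, and synthetic data are assumptions:

```python
import numpy as np

def fuzzy_cmeans(X, centers, m=2.0, iters=50):
    """Basic fuzzy c-means: alternate soft membership updates and
    membership-weighted centroid updates."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))    # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)   # each point's memberships sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # toy "region 1" features
               rng.normal(3.0, 0.3, (50, 2))])  # toy "region 2" features
centers, U = fuzzy_cmeans(X, centers=X[[0, -1]])
print(np.round(centers, 1))
```

Each pixel's membership vector gives a soft segmentation, which the query system can then turn into regions by taking the maximum-membership cluster.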

Relevance: 10.00%

Abstract:

Textural image classification technologies have been extensively explored and widely applied in many areas. It is advantageous to combine both the occurrence and the spatial distribution of local patterns to describe a texture. However, most existing state-of-the-art approaches for textural image classification employ only the occurrence histogram of local patterns to describe textures, without considering their co-occurrence information, and they are usually very time-consuming because of the vector quantization involved. Moreover, those feature extraction paradigms are implemented at a single scale. In this paper we propose a novel multi-scale local pattern co-occurrence matrix (MS_LPCM) descriptor that characterizes textural images through four major steps. Firstly, Gaussian filtering pyramid preprocessing is employed to obtain multi-scale images; secondly, a local binary pattern (LBP) operator is applied to each textural image to create an LBP image; thirdly, the gray-level co-occurrence matrix (GLCM) is utilized to extract the local pattern co-occurrence matrix (LPCM) from the LBP images as features; finally, all LPCM features from the same textural image at different scales are concatenated into the final feature vector for classification. Experimental results on three benchmark databases show higher classification accuracy and lower computing cost compared with other state-of-the-art algorithms.
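
The LBP and co-occurrence steps can be sketched as follows for a single scale; the Gaussian pyramid, the GLCM offset and normalisation details, and the 8-neighbour sampling order are illustrative assumptions:

```python
import numpy as np

def lbp8(img):
    """8-neighbour local binary pattern: threshold each neighbour against
    the centre pixel and pack the eight bits into a code in 0..255."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]                      # centre pixels (border dropped)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lpcm(codes, levels=256):
    """Horizontal co-occurrence matrix of LBP codes (an LPCM feature)."""
    M = np.zeros((levels, levels))
    a, b = codes[:, :-1].ravel(), codes[:, 1:].ravel()
    np.add.at(M, (a, b), 1)                  # count code pairs at offset (0, 1)
    return M / M.sum()
```

Repeating this at each pyramid level and concatenating the (possibly vectorised) matrices yields the multi-scale descriptor.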

Relevance: 10.00%

Abstract:

Our aim in this paper is to robustly match frontal faces in the presence of extreme illumination changes, using only a single training image per person and a single probe image. In the illumination conditions we consider, which include those with the dominant light source placed behind and to the side of the user, directly above and pointing downwards, or indeed below and pointing upwards, this is a most challenging problem. The presence of sharp cast shadows, large poorly illuminated regions of the face, quantum and quantization noise, and other nuisance effects makes it difficult to extract a sufficiently discriminative yet robust representation. We introduce a representation based on image gradient directions near robust edges which correspond to characteristic facial features. Robust edges are extracted using a cascade of processing steps, each of which seeks to harness further discriminative information or normalize for a particular source of extra-personal appearance variability. The proposed representation was evaluated on the extremely difficult YaleB data set. Unlike most previous work, we include all available illuminations, perform training using a single image per person, and match against a single probe image. In this challenging evaluation setup, the proposed gradient edge map achieved a 0.8% error rate, demonstrating nearly perfect receiver operating characteristic curve behaviour. This is by far the best performance reported in the literature for this setup, with the best previously proposed methods attaining error rates of approximately 6–7%.
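
The core of the representation, gradient directions retained only near strong edges, can be sketched as below; the simple magnitude threshold stands in for the paper's full edge-extraction cascade:

```python
import numpy as np

def edge_gradient_directions(img, thresh):
    """Keep the gradient direction only where the gradient magnitude is
    high (near strong edges); mark other pixels as uninformative (NaN)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)               # direction in (-pi, pi]
    return np.where(mag > thresh, theta, np.nan)

# A vertical step edge: directions survive only along the edge.
img = np.zeros((5, 5))
img[:, 2:] = 1.0
dirs = edge_gradient_directions(img, thresh=0.25)
```

Gradient direction is largely invariant to monotonic illumination changes, which is why it remains informative where raw intensities do not.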

Relevance: 10.00%

Abstract:

Healthcare plays an important role in promoting the general health and well-being of people around the world. The difficulty in healthcare data classification arises from the uncertainty and the high-dimensional nature of the medical data collected. This paper proposes an integration of the fuzzy standard additive model (SAM) with a genetic algorithm (GA), called GSAM, to deal with uncertainty and computational challenges. The GSAM learning process comprises three consecutive steps: rule initialization by unsupervised learning using adaptive vector quantization clustering, evolutionary rule optimization by the GA, and parameter tuning by gradient-descent supervised learning. Wavelet transformation is employed to extract discriminative features from high-dimensional datasets. GSAM becomes highly capable when deployed with a small number of wavelet features, as its computational burden is remarkably reduced. The proposed method is evaluated using two frequently used medical datasets: the Wisconsin breast cancer and Cleveland heart disease datasets from the UCI Repository for machine learning. Experiments are organized with five-fold cross-validation, and the performance of the classification techniques is measured by a number of important metrics: accuracy, F-measure, mutual information and area under the receiver operating characteristic curve. Results demonstrate the superiority of GSAM compared to other machine learning methods, including the probabilistic neural network, support vector machine, fuzzy ARTMAP, and adaptive neuro-fuzzy inference system. The proposed approach is thus helpful as a decision support system for medical practitioners in healthcare practice.
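
The wavelet feature-extraction step can be illustrated with one level of the Haar transform; the actual wavelet family, decomposition depth, and coefficient selection are not specified here and are assumptions:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and differences (detail), scaled by 1/sqrt(2)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency summary
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency residue
    return approx, detail

a, d = haar_dwt([4, 6, 10, 12, 8, 8, 2, 0])
print(a)   # the compact low-frequency features
```

Keeping only a small set of approximation coefficients halves the feature dimension per level, which is the source of the reduced computational burden.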

Relevance: 10.00%

Abstract:

This paper introduces a new multi-output interval type-2 fuzzy logic system (MOIT2FLS) that is automatically constructed using an unsupervised data clustering method and trained using a heuristic genetic algorithm for protein secondary structure classification. Three structure classes are distinguished, helix, strand (sheet) and coil, which correspond to the three outputs of the MOIT2FLS. Quantitative properties of amino acids are used to characterize the twenty amino acids, rather than the widely used but computationally expensive binary encoding scheme. Amino acid sequences are parsed into learnable patterns using a local moving-window strategy. Three clustering tasks are performed using the adaptive vector quantization method to derive an equal number of initial rules for each type of secondary structure. A genetic algorithm is applied to optimally adjust the parameters of the MOIT2FLS with the purpose of maximizing the Q3 measure. Comprehensive experimental results demonstrate the strong superiority of the proposed approach over traditional methods, including the Chou-Fasman method, the Garnier-Osguthorpe-Robson method, and artificial neural network models.
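
The Q3 measure that the genetic fitness function maximizes is simply per-residue three-class accuracy; a minimal version with a made-up example sequence:

```python
def q3(predicted, actual):
    """Q3: percentage of residues whose secondary-structure class
    (H = helix, E = strand, C = coil) is predicted correctly."""
    assert len(predicted) == len(actual)
    hits = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * hits / len(actual)

print(q3("HHHEECCC", "HHHEECCH"))   # 7 of 8 residues correct -> 87.5
```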

Relevance: 10.00%

Abstract:

Long-term bed-rest is used to simulate the effect of spaceflight on the human body and to test different kinds of countermeasures. The 2nd Berlin BedRest Study (BBR2-2) tested the efficacy of whole-body vibration in addition to high-load resistance exercise in preventing bone loss during bed-rest. Here we present the protocol of the study and discuss its implementation. Twenty-four male subjects underwent 60 days of six-degree head-down tilt bed-rest and were randomised to an inactive control group (CTR), a high-load resistive exercise group (RE) or a high-load resistive exercise with whole-body vibration group (RVE). Subsequent to events in the course of the study (e.g. subject withdrawal), 9 subjects participated in the CTR group, 7 in the RVE group and 8 (7 beyond bed-rest day 30) in the RE group. Fluid intake, urine output and axillary temperature increased during bed-rest (p < .0001), though similarly in all groups (p ≥ .17). Body-weight changes differed between groups (p < .0001), with decreases in the CTR group, marginal decreases in the RE group, and significant decreases in the RVE group beyond bed-rest day 51 only. In light of events and experiences of the current study, recommendations on various aspects of bed-rest methodology are also discussed.

Relevance: 10.00%

Abstract:

SUMMARY: The addition of whole-body vibration to high-load resistive exercise may provide a better stimulus for the reduction of bone loss during prolonged bed rest (spaceflight simulation) than high-load resistive exercise alone. INTRODUCTION: Prior work suggests that the addition of whole-body vibration to high-load resistive exercise (RVE) may be more effective in preventing bone loss in spaceflight and its simulation (bed rest) than resistive exercise alone (RE), though this hypothesis has not been tested in humans. METHODS: Twenty-four male subjects as part of the 2nd Berlin Bed Rest Study performed RVE (n = 7), RE (n = 8) or no exercise (control, n = 9) during 60-day head-down tilt bed rest. Whole-body, spine and total hip dual X-ray absorptiometry (DXA) measurements as well as peripheral quantitative computed tomography measurements of the tibia were conducted during bed rest and up to 90 days afterwards. RESULTS: A better retention of bone mass in RVE than RE was seen at the tibial diaphysis and proximal femur (p ≤ 0.024). Compared to control, RVE retained bone mass at the distal tibia and DXA leg sub-region (p ≤ 0.020), but with no significant difference to RE (p ≥ 0.10). RE impacted significantly (p = 0.038) on DXA leg sub-region bone mass only. Calf muscle size was impacted similarly by both RVE and RE. On lumbar spine DXA, whole-body DXA and calcium excretion measures, few differences between the groups were observed. CONCLUSIONS: Whilst further countermeasure optimisation is required, the results provide evidence that (1) combining whole-body vibration and high-load resistance exercise may be more efficient than high-load resistive exercise alone in preventing bone loss at some skeletal sites during and after prolonged bed rest and (2) the effects of exercise during bed rest impact upon bone recovery up to 3 months afterwards.

Relevance: 10.00%

Abstract:

BACKGROUND: We evaluated which aspects of neuromuscular performance are associated with bone mass, density, strength and geometry. METHODS: 417 women aged 60-94 years were examined. Countermovement jump, sit-to-stand test, grip strength, forearm and calf muscle cross-sectional area, areal bone mineral content and density (aBMC and aBMD) at the hip and lumbar spine via dual X-ray absorptiometry, and measures of volumetric BMC and BMD (vBMC and vBMD), bone geometry and section modulus at 4% and 66% of radius length and at 4%, 38% and 66% of tibia length via peripheral quantitative computed tomography were performed. The first principal component of the neuromuscular variables was calculated to generate a summary neuromuscular variable. The percentage of total variance in bone parameters explained by the neuromuscular parameters was calculated, and step-wise regression was also performed. RESULTS: At all pQCT bone sites (radius, ulna, tibia, fibula), a greater percentage of total variance in measures of bone mass, cortical geometry and/or bone strength was explained by peak neuromuscular performance than for vBMD. Sit-to-stand performance did not relate strongly to bone parameters. No obvious differential in the explanatory power of neuromuscular performance was seen for DXA aBMC versus aBMD. In step-wise regression, bone mass, cortical morphology and/or strength remained significantly related to the first principal component of the neuromuscular variables. In no case was vBMD positively related to neuromuscular performance in the final step-wise regression models. CONCLUSION: Peak neuromuscular performance has a stronger relationship with leg and forearm bone mass and cortical geometry, as well as proximal forearm section modulus, than with vBMD.
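
The summary neuromuscular variable described above is the first principal component of the measurements; it can be computed as below, with toy data standing in for the actual measures:

```python
import numpy as np

def first_pc_scores(X):
    """Scores on the first principal component: one summary value per
    subject combining several correlated measurements."""
    Xc = X - X.mean(axis=0)                      # centre each variable
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]                            # project onto the first PC

rng = np.random.default_rng(3)
# Toy stand-ins for e.g. jump power, sit-to-stand time, grip strength, muscle CSA.
X = rng.normal(size=(30, 4))
scores = first_pc_scores(X)
```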

Relevance: 10.00%

Abstract:

A new multi-output interval type-2 fuzzy logic system (MOIT2FLS) is introduced in this paper for protein secondary structure prediction. The three outputs of the MOIT2FLS correspond to the three structure classes: helix, strand (sheet) and coil. Quantitative properties of amino acids are employed to characterize the twenty amino acids, rather than the widely used but computationally expensive binary encoding scheme. Three clustering tasks are performed using the adaptive vector quantization method to construct an equal number of initial rules for each type of secondary structure. A genetic algorithm is applied to optimally adjust the parameters of the MOIT2FLS, with the genetic fitness function designed around the Q3 measure. Experimental results demonstrate the dominance of the proposed approach over traditional methods, namely the Chou-Fasman method, the Garnier-Osguthorpe-Robson method, and artificial neural network models.

Relevance: 10.00%

Abstract:

This paper presents a novel rank-based method for image watermarking. In the watermark embedding process, the host image is divided into blocks, followed by the 2-D discrete cosine transform (DCT). For each image block, a secret key is employed to randomly select a set of DCT coefficients suitable for watermark embedding. Watermark bits are inserted into an image block by modifying the set of DCT coefficients using a rank-based embedding rule. In the watermark detection process, the corresponding detection matrices are formed from the received image using the secret key. Afterward, the watermark bits are extracted by checking the ranks of the detection matrices. Since the proposed watermarking method only uses two DCT coefficients to hide one watermark bit, it can achieve very high embedding capacity. Moreover, our method is free of host signal interference. This desired feature and the usage of an error buffer in watermark embedding result in high robustness against attacks. Theoretical analysis and experimental results demonstrate the effectiveness of the proposed method.
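
The block DCT and key-driven coefficient selection can be sketched as follows; the secret key is modelled as a PRNG seed, the mid-band position set is an assumption, and the paper's rank-based embedding rule itself is not reproduced:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis; D @ block @ D.T gives the 2-D DCT."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D *= np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

rng = np.random.default_rng(42)           # the secret key, modelled as a seed
D = dct_matrix()
block = rng.random((8, 8)) * 255          # one 8x8 host-image block
coeffs = D @ block @ D.T                  # 2-D DCT of the block
# Key-driven random selection of mid-band coefficient positions; the paper's
# rank-based rule would then modify the selected coefficients to hide a bit.
midband = [(i, j) for i in range(8) for j in range(8) if 2 <= i + j <= 5]
chosen = [midband[i] for i in rng.choice(len(midband), 2, replace=False)]
```

Because the basis is orthonormal, `D.T @ coeffs @ D` inverts the transform exactly, so modified coefficients map cleanly back to pixel values.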