64 results for Feature extraction and classification

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present a feature selection approach based on Gabor wavelet features and boosting for face verification. By convolution with a group of Gabor wavelets, the original images are transformed into vectors of Gabor wavelet features. Then, for each individual, a small set of significant features is selected by the boosting algorithm from the large set of Gabor wavelet features. The experimental results show that the approach successfully selects meaningful and explainable features for face verification. The experiments also suggest that common characteristics such as the eyes, nose and mouth may not be as important as unique characteristics when the training set is small; when the training set is large, unique and common characteristics are both important.
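The pipeline described above (Gabor convolution followed by boosted feature selection) can be sketched in NumPy. This is an illustrative sketch only: the kernel parameters, the simplified decision-stump booster and the function names are assumptions, not the authors' implementation.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gabor_kernel(size, theta, lam, sigma):
    """Real part of a Gabor wavelet: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, kernels):
    """Magnitude of the circular convolution with each kernel, flattened
    into a single feature vector per image."""
    feats = []
    for k in kernels:
        pad = np.zeros_like(img, dtype=float)
        pad[:k.shape[0], :k.shape[1]] = k
        feats.append(np.abs(ifft2(fft2(img) * fft2(pad))).ravel())
    return np.concatenate(feats)

def boost_select(X, y, n_select):
    """Greedy AdaBoost-style selection of single-feature decision stumps;
    a simplified stand-in for the per-person boosting step in the paper."""
    w = np.full(len(y), 1.0 / len(y))
    chosen = []
    for _ in range(n_select):
        errs = [np.sum(w * (np.where(X[:, j] > X[:, j].mean(), 1, -1) != y))
                for j in range(X.shape[1])]
        j = int(np.argmin(errs))
        chosen.append(j)
        pred = np.where(X[:, j] > X[:, j].mean(), 1, -1)
        err = np.clip(errs[j], 1e-12, 1 - 1e-12)
        if err < 1e-10:
            break                      # perfect stump: nothing left to reweight
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
    return chosen
```

A face-verification system would run `gabor_features` over registered face images and then call `boost_select` once per enrolled person to pick that person's discriminative features.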

Relevance:

100.00%

Publisher:

Abstract:

Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
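The core alignment idea (fit a piecewise-linear retention-time mapping with a genetic algorithm whose fitness is insensitive to incorrect matches) can be sketched as below. All names and GA settings are hypothetical; msalign's actual fitness function and operators are not given in the abstract, so a soft-L1 fitness stands in for it.

```python
import numpy as np

def piecewise(x, knots_x, knots_y):
    """Piecewise-linear retention-time mapping defined by its knots."""
    return np.interp(x, knots_x, knots_y)

def robust_fitness(knots_y, knots_x, rt_a, rt_b):
    """Soft-L1 fitness: rewards small residuals but is insensitive to the
    large residuals produced by incorrect peptide matches."""
    resid = np.abs(piecewise(rt_a, knots_x, knots_y) - rt_b)
    return np.sum(1.0 / (1.0 + resid))

def ga_align(rt_a, rt_b, n_knots=4, pop=40, gens=120, seed=0):
    """Toy genetic algorithm over knot heights (elitism + Gaussian mutation)."""
    rng = np.random.default_rng(seed)
    kx = np.linspace(rt_a.min(), rt_a.max(), n_knots)
    # initialise knot heights near the identity mapping, +/- 10 time units
    popn = kx[None, :] + rng.uniform(-10, 10, size=(pop, n_knots))
    popn.sort(axis=1)                         # keep each mapping monotone
    for _ in range(gens):
        fit = np.array([robust_fitness(p, kx, rt_a, rt_b) for p in popn])
        elite = popn[np.argsort(fit)[::-1][:pop // 4]]
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = np.sort(kids + rng.normal(scale=1.0, size=kids.shape), axis=1)
        popn = np.vstack([elite, kids])
    fit = np.array([robust_fitness(p, kx, rt_a, rt_b) for p in popn])
    return kx, popn[int(np.argmax(fit))]
```

The robust fitness is the key design point: a least-squares fit would be dragged toward the wrong matches, whereas 1/(1 + residual) caps each match's influence.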

Relevance:

100.00%

Publisher:

Abstract:

In rapid scan Fourier transform spectrometry, we show that the noise in the wavelet coefficients resulting from the filter bank decomposition of the complex insertion loss function is linearly related to the noise power in the sample interferogram by a noise amplification factor. By maximizing an objective function composed of the power of the wavelet coefficients divided by the noise amplification factor, optimal feature extraction in the wavelet domain is performed. The performance of a classifier based on the output of a filter bank is shown to be considerably better than that of a Euclidean distance classifier in the original spectral domain. An optimization procedure results in a further improvement of the wavelet classifier. The procedure is suitable for enhancing the contrast or classifying spectra acquired by either continuous wave or THz transient spectrometers, as well as for increasing the dynamic range of THz imaging systems. (C) 2003 Optical Society of America.
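A minimal sketch of the selection objective, assuming an orthonormal Haar filter bank (for which the noise amplification factor of every subband is 1). The real work concerns THz insertion-loss spectra and a derived amplification factor, neither of which is reproduced here; the function names are mine.

```python
import numpy as np

def haar_step(x):
    """One level of an orthonormal Haar filter bank (analysis + decimation)."""
    h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # lowpass
    g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # highpass
    return np.convolve(x, h)[1::2], np.convolve(x, g)[1::2]

def subband_scores(x, n_levels=3):
    """Power of the wavelet coefficients in each subband divided by that
    subband's noise amplification factor.  For orthonormal Haar filters the
    equivalent-filter energy, and hence the amplification factor, is 1."""
    amp = 1.0
    scores, approx = [], x
    for _ in range(n_levels):
        approx, detail = haar_step(approx)
        scores.append(np.sum(detail**2) / amp)
    scores.append(np.sum(approx**2) / amp)
    return np.array(scores)
```

Because the Haar bank is orthonormal, the subband scores partition the signal energy exactly; with a non-orthonormal bank each subband would carry its own amplification factor, which is what the objective in the abstract normalises away.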

Relevance:

100.00%

Publisher:

Abstract:

Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for characterizing the texture of mass lesions and architectural distortion, and applied the concept of lacunarity to describe the spatial distribution of pixel intensities in mammographic images. These methods were tested on a set of 57 breast masses and 60 normal breast parenchyma regions (dataset1), and on another set of 19 architectural distortions and 41 normal breast parenchyma regions (dataset2). Support vector machines (SVM) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the highest area under the ROC curve (Az = 0.839 for dataset1 and 0.828 for dataset2) of the five methods on both datasets. Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarity than ROIs depicting normal breast parenchyma. The combination of FBM fractal dimension and lacunarity yielded higher Az values (0.903 and 0.875, respectively) than either feature alone on both datasets.
The application of the SVM further improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, yielding higher Az values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images, as its underlying self-affinity assumption is the better approximation. Lacunarity is an effective counterpart to the fractal dimension in texture feature extraction from mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
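Two of the texture measures used here have standard estimators that are easy to sketch: box-counting fractal dimension and gliding-box lacunarity. The paper's preferred FBM estimator is not reproduced; the functions below are generic textbook illustrations, not the authors' code.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting fractal dimension of a binary image: the slope of
    log(occupied boxes) against log(1/box size)."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def lacunarity(img, box=4):
    """Gliding-box lacunarity: second moment over squared first moment of
    the box 'mass' distribution (variance/mean^2 + 1)."""
    h, w = img.shape
    m = np.array([img[i:i + box, j:j + box].sum()
                  for i in range(h - box + 1) for j in range(w - box + 1)],
                 dtype=float)
    return m.var() / m.mean()**2 + 1.0
```

A completely filled region gives a dimension of 2 and a lacunarity of 1; tumour ROIs, with their gappier intensity distributions, score a lower dimension and higher lacunarity, which is the pattern the results report.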

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level.
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
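The extended Gram-Schmidt procedure builds on classical orthogonal least squares forward selection, which can be sketched as follows. This is a generic OLS sketch with the usual error-reduction-ratio criterion; the paper's rule-base subspace decomposition is not reproduced, and the function name is mine.

```python
import numpy as np

def ols_forward_select(P, y, n_terms):
    """Classical orthogonal least squares forward selection: at each step,
    orthogonalise the remaining candidate regressors against those already
    selected (Gram-Schmidt) and pick the one with the largest
    error-reduction ratio [err]."""
    n, m = P.shape
    selected, W = [], []
    for _ in range(n_terms):
        best_err, best_j, best_w = -1.0, None, None
        for j in range(m):
            if j in selected:
                continue
            w = P[:, j].astype(float)
            for wk in W:                      # Gram-Schmidt step
                w = w - (wk @ P[:, j]) / (wk @ wk) * wk
            if w @ w < 1e-12:                 # linearly dependent, skip
                continue
            g = (w @ y) / (w @ w)
            err = g**2 * (w @ w) / (y @ y)    # error reduction ratio
            if err > best_err:
                best_err, best_j, best_w = err, j, w
        selected.append(best_j)
        W.append(best_w)
    # parameters in the original basis, restricted to the selected columns
    theta, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)
    return selected, theta
```

In the neurofuzzy setting, each candidate column (or column block) corresponds to a fuzzy rule's subspace, so the selection order doubles as a transparency ranking of the rules.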

Relevance:

100.00%

Publisher:

Abstract:

A new robust neurofuzzy model construction algorithm has been introduced for the modeling of a priori unknown dynamical systems from observed finite data sets, in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximized model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method has been introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.

Relevance:

100.00%

Publisher:

Abstract:

This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles, for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low-level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) and (iii) have radically different detection performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.

Relevance:

100.00%

Publisher:

Abstract:

This investigation examines metal release from freshwater sediment using sequential extraction and single-step cold-acid leaching. The concentrations of Cd, Cr, Cu, Fe, Ni, Pb and Zn released using a standard 3-step sequential extraction (Rauret et al., 1999) are compared to those released using a 0.5 M HCl leach. The results show that the three sediments behave in very different ways when subjected to the same leaching experiments: the cold-acid extraction appears to remove higher relative concentrations of metals from the iron-rich sediment than from the other two sediments. Cold-acid extraction also appears to be more effective at removing metals from sediments with crystalline iron oxides than the "reducible" step of the sequential extraction. The results show that a single-step acid leach can be just as effective as sequential extraction at removing metals from sediment and is a great deal less time-consuming.

Relevance:

100.00%

Publisher:

Abstract:

Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling of cyanobacteria in freshwaters is an important tool for understanding their population dynamics and predicting bloom occurrence in lakes and rivers. In this paper, existing key models of cyanobacteria are reviewed, evaluated and classified. Two major groups emerge: deterministic mathematical models and artificial neural network models. Mathematical models can be further subcategorized into those concerned with impounded water bodies and those concerned with rivers. Most existing models focus on a single aspect, such as growth or transport mechanisms, but a few models couple both.

Relevance:

100.00%

Publisher:

Abstract:

In this work a new method for clustering and building a topographic representation of a bacteria taxonomy is presented. The method is based on the analysis of stable parts of the genome, the so-called “housekeeping genes”. The proposed method generates topographic maps of the bacteria taxonomy, in which relations among different type strains can be visually inspected and verified. Two well-known DNA alignment algorithms are applied to the genomic sequences, and the topographic maps are optimized to represent the similarity among the sequences according to their evolutionary distances. The experimental analysis is carried out on 147 type strains of the Gammaproteobacteria class by means of the 16S rRNA housekeeping gene; complete sequences of the gene were retrieved from the NCBI public database. In the experimental tests the maps show clusters of homologous type strains and present some singular cases potentially due to incorrect classification or erroneous annotations in the database.

Relevance:

100.00%

Publisher:

Abstract:

The extracting agent 2,6-bis(4,6-di-pivaloylamino-1,3,5-triazin-2-yl)-pyridine (L-5) in n-octanol was found, in synergy with 2-bromodecanoic acid, to give D-Am/D-Eu separation factors (SFs) between 2.4 and 3.7 when used to extract the metal ions from 0.02-0.12 M HNO3. Slightly higher SFs (4-6) were obtained in the absence of the synergist when the ligand was used to extract Am(III) and Eu(III) from 0.98 M HNO3. In order to investigate the possible nature of the extracted species, crystal structures of L-5 and of the complex formed between Yb(III) and 2,6-bis(4,6-di-amino-1,3,5-triazin-2-yl)-pyridine (L-4) were also determined. The structure of L-5 shows three methanol solvent molecules, all of which form 2 or 3 hydrogen bonds with triazine nitrogen atoms, amide nitrogen or oxygen atoms, or pyridine nitrogen atoms. However, L-5 is relatively unstable in metal complexation reactions and loses amide groups to form the parent tetramine L-4. The crystal structure of Yb(L-4)(NO3)(3) shows ytterbium in a 9-coordinate environment, bonded to three donor atoms of the ligand and three bidentate nitrate ions. The solvent extraction properties of L-4 and L-5 are far inferior to those found for the 2,6-bis-(1,2,4-triazin-3-yl)-pyridines (L-1), which have SF values of ca. 140, and theoretical calculations have been made to compare the electronic properties of the ligands. The electronic charge distribution in L-4 and L-5 is similar to that found in other terdentate ligands, such as terpyridine, which have equally poor extraction properties, suggesting that the unique properties of L-1 arise from the presence of two adjacent nitrogen atoms in the triazine rings.

Relevance:

100.00%

Publisher:

Abstract:

The bifunctional carbamoyl methyl sulfoxide ligands, PhCH2SOCH2CONHPh (L-1), PhCH2SOCH2CONHCH2Ph (L-2), (PhSOCH2CONPr2)-Pr-i (L-3), PhSOCH2CONBu2 (L-4), (PhSOCH2CONBu2)-Bu-i (L-5) and PhSOCH2CON(C8H17)(2) (L-6) have been synthesized and characterized by spectroscopic methods. The selected coordination chemistry of L-1, L-3, L-4 and L-5 with [UO2(NO3)(2)] and [Ce(NO3)(3)] has been evaluated. The structures of the compounds [UO2(NO3)(2)((PhSOCH2CONBu2)-Bu-i)] (10) and [Ce(NO3)(3)(PhSOCH2CONBu2)(2)] (12) have been determined by single crystal X-ray diffraction methods. Preliminary extraction studies of ligand L-6 with U(VI), Pu(IV) and Am(III) at tracer level showed appreciable extraction of U(VI) and Pu(IV) in up to 10 M HNO3, but not of Am(III). Thermal studies on compounds 8 and 10 in air revealed that the ligands can be destroyed completely on incineration. The electrospray mass spectra of compounds 8 and 10 in acetone show that extensive ligand redistribution reactions occur in solution to give a mixture of products with ligand to metal ratios of 1 : 1 and 2 : 1. However, 10 retains its solid state structure in CH2Cl2.

Relevance:

100.00%

Publisher:

Abstract:

Chestnuts are an important economic resource in the chestnut-growing regions, not only for the fruit but also for the wood. The content of ellagic acid (EA), a naturally occurring inhibitor of carcinogenesis, was determined in chestnut fruits and bark. EA was extracted with methanol and free ellagic acid was determined by HPLC with UV detection, both in the crude extract and after hydrolysis. The concentration of EA was generally increased after hydrolysis due to the presence of ellagitannins in the crude extract. The concentration varied between 0.71 and 21.6 mg g(-1) (d.w.) in un-hydrolyzed samples, and between 2.83 and 18.4 mg g(-1) (d.w.) in hydrolyzed samples. In chestnut fruits, traces of EA were present in the seed, with higher concentrations in the pellicle and pericarp. However, all fruit tissues had lower concentrations of EA than the bark. The concentration of EA in the hydrolyzed samples showed a non-linear correlation with the concentration in the unhydrolyzed extracts. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

We consider a fully complex-valued radial basis function (RBF) network for regression and classification applications. For regression problems, the locally regularised orthogonal least squares (LROLS) algorithm aided with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF models, is extended to the fully complex-valued RBF (CVRBF) network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully CVRBF network. The proposed fully CVRBF network is also applied to four-class classification problems that are typically encountered in communication systems. A complex-valued orthogonal forward selection algorithm based on the multi-class Fisher ratio of class separability measure is derived for constructing sparse CVRBF classifiers that generalise well. The effectiveness of the proposed algorithm is demonstrated using the example of nonlinear beamforming for multiple-antenna aided communication systems that employ a complex-valued quadrature phase shift keying modulation scheme. (C) 2007 Elsevier B.V. All rights reserved.
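The regression side of such a network reduces, in its simplest form, to regularised complex least squares over a real Gaussian design matrix. The sketch below is a plain stand-in for the LROLS + D-optimality construction described above; the function names, the kernel choice and the toy channel in the usage example are assumptions.

```python
import numpy as np

def cvrbf_design(x, centres, width):
    """Gaussian kernel of complex distances; the design matrix is real but
    the weights (and hence the model output) are complex."""
    return np.exp(-np.abs(x[:, None] - centres[None, :])**2 / width**2)

def fit_cvrbf(x, y, centres, width, reg=1e-10):
    """Ridge-regularised complex least squares: solve
    (Phi^H Phi + reg*I) w = Phi^H y for the complex weight vector w."""
    Phi = cvrbf_design(x, centres, width)
    A = Phi.conj().T @ Phi + reg * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.conj().T @ y)

def predict_cvrbf(x, centres, width, w):
    """Complex-valued network output at the inputs x."""
    return cvrbf_design(x, centres, width) @ w
```

In the sparse-model setting of the abstract, the centres would be a small subset of the training inputs chosen by orthogonal forward selection rather than, as here, the full training set.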