106 results for face classification
Abstract:
3D face recognition has been an active area of research for the past several years. For a 3D face recognition system, one would like an accurate as well as low-cost setup for constructing the 3D face model. In this paper, we use a profilometry approach to obtain a 3D face model. This method gives a low-cost solution to the problem of acquiring 3D data, and the 3D face models generated by this method are sufficiently accurate. We also develop an algorithm that can use the 3D face model generated by the above method for recognition.
Abstract:
The covalent linkage between the side chain and the backbone nitrogen atom of proline leads to the formation of the five-membered pyrrolidine ring and hence restriction of the backbone torsional angle phi to values of -60 degrees +/- 30 degrees for L-proline. Diproline segments constitute a chain fragment with considerably reduced conformational choices. In the current study, the conformational states for the diproline segment ((L)Pro-(L)Pro) found in proteins have been investigated, with an emphasis on the cis and trans states of the Pro-Pro peptide bond. The occurrence of diproline segments in turns and other secondary structures has been studied and compared to that of Xaa-Pro-Yaa segments in proteins, which gives a better understanding of the restrictions imposed on other residues by the diproline segment and the single proline residue. The study indicates that P(II)-P(II) and P(II)-alpha are the most favorable conformational states for the diproline segment. The analysis of Xaa-Pro-Yaa sequences reveals that the Xaa-Pro peptide bond exists preferably as the trans conformer rather than the cis conformer. The present study may lead to a better understanding of the behavior of proline occurring in diproline segments, which can facilitate the design of various diproline-based synthetic templates for biological and structural studies. (C) 2011 Wiley Periodicals, Inc. Biopolymers 97: 54-64, 2012.
Abstract:
This paper investigates a new Glowworm Swarm Optimization (GSO) clustering algorithm for hierarchical splitting and merging in automatic multi-spectral satellite image classification (the land cover mapping problem). Amongst the multiple benefits and uses of remote sensing, one of the most important has been its use in solving the problem of land cover mapping. Image classification forms the core of the solution to the land cover mapping problem. No single classifier has proved able to classify all the basic land cover classes of an urban region in a satisfactory manner. In unsupervised classification methods, the automatic generation of clusters to classify a huge database is not exploited to its full potential. The proposed methodology searches for the best possible number of clusters and their centers using GSO. Using these clusters, classification is performed by merging them with a parametric method (the k-means technique). The performance of the proposed unsupervised classification technique is evaluated on a Landsat 7 Thematic Mapper image. Results are evaluated in terms of classification efficiency: individual, average and overall.
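The merging step described in this abstract can be sketched as plain k-means refinement. In the proposed pipeline the initial centers would come from the GSO search; this minimal illustration starts from hand-picked centers instead, and all data and names below are hypothetical, not from the paper:

```python
def kmeans(points, centers, iters=10):
    """Plain k-means refinement of cluster centers; each point is a
    pixel's spectral vector."""
    for _ in range(iters):
        # Assignment step: attach every point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [[sum(v) / len(c) for v in zip(*c)] if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two toy 2-band "spectra" clusters; the starting centers stand in for
# the ones the GSO search would supply.
points = [[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]]
centers = kmeans(points, [[0.2, 0.2], [0.8, 0.8]])
```

In the full method the number of centers (and their initial positions) is what GSO optimizes; k-means only polishes that solution.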
Abstract:
A technique is proposed for classifying respiratory volume waveforms (RVWs) into normal and abnormal categories of respiratory pathways. The proposed method transforms the temporal sequence into the frequency domain using an orthogonal transform, namely the discrete cosine transform (DCT), and the transformed signal is then pole-zero modelled. A Bayes classifier using the model pole angles as the feature vector performed satisfactorily when classifying a limited number of RVWs recorded under the deep and rapid (DR) manoeuvre.
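The pipeline above (orthogonal transform, then a Bayes decision on a derived feature) can be sketched as follows. This is a simplified stand-in: the paper extracts pole angles from a pole-zero model of the transformed signal, whereas this sketch uses a raw DCT coefficient magnitude as the feature, and all waveform values and class parameters are hypothetical:

```python
import math

def dct_ii(signal):
    """Naive DCT-II: maps the temporal sequence into the frequency domain."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

def bayes_classify(feature, class_params):
    """Bayes rule with univariate Gaussian class-conditional densities
    and equal priors: pick the class with the highest log-likelihood."""
    def log_lik(x, mu, sigma):
        return -math.log(sigma) - (x - mu) ** 2 / (2 * sigma ** 2)
    return max(class_params, key=lambda c: log_lik(feature, *class_params[c]))

# One period of a slow oscillation: a toy stand-in for a DR-manoeuvre RVW.
rvw = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
coeffs = dct_ii(rvw)
# Hypothetical scalar feature: magnitude of the first AC coefficient
# (the paper instead uses pole angles of a pole-zero model).
feature = abs(coeffs[1])
label = bayes_classify(feature, {"normal": (2.0, 0.5), "abnormal": (0.2, 0.5)})
```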
Abstract:
Earthquakes cause massive road damage, which in turn has adverse effects on society. Previous studies have quantified the damage caused to residential and commercial buildings; however, few studies have been conducted to quantify road damage caused by earthquakes. In this study, an attempt has been made to propose a new scale to classify and quantify road damage due to earthquakes, based on data collected from major earthquakes in the past. The proposed classification for road damage due to earthquakes is called the road damage scale (RDS). Earthquake details such as magnitude, distance of road damage from the epicenter, focal depth, and photographs of damaged roads have been collected from various sources, along with the reported modified Mercalli intensity (MMI). The widely used MMI scale is found to be inadequate to clearly define road damage. The proposed RDS is applied to various reported cases of road damage, which are reclassified as per RDS. The correlation between RDS and the earthquake parameters of magnitude, epicenter distance, hypocenter distance, and the combination of magnitude with epicenter and hypocenter distance has been studied using the available data. It is observed that the proposed RDS correlates well with the available earthquake data when compared with the MMI scale. Among the several correlations, the correlation between RDS and the combination of magnitude and epicenter distance is the most appropriate. A summary of these correlations, their limitations, and the applicability of the proposed scale to forecasting road damage and carrying out vulnerability analysis in urban areas is presented in the paper.
Abstract:
The widely used Bayesian classifier is based on the assumption of equal prior probabilities for all the classes. However, equal prior probabilities may not guarantee high classification accuracy for the individual classes. Here, we propose a novel technique, the Hybrid Bayesian Classifier (HBC), in which the class prior probabilities are determined by unmixing a supplemental low spatial-high spectral resolution multispectral (MS) data set and are assigned to every pixel of a high spatial-low spectral resolution MS data set during Bayesian classification. This is demonstrated with two separate experiments. First, class abundances are estimated per pixel by unmixing Moderate Resolution Imaging Spectroradiometer data, to be used as prior probabilities, while posterior probabilities are determined from training data obtained from the ground. These have been used for classifying the Indian Remote Sensing Satellite LISS-III MS data through the Bayesian classifier. In the second experiment, abundances obtained by unmixing Landsat Enhanced Thematic Mapper Plus data are used as priors, and posterior probabilities are determined from the ground data to classify IKONOS MS images through the Bayesian classifier. The results indicated that HBC systematically exploited the information from the two image sources, improving the overall accuracy of LISS-III MS classification by 6% and of IKONOS MS classification by 9%. Inclusion of prior probabilities increased the average producer's and user's accuracies by 5.5% and 6.5% in the case of LISS-III MS with six classes, and by 12.5% and 5.4% for IKONOS MS with the five classes considered.
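The key idea above, replacing equal priors with per-pixel priors taken from unmixed class abundances, can be sketched in a few lines. This is a minimal illustration with a single hypothetical band, Gaussian class-conditional densities, and made-up class names and parameters, not the paper's actual sensor data:

```python
import math

def hybrid_bayes(pixel_value, pixel_priors, class_params):
    """Bayes rule where each pixel carries its own class priors
    (e.g. abundances unmixed from the coarser MS image) instead of
    the usual equal priors."""
    def log_gauss(x, mu, sigma):
        return -math.log(sigma) - (x - mu) ** 2 / (2 * sigma ** 2)
    scores = {c: math.log(pixel_priors[c]) + log_gauss(pixel_value, *class_params[c])
              for c in class_params}
    return max(scores, key=scores.get)

# The pixel value sits exactly between both class means, so the
# likelihoods tie and the unmixed abundance prior decides the label.
class_params = {"water": (0.3, 0.1), "vegetation": (0.5, 0.1)}
label = hybrid_bayes(0.4, {"water": 0.8, "vegetation": 0.2}, class_params)
```

With equal priors this pixel would be ambiguous; the per-pixel prior is what lets the coarser-resolution abundance information break such ties.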
Abstract:
In this paper, we give a brief review of pattern classification algorithms based on discriminant analysis. We then apply these algorithms to classify movement direction based on multivariate local field potentials recorded from a microelectrode array in the primary motor cortex of a monkey performing a reaching task. We obtain prediction accuracies between 55% and 90%, depending on the method, all significantly above the chance level of 12.5%.
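The simplest discriminant-analysis classifier of the kind reviewed above is the Gaussian discriminant with a shared identity covariance, which reduces to assigning each feature vector to the nearest class mean. The sketch below uses toy 2-D "local field potential" features with hypothetical direction labels, not the paper's recordings:

```python
def class_means(X, y):
    """Mean feature vector per class from labeled training data."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [a + b for a, b in zip(sums.get(label, [0.0] * len(x)), x)]
    return {c: [s / counts[c] for s in sums[c]] for c in sums}

def nearest_mean_classify(x, means):
    """Gaussian discriminant with shared identity covariance:
    assign to the class mean nearest in squared Euclidean distance."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(means, key=lambda c: dist(x, means[c]))

# Toy 2-D features for two of the movement directions.
X = [[0.1, 0.0], [0.2, 0.1], [1.0, 1.1], [0.9, 0.8]]
y = ["left", "left", "right", "right"]
means = class_means(X, y)
label = nearest_mean_classify([0.95, 1.0], means)
```

Full linear or quadratic discriminant analysis additionally estimates the (shared or per-class) covariance; the decision rule is the same idea with a Mahalanobis rather than Euclidean distance.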
Abstract:
Proving the unsatisfiability of propositional Boolean formulas has applications in a wide range of fields. Minimal Unsatisfiable Sets (MUS) are signatures of the property of unsatisfiability in formulas and our understanding of these signatures can be very helpful in answering various algorithmic and structural questions relating to unsatisfiability. In this paper, we explore some combinatorial properties of MUS and use them to devise a classification scheme for MUS. We also derive bounds on the sizes of MUS in Horn, 2-SAT and 3-SAT formulas.
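The defining property of a MUS, unsatisfiable as a whole but satisfiable once any clause is removed, can be checked by brute force on small formulas. This is only an illustrative sketch (exponential in the number of variables), not one of the paper's algorithms; the clause encoding is a common convention, not taken from the paper:

```python
from itertools import combinations, product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check; a literal is +/-(variable index + 1)."""
    return any(all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
               for assignment in product([False, True], repeat=n_vars))

def is_mus(clauses, n_vars):
    """A Minimal Unsatisfiable Set is unsatisfiable while every
    proper subset of its clauses is satisfiable."""
    if satisfiable(clauses, n_vars):
        return False
    return all(satisfiable(list(subset), n_vars)
               for subset in combinations(clauses, len(clauses) - 1))

# The smallest MUS: the two conflicting unit clauses {x1} and {not x1}.
tiny = is_mus([(1,), (-1,)], 1)
# A 2-SAT MUS on two variables: each clause rules out exactly one
# of the four assignments, so dropping any clause restores satisfiability.
two_sat = is_mus([(1, 2), (-1, 2), (1, -2), (-1, -2)], 2)
```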
Abstract:
In this paper, we consider the problem of time series classification. Using piecewise linear interpolation, various novel kernels are obtained which can be used with Support Vector Machines to design classifiers capable of deciding the class of a given time series. The approach is general and is applicable in many scenarios. We apply the method to the task of online Tamil handwritten character recognition, with promising results.
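One way to realize the idea above is to use piecewise linear interpolation to resample every series onto a fixed number of points, after which any standard kernel applies. The sketch below pairs that resampling with a Gaussian kernel; the specific kernel construction is an assumption for illustration, not necessarily the one derived in the paper:

```python
import math

def resample_linear(series, m):
    """Piecewise linear interpolation of a time series onto m equally
    spaced points, mapping variable-length series to one feature space."""
    n = len(series)
    out = []
    for i in range(m):
        t = i * (n - 1) / (m - 1)
        lo = min(int(t), n - 2)
        frac = t - lo
        out.append(series[lo] * (1 - frac) + series[lo + 1] * frac)
    return out

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian kernel on the interpolated representations; usable as
    a custom kernel in an SVM."""
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

# Two series of different lengths tracing the same triangular stroke:
# after resampling they coincide, so the kernel value is maximal.
a = resample_linear([0.0, 1.0, 0.0], 5)
b = resample_linear([0.0, 0.5, 1.0, 0.5, 0.0], 5)
similarity = rbf_kernel(a, b)
```

For handwriting, each pen-stroke coordinate sequence would be interpolated the same way before the kernel is evaluated.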
Abstract:
The interaction of guar gum with the hydrophobic solids talc, mica and graphite has been investigated through adsorption, electrokinetic and flotation experiments. The adsorption densities of guar gum onto the above hydrophobic minerals show that they are more or less independent of pH. The adsorption isotherms of guar gum onto talc, mica and graphite indicate that the adsorption densities increase with increasing guar gum concentration, and all the isotherms follow the L1 type according to the Giles classification. The magnitude of the adsorption density of guar gum onto the above minerals may be arranged in the following sequence: talc > graphite > mica. The effect of particle size on the adsorption density of guar gum onto these minerals indicates that higher adsorption takes place in the coarser size fraction, consequent to an increase in the surface face-to-edge ratio. In the case of the talc and mica samples pretreated with EDTA and the leached graphite sample, a decrease in the adsorption density of guar gum is observed, due to a reduction in the metallic adsorption sites. The adsorption densities of guar gum increase with decreasing sample weight for all three minerals. Electrokinetic measurements indicate that the isoelectric points (iep) of these minerals lie between pH 2 and 3. Addition of guar gum decreases the negative electrophoretic mobility values in proportion to the guar gum concentration, without any observable shift in the iep values, resembling the influence of an indifferent electrolyte. The flotation recovery is diminished in the presence of guar gum for all three minerals. The magnitude of depression follows the same sequence as observed in the adsorption studies. The floatability of the EDTA-treated talc and mica samples as well as the leached graphite sample is enhanced, complementing the adsorption data. Possible mechanisms of interaction between the hydrophobic minerals and guar gum are discussed.
Abstract:
This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time series analysis of satellite images, utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before flood, during flood and after flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from "normal" (non-flood) and flood time periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove misclassified water pixels. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification (SVM and ANN) together with region-based image segmentation is an accurate and reliable approach for the extraction of water-covered regions. (c) 2012 COSPAR. Published by Elsevier Ltd. All rights reserved.
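The two-step structure above, spectral classification followed by spatial cleanup, can be sketched on a toy image. Here a simple reflectance threshold stands in for the SVM/ANN classifier, and small connected components are treated as misclassified water; the band values and parameters are hypothetical:

```python
def classify_water(image, threshold=0.5):
    """Step 1: per-pixel spectral classification (a threshold stands
    in for the SVM/ANN classifier here); 1 = water, 0 = non-water."""
    return [[1 if v < threshold else 0 for v in row] for row in image]

def remove_small_regions(mask, min_size):
    """Step 2: region-based segmentation: drop 4-connected water
    regions smaller than min_size as misclassified pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                stack, region = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(region) < min_size:
                    for y, x in region:
                        out[y][x] = 0
    return out

# A 4x4 toy "reflectance" band: a river along the top row plus one
# isolated low-reflectance pixel that step 2 removes.
band = [[0.1, 0.1, 0.1, 0.1],
        [0.9, 0.9, 0.9, 0.9],
        [0.9, 0.2, 0.9, 0.9],
        [0.9, 0.9, 0.9, 0.9]]
cleaned = remove_small_regions(classify_water(band), min_size=2)
```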
Abstract:
In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can be solved in principle by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditski and Nemirovski (2011). It achieves an O(1/T^2) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle point based algorithm outperforms the IP method significantly.
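The idea of "working directly on the convex-concave saddle function" can be illustrated with the simplest possible first-order scheme, plain gradient descent-ascent on a toy function. This is only a caricature: the Juditski-Nemirovski scheme used in the paper involves additional machinery (and a much better rate), and the function below is a made-up example, not the robust SVM objective:

```python
def saddle_gda(grad_x, grad_y, x, y, step=0.1, iters=200):
    """Plain gradient descent-ascent on a convex-concave f(x, y):
    descend in the minimization variable x, ascend in y."""
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - step * gx, y + step * gy
    return x, y

# f(x, y) = x**2 - y**2 + x*y is convex in x, concave in y, with its
# unique saddle point at the origin.
x, y = saddle_gda(lambda x, y: 2 * x + y,   # df/dx
                  lambda x, y: x - 2 * y,   # df/dy
                  1.0, 1.0)
```

For this strongly-convex-strongly-concave example the iterates spiral into the saddle point; on merely convex-concave problems one needs averaging or extragradient steps of the kind the paper's scheme provides.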
Abstract:
In the design of practical web page classification systems, one often encounters a situation in which the labeled training set is created by choosing some examples from each class, but the class proportions in this set are not the same as those in the test distribution to which the classifier will actually be applied. The problem is made worse when the amount of training data is also small. In this paper we explore and adapt binary SVM methods that make use of unlabeled data from the test distribution, viz., Transductive SVMs (TSVMs) and expectation regularization/constraint (ER/EC) methods, to deal with this situation. We empirically show that when the labeled training data is small, a TSVM designed using the class ratio tuned by minimizing the loss on the labeled set yields the best performance; its performance is good even when the deviation between the class ratios of the labeled training set and the test set is quite large. When the labeled training data is sufficiently large, an unsupervised Gaussian mixture model can be used to get a very good estimate of the class ratio in the test set; when this estimate is used, both TSVM and ER/EC give their best possible performance, with TSVM coming out superior. The ideas in the paper can be easily extended to multi-class SVMs and MaxEnt models.
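The tuning loop described above, trying candidate class ratios and keeping the one that minimizes the loss on the labeled set, can be sketched as follows. In the paper each candidate ratio would drive a TSVM retraining; here a fixed scoring function whose threshold is set to match the candidate ratio on the unlabeled pool stands in for that, and all scores and labels are hypothetical:

```python
def tune_class_ratio(labeled, unlabeled_scores, candidates):
    """Grid-search the positive-class ratio: for each candidate r,
    threshold the unlabeled scores so a fraction r is called positive,
    then keep the r whose threshold makes the fewest errors on the
    labeled set (a stand-in for retraining a TSVM at each r)."""
    best = None
    for r in candidates:
        ranked = sorted(unlabeled_scores, reverse=True)
        k = max(1, round(r * len(ranked)))
        threshold = ranked[k - 1]
        errors = sum((score >= threshold) != (y == 1) for score, y in labeled)
        if best is None or errors < best[0]:
            best = (errors, r)
    return best[1]

# Hypothetical classifier scores: labeled pairs are (score, true label).
labeled = [(0.9, 1), (0.8, 1), (0.3, 0), (0.1, 0)]
unlabeled = [0.95, 0.85, 0.7, 0.3, 0.25, 0.15]
ratio = tune_class_ratio(labeled, unlabeled, [0.2, 0.5, 0.8])
```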