22 results for Polynomial classifier
at Cochin University of Science and Technology
Abstract:
Most adaptive linearization circuits for nonlinear amplifiers have a feedback loop that returns the output signal of the amplifier to the linearizer. The loop delay of the linearizer must be controlled precisely so that the convergence of the linearizer is assured. In this Letter a delay control circuit is presented. It is a delay lock loop (DLL) with a modified early-late gate and can be easily applied to a DSP implementation. The proposed DLL circuit is applied to an adaptive linearizer that uses a polynomial predistorter, and a simulation for a 16-QAM signal is performed. The simulation results show that the proposed DLL eliminates the delay between the reference input signal and the delayed feedback signal of the linearizing circuit perfectly, so that the predistorter polynomial coefficients converge to the optimum values and a high degree of linearization is achieved.
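As a rough illustration of the early-late gate idea, here is a numpy sketch under assumed sampled signals (not the Letter's actual circuit; the one-sample gate offset and the loop gain `mu`, tuned for the toy signal below, are illustrative choices):

```python
import numpy as np

def dll_track_delay(ref, fb, n_iter=100, mu=30.0, d0=0.0):
    """Early-late gate delay lock loop (sketch).

    The reference is correlated with the feedback advanced by d+1 and
    d-1 samples; the imbalance between the two gate branches drives the
    delay estimate d toward the point where they balance, i.e. the peak
    of the cross-correlation.
    """
    n = np.arange(len(ref))
    d = d0
    for _ in range(n_iter):
        def corr(shift):
            return np.dot(ref, np.interp(n + shift, n, fb)) / len(ref)
        err = corr(d + 1.0) - corr(d - 1.0)  # early-late error signal
        d += mu * err                        # first-order loop filter
    return d

# Toy check: a sinusoid delayed by 3 samples yields d close to 3.
t = np.arange(400)
x = np.sin(2 * np.pi * t / 40)
print(dll_track_delay(x, np.roll(x, 3)))
```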
Abstract:
This thesis addresses one of the emerging topics in sonar signal processing, viz. the implementation of a target classifier for noise sources in the ocean, since operator assisted classification turns out to be tedious, laborious and time consuming. In the work reported in this thesis, various judiciously chosen components of the feature vector are used for realizing the newly proposed Hierarchical Target Trimming Model. The performance of the proposed classifier has been compared with the Euclidean distance and Fuzzy K-Nearest Neighbour Model classifiers and is found to have better success rates. Procedures for generating the Target Feature Record, or feature vector, from the spectral, cepstral and bispectral features have also been suggested. The feature vector so generated from the noise data waveform is compared with the feature vectors available in the knowledge base, and the most closely matching pattern is identified for the purpose of target classification. In an attempt to improve the success rate of the feature vector based classifier, the proposed system has been augmented with an HMM based classifier. In situations where the two classifier decisions disagree, a contention resolving mechanism built around the DUET algorithm has been suggested.
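For reference, the Euclidean distance baseline mentioned above amounts to a nearest-record search over the knowledge base; a minimal sketch (the feature extraction and the trimming model itself are not shown):

```python
import numpy as np

def classify_target(feature_vec, knowledge_base):
    """Nearest-record matching: return the label of the stored Target
    Feature Record closest (in Euclidean distance) to the query vector."""
    best_label, best_dist = None, np.inf
    for label, stored in knowledge_base:
        dist = np.linalg.norm(np.asarray(feature_vec) - np.asarray(stored))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# knowledge_base: iterable of (label, feature_vector) pairs built from
# the spectral, cepstral and bispectral features of known noise sources.
```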
Abstract:
Speech processing and consequent recognition are important areas of digital signal processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. For recognizing speech, features are to be extracted from the speech signal, and hence the feature extraction method plays an important role in speech recognition. Here, front end processing for extracting the features is performed using two wavelet based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. With the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD produced an accuracy of 80.7%. This paper is intended to devise a new feature extraction method which improves the recognition accuracy. So, a new method called Discrete Wavelet Packet Decomposition (DWPD) is introduced which utilizes the hybrid features of both DWT and WPD. The performance of this new approach is evaluated, and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
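A minimal sketch of the two front ends, using sub-band log-energies as features (the exact feature definition is an assumption here, not taken from the paper) with `pywt` and scikit-learn's Gaussian Naive Bayes:

```python
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def dwt_energy_features(signal, wavelet='db4', level=5):
    """Sub-band log-energies from a multilevel DWT (one common choice)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def wpd_energy_features(signal, wavelet='db4', level=3):
    """Sub-band log-energies from a full wavelet packet decomposition."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order='natural')
    return np.array([np.log(np.sum(n.data ** 2) + 1e-12) for n in nodes])

# X: one fixed-length utterance per row; y: digit labels.
# feats = np.array([dwt_energy_features(x) for x in X])
# clf = GaussianNB().fit(feats, y)
# A hybrid DWPD feature could, e.g., concatenate the two feature sets.
```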
Abstract:
This paper presents the application of wavelet processing in the domain of handwritten character recognition. To attain a high recognition rate, robust feature extractors and powerful classifiers that are invariant to the degree of variability of human writing are needed. The proposed scheme consists of two stages: a feature extraction stage, which is based on the Haar wavelet transform, and a classification stage that uses a support vector machine classifier. Experimental results show that the proposed method is effective.
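A sketch of such a two-stage pipeline, assuming the flattened Haar approximation coefficients serve as the feature vector (the paper's exact feature layout may differ):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def haar_features(img, levels=2):
    """Approximation coefficients after `levels` of 2-D Haar DWT,
    flattened into a feature vector for the classifier stage."""
    cA = img.astype(float)
    for _ in range(levels):
        cA, _ = pywt.dwt2(cA, 'haar')
    return cA.ravel()

# imgs: equally sized grayscale character images; labels: character classes.
# clf = SVC(kernel='rbf').fit([haar_features(im) for im in imgs], labels)
```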
Abstract:
In our study we use a kernel based machine learning technique, Support Vector Machine Regression, for predicting the melting point of drug-like compounds in terms of topological descriptors, topological charge indices, connectivity indices and 2D autocorrelations. The machine learning model was designed, trained and tested using a dataset of 100 compounds, and it was found that an SVMReg model with an RBF kernel could predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
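A hedged scikit-learn sketch of such a model; the data below are random stand-ins for the 100-compound descriptor set (NOT the study's data), and `C` and `epsilon` are illustrative rather than the study's tuned values:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Placeholder data: rows = compounds, columns = topological / charge /
# connectivity / 2D-autocorrelation descriptors; y = melting points.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))
y = 100 + 20 * X[:, 0] + rng.normal(scale=5, size=100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=1.0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print('MAE :', mean_absolute_error(y_te, pred))
print('RMSE:', np.sqrt(mean_squared_error(y_te, pred)))
```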
Abstract:
The paper investigates the feasibility of implementing an intelligent classifier for noise sources in the ocean, with the help of artificial neural networks, using higher order spectral features. Non-linear interactions between the component frequencies of the noise data can give rise to certain phase relations called Quadratic Phase Coupling (QPC), which cannot be characterized by power spectral analysis. However, bispectral analysis, which is a higher order estimation technique, can reveal the presence of such phase couplings and provide a measure to quantify them. A feed forward neural network has been trained and validated with higher order spectral features.
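One standard way to estimate the bispectrum is the direct, segment-averaged method sketched below (this is a generic estimator, not necessarily the paper's; segment length and averaging scheme are illustrative). Peaks of |B(f1, f2)| then serve as candidate QPC features for the network:

```python
import numpy as np

def bispectrum(x, nfft=128, nseg=32):
    """Direct (segment-averaged) bispectrum estimate B(f1, f2).

    Quadratic phase coupling between f1, f2 and f1 + f2 appears as a
    peak in |B|; ordinary power spectra are blind to such phase relations.
    """
    segs = np.array_split(x[: len(x) // nseg * nseg], nseg)
    B = np.zeros((nfft, nfft), dtype=complex)
    idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
    for s in segs:
        X = np.fft.fft(s - np.mean(s), nfft)
        # B(f1, f2) += X(f1) X(f2) X*(f1 + f2)
        B += X[:, None] * X[None, :] * np.conj(X[idx])
    return B / nseg

# Toy: tones at 0.125 and 0.1875 cycles/sample plus their phase-coupled
# sum at 0.3125 give a bispectral peak at bins (16, 24) for nfft = 128.
n = np.arange(4096)
x = (np.cos(2 * np.pi * 0.125 * n + 0.4)
     + np.cos(2 * np.pi * 0.1875 * n + 0.7)
     + np.cos(2 * np.pi * 0.3125 * n + 1.1))
peak = np.abs(bispectrum(x))[16, 24]
```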
Abstract:
Cast Al-Si alloys are widely used in the automotive, aerospace and general engineering industries due to their excellent combination of properties such as good castability, low coefficient of thermal expansion, high strength-to-weight ratio and good corrosion resistance. The present investigation concerns the influence of alloying additions on the structure and properties of the Al-7Si-0.3Mg alloy. The primary objective of this investigation is to study the beneficial effects of calcium on the structure and properties of Al-7Si-0.3Mg-xFe alloys. The second objective is to study the effects of Mn, Be and Sr additions as Fe neutralizers, and also the interaction of Mn, Be, Sr and Ca in Al-7Si-0.3Mg-xFe alloys. The study covers the dual beneficial effects of Ca, viz. modification and Fe neutralization, and a comparison of the effects of Ca and Sr with common Fe neutralizers. The castings have been characterized with respect to their microstructure, % porosity, electrical conductivity, solidification behaviour and mechanical properties. One of the interesting observations in the present work is that a low level of calcium reduces the porosity compared to the untreated alloy; however, higher levels of calcium addition lead to higher porosity in the castings. An empirical analysis carried out to compare the results of the present work with those of other researchers on the effect of increasing iron content on the UTS and % elongation of Al-Si-Mg and Al-Si-Cu alloys has shown a linear relationship for the former and an inverse first order polynomial relationship for the latter.
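The two empirical relationships mentioned at the end can be fitted as in the sketch below; the numbers are illustrative stand-ins, NOT measurements from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stand-in values: iron content in wt% versus UTS in MPa
# and % elongation (hypothetical, for demonstrating the fits only).
fe = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
uts = np.array([310.0, 295.0, 281.0, 268.0, 254.0])
elong = np.array([9.0, 5.5, 4.0, 3.1, 2.6])

# UTS vs Fe: linear relationship, uts ~ m * fe + c.
m, c = np.polyfit(fe, uts, 1)

# Elongation vs Fe: inverse first order polynomial, elong ~ a / fe + b.
(a, b), _ = curve_fit(lambda x, a, b: a / x + b, fe, elong)
print(f'UTS fit: {m:.1f}*Fe + {c:.1f}; elongation fit: {a:.2f}/Fe + {b:.2f}')
```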
Abstract:
Many finite elements used in structural analysis possess deficiencies like shear locking, incompressibility locking, poor stress predictions within the element domain, violent stress oscillation, poor convergence etc. An approach that can probably overcome many of these problems would be to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element will not only have nodal equilibrium of forces, but also inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than the existing elements. In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions and using these functions in the generation of elemental stiffness matrices has been developed. The approach to developing field equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine node quadrilateral element SFCNQ for plane stress analysis, a sixteen node solid element SFCSS for three dimensional stress analysis and a four node quadrilateral element SFCFP for plate bending problems have been formulated. For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations on the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons involving theoretical closed form solutions as well as results obtained with existing finite elements have also been made. It is found that the new elements perform well in all the situations considered. Solutions in all the cases converge correctly to the exact values. In many cases, convergence is faster when compared with other existing finite elements. The behaviour of these field consistent elements should generate considerable interest amongst users of finite elements.
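For contrast, a conventional displacement-based element stiffness is assembled by Gauss quadrature of the integral of B^T D B over the element; the standard 4-node plane stress quadrilateral below is a sketch of that baseline, not of the proposed SFCNQ element, whose interpolation functions are instead taken from the equilibrium equations:

```python
import numpy as np

def quad4_stiffness(xy, E=210e9, nu=0.3, t=1.0):
    """Stiffness of a standard 4-node plane stress quadrilateral:
    K = sum over 2x2 Gauss points of B^T D B det(J) t."""
    D = E / (1 - nu ** 2) * np.array([[1, nu, 0],
                                      [nu, 1, 0],
                                      [0, 0, (1 - nu) / 2]])
    g = 1 / np.sqrt(3)
    K = np.zeros((8, 8))
    for xi in (-g, g):
        for eta in (-g, g):
            # Derivatives of the bilinear shape functions w.r.t. (xi, eta).
            dN = 0.25 * np.array(
                [[-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)],
                 [-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)]])
            J = dN @ xy                    # 2x2 Jacobian; xy is 4x2
            dNx = np.linalg.solve(J, dN)   # derivatives w.r.t. (x, y)
            B = np.zeros((3, 8))           # strain-displacement matrix
            B[0, 0::2] = dNx[0]
            B[1, 1::2] = dNx[1]
            B[2, 0::2] = dNx[1]
            B[2, 1::2] = dNx[0]
            K += B.T @ D @ B * np.linalg.det(J) * t
    return K

# K = quad4_stiffness(np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]]))
```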
Abstract:
Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is all about finding interesting hidden patterns in a huge historical database. As an example, from a sales database, one can use data mining to find an interesting pattern like "people who buy magazines tend to buy newspapers also". From the sales point of view, the advantage is that these items can be placed together in the shop to increase sales. In this research work, data mining is applied to the domain of placement chance prediction, since taking a wise career decision is crucial for anybody. In India, technical manpower analysis is carried out by an organization named the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The Nodal Centre collects placement information by sending postal questionnaires to graduated students on a regular basis. From this raw data available in the nodal centre, a historical database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex and a particular engineering branch. For each such combination of attributes from the historical database of student records, the corresponding placement chance is computed and stored in the database. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student with one of the above combinations of criteria. A detailed performance comparison of the various data mining models is also done. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. A strategy to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, is also proposed. Finally, this research work puts forward a new data mining algorithm, namely C4.5*stat, for numeric data sets, which has been shown to have competitive accuracy over the standard benchmarking UCI data sets. It also proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work passes through all four dimensions of a typical data mining research work, namely application to a domain, development of classifier models, optimization and ensemble methods.
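A hybrid stacking ensemble of heterogeneous learners can be sketched with scikit-learn as below; the base learners and meta-learner here are assumptions for illustration, since the abstract does not list the exact stack:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Heterogeneous base learners feed a meta-learner via cross-validated
# predictions; the meta-learner combines them into the final decision.
stack = StackingClassifier(
    estimators=[('tree', DecisionTreeClassifier()),
                ('nb', GaussianNB()),
                ('rf', RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)
# stack.fit(X_train, y_train)   # X: encoded rank range/reservation/sector/sex
# stack.predict(X_new)          # predicted placement chance class
```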
Abstract:
An attempt is made by the researcher to establish a theory of discrete functions in the complex plane. Classical analysis, q-basic theory, monodiffric theory, preholomorphic theory and q-analytic theory have been utilised to develop concepts like differentiation, integration and special functions.
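As background for the monodiffric theory mentioned above, one standard discrete analogue of the Cauchy-Riemann equations (Isaacs' monodiffric functions of the first kind, on the Gaussian integer lattice) is quoted below only to fix ideas; it is not claimed to be the thesis's own definition:

```latex
\[
  \frac{f(z + i) - f(z)}{i} \;=\; f(z + 1) - f(z),
  \qquad z \in \mathbb{Z} + i\,\mathbb{Z}.
\]
```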
Abstract:
Median filtering is a simple digital non-linear signal smoothing operation in which the median of the samples in a sliding window replaces the sample at the middle of the window. The resulting filtered sequence tends to follow polynomial trends in the original sample sequence. The median filter preserves signal edges while filtering out impulses. Due to this property, median filtering is finding applications in many areas of image and speech processing. Though median filtering is simple to realise digitally, its properties are not easily analysed with standard analysis techniques.
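A minimal sketch of the operation (the window length and edge replication are common but not the only choices):

```python
import numpy as np

def median_filter(x, window=5):
    """Sliding-window median: replace each sample by the median of the
    `window` samples centred on it (edges handled by replication)."""
    half = window // 2
    padded = np.concatenate([np.repeat(x[0], half), x, np.repeat(x[-1], half)])
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

# The filter removes an impulse while leaving a ramp's edges intact:
# median_filter(np.array([0, 1, 2, 9, 4, 5, 6]), 3) -> [0, 1, 2, 4, 5, 5, 6]
```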
Abstract:
Biometrics deals with the physiological and behavioural characteristics of an individual to establish identity. Fingerprint based authentication is the most advanced biometric authentication technology. The minutiae based fingerprint identification method offers a reasonable identification rate. The minutiae map consists of about 70-100 minutiae points, and matching accuracy drops as the size of the database grows. Hence it is essential to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global singularity based fingerprint representation is proposed. The fingerprint baseline, which is the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vector comprises the polygonal angles, sides, area, type and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae based recognition method in terms of computation time, receiver operating characteristics (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used for identification of a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation based artificial neural network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with a VQ based Euclidean minimum distance classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multifinger feature level fusion based fingerprint recognition system is developed, and its performance is measured in terms of the ROC curve. Score level fusion of the fingerprint and speech based recognition systems is done, and 100% accuracy is achieved for a considerable range of matching thresholds.
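The speech front end described (MFCC frames clustered by k-means) might be sketched as follows with `librosa` and scikit-learn; `n_mfcc` and the codebook size `k` are illustrative, not the work's settings:

```python
import librosa
import numpy as np
from sklearn.cluster import KMeans

def speech_code(wav_path, n_mfcc=13, k=16):
    """Cluster the MFCC frames of one utterance into a k-entry codebook;
    the flattened cluster centres give a fixed-length feature vector that
    a backpropagation network (or a VQ matcher) can then classify."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # frames x coeffs
    km = KMeans(n_clusters=k, n_init=10).fit(mfcc)
    return km.cluster_centers_.ravel()
```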
Abstract:
Speech is the most natural means of communication among human beings, and speech processing and recognition have been intensive areas of research for the last five decades. Since speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. In this work, a speech recognition system is developed for recognizing speaker independent spoken digits in Malayalam. Voice signals are sampled directly from the microphone. The proposed method is implemented for 1000 speakers uttering 10 digits each. Since the speech signals are affected by background noise, the signals are tuned by removing the noise using a wavelet denoising method based on soft thresholding. Here, the features are extracted using Discrete Wavelet Transforms (DWT), because they are well suited to processing non-stationary signals like speech, owing to their multi-resolutional, multi-scale analysis characteristics. Speech recognition is a multiclass classification problem, so the feature vector set obtained is classified using three classifiers, namely Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Naive Bayes, all of which are capable of handling multiple classes. During the classification stage, the classifiers are trained on feature vectors of known patterns and then tested using the test data set. The performances of all these classifiers are evaluated based on recognition accuracy, and all three methods produced good results. DWT with ANN produced a recognition accuracy of 89%, the SVM and DWT combination produced an accuracy of 86.6%, and the Naive Bayes and DWT combination produced an accuracy of 83.5%. ANN is found to be the best among the three methods.
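One standard recipe for the soft-thresholding denoiser referred to above is the universal threshold with a MAD noise estimate, sketched below; the thesis's exact threshold rule is not stated in the abstract:

```python
import numpy as np
import pywt

def denoise_soft(signal, wavelet='db4', level=4):
    """Wavelet denoising with soft thresholding: shrink the detail
    coefficients by the universal threshold, with the noise level
    estimated from the finest detail band via the median absolute
    deviation, then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```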
Abstract:
Treating e-mail filtering as a binary text classification problem, researchers have applied several statistical learning algorithms to e-mail corpora with promising results. This paper examines the performance of a Naive Bayes classifier using different approaches to feature selection and tokenization on different e-mail corpora.
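One baseline configuration of such a filter, as a scikit-learn sketch; the tokenization pattern and the chi-squared feature selection below are examples of the knobs the paper varies, not its specific settings:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Bag-of-words tokenization -> chi-squared feature selection -> NB.
spam_filter = make_pipeline(
    CountVectorizer(lowercase=True, token_pattern=r'\b\w+\b'),
    SelectKBest(chi2, k=500),
    MultinomialNB())
# spam_filter.fit(train_texts, train_labels)   # labels: spam / ham
# spam_filter.predict(["limited time offer!!!"])
```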
Abstract:
Learning Disability (LD) is a classification covering several disorders in which a child has difficulty learning in a typical manner, usually caused by an unknown factor or factors. LD affects about 15% of children enrolled in schools. The prediction of learning disability is a complicated task, since the identification of LD from diverse features or signs is a complicated problem. There is no cure for learning disabilities, and they are life-long. The problems of children with specific learning disabilities have been a cause of concern to parents and teachers for some time. The aim of this paper is to develop a new algorithm for imputing missing values and to determine the significance of the missing value imputation method and the dimensionality reduction method in the performance of fuzzy and neuro-fuzzy classifiers, with specific emphasis on prediction of learning disabilities in school age children. In the basic assessment method for prediction of LD, checklists are generally used, and the data cases thus collected depend heavily on the mood of the children and may also contain redundant as well as missing values. Therefore, in this study, we propose a new correlation based algorithm for imputing the missing values, together with Principal Component Analysis (PCA) for removing irrelevant attributes. The study found that the preprocessing methods applied improve the quality of the data and thereby increase the accuracy of the classifiers. The system is implemented in MathWorks MATLAB 7.10. The results obtained from this study illustrate that the developed missing value imputation method is a valuable contribution to the prediction system and is capable of improving the performance of a classifier.
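A rough sketch of a correlation based imputation followed by PCA; the fill rule below (a least-squares line through the most correlated complete attribute) is an assumption for illustration, and the paper's exact algorithm may differ:

```python
import numpy as np
from sklearn.decomposition import PCA

def impute_by_correlation(X):
    """Fill each missing cell (np.nan) from the column most correlated
    with it, via a line fitted on rows where both columns are observed."""
    X = X.copy()
    corr = np.ma.corrcoef(np.ma.masked_invalid(X), rowvar=False).filled(0)
    for j in range(X.shape[1]):
        missing = np.isnan(X[:, j])
        if not missing.any():
            continue
        order = np.argsort(-np.abs(corr[j]))           # best partners first
        k = next(p for p in order if p != j)
        both = ~np.isnan(X[:, j]) & ~np.isnan(X[:, k])
        slope, intercept = np.polyfit(X[both, k], X[both, j], 1)
        fill = missing & ~np.isnan(X[:, k])
        X[fill, j] = slope * X[fill, k] + intercept
    return X

# X_filled = impute_by_correlation(X_raw)
# X_small = PCA(n_components=0.95).fit_transform(X_filled)  # keep 95% variance
```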