839 results for Polynomial Classifier
Abstract:
Learning Disability (LD) is a classification covering several disorders in which a child has difficulty learning in a typical manner, usually caused by one or more unknown factors. LD affects about 15% of children enrolled in schools. Predicting LD is a complicated task, since identifying it from diverse features or signs is itself a difficult problem. There is no cure for learning disabilities, and they are life-long. The problems of children with specific learning disabilities have long been a cause of concern to parents and teachers. The aim of this paper is to develop a new algorithm for imputing missing values and to determine the significance of the missing value imputation method and the dimensionality reduction method for the performance of fuzzy and neuro-fuzzy classifiers, with specific emphasis on the prediction of learning disabilities in school-age children. In the basic assessment method for predicting LD, checklists are generally used; the data cases thus collected depend heavily on the mood of the children and may contain redundant as well as missing values. Therefore, in this study we propose a new correlation-based algorithm for imputing the missing values and Principal Component Analysis (PCA) for removing irrelevant attributes. The study found that the preprocessing methods applied improve the quality of the data and thereby increase the accuracy of the classifiers. The system is implemented in MathWorks MATLAB 7.10. The results obtained from this study show that the developed missing value imputation method is a valuable contribution to the prediction system and is capable of improving the performance of a classifier.
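The abstract names the method but not its mechanics. The sketch below shows one plausible reading of correlation-based imputation: for each incomplete attribute, regress on the most correlated other attribute over the jointly observed rows and predict the gaps. The column names and toy data are hypothetical; this is not the authors' exact algorithm.

```python
import numpy as np
import pandas as pd

def correlation_impute(df: pd.DataFrame) -> pd.DataFrame:
    """Fill gaps in each incomplete column using a least-squares fit
    against the column it correlates with most strongly."""
    out = df.copy()
    corr = df.corr().abs()                       # pairwise correlations, NaN-aware
    for col in df.columns[df.isna().any()]:
        partner = corr[col].drop(col).idxmax()   # most correlated other column
        both = df[[col, partner]].dropna()       # rows where both are observed
        slope, intercept = np.polyfit(both[partner], both[col], 1)
        mask = out[col].isna() & df[partner].notna()
        out.loc[mask, col] = intercept + slope * df.loc[mask, partner]
    return out

# Hypothetical checklist scores with gaps
data = pd.DataFrame({"reading": [3, 4, np.nan, 2, 5],
                     "writing": [3, 5, 4, 2, 5],
                     "math":    [2, np.nan, 4, 1, 4]})
print(correlation_impute(data))
```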
Abstract:
Given a profile (a multiset of vertices) of a graph, the remoteness of a vertex is the sum of its distances to the elements of the profile. The set of vertices that maximize (minimize) the remoteness is the antimedian (median) set of the profile. It is proved that for an arbitrary graph G and S ⊆ V(G) it can be decided in polynomial time whether S is the antimedian set of some profile. Graphs in which every antimedian set is connected are also considered.
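As a concrete illustration of the definitions (not of the paper's decision algorithm), the following sketch computes the median and antimedian sets of a profile by brute force; the graph and profile are arbitrary examples.

```python
import networkx as nx

def remoteness(G, v, profile):
    """Sum of shortest-path distances from v to the profile's elements."""
    dist = nx.single_source_shortest_path_length(G, v)
    return sum(dist[u] for u in profile)

def median_and_antimedian(G, profile):
    """Vertices minimizing / maximizing the remoteness of the profile."""
    r = {v: remoteness(G, v, profile) for v in G}
    lo, hi = min(r.values()), max(r.values())
    return ({v for v, d in r.items() if d == lo},    # median set
            {v for v, d in r.items() if d == hi})    # antimedian set

G = nx.path_graph(5)                         # path 0-1-2-3-4
print(median_and_antimedian(G, [0, 1, 4]))   # ({1}, {4})
```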
Abstract:
Learning Disability (LD) is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. LD affects about 15% of children enrolled in schools. The prediction of LD is a vital and intricate task. The aim of this paper is to design an effective and powerful tool, using two intelligent methods, viz. Artificial Neural Network and Adaptive Neuro-Fuzzy Inference System, for measuring the percentage of LD affecting school-age children. In this study, we propose soft computing methods for data preprocessing to improve the accuracy of the tool as well as of the classifier. The data preprocessing is performed through Principal Component Analysis for attribute reduction, and a closest-fit algorithm is used for imputing missing values. The main idea in developing the LD prediction tool is not only to predict the LD present in children but also to measure its percentage along with its class (low, minor, or major). The system is implemented in MathWorks MATLAB 7.10. The results obtained from this study show that the designed prediction tool is capable of measuring LD effectively.
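The closest-fit imputation named here is a record-similarity scheme; a minimal generic sketch is given below, in which each missing entry is copied from the most similar row under a mean-squared distance over jointly observed attributes. This is a common reading of "closest fit", not necessarily the paper's exact variant.

```python
import numpy as np

def closest_fit_impute(X):
    """For every missing entry (i, j), copy the value X[k, j] from the row k
    that is closest to row i on their shared observed attributes."""
    X = np.asarray(X, dtype=float)
    out = X.copy()
    for i, j in np.argwhere(np.isnan(X)):
        best, best_d = None, np.inf
        for k in range(len(X)):
            if k == i or np.isnan(X[k, j]):
                continue
            shared = ~np.isnan(X[i]) & ~np.isnan(X[k])   # jointly observed columns
            if not shared.any():
                continue
            d = np.mean((X[i, shared] - X[k, shared]) ** 2)
            if d < best_d:
                best, best_d = k, d
        if best is not None:
            out[i, j] = X[best, j]
    return out

nan = np.nan
print(closest_fit_impute([[1, 2, 3], [1, nan, 3], [9, 8, 7]]))
```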
Abstract:
This paper presents a robust Content-Based Video Retrieval (CBVR) system. The system retrieves similar videos based on a local feature descriptor called SURF (Speeded Up Robust Features). The high dimensionality of SURF-like feature descriptors causes huge storage consumption during the indexing of video information. To achieve dimensionality reduction on the SURF feature descriptor, the system employs a stochastic dimensionality reduction method and thus provides model data for the videos. On retrieval, the model data of the test clip is matched to similar videos using a minimum distance classifier. The performance of the system is evaluated using two different minimum distance classifiers during the retrieval stage. The experimental analyses performed on the system show a retrieval performance of 78%. The study also analyses the efficiency of the low-dimensional SURF descriptor.
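The retrieval step relies on a minimum distance classifier. A standard instance is the nearest-class-mean rule sketched below; the descriptor extraction (SURF) and the stochastic dimensionality reduction are assumed to have already produced the low-dimensional vectors, and the use of Euclidean distance is an assumption.

```python
import numpy as np

class MinimumDistanceClassifier:
    """Nearest-class-mean rule: represent each video class by the mean of
    its training descriptors and assign a query to the closest mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from every query to every class mean
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Toy usage with random 16-dimensional "reduced descriptors"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 16)), rng.normal(4, 1, (20, 16))])
y = np.array([0] * 20 + [1] * 20)
clf = MinimumDistanceClassifier().fit(X, y)
print(clf.predict(X[:3]), clf.predict(X[-3:]))
```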
Abstract:
The paper summarizes the design and implementation of a quadratic edge detection filter, based on the Volterra series, for enhancing calcifications in mammograms. The proposed filter can account for much of the polynomial nonlinearity inherent in the input mammogram image and can replace conventional edge detectors such as the Laplacian and Gaussian. The filter gives rise to improved visualization and early detection of microcalcifications which, if left undetected, can lead to breast cancer. The performance of the filter is analyzed and found superior to conventional spatial edge detectors.
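A classical example of a quadratic (second-order Volterra) image operator is the 2-D Teager-like energy operator sketched below; it is given to illustrate the filter class, not as the paper's specific kernel, whose coefficients the abstract does not state.

```python
import numpy as np

def teager_2d(img):
    """2-D Teager-like quadratic operator: every output term is a product
    of two input pixels, making it a pure second-order Volterra filter.
    Its coefficients sum to zero, so flat regions give zero response
    while fine bright detail (e.g. calcifications) is enhanced."""
    x = np.asarray(img, dtype=float)
    y = np.zeros_like(x)
    c = x[1:-1, 1:-1]
    y[1:-1, 1:-1] = (3.0 * c**2
                     - x[1:-1, :-2] * x[1:-1, 2:]       # horizontal neighbor pair
                     - x[:-2, 1:-1] * x[2:, 1:-1]       # vertical neighbor pair
                     - 0.5 * x[:-2, :-2] * x[2:, 2:]    # one diagonal pair
                     - 0.5 * x[:-2, 2:] * x[2:, :-2])   # other diagonal pair
    return y

# Toy check: a single bright pixel on a flat background is strongly enhanced
img = np.ones((5, 5)); img[2, 2] = 5.0
print(teager_2d(img))
```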
Abstract:
Modeling nonlinear systems using the Volterra series is a century-old method, but practical realizations were long hampered by hardware inadequate for the increased computational complexity stemming from its use. Interest has recently been renewed in designing and implementing filters that can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that the nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity of order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next value of the input speech signal. But this linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. A quadratic predictor is therefore designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
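The following is a minimal sketch of the parallel linear-plus-quadratic prediction idea: a predictor fit by least squares on the p past samples and their pairwise products. The order p and the training signal are illustrative assumptions; the DPCM quantizer and the channel are omitted.

```python
import numpy as np

def features(x, n, p):
    """Past-sample vector plus all pairwise products (quadratic kernel)."""
    past = x[n - p:n]
    quad = [past[i] * past[j] for i in range(p) for j in range(i, p)]
    return np.concatenate([past, quad])

def fit_predictor(x, p=3):
    """Least-squares fit of the combined linear + quadratic predictor."""
    A = np.array([features(x, n, p) for n in range(p, len(x))])
    w, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return w

# Toy signal with a mild quadratic nonlinearity
rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.05 * t) + 0.3 * np.sin(0.05 * t) ** 2 + 0.01 * rng.standard_normal(t.size)
w, p = fit_predictor(x), 3
pred = np.array([features(x, n, p) @ w for n in range(p, len(x))])
print("prediction MSE:", np.mean((x[p:] - pred) ** 2))
```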
Abstract:
In this paper an attempt has been made to accurately determine the number of Premature Ventricular Contraction (PVC) cycles in a given electrocardiogram (ECG) using a wavelet constructed from multiple Gaussian functions. It is difficult to assess the ECGs of patients who are continuously monitored over a long period of time, so the proposed classification method will help doctors determine the severity of PVC in a patient. Principal Component Analysis (PCA) and a simple classifier have been used in addition to the specially developed wavelet transform. The proposed wavelet has been designed using multiple Gaussian functions which, when summed, resemble a normal ECG. The number of Gaussians used depends on the number of peaks present in a normal ECG. The developed wavelet satisfies all the properties of a traditional continuous wavelet and was optimized using a genetic algorithm (GA). ECG records from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database have been used for validation. Out of the 8694 ECG cycles used for evaluation, the classification algorithm responded with an accuracy of 97.77%. In order to compare the performance of the new wavelet, classification was also performed using standard wavelets such as Morlet, Meyer, bior3.9, db5, db3, sym3 and Haar. The new wavelet outperforms the rest.
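To make the construction concrete, the sketch below builds a candidate mother wavelet as a sum of five Gaussians, one per ECG peak (P, Q, R, S, T), and subtracts the mean so the zero-integral requirement of wavelet admissibility holds on the sampled grid. The amplitude/center/width triples are illustrative placeholders; in the paper they are tuned by a genetic algorithm.

```python
import numpy as np

def gaussian_sum_wavelet(t, params):
    """Sum of Gaussians a*exp(-((t-c)/s)^2), shifted to zero mean."""
    w = sum(a * np.exp(-((t - c) / s) ** 2) for a, c, s in params)
    return w - w.mean()

t = np.linspace(-1, 1, 512)
pqrst = [( 0.20, -0.45, 0.08),   # P wave (hypothetical parameters)
         (-0.30, -0.12, 0.03),   # Q
         ( 1.00,  0.00, 0.03),   # R
         (-0.40,  0.12, 0.03),   # S
         ( 0.35,  0.45, 0.10)]   # T
psi = gaussian_sum_wavelet(t, pqrst)
print(abs(psi.sum()) < 1e-9)     # zero mean, as admissibility requires
```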
Abstract:
The basic concepts of digital signal processing are taught to students of engineering and science. The focus of such a course is on linear, time-invariant systems. The question of what happens when the system is governed by a quadratic or cubic equation remains unanswered in the vast majority of the literature on signal processing. Light was shed on this problem when V. John Mathews and Giovanni L. Sicuranza published the book Polynomial Signal Processing, which opened up an unseen vista of polynomial systems for signal and image processing. The book presented the theory and implementations of both adaptive and non-adaptive FIR and IIR quadratic systems, which offer improved performance over conventional linear systems. The theory of quadratic systems is a largely unexplored area of research that involves computationally intensive work. Once the area of research is selected, the next issue is the choice of the software tool for carrying out the work. Conventional languages like C and C++ are easily eliminated, as they are not interpreted and lack good-quality plotting libraries. MATLAB proved to be very slow, as did SCILAB and Octave. The search for a language for scientific computing that was as fast as C, but with a good-quality plotting library, ended in Python, a distant relative of LISP, which proved to be ideal for scientific computing. An account of the use of Python, its scientific computing package scipy and the plotting library pylab is given in the appendix. Initially, the work focused on designing predictors that exploit the polynomial nonlinearities inherent in speech generation mechanisms. Soon the work diverted into medical image processing, which offered more potential for quadratic methods. The major focus in this area is on quadratic edge detection methods for retinal images and fingerprints, as well as on de-noising raw MRI signals.
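For readers unfamiliar with the system class the thesis builds on, here is a minimal sketch of a non-adaptive FIR quadratic system in the sense of Mathews and Sicuranza: a linear convolution term plus a second-order Volterra term. The kernels h1 and h2 are illustrative, not taken from the thesis.

```python
import numpy as np

def quadratic_fir(x, h1, h2):
    """y[n] = sum_i h1[i] x[n-i] + sum_i sum_j h2[i, j] x[n-i] x[n-j]."""
    N = len(h1)
    y = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        past = x[n - N + 1:n + 1][::-1]       # x[n], x[n-1], ..., x[n-N+1]
        y[n] = h1 @ past + past @ h2 @ past   # linear + quadratic response
    return y

x = np.sin(0.1 * np.arange(100))
h1 = np.array([0.5, 0.3, 0.1])                # example linear kernel
h2 = 0.05 * np.eye(3)                         # example diagonal quadratic kernel
print(quadratic_fir(x, h1, h2)[:5])
```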
Abstract:
The Bieberbach conjecture about the coefficients of univalent functions of the unit disk was formulated by Ludwig Bieberbach in 1916 [Bieberbach1916]. The conjecture states that the coefficients of univalent functions are majorized by those of the Koebe function, which maps the unit disk onto a radially slit plane. The Bieberbach conjecture was quite a difficult problem, and it was surprisingly proved by Louis de Branges in 1984 [deBranges1985], at a time when some experts were rather trying to disprove it. It turned out that an inequality of Askey and Gasper [AskeyGasper1976] about certain hypergeometric functions played a crucial role in de Branges' proof. In this article I describe the historical development of the conjecture and the main ideas that led to the proof. The proof of Lenard Weinstein (1991) [Weinstein1991] follows, and it is shown how the two proofs are interrelated. Both proofs depend on polynomial systems that are directly related to the Koebe function. At this point algorithms of computer algebra come into play, and computer demonstrations are given that show how important parts of the proofs can be automated.
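As a small computer-algebra illustration in the spirit of the article's demonstrations (using SymPy rather than the article's own system), the extremal role of the Koebe function is visible in its Taylor coefficients: a_n = n, exactly the bound |a_n| <= n asserted by the conjecture.

```python
from sympy import symbols, series

z = symbols('z')
koebe = z / (1 - z) ** 2          # Koebe function k(z) = z/(1-z)^2
print(series(koebe, z, 0, 8))     # z + 2*z**2 + 3*z**3 + ... : a_n = n
```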
Abstract:
This article surveys the classical orthogonal polynomial systems of the Hahn class, which are solutions of second-order differential, difference or q-difference equations. Orthogonal families satisfy three-term recurrence equations. Example applications are given of an algorithm, implemented in the computer algebra system Maple, that determines whether a three-term recurrence equation has solutions in the Hahn class. Modifications of these families, in particular associated orthogonal systems, satisfy fourth-order operator equations; a factorization of these equations leads to a solution basis.
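As a quick concrete instance of the three-term recurrences mentioned above (verified in SymPy rather than the article's Maple implementation), the Hermite polynomials satisfy H_{n+1}(x) = 2x*H_n(x) - 2n*H_{n-1}(x):

```python
from sympy import symbols, hermite, expand

x = symbols('x')
for n in range(1, 6):
    lhs = hermite(n + 1, x)
    rhs = 2 * x * hermite(n, x) - 2 * n * hermite(n - 1, x)
    assert expand(lhs - rhs) == 0    # recurrence holds identically
print("three-term recurrence verified for n = 1..5")
```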
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. All types of restarting automata considered in the literature up to now accept at least the deterministic context-free languages. Here we introduce and study a new type of restarting automaton, the so-called t-RL-automaton, which is an RL-automaton restricted to a window of size one and working under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. We study the gap-complexity of these automata. The membership problem for a language accepted by a t-RL-automaton with a bounded number of gaps can be solved in polynomial time. On the other hand, t-RL-automata with an unbounded number of gaps accept NP-complete languages.
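To give a feel for the cycle-and-restart mode of operation (a toy illustration only, not the formal t-RL model; in particular the size-one window is not modelled faithfully here), the sketch below accepts, with two delete steps per cycle, exactly the words that reduce to the empty word by repeatedly deleting a factor "ab":

```python
def accepts(word: str) -> bool:
    """Each cycle deletes one factor 'ab' (two delete steps, so t = 2) and
    restarts; the word is accepted once the tape is empty (minimal
    acceptance condition)."""
    tape = list(word)
    while tape:
        for i in range(len(tape) - 1):
            if tape[i] == "a" and tape[i + 1] == "b":
                del tape[i:i + 2]     # the two delete steps of this cycle
                break
        else:
            return False              # no rewrite applicable: reject
    return True                       # tape emptied: accept

print(accepts("aabb"), accepts("abab"), accepts("ba"))  # True True False
```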
Abstract:
In a previous paper we determined a generic formula for the polynomial solution families of the well-known differential equation of hypergeometric type, σ(x)y''_n(x) + τ(x)y'_n(x) − λ_n y_n(x) = 0. In this paper we give another such formula, which enables us to present a generic formula for the values of monic classical orthogonal polynomials at the boundary points of their interval of definition.
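As a concrete instance of the equation (a SymPy check, not the paper's formula), the Legendre polynomials solve it with σ(x) = 1 − x², τ(x) = −2x and λ_n = −n(n+1), and take the boundary value P_n(1) = 1:

```python
from sympy import symbols, legendre, diff, simplify

x = symbols('x')
sigma, tau = 1 - x**2, -2*x
for n in range(1, 6):
    y = legendre(n, x)
    lam = -n * (n + 1)    # lambda_n in sigma*y'' + tau*y' - lambda_n*y = 0
    assert simplify(sigma * diff(y, x, 2) + tau * diff(y, x) - lam * y) == 0
    assert y.subs(x, 1) == 1          # boundary value at x = 1
print("Legendre polynomials verified for n = 1..5")
```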
Abstract:
In his 1984 proof of the Bieberbach and Milin conjectures, de Branges used a positivity result for special functions which follows from an identity about Jacobi polynomial sums that was published by Askey and Gasper in 1976. The de Branges functions T_k^n(t) are defined as the solutions of a system of differential recurrence equations with suitably given initial values. The essential fact used in the proof of the Bieberbach and Milin conjectures is the statement (T_k^n)'(t) ≤ 0. In 1991 Weinstein presented another proof of the Bieberbach and Milin conjectures, also using a special function system Λ_k^n(t) which (by Todorov and Wilf) was realized to be directly connected with de Branges' functions, (T_k^n)'(t) = −k·Λ_k^n(t), so that the positivity results in the two proofs are essentially the same. In this paper we study differential recurrence equations equivalent to de Branges' original ones and show that many solutions of these differential recurrence equations do not change sign, so that the above inequality is less surprising than it might appear. Furthermore, we present a multiparameterized hypergeometric family of solutions of the de Branges differential recurrence equations, showing that such solutions are not rare at all.
Abstract:
In a similar manner as in some previous papers, where explicit algorithms were given for finding the differential equations satisfied by holonomic functions, in this paper we deal with the space of q-holonomic functions, which are the solutions of linear q-differential equations with polynomial coefficients. The sum and product of q-holonomic functions, as well as their composition with power functions, are again q-holonomic, and the resulting q-differential equations can be computed algorithmically.
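The closure properties have a well-known classical (q = 1) counterpart that can be tried out directly in SymPy's holonomic module; the snippet below, a stand-in illustration rather than a q-holonomic computation, derives an annihilating operator for the sum of two holonomic functions.

```python
from sympy import symbols, QQ
from sympy.holonomic import DifferentialOperators, HolonomicFunction

x = symbols('x')
R, Dx = DifferentialOperators(QQ.old_poly_ring(x), 'Dx')
sin_h = HolonomicFunction(Dx**2 + 1, x, 0, [0, 1])   # annihilator of sin(x)
exp_h = HolonomicFunction(Dx - 1, x, 0, [1])         # annihilator of exp(x)
total = sin_h + exp_h     # closure under addition, computed algorithmically
print(total)              # holonomic representation of sin(x) + exp(x)
print(total.to_expr())    # converts back to exp(x) + sin(x)
```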
Abstract:
Data mining means summarizing information from large amounts of raw data. It is one of the key technologies in many areas of the economy, science, administration and the internet. In this report we introduce an approach for using evolutionary algorithms to breed fuzzy classifier systems. This approach was exercised, as part of a structured procedure, by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
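To indicate what "breeding a fuzzy classifier" can look like in the simplest case (a generic sketch under assumed design choices, not the students' system), the code below evolves the parameters of one triangular membership function per class on a single feature, selecting genomes by training accuracy:

```python
import numpy as np

rng = np.random.default_rng(1)

def triangular(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def classify(X, genome):
    """Assign each sample to the class with the highest membership."""
    return np.argmax([triangular(X, *abc) for abc in genome], axis=0)

def evolve(X, y, n_classes=2, pop=40, gens=60):
    """Truncation selection plus Gaussian mutation over (a, b, c) triples."""
    P = np.sort(rng.uniform(X.min(), X.max(), (pop, n_classes, 3)), axis=2)
    for _ in range(gens):
        fitness = np.array([(classify(X, g) == y).mean() for g in P])
        parents = P[np.argsort(fitness)[-pop // 2:]]
        children = np.sort(parents + rng.normal(0, 0.1, parents.shape), axis=2)
        P = np.concatenate([parents, children])
    return max(P, key=lambda g: (classify(X, g) == y).mean())

# Toy 1-D data: two overlapping Gaussian classes
X = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
y = np.array([0] * 100 + [1] * 100)
best = evolve(X, y)
print("training accuracy:", (classify(X, best) == y).mean())
```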