931 results for Permutation Polynomial
Abstract:
The researcher attempts to establish a theory of discrete functions in the complex plane. Classical analysis, q-basic theory, monodiffric theory, preholomorphic theory and q-analytic theory are utilised to develop concepts such as differentiation, integration and special functions.
Abstract:
Median filtering is a simple digital non-linear signal smoothing operation in which the median of the samples in a sliding window replaces the sample at the middle of the window. The resulting filtered sequence tends to follow polynomial trends in the original sample sequence. The median filter preserves signal edges while filtering out impulses. Owing to this property, median filtering is finding applications in many areas of image and speech processing. Though median filtering is simple to realise digitally, its properties are not easily analysed with standard analysis techniques.
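The sliding-window operation described above can be sketched in a few lines (a minimal illustration, not an optimised implementation; the function name and the edge-replication policy are our own choices):

```python
def median_filter(x, width=3):
    """Sliding-window median: each output sample is the median of the
    `width` input samples centred on it (edge samples are replicated)."""
    assert width % 2 == 1, "window width must be odd"
    half = width // 2
    # Replicate the edge samples so the window is always full.
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    out = []
    for i in range(len(x)):
        window = sorted(padded[i:i + width])
        out.append(window[half])  # the median of the sorted window
    return out

# An isolated impulse is removed while the step edge is preserved:
signal = [0, 0, 9, 0, 0, 5, 5, 5]
print(median_filter(signal, 3))  # -> [0, 0, 0, 0, 0, 5, 5, 5]
```

The example shows the two properties the abstract mentions: the impulse at index 2 disappears, while the edge between 0 and 5 stays exactly where it was.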
Abstract:
The set of vertices that maximize (minimize) the remoteness is the antimedian (median) set of the profile. It is proved that for an arbitrary graph G and S ⊆ V(G) it can be decided in polynomial time whether S is the antimedian set of some profile. Graphs in which every antimedian set is connected are also considered.
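The definitions can be made concrete with a small sketch, assuming the standard convention that the remoteness of a vertex is the sum of its distances to the profile elements (function names and the toy graph are illustrative):

```python
from collections import deque

def distances_from(graph, source):
    """BFS distances from `source` in an unweighted graph
    given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def median_and_antimedian(graph, profile):
    """Remoteness of v = sum of distances from v to the profile (a
    multiset of vertices); the median set minimises it, the antimedian
    set maximises it."""
    dist = {v: distances_from(graph, v) for v in graph}
    remoteness = {v: sum(dist[v][p] for p in profile) for v in graph}
    lo, hi = min(remoteness.values()), max(remoteness.values())
    median = {v for v, r in remoteness.items() if r == lo}
    antimedian = {v for v, r in remoteness.items() if r == hi}
    return median, antimedian

# A path a-b-c-d with profile (a, a, d): the median hugs the doubled
# vertex a, while the antimedian is the far endpoint d.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
med, antimed = median_and_antimedian(path, ['a', 'a', 'd'])
```

Computing remoteness for every vertex takes one BFS per vertex, which is already polynomial; the nontrivial result in the abstract is deciding antimedian-ness over all possible profiles.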
Abstract:
The paper summarizes the design and implementation of a quadratic edge detection filter, based on the Volterra series, for enhancing calcifications in mammograms. The proposed filter can account for much of the polynomial nonlinearities inherent in the input mammogram image and can replace conventional edge detectors like the Laplacian, Gaussian, etc. The filter gives rise to improved visualization and early detection of microcalcifications, which, if left undetected, can lead to breast cancer. The performance of the filter is analyzed and found superior to conventional spatial edge detectors.
Abstract:
Modeling nonlinear systems using the Volterra series is a century-old method, but practical realizations were hampered by inadequate hardware to handle the increased computational complexity stemming from its use. Interest has recently been renewed in designing and implementing filters which can model much of the polynomial nonlinearities inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanisms and focus instead on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next possible value of the input speech signal. But this linear system does not account for the inherent nonlinearities in speech signals arising out of multiple reflections in the vocal tract. So a quadratic predictor is designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
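A minimal sketch of how an order-2 Volterra predictor augments a linear one. The kernels and the toy signal below are hypothetical, chosen only so that the generating mechanism has a known quadratic term; this is not the speech coder described in the paper:

```python
def linear_predict(x, n, a):
    """Linear predictor: x_hat[n] = sum_i a[i] * x[n-1-i]."""
    return sum(a[i] * x[n - 1 - i] for i in range(len(a)))

def quadratic_predict(x, n, a, b):
    """Order-2 Volterra predictor: the linear term plus a quadratic
    kernel b[i][j] acting on products of past samples."""
    est = linear_predict(x, n, a)
    m = len(b)
    for i in range(m):
        for j in range(m):
            est += b[i][j] * x[n - 1 - i] * x[n - 1 - j]
    return est

def mse(x, predict):
    start = 2  # the predictors need two past samples
    errs = [(x[n] - predict(x, n)) ** 2 for n in range(start, len(x))]
    return sum(errs) / len(errs)

# A signal with a mild quadratic dependence on its own past:
x = [0.1, 0.2]
for _ in range(50):
    x.append(0.5 * x[-1] + 0.3 * x[-1] * x[-2])

a = [0.5, 0.0]                # linear kernel (hypothetical values)
b = [[0.0, 0.3], [0.0, 0.0]]  # quadratic kernel sees x[n-1] * x[n-2]
lin = mse(x, lambda x, n: linear_predict(x, n, a))
quad = mse(x, lambda x, n: quadratic_predict(x, n, a, b))
# The quadratic predictor matches the generation mechanism, so its
# prediction error vanishes, while the purely linear one cannot model
# the x[n-1] * x[n-2] term and retains a residual error.
```

In a DPCM-style coder it is this prediction residual that is quantised and transmitted, so a smaller mean square error translates directly into a better rate/fidelity trade-off.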
Abstract:
The basic concepts of digital signal processing are taught to students in engineering and science. The focus of the course is on linear, time-invariant systems. The question as to what happens when the system is governed by a quadratic or cubic equation remains unanswered in the vast majority of the literature on signal processing. Light was shed on this problem when John V. Mathews and Giovanni L. Sicuranza published the book Polynomial Signal Processing. This book opened up an unseen vista of polynomial systems for signal and image processing. The book presented the theory and implementations of both adaptive and non-adaptive FIR and IIR quadratic systems which offer improved performance over conventional linear systems. The theory of quadratic systems presents a pristine area of research that calls for computationally intensive work. Once the area of research is selected, the next issue is the choice of the software tool to carry out the work. Conventional languages like C and C++ were easily eliminated as they are not interpreted and lack good-quality plotting libraries. MATLAB proved to be very slow, as did SCILAB and Octave. The search for a language for scientific computing that was as fast as C, but with a good-quality plotting library, ended in Python, a distant relative of LISP. It proved to be ideal for scientific computing. An account of the use of Python, its scientific computing package scipy and the plotting library pylab is given in the appendix. Initially, work was focused on designing predictors that exploit the polynomial nonlinearities inherent in speech generation mechanisms. The work soon shifted to medical image processing, which offered more potential for the use of quadratic methods. The major focus in this area is on quadratic edge detection methods for retinal images and fingerprints as well as de-noising raw MRI signals.
Abstract:
Bank switching in embedded processors having partitioned memory architecture results in code size as well as run time overhead. This work presents an algorithm, and its application, to assist the compiler in eliminating the redundant bank switching codes introduced and in deciding the optimum data allocation to banked memory. A relation matrix formed for the memory bank state transition corresponding to each bank selection instruction is used for the detection of redundant codes. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data mapping scheme is subjected to a static machine code analysis which identifies the one with the minimum number of bank switching codes. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory blocks and other architectures, so that high performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
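The exhaustive search over data-to-bank assignments can be modelled roughly as follows. This is an illustrative abstraction only: a real allocator would respect bank capacities and the PIC bank-selection instructions, and would analyse compiled machine code rather than a variable access trace:

```python
from itertools import product

def switch_count(access_seq, bank_of):
    """Count the bank-selection instructions needed for a sequence of
    variable accesses: a switch is emitted only when the accessed
    variable lives in a different bank than the currently selected one,
    so redundant re-selections of the same bank are eliminated."""
    switches = 0
    current = None
    for var in access_seq:
        if bank_of[var] != current:
            switches += 1
            current = bank_of[var]
    return switches

def best_allocation(access_seq, variables, banks):
    """Try every assignment of variables to banks and keep the one
    whose (statically analysed) trace needs the fewest switches."""
    best = None
    for assignment in product(banks, repeat=len(variables)):
        bank_of = dict(zip(variables, assignment))
        cost = switch_count(access_seq, bank_of)
        if best is None or cost < best[0]:
            best = (cost, bank_of)
    return best

# Hypothetical access trace from a compiled program:
trace = ['a', 'b', 'a', 'b', 'c', 'c', 'a']
cost, alloc = best_allocation(trace, ['a', 'b', 'c'], [0, 1])
```

With no capacity constraint the search trivially co-locates all three variables (one initial bank selection); the interesting cases arise exactly when bank sizes force data to be split, which is where the exhaustive comparison of mappings pays off.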
Abstract:
One can do research in pointfree topology in two ways. The first is the contravariant way, where research is done in the category Frm but the ultimate objective is to obtain results in Loc. The other is the covariant way, carrying out research in the category Loc itself directly. According to Johnstone [23], "frame theory is lattice theory applied to topology, whereas locale theory is topology itself". Most of this thesis is written according to the first view. In this thesis, we make an attempt to study 1. the frame counterparts of maximal compactness, minimal Hausdorffness and reversibility, and 2. the automorphism group of a finite frame and its relation with the subgroups of the permutation group on the generator set of the frame.
Abstract:
The Bieberbach conjecture about the coefficients of univalent functions of the unit disk was formulated by Ludwig Bieberbach in 1916 [Bieberbach1916]. The conjecture states that the coefficients of univalent functions are majorized by those of the Koebe function, which maps the unit disk onto a radially slit plane. The Bieberbach conjecture was quite a difficult problem, and it was surprisingly proved by Louis de Branges in 1984 [deBranges1985] when some experts were rather trying to disprove it. It turned out that an inequality of Askey and Gasper [AskeyGasper1976] about certain hypergeometric functions played a crucial role in de Branges' proof. In this article I describe the historical development of the conjecture and the main ideas that led to the proof. The proof of Lenard Weinstein (1991) [Weinstein1991] follows, and it is shown how the two proofs are interrelated. Both proofs depend on polynomial systems that are directly related with the Koebe function. At this point algorithms of computer algebra come into play, and computer demonstrations are given that show how important parts of the proofs can be automated.
Abstract:
This article surveys the classical orthogonal polynomial systems of the Hahn class, which are solutions of second-order differential, difference or q-difference equations. Orthogonal families satisfy three-term recurrence equations. Example applications are given of an algorithm, implemented in the computer algebra system Maple, to determine whether a three-term recurrence equation has solutions in the Hahn class. Modifications of these families, in particular associated orthogonal systems, satisfy fourth-order operator equations. A factorization of these equations leads to a solution basis.
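A three-term recurrence in action, using the classical Legendre family as a standard example (our own choice of illustration, not one of the Maple routines the article describes):

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via its three-term
    recurrence (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),
    starting from P_0(x) = 1 and P_1(x) = x."""
    p_prev, p = 1.0, x  # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

Evaluating through the recurrence costs O(n) operations and needs only two running values, which is why three-term recurrences are the workhorse for numerical evaluation of orthogonal families; for example, `legendre(2, 0.5)` gives P_2(0.5) = (3·0.25 − 1)/2 = −0.125.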
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. All types of restarting automata considered in the literature up to now accept at least the deterministic context-free languages. Here we introduce and study a new type of restarting automaton, the so-called t-RL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size one only, and that it works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. Here we study the gap-complexity of these automata. The membership problem for a language that is accepted by a t-RL-automaton with a bounded number of gaps can be solved in polynomial time. On the other hand, t-RL-automata with an unbounded number of gaps accept NP-complete languages.
Abstract:
In a previous paper we determined a generic formula for the polynomial solution families of the well-known differential equation of hypergeometric type σ(x)y''_n(x) + τ(x)y'_n(x) − λ_n y_n(x) = 0. In this paper, we give another such formula which enables us to present a generic formula for the values of monic classical orthogonal polynomials at the boundary points of their interval of definition.
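The flavour of such boundary values can be illustrated with the monic Legendre polynomials on [−1, 1], whose value at x = 1 has the standard closed form 2^n (n!)^2 / (2n)!. This specific family is our choice of example; the paper's formula is generic over the classical families:

```python
from fractions import Fraction
from math import factorial

def monic_legendre_at(x, n):
    """Evaluate the monic Legendre polynomial p_n at x using the monic
    three-term recurrence p_{k+1} = x p_k - (k^2 / (4k^2 - 1)) p_{k-1},
    with p_0 = 1 and p_1 = x, in exact rational arithmetic."""
    p_prev, p = Fraction(1), Fraction(x)
    if n == 0:
        return p_prev
    for k in range(1, n):
        c = Fraction(k * k, 4 * k * k - 1)
        p_prev, p = p, x * p - c * p_prev
    return p

# The boundary value at x = 1 agrees with the closed form:
for n in range(8):
    closed = Fraction(2**n * factorial(n)**2, factorial(2 * n))
    assert monic_legendre_at(1, n) == closed
```

Using `Fraction` keeps the comparison exact, so the agreement with the closed form is a genuine identity check rather than a floating-point coincidence.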
Abstract:
In his 1984 proof of the Bieberbach and Milin conjectures, de Branges used a positivity result on special functions which follows from an identity about Jacobi polynomial sums that was published by Askey and Gasper in 1976. The de Branges functions T_k^n(t) are defined as the solutions of a system of differential recurrence equations with suitably given initial values. The essential fact used in the proof of the Bieberbach and Milin conjectures is the statement T_k^n(t) <= 0. In 1991 Weinstein presented another proof of the Bieberbach and Milin conjectures, also using a special function system Λ_k^n(t), which (by Todorov and Wilf) was realized to be directly connected with de Branges' functions, T_k^n(t) = -k Λ_k^n(t), so the positivity results T_k^n(t) <= 0 in both proofs are essentially the same. In this paper we study differential recurrence equations equivalent to de Branges' original ones and show that many solutions of these differential recurrence equations do not change sign, so that the above inequality is less surprising than it might appear. Furthermore, we present a multiparameterized hypergeometric family of solutions of the de Branges differential recurrence equations, showing that such solutions are not rare at all.
Abstract:
In a similar manner as in previous papers, where explicit algorithms for finding the differential equations satisfied by holonomic functions were given, in this paper we deal with the space of q-holonomic functions, which are the solutions of linear q-differential equations with polynomial coefficients. The sum, the product and the composition with power functions of q-holonomic functions are also q-holonomic, and the resulting q-differential equations can be computed algorithmically.
Abstract:
Various results on the parity of the number of irreducible factors of given polynomials over finite fields have been obtained in the recent literature. These are mainly based on Swan's theorem, in which discriminants of polynomials over a finite field or the ring of integers Z play an important role. In this paper we consider discriminants of compositions of some polynomials over finite fields. The relation between the discriminant of the composed polynomial and those of the original ones is established. We apply this to obtain some results concerning the parity of the number of irreducible factors for several special polynomials over finite fields.
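The quantity whose parity is studied, the number of irreducible factors, can be computed directly for small polynomials over GF(2) (an illustrative brute-force check, not the discriminant-based approach of the paper; polynomials are encoded as bitmasks with bit i holding the coefficient of x^i):

```python
def deg(f):
    return f.bit_length() - 1

def poly_divmod(f, d):
    """Divide binary polynomials over GF(2); returns (quotient, remainder).
    Addition and subtraction over GF(2) are both XOR."""
    q = 0
    while f and deg(f) >= deg(d):
        shift = deg(f) - deg(d)
        q ^= 1 << shift
        f ^= d << shift
    return q, f

def irreducible_factor_count(f):
    """Number of irreducible factors of f over GF(2), counted with
    multiplicity, by trial division in order of increasing degree
    (so every divisor found is irreducible, as with integers)."""
    count, d = 0, 2  # d = x, the smallest non-constant polynomial
    while deg(f) > 0:
        if 2 * deg(d) > deg(f):  # no divisor of degree <= deg(f)/2 left
            count += 1           # so f itself is irreducible
            break
        q, r = poly_divmod(f, d)
        if r == 0:
            f, count = q, count + 1
        else:
            d += 1
    return count

# x^2 = x * x has two factors; x^3 + x + 1 is irreducible:
assert irreducible_factor_count(0b100) == 2
assert irreducible_factor_count(0b1011) == 1
```

Such a direct count is exponential in the degree, which is precisely why parity results via discriminants and Swan's theorem are valuable: they read off the parity without factoring.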