943 results for POLYNOMIAL-MAPPINGS
Abstract:
Cast Al-Si alloys are widely used in the automotive, aerospace and general engineering industries due to their excellent combination of properties, such as good castability, low coefficient of thermal expansion, high strength-to-weight ratio and good corrosion resistance. The present investigation concerns the influence of alloying additions on the structure and properties of the Al-7Si-0.3Mg alloy. Its primary objective is to study the beneficial effects of calcium on the structure and properties of Al-7Si-0.3Mg-xFe alloys. The second objective is to study the effects of Mn, Be and Sr additions as Fe neutralizers, and also the interactions of Mn, Be, Sr and Ca in Al-7Si-0.3Mg-xFe alloys. The study covers the dual beneficial effects of Ca, viz. modification and Fe neutralization, and compares the effects of Ca and Sr with those of common Fe neutralizers. The castings have been characterized with respect to their microstructure, % porosity, electrical conductivity, solidification behaviour and mechanical properties. One of the interesting observations in the present work is that a low level of calcium reduces the porosity compared to the untreated alloy, whereas higher levels of calcium addition lead to higher porosity in the castings. An empirical analysis comparing the results of the present work with those of other researchers on the effect of increasing iron content on the UTS and % elongation of Al-Si-Mg and Al-Si-Cu alloys has shown a linear and an inverse first-order polynomial relationship, respectively.
Abstract:
This thesis is a study of discrete nonlinear systems represented by one-dimensional mappings. As one-dimensional iterative maps represent Poincaré sections of higher-dimensional flows, they offer a convenient means to understand the dynamical evolution of many physical systems. The thesis highlights the basic ideas of deterministic chaos. Qualitative and quantitative measures for the detection and characterization of chaos in nonlinear systems are discussed. Some simple mathematical models exhibiting chaos are presented. The bifurcation scenario and the possible routes to chaos are explained. The thesis presents the results of numerical computations of the Lyapunov exponents (λ) of one-dimensional maps, and focuses on the results obtained by our investigations of combination maps, the scaling behaviour of the Lyapunov characteristic exponents of one-dimensional maps, and the nature of bifurcations in a discontinuous logistic map. It reviews the major routes to chaos in dissipative systems, namely period doubling, intermittency and crises, and provides a theoretical understanding of the route to chaos in discontinuous systems. A detailed analysis of the dynamics of a discontinuous logistic map is carried out, both analytically and numerically, to understand the route it follows to chaos. The present analysis deals only with the case of the discontinuity parameter applied to the right half of the interval of mapping. A detailed analysis of the n-furcations of various periodicities can be made, and a more general theory for the map with discontinuities applied at different positions can be developed on a similar footing.
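As a concrete illustration of the quantitative measures discussed above, the Lyapunov exponent of the logistic map x → r·x·(1−x) can be estimated numerically as the orbit average of ln|f′(x)|. The sketch below is a minimal illustration; the parameter values and iteration counts are illustrative choices, not taken from the thesis:

```python
import math

def lyapunov_logistic(r, x0=0.1, n_iter=100_000, burn_in=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of ln|f'(x)| = ln|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter

# At r = 4 the exact value is ln 2, a standard benchmark for the estimator;
# in a periodic window (e.g. r = 3.2) the exponent is negative.
print(lyapunov_logistic(4.0))   # close to ln 2 = 0.693...
```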
Abstract:
Many finite elements used in structural analysis possess deficiencies such as shear locking, incompressibility locking, poor stress predictions within the element domain, violent stress oscillation and poor convergence. An approach that can probably overcome many of these problems is to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element not only has nodal equilibrium of forces, but also inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than the existing elements. In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions, and for using these functions in the generation of elemental stiffness matrices, has been developed. The approach to developing field equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis and a four-node quadrilateral element SFCFP for plate bending problems have been formulated. For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations of the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons involving theoretical closed-form solutions as well as results obtained with existing finite elements have also been made. It is found that the new elements perform well in all the situations considered.
Solutions in all the cases converge correctly to the exact values. In many cases, convergence is faster than with other existing finite elements. The behaviour of the field consistent elements should generate considerable interest amongst users of finite elements.
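For the plane stress element mentioned above, the stress field equilibrium conditions that the assumed displacement functions must satisfy are the standard two-dimensional equilibrium equations, quoted here in their textbook form (the thesis's exact notation may differ):

```latex
\frac{\partial \sigma_x}{\partial x} + \frac{\partial \tau_{xy}}{\partial y} + b_x = 0,
\qquad
\frac{\partial \tau_{xy}}{\partial x} + \frac{\partial \sigma_y}{\partial y} + b_y = 0,
```

where \sigma_x, \sigma_y and \tau_{xy} are the in-plane stress components and b_x, b_y the body forces; the truncated polynomial displacement fields are chosen so that these equations hold identically inside each element.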
Abstract:
An attempt is made by the researcher to establish a theory of discrete functions in the complex plane. Classical analysis q-basic theory, monodiffric theory, preholomorphic theory and q-analytic theory have been utilised to develop concepts like differentiation, integration and special functions.
Abstract:
Median filtering is a simple digital non-linear signal smoothing operation in which the median of the samples in a sliding window replaces the sample at the middle of the window. The resulting filtered sequence tends to follow polynomial trends in the original sample sequence. The median filter preserves signal edges while filtering out impulses. Due to this property, median filtering is finding applications in many areas of image and speech processing. Though median filtering is simple to realise digitally, its properties are not easily analysed with standard analysis techniques.
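A sliding-window median filter of the kind described above can be sketched in a few lines of Python (endpoints are handled here by replication, one of several common border conventions):

```python
def median_filter(x, width=3):
    """Slide a window of odd length `width` over x and replace each
    sample by the median of its window; endpoints are replicated."""
    half = width // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    return [sorted(padded[i:i + width])[half] for i in range(len(x))]

# An isolated impulse is removed, while a step edge passes unchanged:
print(median_filter([1, 1, 9, 1, 1]))       # [1, 1, 1, 1, 1]
print(median_filter([0, 0, 0, 5, 5, 5]))    # [0, 0, 0, 5, 5, 5]
```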
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually converted first to categorical types and then classified using information gain concepts. Information gain is a very popular and useful concept which tells you whether any benefit occurs from splitting on a given attribute, as far as information content is concerned. But this process is computationally intensive for large data sets. Also, popular decision tree algorithms like ID3 cannot handle numeric data sets directly. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point for attributes, in completely numerical data sets. The new algorithm has been shown to be competitive with its information gain counterpart C4.5, and with many existing decision tree algorithms, on the standard UCI benchmark datasets using the ANOVA test in statistics. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, as well as the conversion of huge numeric data sets to categorical data, which is also a time-consuming task. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information gain computations. It also blends the two closely related fields of statistics and data mining.
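The splitting rule proposed above (mean as split point, variance reduction as the score) can be sketched as follows. The helper names and the treatment of class labels as numeric codes are illustrative assumptions, not the paper's actual implementation:

```python
def variance(vals):
    """Population variance of a list of numbers (0 for an empty list)."""
    if not vals:
        return 0.0
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def best_split(rows, labels):
    """Pick the attribute whose mean-valued split point gives the lowest
    weighted label variance (labels assumed to be numeric class codes)."""
    best = None
    for a in range(len(rows[0])):
        mean_a = sum(r[a] for r in rows) / len(rows)   # mean as split point
        left = [y for r, y in zip(rows, labels) if r[a] <= mean_a]
        right = [y for r, y in zip(rows, labels) if r[a] > mean_a]
        score = (len(left) * variance(left) + len(right) * variance(right)) / len(labels)
        if best is None or score < best[1]:
            best = (a, score)
    return best

# Attribute 0 separates the two classes cleanly; attribute 1 is noise:
rows, labels = [[1, 5], [2, 6], [10, 5], [11, 6]], [0, 0, 1, 1]
print(best_split(rows, labels))   # (0, 0.0)
```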
Abstract:
Given a profile (a multiset of vertices) in a graph, the remoteness of a vertex is the sum of its distances to the vertices of the profile. The set of vertices that maximize (minimize) the remoteness is the antimedian (median) set of the profile. It is proved that for an arbitrary graph G and S ⊆ V(G) it can be decided in polynomial time whether S is the antimedian set of some profile. Graphs in which every antimedian set is connected are also considered.
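As a concrete illustration of these notions, the median and antimedian sets of a profile can be computed by brute force from BFS distances in a small graph; the graph and profile below are made-up examples, not from the paper:

```python
from collections import deque

def bfs_dist(adj, src):
    """Distances from src to every vertex of an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def median_antimedian(adj, profile):
    """Remoteness of v = sum of distances from v to the profile vertices;
    minimizers form the median set, maximizers the antimedian set."""
    remoteness = {}
    for v in adj:
        d = bfs_dist(adj, v)
        remoteness[v] = sum(d[u] for u in profile)
    lo, hi = min(remoteness.values()), max(remoteness.values())
    med = {v for v, r in remoteness.items() if r == lo}
    antimed = {v for v, r in remoteness.items() if r == hi}
    return med, antimed

# Path 0-1-2-3-4 with profile (0, 4, 4):
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(median_antimedian(path, [0, 4, 4]))   # ({4}, {0})
```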
Abstract:
The paper summarizes the design and implementation of a quadratic edge detection filter, based on the Volterra series, for enhancing calcifications in mammograms. The proposed filter can account for much of the polynomial nonlinearity inherent in the input mammogram image and can replace conventional edge detectors such as the Laplacian, Gaussian, etc. The filter gives rise to improved visualization and early detection of microcalcifications, which, if left undetected, can lead to breast cancer. The performance of the filter is analyzed and found to be superior to that of conventional spatial edge detectors.
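One of the simplest quadratic (second-order Volterra) operators used for edge and energy detection is the discrete Teager operator, ψ[n] = x[n]² − x[n−1]·x[n+1]. It is shown here only as a minimal 1-D illustration of a quadratic filter kernel, not as the paper's mammogram filter:

```python
def teager(x):
    """Quadratic Volterra operator: psi[n] = x[n]**2 - x[n-1]*x[n+1].
    Its kernel contains only second-order (product) terms."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# The response is zero on flat regions and peaks at the step edge:
print(teager([0, 0, 0, 1, 1, 1]))   # [0, 0, 1, 0]
```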
Abstract:
Modeling nonlinear systems using the Volterra series is a century-old method, but practical realizations were long hampered by inadequate hardware for the increased computational complexity stemming from its use. Interest has recently been renewed in designing and implementing filters which can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus instead on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next possible value of the input speech signal. But this linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. So a quadratic predictor is designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
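The idea of running a quadratic predictor in parallel with a linear one can be sketched with an LMS-adapted predictor whose regressor contains two linear taps plus one second-order product term. The signal model, step size and coefficients below are made-up illustrations, not the paper's coder:

```python
import random

def lms_quadratic_predictor(n_samples=20_000, mu=0.1, seed=0):
    """Identify d[n] = 0.5*x[n-1] - 0.2*x[n-2] + 0.3*x[n-1]*x[n-2]
    with an LMS filter whose regressor is [x[n-1], x[n-2], x[n-1]*x[n-2]]:
    two linear taps in parallel with one quadratic (Volterra) tap."""
    rng = random.Random(seed)
    w = [0.0, 0.0, 0.0]
    x1 = x2 = 0.0                                # two past samples
    for _ in range(n_samples):
        u = [x1, x2, x1 * x2]                    # linear + quadratic regressor
        d = 0.5 * x1 - 0.2 * x2 + 0.3 * x1 * x2  # "true" nonlinear system
        e = d - sum(wi * ui for wi, ui in zip(w, u))
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        x2, x1 = x1, rng.uniform(-1.0, 1.0)      # white excitation shifts in
    return w

print(lms_quadratic_predictor())   # close to [0.5, -0.2, 0.3]
```

Since the target is noise-free and the regressor components are uncorrelated under white excitation, the LMS weights converge to the true coefficients; the quadratic tap captures exactly the part of the signal the linear taps cannot.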
Abstract:
The basic concepts of digital signal processing are taught to students in engineering and science. The focus of the course is on linear, time-invariant systems. The question of what happens when the system is governed by a quadratic or cubic equation remains unanswered in the vast majority of the literature on signal processing. Light was shed on this problem when V. John Mathews and Giovanni L. Sicuranza published the book Polynomial Signal Processing. This book opened up an unseen vista of polynomial systems for signal and image processing, presenting the theory and implementations of both adaptive and non-adaptive FIR and IIR quadratic systems which offer improved performance over conventional linear systems. The theory of quadratic systems presents a pristine area of research that calls for computationally intensive work. Once the area of research was selected, the next issue was the choice of the software tool to carry out the work. Conventional languages like C and C++ were easily eliminated, as they are not interpreted and lack good-quality plotting libraries. MATLAB proved to be very slow, and so did SCILAB and Octave. The search for a language for scientific computing that was as fast as C, but with a good-quality plotting library, ended with Python, a distant relative of LISP, which proved to be ideal for scientific computing. An account of the use of Python, its scientific computing package scipy and the plotting library pylab is given in the appendix. Initially, work focused on designing predictors that exploit the polynomial nonlinearities inherent in speech generation mechanisms. Soon, the work diverted into medical image processing, which offered more potential for the use of quadratic methods. The major focus in this area is on quadratic edge detection methods for retinal images and fingerprints, as well as on de-noising raw MRI signals.
Abstract:
The Bieberbach conjecture about the coefficients of univalent functions of the unit disk was formulated by Ludwig Bieberbach in 1916 [Bieberbach1916]. The conjecture states that the coefficients of univalent functions are majorized by those of the Koebe function, which maps the unit disk onto a radially slit plane. The Bieberbach conjecture was quite a difficult problem, and it was surprisingly proved by Louis de Branges in 1984 [deBranges1985], at a time when some experts were rather trying to disprove it. It turned out that an inequality of Askey and Gasper [AskeyGasper1976] about certain hypergeometric functions played a crucial role in de Branges' proof. In this article I describe the historical development of the conjecture and the main ideas that led to the proof. The proof of Lenard Weinstein (1991) [Weinstein1991] follows, and it is shown how the two proofs are interrelated. Both proofs depend on polynomial systems that are directly related to the Koebe function. At this point algorithms of computer algebra come into play, and computer demonstrations are given that show how important parts of the proofs can be automated.
Abstract:
This article surveys the classical orthogonal polynomial systems of the Hahn class, which are solutions of second-order differential, difference or q-difference equations. Orthogonal families satisfy three-term recurrence equations. Example applications are given of an algorithm, implemented in the computer algebra system Maple, to determine whether a three-term recurrence equation has solutions in the Hahn class. Modifications of these families, in particular the associated orthogonal systems, satisfy fourth-order operator equations. A factorization of these equations leads to a solution basis.
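As an example of the three-term recurrence equations mentioned above, the Legendre polynomials (one classical family of the Hahn class) can be evaluated directly from their standard recurrence; a minimal sketch:

```python
def legendre(n, x):
    """Evaluate P_n(x) via the three-term recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),
    with P_0(x) = 1 and P_1(x) = x."""
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

print(legendre(2, 0.5))   # P_2(1/2) = (3*(1/4) - 1)/2 = -0.125
print(legendre(5, 1.0))   # P_n(1) = 1 for every n
```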
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. All types of restarting automata considered in the literature up to now accept at least the deterministic context-free languages. Here we introduce and study a new type of restarting automaton, the so-called t-RL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size one only, and that it works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. Here we study the gap-complexity of these automata. The membership problem for a language that is accepted by a t-RL-automaton with a bounded number of gaps can be solved in polynomial time. On the other hand, t-RL-automata with an unbounded number of gaps accept NP-complete languages.
Abstract:
In a previous paper we determined a generic formula for the polynomial solution families of the well-known differential equation of hypergeometric type σ(x)y_n''(x) + τ(x)y_n'(x) - λ_n y_n(x) = 0. In this paper, we give another such formula which enables us to present a generic formula for the values of monic classical orthogonal polynomials at their boundary points of definition.
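Two standard instances of the hypergeometric-type equation above may help fix the notation (these identifications are textbook facts, not taken from the paper). For the Hermite polynomials H_n and the Legendre polynomials P_n:

```latex
H_n''(x) - 2x\,H_n'(x) + 2n\,H_n(x) = 0
  \;\Longrightarrow\; \sigma(x) = 1,\quad \tau(x) = -2x,\quad \lambda_n = -2n,

(1 - x^2)\,P_n''(x) - 2x\,P_n'(x) + n(n+1)\,P_n(x) = 0
  \;\Longrightarrow\; \sigma(x) = 1 - x^2,\quad \tau(x) = -2x,\quad \lambda_n = -n(n+1).
```

In both cases substituting σ, τ and λ_n back into σ(x)y_n'' + τ(x)y_n' - λ_n y_n = 0 reproduces the classical equation term by term.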
Abstract:
In his 1984 proof of the Bieberbach and Milin conjectures, de Branges used a positivity result for special functions which follows from an identity about Jacobi polynomial sums that was published by Askey and Gasper in 1976. The de Branges functions T_k^n(t) are defined as the solutions of a system of differential recurrence equations with suitably given initial values. The essential fact used in the proof of the Bieberbach and Milin conjectures is the statement T_k^n(t) <= 0. In 1991 Weinstein presented another proof of the Bieberbach and Milin conjectures, also using a special function system Λ_k^n(t), which (by Todorov and Wilf) was realized to be directly connected with de Branges' functions, T_k^n(t) = -k Λ_k^n(t), so that the positivity results in both proofs, T_k^n(t) <= 0, are essentially the same. In this paper we study differential recurrence equations equivalent to de Branges' original ones and show that many solutions of these differential recurrence equations do not change sign, so that the above inequality is not as surprising as expected. Furthermore, we present a multiparameterized hypergeometric family of solutions of the de Branges differential recurrence equations, showing that solutions are not rare at all.