980 results for zeros of polynomials
Abstract:
The Bohnenblust-Hille inequality says that the $\ell^{\frac{2m}{m+1}}$-norm of the coefficients of an $m$-homogeneous polynomial $P$ on $\mathbb{C}^n$ is bounded by $\| P \|_\infty$ times a constant independent of $n$, where $\|\cdot \|_\infty$ denotes the supremum norm on the polydisc $\mathbb{D}^n$. The main result of this paper is that this inequality is hypercontractive, i.e., the constant can be taken to be $C^m$ for some $C>1$. Combining this improved version of the Bohnenblust-Hille inequality with other results, we obtain the following: the Bohr radius for the polydisc $\mathbb{D}^n$ behaves asymptotically as $\sqrt{(\log n)/n}$ modulo a factor bounded away from 0 and infinity, and the Sidon constant for the set of frequencies $\bigl\{ \log n : n \text{ a positive integer} \le N \bigr\}$ is $\sqrt{N}\exp\bigl\{(-1/\sqrt{2}+o(1))\sqrt{\log N\log\log N}\bigr\}$.
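In display form, the inequality reads as follows (a standard rendering, using the multi-index notation $P(z) = \sum_{|\alpha|=m} c_\alpha z^\alpha$ that the abstract leaves implicit):

\[
\Bigl( \sum_{|\alpha|=m} |c_\alpha|^{\frac{2m}{m+1}} \Bigr)^{\frac{m+1}{2m}} \le C^m \, \|P\|_\infty .
\]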
Abstract:
A continuous random variable is expanded as a sum of a sequence of uncorrelated random variables. These variables are principal dimensions in continuous scaling on a distance function, as an extension of classic scaling on a distance matrix. For a particular distance, these dimensions are principal components. Then some properties are studied and an inequality is obtained. Diagonal expansions are considered from the same continuous scaling point of view, by means of the chi-square distance. The geometric dimension of a bivariate distribution is defined and illustrated with copulas. It is shown that the dimension can have the power of the continuum.
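Schematically (our notation, not the paper's), the expansion has the form

\[
X = \mathbb{E}[X] + \sum_{k \ge 1} X_k, \qquad \operatorname{Cov}(X_j, X_k) = 0 \ \ (j \ne k),
\]

where the $X_k$ are the principal dimensions.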
Abstract:
The arbitrary angular momentum solutions of the Schrödinger equation for a diatomic molecule with the general exponential screened Coulomb potential of the form $V(r) = -\frac{a}{r}\bigl[1 + (1 + br)e^{-2br}\bigr]$ are presented. The energy eigenvalues and the corresponding eigenfunctions are calculated analytically by use of the Nikiforov-Uvarov (NU) method, which yields the solutions in terms of Jacobi polynomials. The bound-state eigenvalues are calculated numerically for the 1s state of N2, CO and NO.
Abstract:
The main topic of the thesis is optimal stopping. This is treated in two research articles. In the first article we introduce a new approach to optimal stopping of general strong Markov processes. The approach is based on the representation of excessive functions as expected suprema. We present a variety of examples, in particular, the Novikov-Shiryaev problem for Lévy processes. In the second article on optimal stopping we focus on differentiability of excessive functions of diffusions and apply these results to study the validity of the principle of smooth fit. As an example we discuss optimal stopping of sticky Brownian motion. The third research article offers a survey-like discussion of Appell polynomials. The crucial role of Appell polynomials in optimal stopping of Lévy processes was noticed by Novikov and Shiryaev, who described the optimal rule in a large class of problems via these polynomials. We exploit the probabilistic approach to Appell polynomials and show that many classical results are obtained with ease in this framework. In the fourth article we derive a new relationship between the generalized Bernoulli polynomials and the generalized Euler polynomials.
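In the probabilistic approach alluded to, the Appell polynomials $(Q_n)$ attached to a random variable $Y$ with sufficiently many finite moments are commonly defined through the mean-inverse property (a standard convention in this literature, stated here for orientation):

\[
\mathbb{E}\bigl[ Q_n(x + Y) \bigr] = x^n, \qquad n = 0, 1, 2, \dots
\]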
Abstract:
In the present paper we discuss the development of "wave-front", an instrument for determining the lower and higher optical aberrations of the human eye. We also discuss the advantages that such instrumentation and techniques might bring to the ophthalmology professional of the 21st century. By shining a small light spot on the retina of subjects and observing the light that is reflected back from within the eye, we are able to quantitatively determine the amount of lower-order aberrations (astigmatism, myopia, hyperopia) and higher-order aberrations (coma, spherical aberration, etc.). We have measured artificial eyes with calibrated ametropia ranging from +5 to -5 D, with and without 2 D astigmatism with axis at 45° and 90°. We used a device known as the Hartmann-Shack (HS) sensor, originally developed for measuring the optical aberrations of optical instruments and general refracting surfaces in astronomical telescopes. The HS sensor sends information to computer software for decomposition of wave-front aberrations into a set of Zernike polynomials. These polynomials have special mathematical properties and are more suitable in this case than the traditional Seidel polynomials. We have demonstrated that this technique is more precise than conventional autorefraction, with a root mean square error (RMSE) of less than 0.1 µm for a 4-mm diameter pupil. In terms of dioptric power this represents an RMSE of less than 0.04 D and 5° for the axis. This precision is sufficient for customized corneal ablations, among other applications.
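As an illustration of the decomposition step, here is a minimal least-squares Zernike fit (not the authors' software: it fits sampled wavefront heights directly, whereas a real Hartmann-Shack pipeline reconstructs from the measured local slopes; the basis terms and test values are illustrative):

```python
import numpy as np

# Sample points on the unit pupil (polar coordinates).
rng = np.random.default_rng(0)
rho = np.sqrt(rng.uniform(0, 1, 500))   # uniform over the disc
theta = rng.uniform(0, 2 * np.pi, 500)

# A few low-order Zernike terms (unnormalized): piston, tilts,
# defocus, and the two primary astigmatism terms.
def zernike_basis(rho, theta):
    return np.column_stack([
        np.ones_like(rho),                 # Z(0,0)  piston
        rho * np.cos(theta),               # Z(1,1)  x-tilt
        rho * np.sin(theta),               # Z(1,-1) y-tilt
        2 * rho**2 - 1,                    # Z(2,0)  defocus
        rho**2 * np.cos(2 * theta),        # Z(2,2)  astigmatism 0/90
        rho**2 * np.sin(2 * theta),        # Z(2,-2) astigmatism 45
    ])

A = zernike_basis(rho, theta)

# Synthetic wavefront: defocus plus oblique astigmatism (in microns).
w = 0.5 * (2 * rho**2 - 1) + 0.2 * rho**2 * np.sin(2 * theta)

coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
print(np.round(coeffs, 3))   # recovers [0, 0, 0, 0.5, 0, 0.2]
```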
Abstract:
Volume(density)-independent pair-potentials cannot describe metallic cohesion adequately, as the presence of the free electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model where the embedding function is taken to be proportional to the square root of the electron density; models of this type are known as Finnis-Sinclair many-body potentials. In this work we study a particular parametrization of the Finnis-Sinclair type potential, called the "Sutton-Chen" model, and a later version, called the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power-law decay of the electron density with distance assumed by the model as the main cause of this behaviour and show that an exponentially decaying form of charge density improves the results significantly. Results for the Sutton-Chen model and our improved version of it are compared for four fcc metals: Cu, Ag, Au and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement and Grüneisen parameter. For the sake of comparison we have also considered two other models where the distance-dependence of the charge density is an exponential multiplied by polynomials. None of these models exhibits the instability against thermal expansion (premature melting) shown by the Sutton-Chen model. We also present results obtained via pure pair-potential models, in order to identify advantages and disadvantages of the methods used to obtain the parameters of these potentials.
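A minimal sketch of the Sutton-Chen energy form under discussion (a finite cluster with no periodic boundary conditions; the Cu-like parameter values are illustrative stand-ins, not the fitted values used in the paper):

```python
import numpy as np

def sutton_chen_energy(positions, eps, a, c, n, m):
    """Total energy in the Sutton-Chen form:
    E = eps * sum_i [ 0.5 * sum_{j!=i} (a/r_ij)^n - c * sqrt(rho_i) ],
    rho_i = sum_{j!=i} (a/r_ij)^m  (the power-law 'electron density')."""
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(r, np.inf)                 # exclude self-interaction
    pair = 0.5 * np.sum((a / r) ** n, axis=1)   # repulsive pair part
    rho = np.sum((a / r) ** m, axis=1)          # density at each site
    return eps * np.sum(pair - c * np.sqrt(rho))

# Illustration only: a 2x2x2 fcc block with hypothetical parameters.
a0 = 3.61  # lattice parameter, here Cu-like (angstrom)
base = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
cells = np.array([[i, j, k] for i in range(2) for j in range(2) for k in range(2)])
pos = a0 * (cells[:, None, :] + base[None, :, :]).reshape(-1, 3)
print(sutton_chen_energy(pos, eps=0.012, a=a0, c=39.4, n=9, m=6))
```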
Abstract:
Let f(x) be a complex rational function. In this work, we study conditions under which f(x) cannot be written as the composition of two rational functions which are not units under the operation of function composition; in this case, we say that f(x) is prime. We give sufficient conditions for complex rational functions to be prime in terms of their degrees and their critical values, and we derive some conditions for the case of complex polynomials. We also consider the divisibility of integral polynomials, and we present a generalization of a theorem of Nieto. We show that if f(x) and g(x) are integral polynomials such that the content of g divides the content of f and g(n) divides f(n) for an integer n whose absolute value is larger than a certain bound, then g(x) divides f(x) in Z[x]. In addition, given an integral polynomial f(x), we provide a method to determine whether f is irreducible over Z and, if not, to find one of its divisors in Z[x].
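A small sketch of the divisibility criterion just stated (the example polynomials and the stand-in value of n are ours; the actual bound comes from the paper and is not reproduced here):

```python
from sympy import Poly, div, symbols

x = symbols('x')
f = Poly(2*x**3 + 4*x**2 + 2*x, x)   # example f, content 2
g = Poly(x**2 + x, x)                # example g, content 1

# Hypotheses of the criterion: cont(g) | cont(f), and g(n) | f(n)
# for an integer n of sufficiently large absolute value.
n = 10**6                            # stand-in for "beyond the bound"
assert f.content() % g.content() == 0
assert f.eval(n) % g.eval(n) == 0

# Conclusion: g divides f in Z[x]; verified here by exact division.
q, r = div(f, g)
print(q)   # Poly(2*x + 2, x, domain='ZZ')
print(r)   # Poly(0, x, domain='ZZ')
```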
Abstract:
This thesis studies the properties and applications of four families of special functions associated with Weyl groups, denoted $C$, $S$, $S^s$ and $S^l$. These functions can be viewed as generalizations of the Chebyshev polynomials. They are related to orthogonal polynomials in several variables associated with simple Lie algebras, for example the Jacobi and Macdonald polynomials. They have several remarkable properties, including continuous and discrete orthogonality. In particular, it is proved in this thesis that the functions $S^s$ and $S^l$ characterized by certain parameters are mutually orthogonal with respect to a discrete measure. Their discrete orthogonality makes it possible to deduce two types of discrete transforms, analogous to Fourier transforms, for each simple Lie algebra with roots of two different lengths. Like the Chebyshev polynomials, these four families of functions have applications in numerical analysis. In this thesis we obtain some formulas of …
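For orientation (a standard fact about these families, not a result of the thesis): in the rank-one case the $C$-functions reduce to cosines, which is exactly how the classical Chebyshev polynomials of the first kind arise:

\[
T_n(\cos\theta) = \cos n\theta, \qquad n = 0, 1, 2, \dots
\]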
Abstract:
A new procedure for the classification of lower-case English characters is presented in this work. The character image is binarised, and the binary image is further grouped into sixteen smaller areas, called cells. Each cell is assigned a name depending upon the contour present in the cell and the occupancy of the image contour in the cell. A data reduction procedure called filtering is adopted to eliminate undesirable redundant information, reducing complexity during further processing steps. The filtered data is fed into a primitive extractor where extraction of primitives is done. Syntactic methods are employed for the classification of the character. A decision tree is used for the interaction of the various components in the scheme, like primitive extraction and character recognition. A character is recognized by the primitive-by-primitive construction of its description. Open-ended inventories are used for including variants of the characters and also for adding new members to the general class. Computer implementation of the proposal is discussed at the end using handwritten character samples. Results are analyzed and suggestions for future studies are made. The advantages of the proposal are discussed in detail.
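A minimal sketch of the cell-decomposition step (our toy reading of the scheme: the binary image is split into a 4x4 grid of sixteen cells and each cell is scored by the fraction of contour pixels it contains; the grid size matches the abstract, everything else is illustrative):

```python
import numpy as np

def cell_occupancy(binary_img, grid=(4, 4)):
    """Split a binary character image into grid cells and return
    the fraction of foreground (contour) pixels in each cell."""
    h, w = binary_img.shape
    gh, gw = grid
    occ = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = binary_img[i * h // gh:(i + 1) * h // gh,
                              j * w // gw:(j + 1) * w // gw]
            occ[i, j] = cell.mean()
    return occ

# Toy example: a 16x16 image containing a vertical stroke.
img = np.zeros((16, 16), dtype=int)
img[2:14, 7:9] = 1
print(cell_occupancy(img).round(2))
```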
Abstract:
This article surveys the classical orthogonal polynomial systems of the Hahn class, which are solutions of second-order differential, difference or q-difference equations. Orthogonal families satisfy three-term recurrence equations. Example applications of an algorithm to determine whether a three-term recurrence equation has solutions in the Hahn class - implemented in the computer algebra system Maple - are given. Modifications of these families, in particular associated orthogonal systems, satisfy fourth-order operator equations. A factorization of these equations leads to a solution basis.
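The three-term recurrence mentioned above has the standard shape (classical notation, with $p_{-1} \equiv 0$ and suitable coefficients $A_n \ne 0$, $B_n$, $C_n$):

\[
p_{n+1}(x) = (A_n x + B_n)\, p_n(x) - C_n\, p_{n-1}(x), \qquad n \ge 0.
\]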
Abstract:
The main aim of this paper is the development of suitable bases (replacing the power basis $x^n$, $n \in \mathbb{N}_0$) which enable the direct series representation of orthogonal polynomial systems on non-uniform lattices (quadratic lattices of a discrete or a q-discrete variable). We present two bases of this type, the first of which allows one to write solutions of arbitrary divided-difference equations in terms of series representations, extending results given in [16] for the q-case. Furthermore, it enables the representation of the Stieltjes function, which can be used to prove the equivalence between the Pearson equation for a given linear functional and the Riccati equation for the formal Stieltjes function. If the Askey-Wilson polynomials are written in terms of this basis, however, the coefficients turn out not to be q-hypergeometric. Therefore, we present a second basis, which shares several relevant properties with the first one. This basis makes it possible to generate the defining representation of the Askey-Wilson polynomials directly from their divided-difference equation. For this purpose the divided-difference equation must be rewritten in terms of suitable divided-difference operators developed in [5], see also [6].
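For context, the formal Stieltjes function of a linear functional $u$ with moment sequence $\mu_n = \langle u, x^n \rangle$ is the formal series (sign conventions differ across the literature):

\[
S(u)(z) = -\sum_{n \ge 0} \frac{\mu_n}{z^{n+1}}.
\]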
Abstract:
Using the functional approach, we state and prove a characterization theorem for classical orthogonal polynomials on non-uniform lattices (quadratic lattices of a discrete or a q-discrete variable) including the Askey-Wilson polynomials. This theorem proves the equivalence between seven characterization properties, namely the Pearson equation for the linear functional, the second-order divided-difference equation, the orthogonality of the derivatives, the Rodrigues formula, two types of structure relations, and the Riccati equation for the formal Stieltjes function.
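In the simplest (continuous) setting, the Pearson equation referred to here reads as follows, with $\phi$ a polynomial of degree at most two and $\psi$ a polynomial of degree one; on non-uniform lattices the derivative is replaced by the divided-difference operator of the lattice:

\[
\frac{d}{dx}\bigl( \phi(x)\, u \bigr) = \psi(x)\, u.
\]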
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential-zero compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
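A minimal simulation sketch of the two-stage idea (independent binomial incidence followed by a logistic-normal distribution of the unit over the non-zero parts; all parameter values are illustrative, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4                                        # number of parts
p_present = np.array([0.9, 0.7, 0.5, 0.95])  # stage 1: incidence probabilities

def sample_composition():
    # Stage 1: independent binomial incidence (which parts are non-zero).
    present = rng.random(D) < p_present
    comp = np.zeros(D)
    k = int(present.sum())
    if k == 0:
        return comp                          # no parts present
    # Stage 2: logistic-normal distribution of the unit over the k
    # non-zero parts (inverse additive-logratio of a normal vector).
    z = rng.normal(0.0, 0.5, size=k - 1)
    e = np.exp(np.append(z, 0.0))            # last present part as reference
    comp[present] = e / e.sum()
    return comp

print(np.round([sample_composition() for _ in range(3)], 3))
```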
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be: for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually occur in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
Abstract:
Most of the economic literature has presented its analysis under the assumption of a homogeneous capital stock. However, capital composition differs across countries. What has been the pattern of capital composition associated with world economies? We make an exploratory statistical analysis based on compositional data transformed by Aitchison logratio transformations, and we use tools for visualizing and measuring statistical estimators of association among the components. The goal is to detect distinctive patterns in the composition. Initial findings include the following: 1. Sectoral components behaved in a correlated way, building industries on one side and, less clearly, equipment industries on the other. 2. Full-sample estimation shows a negative correlation between the durable goods component and the other-buildings component, and between the transportation and building-industries components. 3. Countries with zeros in some components are mainly low-income countries at the bottom of the income category and behaved in an extreme way, distorting the main results observed in the full sample. 4. After removing these extreme cases, conclusions seem not very sensitive to the presence of other isolated cases.
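A minimal sketch of the logratio step mentioned above, using the centred logratio (clr) transform (one of the Aitchison transformations; strictly positive parts are required, which is exactly why the zero components in finding 3 are problematic):

```python
import numpy as np

def clr(composition):
    """Centred logratio transform of a strictly positive composition:
    clr(x) = log(x / geometric_mean(x))."""
    x = np.asarray(composition, dtype=float)
    g = np.exp(np.log(x).mean(axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

# Toy capital-composition vector (shares summing to 1, no zeros).
shares = np.array([0.45, 0.30, 0.15, 0.10])
print(clr(shares).round(3))   # coordinates sum to ~0, real-valued
```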