731 results for Szego polynomials
Abstract:
Lateral-distortional buckling may occur in I-section beams with slender webs and stocky flanges. A computationally efficient method for studying this phenomenon is presented in this paper. Previous studies of distortional buckling have relied on 3rd- and 5th-order polynomials to model the displacements. The present study provides an alternative, using Fourier series to model the behaviour. Beams of different cross-sectional dimensions, load cases and restraint conditions are examined and compared. The accuracy and versatility of the method are verified by calibrating against the results of other published studies. The present method is believed to be a simple and efficient way of determining the buckling load and mode shapes of I-section beams that are susceptible to lateral-distortional buckling modes.
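The abstract does not reproduce the paper's expansion; purely as an illustration, a Fourier-series displacement field of the kind used in such analyses, for a simply supported span of length L (with u the lateral deflection, \phi the twist, and a_n, b_n hypothetical generalized coordinates), takes the form

u(x) = \sum_{n=1}^{N} a_n \sin\frac{n\pi x}{L},
\qquad
\phi(x) = \sum_{n=1}^{N} b_n \sin\frac{n\pi x}{L}.

The buckling load then follows from an eigenvalue problem in the coefficients a_n and b_n.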
Abstract:
We use the consumption-based asset pricing model with habit formation to study the predictability and cross-section of returns from the international equity markets. We find that the predictability of returns from many developed countries' equity markets is explained in part by changing prices of risks associated with consumption relative to habit at both the world and local levels. We also provide an exploratory investigation of the cross-sectional implications of the model under the complete world market integration hypothesis and find that the model performs mildly better than the traditional consumption-based model, the unconditional and conditional world CAPMs and a three-factor international asset pricing model.
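The abstract does not state the paper's exact specification; the standard Campbell-Cochrane habit-formation pricing kernel that this literature typically builds on is

M_{t+1} = \delta \left( \frac{S_{t+1}}{S_t}\,\frac{C_{t+1}}{C_t} \right)^{-\gamma},
\qquad
S_t = \frac{C_t - X_t}{C_t},

where C_t is consumption, X_t the habit level, S_t the surplus consumption ratio, \delta the subjective discount factor and \gamma the utility curvature. Time variation in S_t is what generates the changing prices of risk referred to above.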
Abstract:
What is the computational power of a quantum computer? We show that determining the output of a quantum computation is equivalent to counting the number of solutions to an easily computed set of polynomials defined over the finite field Z_2. This connection allows simple proofs to be given for two known relationships between quantum and classical complexity classes, namely BQP ⊆ P^#P and BQP ⊆ PP.
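The counting connection can be made concrete with a toy example (a brute-force illustration, not the paper's reduction, and the polynomials below are made up): over Z_2 every variable satisfies x^2 = x, so each polynomial is a sum (XOR) of monomials, and counting common zeros of a system is exactly the kind of #P-style task the result refers to.

from itertools import product

def count_solutions(polys, n_vars):
    """Count the points of {0,1}^n_vars on which every polynomial
    (given as a list of monomials, each a tuple of variable indices)
    evaluates to 0 over Z_2."""
    count = 0
    for point in product((0, 1), repeat=n_vars):
        if all(sum(all(point[i] for i in mono) for mono in p) % 2 == 0
               for p in polys):
            count += 1
    return count

# Example system over Z_2: x0*x1 + x2 = 0 and x0 + x1 = 0.
polys = [
    [(0, 1), (2,)],   # x0*x1 + x2
    [(0,), (1,)],     # x0 + x1
]
print(count_solutions(polys, 3))   # -> 2: (0,0,0) and (1,1,1)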
Abstract:
A new approach to identifying multivariable Hammerstein systems is proposed in this paper. By using cardinal cubic spline functions to model the static nonlinearities, the proposed method is effective in modelling processes with hard and/or coupled nonlinearities. With an appropriate transformation, the nonlinear models are parameterized such that the nonlinear identification problem is converted into a linear one. The persistent-excitation condition for the transformed input is derived to ensure that the estimates are consistent with the true system. A simulation study is performed to demonstrate the effectiveness of the proposed method compared with existing approaches based on polynomials.
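The linear-in-parameters trick can be sketched in a few lines. This is illustrative only: a simple polynomial basis stands in for the cardinal cubic splines, the system below is invented, and the separation step is a rank-one SVD factorization rather than the paper's procedure.

import numpy as np

rng = np.random.default_rng(0)

def f(u):                              # unknown static nonlinearity
    return u + 0.5 * u**2

g = np.array([1.0, 0.6, -0.3])         # unknown linear FIR dynamics

N, nb, ng = 2000, 3, len(g)            # samples, basis size, FIR length
u = rng.uniform(-1, 1, N)              # persistently exciting input
y = np.convolve(f(u), g)[:N] + 0.01 * rng.standard_normal(N)

# Basis expansion of the input; B_k(u) = u**(k+1) stands in for splines.
B = np.column_stack([u**(k + 1) for k in range(nb)])

# The transformation: y(t) ~ sum_{j,k} theta[j,k] * B_k(u(t-j)) is
# linear in theta, so ordinary least squares applies.
Phi = np.column_stack([np.roll(B[:, k], j)
                       for j in range(ng) for k in range(nb)])[ng:]
theta, *_ = np.linalg.lstsq(Phi, y[ng:], rcond=None)

# theta estimates the outer product g c^T; a rank-one SVD factorization
# separates the linear part from the nonlinearity coefficients.
U, s, Vt = np.linalg.svd(theta.reshape(ng, nb))
g_hat, c_hat = U[:, 0] * s[0], Vt[0]
print(np.round(g_hat / g_hat[0], 2))   # ~ [1, 0.6, -0.3]
print(np.round(c_hat / c_hat[0], 2))   # ~ [1, 0.5, 0]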
Abstract:
This work reports the development of a mathematical model and distributed multivariable computer control for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained track the experimental results of the plant well. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application revealed a limiting condition: the plant matrix should be positive definite along the infinite frequency axis. A new multivariable control theory, which avoids this limitation, has emerged from this study. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. One disadvantage that offsets these advantages to some degree, however, is the relatively complicated algebra that must be employed in working out all but the simplest problems. Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and other similar algebraic design methods can be solved easily.
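As a small illustration of one of the operations listed, here is scalar polynomial spectral factorization by root selection (the thesis software works over polynomial matrices; this sketch handles only the 1x1 case and assumes no roots on the imaginary axis):

import numpy as np

def spectral_factor(phi):
    """Given coefficients (highest degree first) of Phi(s) = p(s)p(-s)
    with no roots on the imaginary axis, return the stable factor p,
    i.e. the one with all roots in the open left half-plane."""
    roots = np.roots(phi)
    stable = roots[roots.real < 0]      # keep the LHP root of each pair
    p = np.real(np.poly(stable))        # rebuild a monic polynomial
    return np.sqrt(abs(phi[0])) * p     # restore the leading coefficient

phi = np.polymul([1, 3, 2], [1, -3, 2])   # (s^2+3s+2)(s^2-3s+2)
print(np.round(spectral_factor(phi), 6))  # -> [1. 3. 2.]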
Abstract:
In some circumstances, there may be no scientific model of the relationship between X and Y that can be specified in advance, and indeed the objective of the investigation may be to provide a ‘curve of best fit’ for predictive purposes. In such a case, the fitting of successive polynomials may be the best approach. There are various strategies for deciding on the polynomial of best fit, depending on the objectives of the investigation.
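One common strategy, sketched below on invented data, is to fit successive degrees and keep the lowest degree after which the held-out error stops improving; F-tests on the residual sum of squares are an equally valid alternative, and the passage does not prescribe either.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = 1 + 2 * x - 3 * x**2 + 0.05 * rng.standard_normal(x.size)

train = rng.random(x.size) < 0.7          # random 70/30 split
best_deg, best_err = None, np.inf
for deg in range(1, 8):
    coef = np.polyfit(x[train], y[train], deg)
    err = np.mean((np.polyval(coef, x[~train]) - y[~train]) ** 2)
    print(f"degree {deg}: held-out MSE {err:.5f}")
    if err < best_err:
        best_deg, best_err = deg, err
print("degree of best fit:", best_deg)    # typically 2 here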
Abstract:
A method has been constructed for the solution of a wide range of chemical plant simulation models including differential equations and optimization. Double orthogonal collocation on finite elements is applied to convert the model into an NLP problem that is solved either by the VF13AD package, based on successive quadratic programming, or by the GRG2 package, based on the generalized reduced gradient method. This approach is termed the simultaneous optimization and solution strategy. The objective functional can contain integral terms. The state and control variables can have time delays. Equalities and inequalities containing state and control variables can be included in the model, as well as algebraic equations and inequalities. The maximum number of independent variables is 2. Problems containing 3 independent variables can be transformed into problems having 2 independent variables using finite differencing. The maximum number of NLP variables and constraints is 1500. The method is also suitable for solving ordinary and partial differential equations. The state functions are approximated by a linear combination of Lagrange interpolation polynomials. The control function can either be approximated by a linear combination of Lagrange interpolation polynomials or by a piecewise constant function over finite elements. The number of internal collocation points can vary by finite elements. The residual error is evaluated at arbitrarily chosen equidistant grid points, thus enabling the user to check the accuracy of the solution between the collocation points, where the solution is exact. The solution functions can be tabulated. There is an option to use control vector parameterization to solve optimization problems containing initial-value ordinary differential equations. This approach should be used when there are many differential equations or when the upper integration limit is to be selected optimally. The portability of the package has been addressed by converting it from VAX FORTRAN 77 into IBM PC FORTRAN 77 and into SUN SPARC 2000 FORTRAN 77. Computer runs have shown that the method can reproduce optimization problems published in the literature. The GRG2 and VF13AD packages, integrated into the optimization package, proved to be robust and reliable. The package contains an executive module, a module performing control vector parameterization and 2 nonlinear problem solver modules, GRG2 and VF13AD. There is a stand-alone module that converts the differential-algebraic optimization problem into a nonlinear programming problem.
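A single-element sketch conveys the collocation idea (the package itself uses double orthogonal collocation on many finite elements with NLP solvers; this toy collocates one linear ODE, y' = -y with y(0) = 1, on one element using Lagrange interpolation polynomials):

import numpy as np

n = 8                                  # polynomial degree
t = 0.5 * (1 - np.cos(np.pi * np.arange(n + 1) / n))  # Chebyshev nodes on [0,1]

# Barycentric weights and the differentiation matrix D[i, j] = l_j'(t_i).
w = np.array([1.0 / np.prod(t[j] - np.delete(t, j)) for j in range(n + 1)])
D = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for j in range(n + 1):
        if i != j:
            D[i, j] = (w[j] / w[i]) / (t[i] - t[j])
    D[i, i] = -D[i].sum()

# Collocate y'(t_i) + y(t_i) = 0 at every node, then overwrite the
# first equation with the initial condition y(t_0) = 1.
A = D + np.eye(n + 1)
A[0, :] = 0.0
A[0, 0] = 1.0
b = np.zeros(n + 1)
b[0] = 1.0
y = np.linalg.solve(A, b)

print(np.max(np.abs(y - np.exp(-t))))  # ~1e-10: spectral accuracy here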
Abstract:
The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. An extension is made in this thesis to the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added-value method in particular. Based on this concept and method, three kinds of computerised model have been developed: two deterministic, called sensitivity analysis and deterministic appraisal, and a third, stochastic, called risk simulation. They cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, expected accuracy and presentation of the output. The stochastic model is based on the assumption of statistical independence between individual variables and the existence of normality in their probability distributions. The component variables have been forecast using polynomials of degree four. This model is tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required. The results of applying these measurement and planning models to the British motor vehicle manufacturing companies are presented and discussed.
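The stochastic ingredients described, degree-four polynomial trends plus independent normal errors, can be sketched as follows (the series, units and productivity ratio below are invented, not taken from the thesis):

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1970, 1980)

# Hypothetical historical series (GBP millions).
added_value = np.array([52, 55, 60, 58, 63, 67, 70, 69, 74, 78.0])
labour_cost = np.array([30, 31, 33, 34, 35, 37, 38, 39, 41, 42.0])

def forecast(series, year):
    """Degree-4 polynomial trend plus the residual std for simulation."""
    coef = np.polyfit(years, series, 4)
    resid = series - np.polyval(coef, years)
    return np.polyval(coef, year), resid.std(ddof=5)

av_hat, av_sd = forecast(added_value, 1980)
lc_hat, lc_sd = forecast(labour_cost, 1980)

# Risk simulation: independent normal draws for each component.
n = 100_000
av = rng.normal(av_hat, av_sd, n)
lc = rng.normal(lc_hat, lc_sd, n)
productivity = av / lc                 # added value per unit labour cost
lo, hi = np.percentile(productivity, [5, 95])
print(f"1980 productivity: median {np.median(productivity):.2f}, "
      f"90% interval [{lo:.2f}, {hi:.2f}]")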
Abstract:
Purpose: To study the effects of ocular lubricants on higher-order aberrations in normal and self-diagnosed dry eyes. Methods: Unpreserved hypromellose drops, Tears Again™ liposome spray and a combination of both were administered to the right eye of 24 normal and 24 dry-eye subjects following classification according to a 5-point questionnaire. Total ocular higher-order aberrations, coma, spherical aberration and Strehl ratios for higher-order aberrations were measured using the Nidek OPD-Scan III (Nidek Technologies, Gamagori, Japan) at baseline, immediately after application and after 60 min. The aberration data were analyzed over a 5 mm natural pupil using Zernike polynomials. Each intervention was assessed on a separate day, and comfort levels were recorded before and after application. Corneal staining was assessed and product preference recorded after the final measurement for each intervention. Results: Hypromellose drops caused an increase in total higher-order aberrations (p < 0.01 in normal and dry eyes) and a reduction in Strehl ratio (normal eyes: p < 0.01, dry eyes: p = 0.01) immediately after instillation. There were no significant differences between normal and self-diagnosed dry eyes in response to intervention, and no improvement in visual quality or reduction in higher-order aberrations after 60 min. Differences in comfort levels failed to reach statistical significance. Conclusion: Combining treatments does not offer any benefit over individual treatments in self-diagnosed dry eyes, and no individual intervention reached statistical significance. Symptomatic subjects with dry eye and no corneal staining reported an improvement in comfort after using lubricants.
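As background on how Zernike coefficients summarize optical quality: with ANSI-normalized coefficients, the higher-order RMS wavefront error is their root sum of squares, and the Maréchal approximation turns it into a Strehl estimate. The coefficients below are invented; this is not a reproduction of the study's data or of the OPD-Scan's internal processing.

import numpy as np

wavelength_um = 0.555                  # photopic design wavelength
# Hypothetical higher-order coefficients in microns over a 5 mm pupil,
# keyed by (radial order n, azimuthal frequency m):
# vertical coma, horizontal coma, spherical aberration.
c = {(3, -1): 0.08, (3, 1): 0.05, (4, 0): 0.06}

rms = np.sqrt(sum(v**2 for v in c.values()))
# Marechal approximation (reliable only for small aberrations).
strehl = np.exp(-(2 * np.pi * rms / wavelength_um) ** 2)
print(f"HO RMS = {rms:.3f} um, estimated Strehl ~ {strehl:.3f}")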
Abstract:
Determination of the so-called optical constants (complex refractive index N, which is usually a function of the wavelength, and physical thickness D) of thin films from experimental data is a typical inverse non-linear problem. It is still a challenge to the scientific community because of the complexity of the problem and its basic and technological significance in optics. Usually, solutions are sought with models having 3-10 parameters. Best estimates of these parameters are obtained by minimization procedures. Herein, we discuss the choice of orthogonal polynomials for the dispersion law of the thin-film refractive index. We show the advantage of their use, compared with the Sellmeier, Lorentz or Cauchy models.
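A minimal sketch of the comparison, with synthetic data, a Chebyshev series standing in for whichever orthogonal family the paper adopts, and a two-term Cauchy model n(lambda) = A + B/lambda^2:

import numpy as np
from numpy.polynomial import Chebyshev

wl = np.linspace(0.4, 0.9, 40)              # wavelength, micrometres
n_true = 1.45 + 0.004 / wl**2 + 0.0001 / wl**4
n_meas = n_true + 1e-4 * np.random.default_rng(3).standard_normal(wl.size)

# Orthogonal-polynomial model: Chebyshev series on the measured band.
cheb = Chebyshev.fit(wl, n_meas, deg=3)

# Cauchy model, linear in (A, B).
A_mat = np.column_stack([np.ones_like(wl), 1 / wl**2])
(A, B), *_ = np.linalg.lstsq(A_mat, n_meas, rcond=None)

for name, model in [("Chebyshev", cheb(wl)), ("Cauchy", A + B / wl**2)]:
    print(f"{name}: rms residual {np.sqrt(np.mean((model - n_meas)**2)):.2e}")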
Abstract:
The paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.
Abstract:
In 2000 A. Alesina and M. Galuzzi presented Vincent’s theorem “from a modern point of view” along with two new bisection methods derived from it, B and C. Their profound understanding of Vincent’s theorem is responsible for the simplicity that characterizes these two methods. In this paper we compare the performance of these two new bisection methods, i.e. the time they take as well as the number of intervals they examine in order to isolate the real roots of polynomials, against that of the well-known Vincent-Collins-Akritas method, the first bisection method derived from Vincent’s theorem, back in 1976. Experimental results indicate that REL, the fastest implementation of the Vincent-Collins-Akritas method, is still the fastest of the three bisection methods, but the number of intervals it examines is almost the same as that of B. Therefore, further research on speeding up B while preserving its simplicity looks promising.
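The Descartes'-rule test at the heart of all of these bisection methods is easy to state and implement: for a square-free p of degree n, the sign variations of (1+x)^n p(1/(1+x)) bound the number of roots of p in (0, 1), and the bound is exact when it is 0 or 1. The sketch below isolates roots in (0, 1) with exact rational arithmetic; it reproduces none of the engineering that makes B, C or REL fast.

from fractions import Fraction
from math import comb

def descartes_01(p):
    """Sign-variation bound on the roots of p in (0, 1).
    p holds exact coefficients, lowest degree first."""
    n = len(p) - 1
    q = [Fraction(0)] * (n + 1)
    for i, a in enumerate(p):                # p = sum_i a_i x^i
        for k in range(n - i + 1):           # add a_i * (1+x)^(n-i)
            q[k] += a * comb(n - i, k)
    s = [c for c in q if c != 0]
    return sum(x * y < 0 for x, y in zip(s, s[1:]))

def shift1(p):                               # p(x) -> p(x + 1)
    return [sum(p[i] * comb(i, k) for i in range(k, len(p)))
            for k in range(len(p))]

def isolate(p, lo=Fraction(0), hi=Fraction(1)):
    """Isolating intervals for the roots of a square-free p in (lo, hi)."""
    v = descartes_01(p)
    if v == 0:
        return []
    if v == 1:
        return [(lo, hi)]
    mid = (lo + hi) / 2
    half = [a * Fraction(1, 2**i) for i, a in enumerate(p)]   # p(x/2)
    out = isolate(half, lo, mid) + isolate(shift1(half), mid, hi)
    if sum(half) == 0:                       # root exactly at the midpoint
        out.append((mid, mid))
    return out

# (x - 1/3)(x - 2/3), scaled to integer coefficients: 2 - 9x + 9x^2.
print(isolate([Fraction(2), Fraction(-9), Fraction(9)]))
# -> [(Fraction(0, 1), Fraction(1, 2)), (Fraction(1, 2), Fraction(1, 1))]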
Abstract:
Constacyclic codes with one and the same generator polynomial and distinct lengths are considered. We give a generalization of a previous result of the first author [4] to constacyclic codes. Suitable maps between vector spaces determined by the lengths of the codes are applied. It is proven that the weight distributions of the coset leaders do not depend on the word length, but only on the generator polynomial. In particular, we prove that every constacyclic code has the same weight distribution of coset leaders as a suitable cyclic code.
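The quantity in question can be computed by brute force for a tiny example. The sketch below takes the cyclic case (constacyclic with lambda = 1) and computes the coset-leader weight distribution of the [7,4] binary cyclic code generated by g(x) = 1 + x + x^3; the choice of code is ours, purely for illustration.

from itertools import product

n, g = 7, (1, 1, 0, 1)                    # g(x) = 1 + x + x^3
k = n - (len(g) - 1)

# All codewords: m(x) * g(x) over GF(2); deg(m*g) < n, so no reduction.
code = set()
for m in product((0, 1), repeat=k):
    c = [0] * n
    for i, mi in enumerate(m):
        for j, gj in enumerate(g):
            c[i + j] ^= mi & gj
    code.add(tuple(c))

# Group all 2^n words into cosets and record each leader's weight.
leader_weight = {}
for w in product((0, 1), repeat=n):
    coset = frozenset(tuple(a ^ b for a, b in zip(w, c)) for c in code)
    leader_weight[coset] = min(sum(v) for v in coset)

dist = {}
for wt in leader_weight.values():
    dist[wt] = dist.get(wt, 0) + 1
print(dist)   # {0: 1, 1: 7}: one coset of weight 0, seven of weight 1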
Abstract:
In this paper we investigate the Boolean functions with maximum essential arity gap. Additionally, we propose a simpler proof of an important theorem proved by M. Couceiro and E. Lehtonen in [3]. They use Zhegalkin polynomials as normal forms for Boolean functions and describe the functions with essential arity gap equal to 2. We instead use the Full Conjunctive Normal Forms of these polynomials, which allows us to simplify the proofs and to obtain several combinatorial results concerning the Boolean functions with a given arity gap. The Full Conjunctive Normal Forms are also sums of conjunctions in which all variables occur.
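As background for the Zhegalkin-polynomial machinery: the binary Möbius transform converts a truth table into the coefficients of the Zhegalkin polynomial (algebraic normal form). The sketch below is generic and does not reproduce the paper's Full Conjunctive Normal Form construction.

def anf(truth):
    """Zhegalkin coefficients of a Boolean function given as a truth
    table of length 2**n (index bits encode the variable assignment)."""
    a = list(truth)
    n = len(a).bit_length() - 1
    for i in range(n):                 # in-place binary Mobius transform
        for x in range(len(a)):
            if x >> i & 1:
                a[x] ^= a[x ^ (1 << i)]
    return a

# Example: f(x1, x2) = x1 OR x2 has truth table [0, 1, 1, 1]
# (index = x2*2 + x1) and Zhegalkin polynomial x1 + x2 + x1*x2.
print(anf([0, 1, 1, 1]))               # -> [0, 1, 1, 1]: coefficients of
                                       #    1, x1, x2, x1*x2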