995 results for "polynomial model"
Abstract:
The large-scale fading of wireless mobile communication links is modelled by assuming that the motion of the mobile receiver is described by a linear dynamic system in state-space form. The geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static nonlinear mapping. A Wiener-system subspace identification algorithm, in conjunction with polynomial regression, is used to identify a model from time-domain estimates of the field intensity, assuming a multitude of emitters and an antenna array at the receiver end.
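The Wiener structure described above (linear state-space dynamics followed by a static nonlinearity) can be sketched as a short simulation. The matrices and the cubic mapping below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Wiener system: a stable linear state-space block feeding
# a static polynomial nonlinearity (all values are made up for illustration).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])      # state transition (stable by construction)
b = np.array([1.0, 0.5])        # input vector
c = np.array([1.0, -1.0])       # output vector of the linear block

def static_nonlinearity(z):
    # Stand-in for the geometric field-intensity mapping.
    return 0.5 * z + 0.2 * z ** 3

def simulate(u):
    x = np.zeros(2)
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        z = c @ x                      # unobserved intermediate signal
        y[k] = static_nonlinearity(z)  # observed "field intensity"
        x = A @ x + b * uk
    return y

u = rng.standard_normal(200)    # excitation signal
y = simulate(u)                 # simulated measurements
```

Identification would then amount to recovering (A, b, c) and the static map from (u, y) alone; the simulation only illustrates the block structure being assumed.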
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated, including model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer: turning off the convective mixing parameterisation in the UM significantly reduced it. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features.
Errors of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
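The overshoot-and-clip behaviour described in the abstract can be reproduced in one dimension: cubic Lagrange interpolation of a discontinuous tracer field at semi-Lagrangian departure points produces negative undershoots, and setting them to zero adds spurious mass. The grid, wind and time step below are arbitrary toy values, not UM settings:

```python
import numpy as np

# 1-D periodic semi-Lagrangian advection with cubic Lagrange interpolation:
# a toy illustration of negative tracer values and the effect of clipping.
n = 100
dx = 1.0
u = 0.3          # constant wind, in grid lengths per time step
dt = 1.0
q = np.zeros(n)
q[40:60] = 1.0   # discontinuous top-hat tracer

def cubic_interp(q, xd):
    # Cubic Lagrange interpolation on the 4-point stencil around xd (periodic).
    i = int(np.floor(xd))
    s = xd - i
    idx = [(i - 1) % n, i % n, (i + 1) % n, (i + 2) % n]
    w = [-s * (s - 1) * (s - 2) / 6, (s + 1) * (s - 1) * (s - 2) / 2,
         -(s + 1) * s * (s - 2) / 2, (s + 1) * s * (s - 1) / 6]
    return sum(wk * q[j] for wk, j in zip(w, idx))

for _ in range(50):
    xd = (np.arange(n) - u * dt / dx) % n        # departure points
    q = np.array([cubic_interp(q, x) for x in xd])

has_negatives = q.min() < 0        # cubic interpolation undershoots the jump
mass_before = q.sum()              # interpolation weights sum to 1: mass kept
q_clipped = np.maximum(q, 0.0)     # operational fix: set negatives to zero
mass_after = q_clipped.sum()       # clipping adds spurious mass
```

Flux-corrected transport avoids this by limiting the high-order interpolant toward a monotone low-order one rather than clipping after the fact.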
Abstract:
The problem of identifying a nonlinear dynamic system is considered. A two-layer neural network is used to solve the problem. Systems disturbed by unmeasurable noise are considered, where the disturbance is known to be a random piecewise-polynomial process. Absorption polynomials and nonquadratic loss functions are used to reduce the effect of this disturbance on the estimates of the optimal memory of the neural-network model.
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The algorithm is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches, for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
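A minimal check of two of the basis-function properties cited above, nonnegativity and summing to unity, for the univariate Bernstein basis of degree d on [0, 1] (this sketch covers only the univariate basis, not the paper's full construction):

```python
import numpy as np
from math import comb

# Univariate Bernstein polynomial basis of degree d on [0, 1]:
#   B_{k,d}(x) = C(d, k) * x^k * (1 - x)^(d - k)
def bernstein_basis(d, x):
    x = np.asarray(x, dtype=float)
    return np.array([comb(d, k) * x**k * (1 - x)**(d - k)
                     for k in range(d + 1)])

x = np.linspace(0.0, 1.0, 101)
B = bernstein_basis(3, x)       # shape (4, 101): cubic basis on a grid

nonnegative = bool((B >= 0).all())                          # B_{k,d} >= 0
sums_to_one = bool(np.allclose(B.sum(axis=0), 1.0))         # sum_k B_{k,d} = 1
```

Both properties are what allow the basis functions to be read as fuzzy membership functions.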
Abstract:
Introduction. Leaf area is often related to plant growth, development, physiology and yield. Many non-destructive models have been proposed for leaf area estimation of several plant genotypes, demonstrating that leaf length, leaf width and leaf area are closely correlated. Thus, the objective of our study was to develop a reliable model for leaf area estimation from linear measurements of leaf dimensions for citrus genotypes. Materials and methods. Leaves of citrus genotypes were harvested, and their dimensions (length, width and area) were measured. Values of leaf area were regressed against length, width, the square of length, the square of width and the product (length x width). The most accurate equations, either linear or second-order polynomial, were regressed again with a new data set; then the most reliable equation was defined. Results and discussion. The first analysis showed that the variables length, width and the square of length gave better results in second-order polynomial equations, while the linear equations were more suitable and accurate when the width and the product (length x width) were used. When these equations were regressed with the new data set, the coefficient of determination (R²) and the agreement index 'd' were higher for the one that used the variable product (length x width), while the Mean Absolute Percentage Error was lower. Conclusion. The product of the simple leaf dimensions (length x width) can provide a reliable and simple non-destructive model for leaf area estimation across citrus genotypes.
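The winning model above, a linear regression of area on the product (length x width), can be sketched on synthetic leaf data; the 0.7 shape factor, dimensions and noise level are made-up illustrative values, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic leaf data (cm / cm^2); a typical leaf "shape factor" near 0.7
# is assumed purely for illustration.
length = rng.uniform(4.0, 12.0, 80)
width = rng.uniform(2.0, 8.0, 80)
area = 0.7 * length * width + rng.normal(0.0, 1.0, 80)  # "measured" area

# Fit  area = b0 + b1 * (length * width)  by ordinary least squares.
lw = length * width
X = np.column_stack([np.ones_like(lw), lw])
beta, *_ = np.linalg.lstsq(X, area, rcond=None)
pred = X @ beta

# Goodness-of-fit measures of the kind reported in the abstract.
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                           # coefficient of determination
mape = np.mean(np.abs((area - pred) / area)) * 100   # Mean Absolute Percentage Error
```

With real data the slope b1 plays the role of the genotype-spanning shape factor, which is what makes the single-variable model both simple and portable.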
Abstract:
It is often necessary to run response surface designs in blocks. In this paper the analysis of data from such experiments, using polynomial regression models, is discussed. The definition and estimation of pure error in blocked designs are considered. It is recommended that pure error be estimated by assuming additive block and treatment effects, as this is more consistent with designs without blocking. The recovery of inter-block information using REML analysis is discussed, although it is shown to have very little impact if the design is nearly orthogonally blocked. Finally, prediction from blocked designs is considered, and it is shown that prediction of many quantities of interest is much simpler than prediction of the response itself.
Abstract:
The study of robust design methodologies and techniques has become a topical area in design optimization across nearly all engineering and applied science disciplines over the last 10 years, owing to the inevitable imprecision and uncertainty that exist in real-world design problems. To develop a fast optimizer for robust designs, a methodology based on polynomial chaos and a tabu search algorithm is proposed. In this methodology, polynomial chaos is employed as a stochastic response surface model of the objective function to efficiently evaluate the robust performance parameter, while a mechanism that assigns expected fitness only to promising solutions is introduced into the tabu search algorithm to minimize the requirement for determining robust metrics of intermediate solutions. The proposed methodology is applied to the robust design of a practical inverse problem with satisfactory results.
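A minimal sketch of the polynomial-chaos surrogate idea for a toy scalar objective of one standard Gaussian uncertainty (the objective and sample size are assumptions, not the paper's inverse problem): fit probabilists' Hermite coefficients by least squares, then read the output mean and variance directly off the coefficients via orthogonality.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

def objective(x):
    # Hypothetical performance function of one uncertain design variable.
    return (x - 0.5) ** 2 + 0.1 * x

rng = np.random.default_rng(2)
xs = rng.standard_normal(400)        # samples of the standard Gaussian input
ys = objective(xs)

# Least-squares fit of a degree-4 probabilists' Hermite (He_k) expansion.
degree = 4
V = hermevander(xs, degree)          # Vandermonde matrix of He_0..He_4
coef, *_ = np.linalg.lstsq(V, ys, rcond=None)

# He_k orthogonality under the standard Gaussian, E[He_j He_k] = k! δ_jk,
# gives the output statistics without further sampling:
mean_pc = coef[0]
var_pc = sum(factorial(k) * coef[k] ** 2 for k in range(1, degree + 1))

# Monte Carlo reference for comparison.
mc = objective(rng.standard_normal(200_000))
```

For this quadratic objective the exact values are mean 1.25 and variance 2.81, which the surrogate recovers; robust metrics such as mean plus a multiple of the standard deviation then come cheaply from `mean_pc` and `var_pc` inside the search loop.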
Abstract:
Some dynamical properties of a particle under the action of a generic drag force are obtained for a dissipative Fermi acceleration model. The dissipation is introduced via a viscous drag force, as for motion through a gas, and is assumed to be proportional to a power of the velocity: F ∝ -v^γ. The dynamics is described by a two-dimensional nonlinear area-contracting mapping obtained via the solution of Newton's second law of motion. We prove analytically that the decay of high energy is given by a continued fraction which recovers the following expressions: (i) linear for γ = 1; (ii) exponential for γ = 2; and (iii) second-degree polynomial type for γ = 1.5. Our results are discussed for both the complete and the simplified versions of the model. The procedure used in the present paper can be extended to many different kinds of systems, including a class of billiard problems.
Abstract:
A total of 20,065 weights recorded on 3016 Nelore animals were used to estimate covariance functions for growth from birth to 630 days of age, assuming a parametric correlation structure to model within-animal correlations. The model of analysis included fixed effects of contemporary groups and age of dam as a quadratic covariate. Mean trends were taken into account by a cubic regression on orthogonal polynomials of animal age. Genetic effects of the animal and its dam and maternal permanent environmental effects were modelled by random regressions on Legendre polynomials of age at recording. Changes in direct permanent environmental effect variances were modelled by a polynomial variance function, together with a parametric correlation function to account for correlations between ages. Stationary and nonstationary models were used to model within-animal correlations between different ages. Residual variances were considered homogeneous or heterogeneous, with changes modelled by a step or polynomial function of age at recording. Based on the Bayesian information criterion, a model with a cubic variance function combined with a nonstationary correlation function for permanent environmental effects, with 49 parameters to be estimated, fitted best. Modelling within-animal correlations through a parametric correlation structure can describe the variation pattern adequately. Moreover, the number of parameters to be estimated can be decreased substantially compared to a model fitting random regressions on Legendre polynomials of age.
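The Legendre covariates used in such random regressions can be sketched as follows: ages are standardised to [-1, 1] over the birth-to-630-day range and the polynomials P_0..P_3 are evaluated at each standardised age (the specific recording ages below are illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Recording ages in days (illustrative values within the birth-630 d range).
ages = np.array([1.0, 90.0, 180.0, 365.0, 630.0])
a_min, a_max = 1.0, 630.0
x = 2.0 * (ages - a_min) / (a_max - a_min) - 1.0   # standardise to [-1, 1]

# Design matrix of Legendre polynomials P_0..P_3 at each standardised age;
# np.eye(...)[k] selects the k-th unit coefficient vector, so legval gives P_k.
order = 3
Phi = np.column_stack([legval(x, np.eye(order + 1)[k])
                       for k in range(order + 1)])
```

In a random regression model each animal's deviation curve is then Phi @ a, where a is that animal's vector of random regression coefficients; the covariance function follows from the covariance matrix of those coefficients.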
Abstract:
Breast cancer is the most common cancer among women. In CAD systems, several studies have investigated the wavelet transform as a multiresolution analysis tool for texture analysis, whose outputs can serve as inputs to a classifier. For classification, the polynomial classifier has been used because of the advantage of providing a single model for optimal separation of the classes. In this paper, a system is proposed for texture analysis and classification of lesions in mammographic images. Multiresolution analysis features were extracted from the region of interest of a given image. These features were computed based on three different wavelet functions: Daubechies 8, Symlet 8 and biorthogonal 3.7. For classification, we used the polynomial classification algorithm to label the mammogram images as normal or abnormal. We also made a comparison with other artificial intelligence algorithms (Decision Tree, SVM, K-NN). A Receiver Operating Characteristic (ROC) curve is used to evaluate the performance of the proposed system. The system is evaluated using 360 digitized mammograms from the DDSM database, and the results show that the algorithm achieves an area under the ROC curve Az of 0.98 ± 0.03. The performance of the polynomial classifier proved better than that of the other classification algorithms.
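A minimal sketch of a second-order polynomial classifier of the kind referred to above: expand each feature vector with its pairwise products and fit one linear model by least squares. The features here are synthetic stand-ins for the wavelet texture features, and the class geometry is invented for illustration:

```python
import numpy as np

def poly_expand(X):
    # Second-order expansion: [1, x_i, x_i * x_j for i <= j].
    n, d = X.shape
    cross = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), X] + cross)

rng = np.random.default_rng(3)
X0 = rng.normal(0.0, 1.0, (100, 3))          # "normal" class features
X1 = rng.normal(1.5, 1.0, (100, 3))          # "abnormal" class features
X = np.vstack([X0, X1])
y = np.concatenate([-np.ones(100), np.ones(100)])   # targets -1 / +1

# One least-squares model over the expanded features separates both classes.
Z = poly_expand(X)
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = np.sign(Z @ w)
accuracy = float((pred == y).mean())
```

The single weight vector w is the "one model for optimal separation" property the abstract mentions, in contrast to per-class models or instance-based methods like K-NN.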