965 results for Zeros of perturbed polynomials
Abstract:
It is well known that the Stickelberger–Swan theorem is an important tool for determining the reducibility of polynomials over a binary field. Using this theorem, the parity of the number of irreducible factors has been determined for several families of polynomials over a binary field, for instance trinomials, tetranomials, and self-reciprocal polynomials. We discuss this problem for type II pentanomials, namely $x^m + x^{n+2} + x^{n+1} + x^n + 1 \in \mathbb{F}_2[x]$. Such pentanomials can be used for efficient implementation of multiplication in finite fields of characteristic two. Based on the computation of the discriminant of these pentanomials with integer coefficients, we characterize the parity of the number of irreducible factors over $\mathbb{F}_2$ and establish necessary conditions for the existence of irreducible pentanomials of this type.
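As an illustrative check of the parity statement (not the paper's method, which works via the discriminant and the Stickelberger–Swan theorem), one can simply factor small type II pentanomials over $\mathbb{F}_2$ and tabulate the parity of the number of irreducible factors; the ranges of m and n below are arbitrary choices.

```python
# Sketch: parity of the number of irreducible factors of type II
# pentanomials x^m + x^(n+2) + x^(n+1) + x^n + 1 over F_2.
# Illustrative only; the m, n ranges are arbitrary choices.
from sympy import symbols, factor_list

x = symbols('x')

def factor_parity(m, n):
    """Return 1 if the pentanomial has an odd number of irreducible
    factors over F_2 (counted with multiplicity), else 0."""
    p = x**m + x**(n + 2) + x**(n + 1) + x**n + 1
    _, factors = factor_list(p, modulus=2)
    r = sum(mult for _, mult in factors)
    return r % 2

for m in range(6, 12):
    for n in range(1, m - 2):          # keep all five terms distinct
        tag = 'odd' if factor_parity(m, n) else 'even'
        print(f"m={m}, n={n}: {tag} number of irreducible factors")
```

An even factor count immediately rules out irreducibility, which is how parity results of this kind yield necessary conditions for irreducible pentanomials.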
Abstract:
The main aim of this paper is the development of suitable bases (replacing the power basis $x^n$, $n \in \mathbb{N}_0$) which enable the direct series representation of orthogonal polynomial systems on non-uniform lattices (quadratic lattices of a discrete or a q-discrete variable). We present two bases of this type, the first of which allows one to write solutions of arbitrary divided-difference equations in terms of series representations, extending results given in [16] for the q-case. Furthermore, it enables the representation of the Stieltjes function, which can be used to prove the equivalence between the Pearson equation for a given linear functional and the Riccati equation for the formal Stieltjes function. If the Askey–Wilson polynomials are written in terms of this basis, however, the coefficients turn out not to be q-hypergeometric. Therefore, we present a second basis, which shares several relevant properties with the first one. This basis makes it possible to generate the defining representation of the Askey–Wilson polynomials directly from their divided-difference equation. For this purpose the divided-difference equation must be rewritten in terms of suitable divided-difference operators developed in [5], see also [6].
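For orientation, the non-uniform lattices in question and the divided-difference operator acting on them take the following form in the q-quadratic (Askey–Wilson) case; this is a sketch of a common normalization, not necessarily the notation of the paper.

```latex
% q-quadratic lattice and Askey--Wilson divided-difference operator
% (a common normalization; conventions differ across the literature):
\[
  x(s) = \tfrac12\left(q^{s} + q^{-s}\right), \qquad
  \big(\mathcal{D}_x f\big)\big(x(s)\big)
  = \frac{f\big(x(s+\tfrac12)\big) - f\big(x(s-\tfrac12)\big)}
         {x(s+\tfrac12) - x(s-\tfrac12)},
\]
% with q^{s} = e^{i\theta} giving x = \cos\theta, the setting of the
% Askey--Wilson polynomials.
```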
Abstract:
Using the functional approach, we state and prove a characterization theorem for classical orthogonal polynomials on non-uniform lattices (quadratic lattices of a discrete or a q-discrete variable), including the Askey–Wilson polynomials. This theorem proves the equivalence of seven characterization properties, namely the Pearson equation for the linear functional, the second-order divided-difference equation, the orthogonality of the derivatives, the Rodrigues formula, two types of structure relations, and the Riccati equation for the formal Stieltjes function.
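For orientation, the continuous-case analogues of two of these properties read as follows (the lattice versions replace $d/dx$ by the divided-difference and averaging operators, and sign conventions for the Stieltjes function vary between references):

```latex
% Continuous analogues, for orientation only.
% Pearson equation for the functional u, with deg(phi) <= 2, deg(psi) = 1:
\[
  \frac{d}{dx}\big(\phi\,\mathbf{u}\big) = \psi\,\mathbf{u},
\]
% and the induced first-order (degenerate Riccati) equation for the
% formal Stieltjes function S(z) = \sum_{n\ge 0} \mu_n z^{-n-1}:
\[
  \phi(z)\,S'(z) = \big(\psi(z) - \phi'(z)\big)\,S(z) + D(z),
\]
% where D is a polynomial determined by phi, psi and the moments of u.
```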
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential-zero compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
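A minimal simulation sketch of the first of these models (independent binomial incidence, then a logistic-normal distribution of the unit over the non-zero parts) is given below; all parameter values are made up, and zeroing followed by re-closure is a simplification of conditioning on the observed subcomposition.

```python
# Sketch of a two-stage essential-zero model: stage 1 draws which parts
# are present (independent Bernoulli incidences); stage 2 distributes
# the unit over the present parts via a logistic-normal composition.
# All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
D = 4                                             # number of parts
incidence_prob = np.array([1.0, 0.9, 0.6, 0.3])   # P(part present); part 0 always present
mu = np.zeros(D - 1)                              # alr-scale mean
sigma = 0.5 * np.eye(D - 1)                       # alr-scale covariance

def sample_compositions(n):
    out = []
    for _ in range(n):
        present = rng.random(D) < incidence_prob  # stage 1: incidence vector
        z = rng.multivariate_normal(mu, sigma)    # stage 2: alr coordinates
        y = np.exp(np.append(z, 0.0))             # inverse alr (last part as reference)
        y[~present] = 0.0                         # impose the essential zeros
        out.append(y / y.sum())                   # close to unit sum (part 0 keeps sum > 0)
    return np.array(out)

X = sample_compositions(5)
print(X.round(3))   # each row: an incidence pattern plus a conditional composition
```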
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at two hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually arise in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
Abstract:
Most of the economic literature has presented its analysis under the assumption of a homogeneous capital stock. However, capital composition differs across countries. What has been the pattern of capital composition associated with the world's economies? We make an exploratory statistical analysis based on compositional data transformed by Aitchison logratio transformations, and we use tools for visualizing and measuring statistical estimators of association among the components. The goal is to detect distinctive patterns in the composition. Initial findings include the following: 1. Sectorial components behaved in a correlated way, building industries on one side and, less clearly, equipment industries on the other. 2. Full-sample estimation shows a negative correlation between the durable goods component and the other buildings component, and between the transportation and building industries components. 3. Countries with zeros in some components are mainly low-income countries at the bottom of the income category, and they behaved in an extreme way, distorting the main results observed in the full sample. 4. After removing these extreme cases, the conclusions do not seem very sensitive to the presence of other isolated cases.
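A sketch of the kind of logratio preprocessing referred to here, using the centred logratio (clr) transform; the component names and numbers are hypothetical, and zero-containing rows would have to be treated separately, as the abstract notes.

```python
# Sketch: centred logratio (clr) transform for exploratory analysis of
# capital-composition data. Components and values are hypothetical.
import numpy as np

def clr(X):
    """Centred logratio transform of strictly positive compositions
    (one composition per row); zeros must be handled beforehand."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Hypothetical shares: buildings, other construction, equipment, transport
X = np.array([[0.45, 0.20, 0.25, 0.10],
              [0.50, 0.15, 0.20, 0.15],
              [0.40, 0.25, 0.20, 0.15],
              [0.35, 0.20, 0.30, 0.15]])

Z = clr(X)
print(np.corrcoef(Z, rowvar=False).round(2))   # association between components
```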
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomial-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. Thus we propose to use a weighted linear regression approach, where all polynomials up to order k are used as predictor variables and weights are proportional to the reference density. Finally, for the case of second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
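A sketch of the weighted-regression route to the coordinates in the unbounded (normal reference, Hermite) case: expand the log-ratio of a density to the reference in Hermite polynomials, with weights proportional to the reference density. The grid, target density, and degree are illustrative choices, not those of the paper.

```python
# Sketch: Hermite-basis coordinates of a density by weighted least
# squares, with weights proportional to the N(0,1) reference density.
# Grid, target density and degree are illustrative.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

t = np.linspace(-6.0, 6.0, 2001)
ref = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)                 # N(0,1) reference
target = np.exp(-(t - 0.3)**2 / 1.6) / np.sqrt(1.6 * np.pi)  # N(0.3, 0.8) target

y = np.log(target / ref)          # log-ratio to be expanded in the basis
V = hermevander(t, 2)             # columns He_0, He_1, He_2 (predictors)
w = np.sqrt(ref)                  # weights proportional to the reference

# Weighted least squares for the coordinates c: minimize ||w*(Vc - y)||^2
coef, *_ = np.linalg.lstsq(V * w[:, None], y * w, rcond=None)
print(coef.round(4))              # coordinates w.r.t. the Hermite-derived basis
```

Since the target here is itself normal, the log-ratio is exactly quadratic and the second-order coordinates recover it; this is the case the abstract relates to the classical mean and variance.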
Abstract:
The prediction of extratropical cyclones by the European Centre for Medium Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) Ensemble Prediction Systems (EPS) has been investigated using an objective feature tracking methodology to identify and track the cyclones along the forecast trajectories. Overall the results show that the ECMWF EPS has a slightly higher level of skill than the NCEP EPS in the northern hemisphere (NH). However, in the southern hemisphere (SH), NCEP has higher predictive skill than ECMWF for the intensity of the cyclones. The results from both EPS indicate a higher level of predictive skill for the position of extratropical cyclones than for their intensity, and show that there is a larger spread in intensity than in position. Further analysis shows that the predicted propagation speed of cyclones is generally too slow for the ECMWF EPS, which also shows a slight bias for the intensity of the cyclones to be overpredicted. This is also true for the NCEP EPS in the SH; for the NCEP EPS in the NH the intensity of the cyclones is underpredicted. There is a small bias in both EPS for the cyclones to be displaced towards the poles. For each ensemble forecast of each cyclone, the predictive skill of the ensemble member that best predicts the cyclone's position and intensity was computed. The results are very encouraging, showing that the predictive skill of the best ensemble member is significantly higher than that of the control forecast in terms of both the position and intensity of the cyclones. The prediction of cyclones before they are identified as 850 hPa vorticity centers in the analysis cycle was also considered. It is shown that an indication of extratropical cyclones can be given by at least one ensemble member 7 days before they are identified in the analysis. Further analysis of the ECMWF EPS shows that the ensemble mean has a higher level of skill than the control forecast, particularly for the intensity of the cyclones, from day 3 of the forecast. There is a higher level of skill in the NH than in the SH, and the spread in the SH is correspondingly larger. The difference between the ensemble mean error and the spread is very small for the position of the cyclones, but the spread of the ensemble is smaller than the ensemble mean error for the intensity of the cyclones in both hemispheres. Results also show that the ECMWF control forecast has ½ to 1 day more skill than the perturbed members, for both the position and intensity of the cyclones, throughout the forecast.
Abstract:
The land/sea warming contrast is a phenomenon of both equilibrium and transient simulations of climate change: large areas of the land surface at most latitudes undergo temperature changes whose amplitude is greater than those of the surrounding oceans. Using idealised GCM experiments with perturbed SSTs, we show that the land/sea contrast in equilibrium simulations is associated with local feedbacks and the hydrological cycle over land, rather than with externally imposed radiative forcing. This mechanism also explains a large component of the land/sea contrast in transient simulations. We propose a conceptual model with three elements: (1) there is a spatially variable level in the lower troposphere at which temperature change is the same over land and sea; (2) the dependence of lapse rate on moisture and temperature causes different changes in lapse rate upon warming over land and sea, and hence a surface land/sea temperature contrast; (3) moisture convergence over land predominantly takes place at levels significantly colder than the surface; wherever moisture supply over land is limited, the increase of evaporation over land upon warming is limited, reducing the relative humidity in the boundary layer over land and hence further enhancing the land/sea contrast. The non-linearity of the Clausius–Clapeyron relationship of saturation specific humidity to temperature is critical in (2) and (3). We examine the sensitivity of the land/sea contrast to model representations of different physical processes using a large ensemble of climate model integrations with perturbed parameters, and find that it is most sensitive to the representation of large-scale cloud and of stomatal closure. We discuss our results in the context of high-resolution and Earth-system modelling of climate change.
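The non-linearity invoked in elements (2) and (3) is that of the Clausius–Clapeyron relation, which in its standard approximate form reads:

```latex
% Clausius--Clapeyron relation for saturation vapour pressure e_s:
\[
  \frac{d e_s}{d T} = \frac{L_v\, e_s}{R_v\, T^{2}}
  \quad\Longrightarrow\quad
  e_s(T) \approx e_{s,0}
  \exp\!\left[\frac{L_v}{R_v}\left(\frac{1}{T_0}-\frac{1}{T}\right)\right],
\]
% with L_v the latent heat of vaporization and R_v the gas constant for
% water vapour. Saturation specific humidity scales approximately as
% e_s/p, so equal warming implies larger absolute moisture changes at
% warmer temperatures.
```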
Abstract:
Aerosols from anthropogenic and natural sources have been recognized as having an important impact on the climate system. However, the small size of aerosol particles (ranging from 0.01 to more than 10 μm in diameter) and their influence on solar and terrestrial radiation make them difficult to represent within the coarse resolution of general circulation models (GCMs), such that small-scale processes, for example sulfate formation and conversion, need to be parameterized. It is the parameterization of emissions, conversion, and deposition, and of the radiative effects of aerosol particles, that causes uncertainty in their representation within GCMs. The aim of this study was to perturb aspects of a sulfur cycle scheme used within a GCM to represent the climatological impacts of sulfate aerosol derived from natural and anthropogenic sulfur sources. It was found that perturbing volcanic SO2 emissions and the scavenging rate of SO2 by precipitation had the largest influence on the sulfate burden. When these parameters were perturbed the sulfate burden ranged from 0.73 to 1.17 TgS for 2050 sulfur emissions (A2 Special Report on Emissions Scenarios (SRES)), comparable with the range in sulfate burden across all the Intergovernmental Panel on Climate Change SRESs. Thus, the results here suggest that the range in sulfate burden due to model uncertainty is comparable with scenario uncertainty. Despite the large range in sulfate burden there was little influence on the climate sensitivity, which had a range of less than 0.5 K across the ensemble. We hypothesize that this small effect was partly associated with high sulfate loadings in the control phase of the experiment.
Abstract:
The mechanisms underlying the increase in stress for large mechanical strains of a polymer glass, quantified by the strain-hardening modulus, are still poorly understood. In the present paper we aim to elucidate this matter and present new mechanisms. Molecular-dynamics simulations of two polymers with very different strain-hardening moduli (polycarbonate and polystyrene) have been carried out. Nonaffine displacements occur because of steric hindrances and connectivity constraints. We argue that it is not necessary to introduce the concept of entanglements to understand strain hardening; rather, hardening is coupled with the increase in the rate of nonaffine particle displacements. This rate increases faster for polycarbonate, which has the higher strain-hardening modulus. More nonaffine chain stretching is also present for polycarbonate. It is shown that the inner distances of such a nonaffinely deformed chain can be well described by those of the worm-like chain, but with an effective stiffness length (equal to the Kuhn length for an infinite worm-like chain) that increases during deformation. This increase originates from the finite extensibility of the chain. In this way the increase in nonaffine particle displacement can be understood as resulting from an increase in the effective stiffness length of the perturbed chain during deformation, so that at larger strains a higher rate of plastic events in terms of nonaffine displacements is necessary, causing in turn the observed strain hardening in polymer glasses.
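For reference, the worm-like (Kratky–Porod) chain expression that the effective stiffness length enters is the standard one below, with $l_p$ the persistence length and $L$ the contour length; the Kuhn length $b = 2l_p$ emerges in the long-chain limit.

```latex
% Mean-square internal distance of a worm-like chain of contour length L:
\[
  \langle R^{2}(L) \rangle
  = 2 l_p L - 2 l_p^{2}\left(1 - e^{-L/l_p}\right)
  \;\xrightarrow[\;L \gg l_p\;]{}\; 2 l_p L = b\,L ,
\]
% so fitting the inner distances of the deformed chains with l_p free
% yields the effective stiffness length that grows during deformation.
```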
Abstract:
The Atlantic meridional overturning circulation (AMOC) is an important component of the climate system. Models indicate that the AMOC can be perturbed by freshwater forcing in the North Atlantic. Using an ocean-atmosphere general circulation model, we investigate the dependence of such a perturbation of the AMOC, and the consequent climate change, on the region of freshwater forcing. A wide range of changes in AMOC strength is found after 100 years of freshwater forcing. The largest changes in AMOC strength occur when the regions of deepwater formation in the model are forced directly, although reductions in deepwater formation in one area may be compensated by enhanced formation elsewhere. North Atlantic average surface air temperatures correlate linearly with the AMOC decline, but warming may occur in localised regions, notably over Greenland and where deepwater formation is enhanced. This brings into question the representativeness of temperature changes inferred from Greenland ice-core records.
Abstract:
The experimental variogram computed in the usual way by the method of moments and the Haar wavelet transform are similar in that they filter data and yield informative summaries that may be interpreted. The variogram filters out constant values; wavelets can filter variation at several spatial scales and thereby provide a richer repertoire for analysis, and they demand no assumptions other than that of finite variance. This paper compares the two functions, identifying that part of the Haar wavelet transform that gives it its advantages. It goes on to show that the generalized variogram of order k = 1, 2, and 3 filters linear, quadratic, and cubic polynomials from the data, respectively, which correspond to the more complex wavelets in the Daubechies family. The additional filter coefficients of the latter can reveal features of the data that are not evident in its usual form. Three examples in which data recorded at regular intervals on transects are analyzed illustrate the extended form of the variogram. The apparent periodicity of gilgais in Australia seems to be accentuated as filter coefficients are added, but otherwise the analysis provides no new insight. Analysis of hyperspectral data with a strong linear trend showed that the wavelet-based variograms filtered it out. Adding filter coefficients in the analysis of the topsoil across the Jurassic scarplands of England changed the upper bound of the variogram; it then resembled the within-class variogram computed by the method of moments. To elucidate these results, we simulated several series of data to represent a random process with values fluctuating about a mean, data with a long-range linear trend, data with local trend, and data with stepped transitions. The results suggest that the wavelet variogram can filter out the effects of long-range trend and of stepped transitions from one class to another, such as across boundaries, but not those of local trend.
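A small sketch of the comparison on a synthetic transect: the method-of-moments semivariance is inflated by a long-range linear trend, while Haar detail variances, like the variogram, are blind to the mean level; data and parameters below are made up for illustration.

```python
# Sketch: method-of-moments variogram vs. Haar detail variances on a
# synthetic transect with a long-range linear trend (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 512
z = 0.02 * np.arange(n) + rng.normal(0.0, 1.0, n)   # trend + noise

def variogram(z, max_lag):
    """Semivariance gamma(h) = mean((z[x+h] - z[x])^2) / 2."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h])**2)
                     for h in range(1, max_lag + 1)])

def haar_detail_variances(z, levels):
    """Variance of Haar wavelet detail coefficients at each dyadic scale."""
    out, s = [], z.copy()
    for _ in range(levels):
        a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation (local means)
        d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail (local differences)
        out.append(d.var())
        s = a
    return np.array(out)

print(variogram(z, 8).round(2))              # rises with lag h under the trend
print(haar_detail_variances(z, 4).round(2))  # constants filtered at every scale
```

Higher-order Daubechies wavelets, with more filter coefficients, annihilate higher-degree polynomials in the same way the generalized variogram of order k does.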
Abstract:
The ECMWF full-physics and dry singular vector (SV) packages, using a dry energy norm and a 1-day optimization time, are applied to four high-impact European cyclones of recent years that were almost universally badly forecast in the short range. It is shown that these full-physics SVs are much more relevant to severe cyclonic development than those based on dry dynamics plus the boundary layer alone. The crucial extra ingredient is the representation of large-scale latent heat release. The severe winter storms all have a long, nearly straight region of high baroclinicity stretching across the Atlantic towards Europe, with a tongue of very high moisture content on its equatorward flank. In each case some of the final-time top SV structures pick out the region of the actual storm. The initial structures were generally located in the mid- to low troposphere. Forecasts based on initial conditions perturbed by moist SVs with opposite signs and various amplitudes show the range of possible 1-day outcomes for reasonable magnitudes of forecast error. In each case one of the perturbation structures gave a forecast very much closer to the actual storm than the control forecast. Deductions are made about the predictability of high-impact extratropical cyclone events. Implications are drawn for the short-range forecast problem, and suggestions are made for one practicable way to approach short-range ensemble forecasting.
Abstract:
An analytical dispersion relation is derived for linear perturbations to a Rankine vortex governed by surface quasi-geostrophic dynamics. Such a Rankine vortex is a circular region of uniform anomalous surface temperature evolving under quasi-geostrophic dynamics with uniform interior potential vorticity. The dispersion relation is analysed in detail and compared to the more familiar dispersion relation for a perturbed Rankine vortex governed by the Euler equations. The results are successfully verified against numerical simulations of the full equations. The dispersion relation is relevant to problems including wave propagation on surface temperature fronts and the stability of vortices in quasi-geostrophic turbulence.
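For comparison, the familiar Euler-equations counterpart referred to here is the Kelvin-wave dispersion relation for a two-dimensional Rankine vortex (a patch of uniform vorticity $\omega_0$); it is quoted below as the standard result, with conventions possibly differing from the paper's.

```latex
% Edge (Kelvin) waves on a 2-D Euler Rankine vortex of core vorticity
% \omega_0, azimuthal wavenumber m >= 1:
\[
  \sigma_m = \frac{\omega_0}{2}\,(m - 1), \qquad
  \Omega_m = \frac{\sigma_m}{m} = \frac{\omega_0}{2}\cdot\frac{m-1}{m},
\]
% so the m = 1 (displacement) mode is neutral, and higher modes rotate
% more slowly than the core rotation rate \omega_0/2.
```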