959 results for Poles and zeros
Abstract:
Two decades after the fall of the USSR (1991), this thesis proposes a reassessment of Francis Fukuyama's End of History thesis, formulated in 1989, which holds that with the collapse of the USSR no ideology can rival liberal capitalist democracy, and of Samuel P. Huntington's Clash of Civilizations thesis, formulated in 1993, which posits the existence of a finite number of homogeneous and antagonistic civilizations. Yet, when confronted with a close study of historical sequences, both theories appear relative, to say the least. Two questions are addressed: the interaction between ideology and historical conditions, and the thesis of intra-civilizational homogeneity and antagonistic inter-civilizational heterogeneity. Without invalidating them entirely, this research nevertheless concludes that both theories must be qualified; they sit at the two extremes of the spectrum of international relations. The research shows that ideologies and their relative weight depend on a context, whereas Fukuyama treats them in the absolute. Moreover, the study of Maoist China, and in particular of Mao Zedong's thought, shows that local political traditions are more heterogeneous than they first appear, which puts Huntington's thesis into perspective. In conclusion, relations between states are more dynamic than the theses of Fukuyama and Huntington suggest.
Abstract:
Let $\displaystyle P(z):=\sum_{\nu=0}^n a_\nu z^{\nu}$ be a polynomial of degree $n$ and let $\displaystyle M:=\sup_{|z|=1}|P(z)|.$ Without any additional restriction, it is known that $|P'(z)|\leq Mn$ for $|z|\leq 1$ (Bernstein's inequality). If we now assume that the zeros of the polynomial $P$ lie outside the circle $|z|=k,$ what improvement can be made to Bernstein's inequality? It is already known [{\bf \ref{Mal1}}] that when $k\geq 1$ one has $$(*) \qquad |P'(z)|\leq \frac{n}{1+k}M \qquad (|z|\leq 1);$$ what happens in the case $k < 1$? What is the analogue of $(*)$ for an entire function of exponential type $\tau$? On the other hand, if we assume that $P$ has all its zeros in $|z|\geq k \, \, (k\geq 1),$ what estimate holds for $|P'(z)|$ on the unit circle, in terms of the first four terms of its power series expansion about the origin? This thesis is a contribution to the analytic theory of polynomials in the light of these questions.
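For orientation (an illustration added here, not part of the abstract itself): the polynomial $P(z)=(z+k)^n$, whose only zero $z=-k$ lies on $|z|=k$ with $k\geq 1$, shows that $(*)$ is sharp, since $$M=\sup_{|z|=1}|(z+k)^n|=(1+k)^n \quad\text{and}\quad |P'(1)|=n(1+k)^{n-1}=\frac{n}{1+k}M.$$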
Abstract:
Inspired by the epistemological reflections of the anthropologist Michel Verdon, this thesis proposes a conceptual framework for the study of the social organization of castes in India. Jonathan Parry's ethnography, Caste and Kinship in Kangra, is analysed and reinterpreted in a so-called “operational” language. The various approaches to caste oscillate between two opposing theoretical poles: idealism, represented notably by Louis Dumont's structuralist approach, and substantialism, once adopted by colonial rulers and more recently embodied in the work of Dipankar Gupta. Both holistic, these options nevertheless lead to an impasse in the comparative study of social organization, because they make groups “ontologically variable” and therefore incomparable. By rethinking the premises on which the general conception of social organization rests, an operational framework gives the notion of group a binary, discontinuous reality, thereby avoiding the ontological variability of groups and favouring comparison. It also makes it possible to study the relations between groups and networks. The rereading of the ethnography Caste and Kinship in Kangra shows the relevance of such an approach to the study of castes. The segmentary character of castes is called into question, and the autonomy of households, which form networks of alliances for ritual activities, is brought to the fore. This new description finally invites new comparisons.
Abstract:
The paper presents a compact planar ultra-wideband (UWB) filter employing folded stepped-impedance resonators with series capacitors and dumbbell-shaped defected ground structures. An interdigital quarter-wavelength coupled line is used to achieve the band-pass characteristics. The transmission zeros are produced by the stepped-impedance resonators. The filter has a steep roll-off rate and good attenuation in its lower and upper stop bands, contributed by the series capacitor and the defected ground structures, respectively.
Abstract:
The starting point of our reflections is a classroom situation in grade 12 in which it was to be proved intuitively that non-trivial solutions of the differential equation f' = f have no zeros. We give a working definition of the concept of preformal proving, as well as three examples of preformal proofs. Then we furnish several such proofs of the aforesaid fact, and we analyse these proofs in detail. Finally, we draw some conclusions for mathematics in school and in teacher training.
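For orientation, one possible preformal argument (an illustration added here, not necessarily one of the proofs given in the paper): if $f'=f$ and $f$ is not the zero function, set $g(x):=f(x)e^{-x}$; then $$g'(x)=\big(f'(x)-f(x)\big)e^{-x}=0,$$ so $g$ is constant. If $f(a)=0$ for some $a$, that constant equals $g(a)=0$, hence $f(x)=g(x)e^{x}\equiv 0$, contradicting non-triviality; therefore $f$ has no zeros.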
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually arise in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
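The "conditional model" idea above can be sketched as a two-part (hurdle-type) model: a logistic model for the structural-zero indicator given household covariates, followed by a log-ratio regression for the sub-composition of the spending households. The sketch below is a toy on simulated data (the covariate names and coefficients are invented), not the analysis described in the abstract.

```python
# A minimal sketch (not the paper's model) of a two-part conditional approach:
# (1) model whether a household has a structural zero in one component
#     (e.g. alcohol/tobacco) from household-level covariates, and
# (2) model the log-ratio sub-composition on households with non-zero spending.
# Variable names (income, n_adults, etc.) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 1224                                   # sample size quoted in the abstract

# Simulated household covariates
income = rng.lognormal(mean=3.0, sigma=0.5, size=n)
n_adults = rng.integers(1, 4, size=n)
X = np.column_stack([np.log(income), n_adults])

# Part 1: probability that alcohol/tobacco spending is (structurally) zero
p_zero = 1.0 / (1.0 + np.exp(1.5 * np.log(income) - 0.5 * n_adults - 3.0))
is_zero = rng.uniform(size=n) < p_zero
zero_model = LogisticRegression().fit(X, is_zero)

# Part 2: on non-zero households, model a 3-part sub-composition
# (food, housing, alcohol) via additive log-ratio coordinates.
nz = ~is_zero
comp = rng.dirichlet(alpha=[6.0, 3.0, 1.0], size=nz.sum())   # toy compositions
alr = np.log(comp[:, :2] / comp[:, 2:3])                     # alr w.r.t. alcohol
subcomp_model = LinearRegression().fit(X[nz], alr)

print("P(zero) for the first household:",
      zero_model.predict_proba(X[:1])[0, 1].round(3))
print("alr regression coefficients:\n", subcomp_model.coef_.round(3))
```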
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zero is usually understood as “a trace too small to measure”, it seems reasonable to replace it by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is “natural” in the sense that it recovers the “true” composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, a substitution method for missing values in compositional data sets is introduced in the same paper.
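To make the multiplicative replacement concrete, here is a minimal sketch (my own rendering of the method as it is usually stated, with an arbitrary replacement value δ; not code from the cited papers):

```python
# Sketch of multiplicative rounded-zero replacement in the spirit of
# Martín-Fernández et al. (2003): zeros become a small imputed value delta,
# and only the non-zero parts are rescaled, so sub-compositions containing
# no zeros keep their ratios and covariance structure.
import numpy as np

def multiplicative_replacement(x, delta, total=1.0):
    """Replace zeros in a composition x (summing to `total`) by `delta`
    (scalar or per-part array), rescaling the non-zero parts."""
    x = np.asarray(x, dtype=float)
    delta = np.broadcast_to(np.asarray(delta, dtype=float), x.shape)
    zero = (x == 0)
    return np.where(zero, delta, x * (1.0 - delta[zero].sum() / total))

# Example: a 4-part composition with one rounded zero
x = np.array([0.60, 0.25, 0.15, 0.0])
print(multiplicative_replacement(x, delta=0.005))
# The three non-zero parts are each shrunk by the same factor (1 - 0.005),
# so their ratios, and hence the sub-compositional structure, are unchanged.
```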
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these “zero data” represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a north azimuth; however, we can always replace that zero by the value 360°. These are known as “essential zeros”, but what can we do with “rounded zeros” that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper values and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be “rounded zeros”. So, we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the “rounded” zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the “rounded zeros”, but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
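A sketch of the Cu–Mo example might look as follows; the log transform, ordinary least squares and synthetic data are my assumptions, not the authors' exact procedure.

```python
# Regression-based imputation of rounded zeros, following the Cu-Mo example
# in the abstract: fit a regression on the lower quartile of the *detected*
# Mo values (the detected samples closest to the censored region), then
# predict each below-detection Mo value from its own Cu value.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic porphyry-style data: Cu always detected, Mo sometimes below detection
cu = rng.lognormal(mean=np.log(0.4), sigma=0.6, size=500)
mo_true = cu * rng.lognormal(mean=np.log(0.02), sigma=0.3, size=500)
detection_limit = 0.004
mo = np.where(mo_true < detection_limit, 0.0, mo_true)      # rounded zeros

detected = mo > 0
q1 = np.quantile(mo[detected], 0.25)
low = detected & (mo <= q1)

# Fit log(Mo) = a + b*log(Cu) on the lower-quartile subset
b, a = np.polyfit(np.log(cu[low]), np.log(mo[low]), deg=1)

# Impute each rounded zero from its corresponding Cu value, so the imputed
# values vary from sample to sample rather than being a single fixed constant.
mo_imputed = mo.copy()
mo_imputed[~detected] = np.exp(a + b * np.log(cu[~detected]))

print(f"{(~detected).sum()} rounded zeros imputed; "
      f"range {mo_imputed[~detected].min():.4f}-{mo_imputed[~detected].max():.4f}")
```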
Abstract:
This paper examines a dataset which is modeled well by the Poisson-log normal process, and by this process mixed with log normal data, both of which are turned into compositions. This generates compositional data that have zeros without any need for conditional models or for assuming that there is missing or censored data requiring adjustment. It also enables us to model dependence on covariates and dependence within the composition.
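A minimal sketch of the mechanism (parameter values invented, not taken from the paper): counts are Poisson with log-normally distributed rates, and closing the counts to unit sum gives a composition in which a zero simply means that no events were observed for that part.

```python
# Poisson-log normal counts closed to compositions: zeros arise naturally,
# with no censoring or conditional-model machinery required.
import numpy as np

rng = np.random.default_rng(2)
n, parts = 1000, 4

# Correlated log-normal rates, then Poisson counts
mu = np.array([2.0, 1.0, 0.0, -1.0])
cov = 0.5 * np.eye(parts) + 0.2
log_rates = rng.multivariate_normal(mu, cov, size=n)
counts = rng.poisson(np.exp(log_rates))            # shape (n, parts), many zeros

# Close to compositions (keep rows with at least one event)
keep = counts.sum(axis=1) > 0
comp = counts[keep] / counts[keep].sum(axis=1, keepdims=True)

print("proportion of zero entries per part:", (comp == 0).mean(axis=0).round(2))
```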
Abstract:
The prediction of extratropical cyclones by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) Ensemble Prediction Systems (EPS) has been investigated using an objective feature-tracking methodology to identify and track the cyclones along the forecast trajectories. Overall, the results show that the ECMWF EPS has a slightly higher level of skill than the NCEP EPS in the northern hemisphere (NH). However, in the southern hemisphere (SH), NCEP has higher predictive skill than ECMWF for the intensity of the cyclones. The results from both EPS indicate a higher level of predictive skill for the position of extratropical cyclones than for their intensity, and show that there is a larger spread in intensity than in position. Further analysis shows that the predicted propagation speed of cyclones is generally too slow for the ECMWF EPS, with a slight bias for the intensity of the cyclones to be overpredicted. This is also true for the NCEP EPS in the SH. For the NCEP EPS in the NH the intensity of the cyclones is underpredicted. There is a small bias in both EPS for the cyclones to be displaced towards the poles. For each ensemble forecast of each cyclone, the predictive skill of the ensemble member that best predicts the cyclone's position and intensity was computed. The results are very encouraging, showing that the predictive skill of the best ensemble member is significantly higher than that of the control forecast in terms of both the position and intensity of the cyclones. The prediction of cyclones before they are identified as 850 hPa vorticity centers in the analysis cycle was also considered. It is shown that an indication of extratropical cyclones can be given by at least one ensemble member 7 days before they are identified in the analysis. Further analysis of the ECMWF EPS shows that the ensemble mean has a higher level of skill than the control forecast, particularly for the intensity of the cyclones, from day 3 of the forecast. There is a higher level of skill in the NH than in the SH, and the spread in the SH is correspondingly larger. The difference between the ensemble mean error and the spread is very small for the position of the cyclones, but the spread of the ensemble is smaller than the ensemble mean error for the intensity of the cyclones in both hemispheres. Results also show that the ECMWF control forecast has ½ to 1 day more skill than the perturbed members, for both the position and intensity of the cyclones, throughout the forecast.
Abstract:
General circulation models (GCMs) use the laws of physics and an understanding of past geography to simulate climatic responses. They are objective in character. However, they tend to require powerful computers to handle vast numbers of calculations. Nevertheless, it is now possible to compare results from different GCMs for a range of times and over a wide range of parameterisations for the past, present and future (e.g. in terms of predictions of surface air temperature, surface moisture, precipitation, etc.). GCMs are currently producing simulated climate predictions for the Mesozoic, which compare favourably with the distributions of climatically sensitive facies (e.g. coals, evaporites and palaeosols). They can be used effectively in the prediction of oceanic upwelling sites and the distribution of petroleum source rocks and phosphorites. Models also produce evaluations of other parameters that do not leave a geological record (e.g. cloud cover, snow cover) and equivocal phenomena such as storminess. Parameterisation of sub-grid scale processes is the main weakness in GCMs (e.g. land surfaces, convection, cloud behaviour) and model output for continental interiors is still too cold in winter by comparison with palaeontological data. The sedimentary and palaeontological record provides an important way that GCMs may themselves be evaluated and this is important because the same GCMs are being used currently to predict possible changes in future climate. The Mesozoic Earth was, by comparison with the present, an alien world, as we illustrate here by reference to late Triassic, late Jurassic and late Cretaceous simulations. Dense forests grew close to both poles but experienced months-long daylight in warm summers and months-long darkness in cold snowy winters. Ocean depths were warm (8 degrees C or more to the ocean floor) and reefs, with corals, grew 10 degrees of latitude further north and south than at the present time. The whole Earth was warmer than now by 6 degrees C or more, giving more atmospheric humidity and a greatly enhanced hydrological cycle. Much of the rainfall was predominantly convective in character, often focused over the oceans and leaving major desert expanses on the continental areas. Polar ice sheets are unlikely to have been present because of the high summer temperatures achieved. The model indicates extensive sea ice in the nearly enclosed Arctic seaway through a large portion of the year during the late Cretaceous, and the possibility of sea ice in adjacent parts of the Midwest Seaway over North America. The Triassic world was a predominantly warm world, the model output for evaporation and precipitation conforming well with the known distributions of evaporites, calcretes and other climatically sensitive facies for that time. The message from the geological record is clear. Through the Phanerozoic, Earth's climate has changed significantly, both on a variety of time scales and over a range of climatic states, usually baldly referred to as "greenhouse" and "icehouse", although these terms disguise more subtle states between these extremes. Any notion that the climate can remain constant for the convenience of one species of anthropoid is a delusion (although the recent rate of climatic change is exceptional). (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
This paper uses spatial economic data from four small English towns to measure the strength of economic integration between town and hinterland and to estimate the magnitude of town-hinterland spill-over effects. Following estimation of local integration indicators and inter-locale flows, sub-regional social accounting matrices (SAMs) are developed to estimate the strength of local employment and output multipliers for various economic sectors. The potential value of a town as a 'sub-pole' in local economic development is shown to be dependent on structural differences in the local economy, such as the particular mix of firms within towns. Although the multipliers are generally small, indicating a low level of local linkages, some sectors, particularly financial services and banking, show consistently higher multipliers for both output and employment. (c) 2007 Elsevier Ltd. All rights reserved.
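By way of orientation only (this is not the paper's SAM, and the numbers below are invented), output multipliers of the kind referred to above are typically obtained from a Leontief-style inverse of the matrix of expenditure shares among the endogenous accounts of a SAM:

```python
# Toy illustration of reading output multipliers off a social accounting
# matrix (SAM): with A the matrix of expenditure shares among the endogenous
# accounts, the multiplier matrix is (I - A)^(-1), and its column sums give
# the total output multiplier of an exogenous injection into each sector.
import numpy as np

sectors = ["agriculture", "financial services", "retail"]

# Endogenous transactions (rows = receipts, columns = expenditures), invented
T = np.array([[10.0,  5.0,  8.0],
              [ 4.0, 20.0, 12.0],
              [ 6.0, 15.0, 25.0]])
total_expenditure = np.array([50.0, 90.0, 100.0])   # column totals incl. leakages

A = T / total_expenditure                 # expenditure-share coefficients
M = np.linalg.inv(np.eye(3) - A)          # SAM multiplier matrix

for name, mult in zip(sectors, M.sum(axis=0)):
    print(f"output multiplier, injection into {name}: {mult:.2f}")
```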
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
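As a rough sketch of how the two ingredients might look in code: the linear latitude dependence and its end-point values come from the abstract, while the overlap-parameter form α = exp(-Δz/z0) is the usual exponential-random formulation and the rest is illustrative.

```python
# Sketch of the two parametrized quantities described above: a decorrelation
# scale falling linearly with latitude from 2.9 km at the Equator to 0.4 km
# at the poles, and the exponential-random overlap parameter alpha between
# cloudy layers separated by dz, alpha = exp(-dz / z0). alpha = 1 reproduces
# maximum overlap, alpha = 0 random overlap. FSD = 0.75 is the globally
# applicable fractional standard deviation of water content quoted above.
import numpy as np

FSD_WATER_CONTENT = 0.75   # +/- 0.18, for the Tripleclouds two-region split

def decorrelation_scale_km(lat_deg):
    """Decorrelation scale (km) varying linearly with |latitude|."""
    frac = np.abs(lat_deg) / 90.0
    return 2.9 + (0.4 - 2.9) * frac

def overlap_parameter(dz_km, lat_deg):
    """Exponential-random overlap parameter for layer separation dz (km)."""
    return np.exp(-dz_km / decorrelation_scale_km(lat_deg))

# Example: overlap of cloudy layers 1 km apart at the Equator vs. 60 degrees
for lat in (0.0, 60.0):
    print(f"lat {lat:4.1f}: z0 = {decorrelation_scale_km(lat):.2f} km, "
          f"alpha(dz=1 km) = {overlap_parameter(1.0, lat):.2f}")
```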
Abstract:
This paper considers PID control in terms of its implementation by means of an ARMA plant model. Two controller actions are considered, namely pole placement and deadbeat, both being applied via a PID structure for the adaptive real-time control of an industrial level system. As well as looking at the two controller types separately, a comparison is made between the forms, and it is shown how, under certain circumstances, the two forms can be seen to be identical. It is also shown that the pole-placement PID form does not in fact realise an action equivalent to the deadbeat controller when all closed-loop poles are chosen to be at the origin of the z-plane.
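The setting can be imitated on a toy scale. Below is an entirely illustrative sketch, assuming a first-order ARMA plant and an incremental PI structure rather than the paper's full PID form and industrial level system: the closed-loop poles are placed by matching characteristic polynomials, and choosing both poles at the origin gives a finite-settling (deadbeat-like) response in this reduced case; the paper's point is that such an equivalence need not carry over to the full pole-placement PID form.

```python
# Illustrative pole-placement design for a first-order ARMA plant
#   y(k) = a*y(k-1) + b*u(k-1)
# with an incremental PI controller
#   u(k) = u(k-1) + K1*e(k) + K2*e(k-1),   e = r - y.
# Closed-loop characteristic polynomial (in z^-1):
#   1 + (b*K1 - a - 1) z^-1 + (a + b*K2) z^-2
# Matching it to (1 - p1 z^-1)(1 - p2 z^-1) gives the gains below.
# This is a toy sketch, not the controller structure used in the paper.
import numpy as np

a, b = 0.9, 0.5              # assumed plant parameters

def pi_gains(p1, p2):
    """Gains placing the closed-loop poles at p1, p2."""
    d1, d2 = -(p1 + p2), p1 * p2          # desired polynomial coefficients
    return (d1 + a + 1.0) / b, (d2 - a) / b

def simulate(K1, K2, steps=12, r=1.0):
    """Unit step response of the closed loop."""
    y = u = e_prev = 0.0
    out = []
    for _ in range(steps):
        e = r - y
        u = u + K1 * e + K2 * e_prev      # incremental PI law
        e_prev = e
        y = a * y + b * u                 # plant update
        out.append(y)
    return np.array(out)

print("poles at 0.5, 0.3:", simulate(*pi_gains(0.5, 0.3)).round(3))
print("poles at the origin (deadbeat-like here):",
      simulate(*pi_gains(0.0, 0.0)).round(3))
```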
Abstract:
The presence of mismatch between controller and system is considered. A novel discrete-time approach is used to investigate the migration of closed-loop poles when this mismatch occurs. Two forms of state estimator are employed giving rise to several interesting features regarding stability and performance.
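To make the idea of pole migration concrete, here is a toy continuation of the sketch above (again an assumption, not the paper's estimator-based discrete-time analysis): the gains are designed for a nominal plant, and the closed-loop poles are then recomputed as the true plant parameter drifts away from the nominal value.

```python
# Toy illustration of closed-loop pole migration under plant/controller
# mismatch, reusing the first-order ARMA plant and incremental PI controller
# from the previous sketch.
import numpy as np

b = 0.5
a_nominal = 0.9

# Gains placing both closed-loop poles at 0.4 for the *nominal* plant
p = 0.4
d1, d2 = -2 * p, p * p
K1 = (d1 + a_nominal + 1.0) / b
K2 = (d2 - a_nominal) / b

# Closed-loop characteristic polynomial when the true 'a' differs:
#   z^2 + (b*K1 - a - 1) z + (a + b*K2)
for a_true in (0.7, 0.8, 0.9, 1.0, 1.1):
    poles = np.roots([1.0, b * K1 - a_true - 1.0, a_true + b * K2])
    print(f"a = {a_true:.1f}: closed-loop poles at {np.round(poles, 3)}")
```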