946 results for "rotated to zero"


Relevance: 30.00%

Abstract:

One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the available unit is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
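The two-stage construction described above can be illustrated with a small simulation. This is not the authors' exact models, only a sketch of the idea: a Bernoulli incidence stage decides which parts are essential zeros, and an additive-logistic-normal stage distributes the unit over the parts that are present. All parameter values are illustrative:

```python
import numpy as np

def simulate_zero_composition(p_present, mu, sigma, rng):
    """Stage 1: Bernoulli incidence decides which parts are present.
    Stage 2: an additive-logistic-normal draw distributes the unit
    over the present parts; absent parts stay at an essential zero."""
    D = len(p_present)
    present = rng.random(D) < p_present
    if not present.any():                   # keep at least one part present
        present[rng.integers(D)] = True
    k = int(present.sum())
    z = rng.normal(mu, sigma, size=k - 1)   # logratios vs. a reference part
    w = np.exp(np.append(z, 0.0))           # reference part gets logratio 0
    comp = np.zeros(D)
    comp[present] = w / w.sum()
    return present, comp

rng = np.random.default_rng(0)
present, comp = simulate_zero_composition(
    np.array([0.9, 0.7, 0.5, 0.3]), mu=0.0, sigma=1.0, rng=rng)
```

Repeated draws produce exactly the data structure the paper describes: an incidence matrix (stacked `present` rows) alongside a conditional compositional matrix (stacked `comp` rows).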

Relevance: 30.00%

Abstract:

The application of compositional data analysis through log ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio (in this case the ratio of shares) is invariant to the addition or deletion of outcomes. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
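The IIA property is easy to verify numerically for the multinomial logit: shares are a softmax of utilities, so the ratio of any two shares depends only on their own utilities and is unchanged when a third outcome is deleted. A minimal sketch with illustrative utilities:

```python
import numpy as np

def shares(utilities):
    """Multinomial-logit shares: a softmax of the utilities."""
    e = np.exp(utilities - np.max(utilities))   # stabilised softmax
    return e / e.sum()

u = np.array([1.0, 0.2, -0.5])
s3 = shares(u)        # shares over three outcomes
s2 = shares(u[:2])    # outcome 3 deleted from the problem

# IIA: the ratio share_1/share_2 equals exp(u1 - u2) in both cases,
# so deleting outcome 3 leaves it unchanged
ratio_with, ratio_without = s3[0] / s3[1], s2[0] / s2[1]
```

A nested logit, by contrast, groups outcomes and lets within-nest shares respond to the deletion of a nest member, which is precisely why it does not embody IIA.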

Relevance: 30.00%

Abstract:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zeros is usually understood as a trace too small to measure, it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g. by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts, and thus the metric properties, should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is natural in the sense that it recovers the true composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved.
As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values in compositional data sets is introduced.
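The multiplicative replacement described above admits a compact sketch: each rounded zero is replaced by a small value δ and the non-zero parts are rescaled multiplicatively so the composition still sums to one, which leaves the ratios among non-zero parts (and hence the subcompositional covariance structure) untouched. A minimal illustration, assuming a unit-sum composition and a common δ for all zeros:

```python
import numpy as np

def multiplicative_replacement(x, delta):
    """Replace zeros in a unit-sum composition x by delta and rescale
    the non-zero parts multiplicatively so the result still sums to 1.
    Ratios among the non-zero parts are preserved."""
    x = np.asarray(x, dtype=float)
    delta = np.broadcast_to(delta, x.shape)
    zero = x == 0
    # non-zero parts shrink by the total mass handed to the zeros
    return np.where(zero, delta, x * (1.0 - delta[zero].sum()))

x = np.array([0.0, 0.1, 0.6, 0.3])
r = multiplicative_replacement(x, 0.005)
```

Because only a common multiplicative factor touches the non-zero parts, any subcomposition formed from them has the same logratio covariance before and after replacement, which is the coherence property the abstract emphasises.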

Relevance: 30.00%

Abstract:

There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these zero data represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero to the value 360. These are known as essential zeros, but what can we do with rounded zeros, which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the limit of detection of the equipment used will generate spurious distributions, especially in ternary diagrams. The same situation occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper values and molybdenum ones, but while copper will always be above the limit of detection, many of the molybdenum values will be rounded zeros. So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the rounded-zero values of molybdenum from their corresponding copper values.
The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the rounded zeros, but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency.
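The regression step described above can be sketched numerically. The data, the quartile rule and the linear (rather than log-log) fit are illustrative assumptions; the point is only that each rounded zero receives its own estimate driven by the correlated element:

```python
import numpy as np

# Illustrative Cu (ppm) and Mo (ppm) values; 0.0 marks a rounded zero
cu = np.array([1200.0, 950.0, 800.0, 640.0, 500.0, 430.0, 300.0, 250.0])
mo = np.array([  40.0,  31.0,  26.0,  20.0,  15.0,   0.0,   0.0,   0.0])

observed = mo > 0
# Lower quartile of the real Mo values: the observations nearest detection
q1 = np.quantile(mo[observed], 0.25)
fit_mask = observed & (mo <= q1)
if fit_mask.sum() < 2:          # guard: a line needs two points
    fit_mask = observed
slope, intercept = np.polyfit(cu[fit_mask], mo[fit_mask], 1)

# Each rounded zero gets its own estimate from its Cu value
mo_imputed = mo.copy()
mo_imputed[~observed] = slope * cu[~observed] + intercept
```

Unlike a fixed small-value replacement, the three imputed Mo values here all differ, each one following its sample's copper content.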

Relevance: 30.00%

Abstract:

This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending-end and a sequence model of the network. Since calculating the series impedance of underground cables is not as simple as in the case of overhead lines, the paper proposes a methodology to estimate the zero-sequence impedance of underground cables starting from previous single-phase faults that occurred in the system, in which an electric arc occurred at the fault location. For this reason, the signal is pretreated beforehand to eliminate its voltage peaks, so that the analysis can work with a signal as close to a sine wave as possible.

Relevance: 30.00%

Abstract:

Restricted Hartree-Fock 6-31G calculations of electrical and mechanical anharmonicity contributions to the longitudinal vibrational second hyperpolarizability have been carried out for eight homologous series of conjugated oligomers - polyacetylene, polyyne, polydiacetylene, polybutatriene, polycumulene, polysilane, polymethineimine, and polypyrrole. To draw conclusions about the limiting infinite polymer behavior, chains containing up to 12 heavy atoms along the conjugated backbone were considered. In general, the vibrational hyperpolarizabilities are substantial in comparison with their static electronic counterparts for the dc-Kerr and degenerate four-wave mixing processes (as well as for static fields) but not for electric field-induced second harmonic generation or third harmonic generation. Anharmonicity terms due to nuclear relaxation are important for the dc-Kerr effect (and for the static hyperpolarizability) in the σ-conjugated polymer polysilane, as well as in the nonplanar systems polymethineimine and polypyrrole. Restricting polypyrrole to be planar, as it is in the crystal phase, causes these anharmonic terms to become negligible. When the same restriction is applied to polymethineimine the effect is reduced but remains quantitatively significant due to the first-order contribution. We conclude that anharmonicity associated with nuclear relaxation can be ignored, for semiquantitative purposes, in planar π-conjugated polymers. The role of zero-point vibrational averaging remains to be evaluated.

Relevance: 30.00%

Abstract:

To obtain a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the methane C-H bond to the palladium atom, we have explored this PES using a hierarchical series of ab initio methods (Hartree-Fock, second-order Møller-Plesset perturbation theory, fourth-order Møller-Plesset perturbation theory with single, double and quadruple excitations, coupled cluster theory with single and double excitations (CCSD), and with triple excitations treated perturbatively [CCSD(T)]) and hybrid density functional theory using the B3LYP functional, in combination with a hierarchical series of ten Gaussian-type basis sets, up to g polarization. Relativistic effects are taken into account either through a relativistic effective core potential for palladium or through a full four-component all-electron approach. Counterpoise corrected relative energies of stationary points are converged to within 0.1-0.2 kcal/mol as a function of the basis-set size. Our best estimate of kinetic and thermodynamic parameters is -8.1 (-8.3) kcal/mol for the formation of the reactant complex, 5.8 (3.1) kcal/mol for the activation energy relative to the separate reactants, and 0.8 (-1.2) kcal/mol for the reaction energy (zero-point vibrational energy-corrected values in parentheses). This agrees well with available experimental data. Our work highlights the importance of sufficient higher angular momentum polarization functions, f and g, for correctly describing metal-d-electron correlation and, thus, for obtaining reliable relative energies. We show that standard basis sets, such as LANL2DZ+1f for palladium, are not sufficiently polarized for this purpose and lead to erroneous CCSD(T) results. B3LYP is associated with smaller basis set superposition errors and shows faster convergence with basis-set size but yields relative energies (in particular, a reaction barrier) that are ca. 3.5 kcal/mol higher than the corresponding CCSD(T) values.

Relevance: 30.00%

Abstract:

The response of a uniform horizontal temperature gradient to prescribed fixed heating is calculated in the context of an extended version of surface quasigeostrophic dynamics. It is found that for zero mean surface flow and weak cross-gradient structure the prescribed heating induces a mean temperature anomaly proportional to the spatial Hilbert transform of the heating. The interior potential vorticity generated by the heating enhances this surface response. The time-varying part is independent of the heating and satisfies the usual linearized surface quasigeostrophic dynamics. It is shown that the surface temperature tendency is a spatial Hilbert transform of the temperature anomaly itself. It then follows that the temperature anomaly is periodically modulated with a frequency proportional to the vertical wind shear. A strong local bound on wave energy is also found. Reanalysis diagnostics are presented that indicate consistency with key findings from this theory.
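The deduction that a tendency equal to a spatial Hilbert transform of the anomaly produces periodic modulation can be checked numerically: in Fourier space the Hilbert transform multiplies each mode by -i·sgn(k), so every mode rotates at the same rate ω and the field returns to itself after one period. A minimal sketch (ω stands in for the shear-set frequency; all values illustrative):

```python
import numpy as np

n, omega = 256, 1.7                     # omega: frequency set by the vertical shear
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
theta0 = np.cos(3 * x) + 0.5 * np.sin(7 * x)   # a zero-mean temperature anomaly

def evolve(theta, t):
    """Integrate d(theta)/dt = omega * H[theta] exactly in Fourier space:
    the Hilbert transform H multiplies mode k by -i*sign(k), so each
    mode just rotates with phase exp(-i*sign(k)*omega*t)."""
    k = np.fft.fftfreq(n, d=1.0 / n)    # integer wavenumbers
    phase = np.exp(-1j * np.sign(k) * omega * t)
    return np.real(np.fft.ifft(phase * np.fft.fft(theta)))

period = 2 * np.pi / omega
theta_full = evolve(theta0, period)      # returns to the initial anomaly
theta_half = evolve(theta0, period / 2)  # sign-flipped anomaly
```

The half-period sign flip and full-period recurrence are exactly the periodic modulation of the temperature anomaly described in the abstract.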

Relevance: 30.00%

Abstract:

Under global warming, the predicted intensification of the global freshwater cycle will modify the net freshwater flux at the ocean surface. Since the freshwater flux maintains ocean salinity structures, changes to the density-driven ocean circulation are likely. A modified ocean circulation could further alter the climate, potentially allowing rapid changes, as seen in the past. The relevant feedback mechanisms and timescales are poorly understood in detail, however, especially at low latitudes where the effects of salinity are relatively subtle. In an attempt to resolve some of these outstanding issues, we present an investigation of the climate response of the low-latitude Pacific region to changes in freshwater forcing. Initiated from the present-day thermohaline structure, a control run of a coupled ocean-atmosphere general circulation model is compared with a perturbation run in which the net freshwater flux is prescribed to be zero over the ocean. Such an extreme experiment helps to elucidate the general adjustment mechanisms and their timescales. The atmospheric greenhouse gas concentrations are held constant, and we restrict our attention to the adjustment of the upper 1,000 m of the Pacific Ocean between 40°N and 40°S, over 100 years. In the perturbation run, changes to the surface buoyancy, near-surface vertical mixing and mixed-layer depth are established within 1 year. Subsequently, relative to the control run, the surface of the low-latitude Pacific Ocean in the perturbation run warms by an average of 0.6°C, and the interior cools by up to 1.1°C, after a few decades. This vertical re-arrangement of the ocean heat content is shown to be achieved by a gradual shutdown of the heat flux due to isopycnal (i.e. along surfaces of constant density) mixing, the vertical component of which is downwards at low latitudes. This heat transfer depends crucially upon the existence of density-compensating temperature and salinity gradients on isopycnal surfaces.
The timescale of the thermal changes in the perturbation run is therefore set by the timescale for the decay of isopycnal salinity gradients in response to the eliminated freshwater forcing, which we demonstrate to be around 10-20 years. Such isopycnal heat flux changes may play a role in the response of the low-latitude climate to a future accelerated freshwater cycle. Specifically, the mechanism appears to represent a weak negative sea surface temperature feedback, which we speculate might partially shield from view the anthropogenically-forced global warming signal at low latitudes. Furthermore, since the surface freshwater flux is shown to play a role in determining the ocean's thermal structure, it follows that evaporation and/or precipitation biases in general circulation models are likely to cause sea surface temperature biases.

Relevance: 30.00%

Abstract:

This contribution describes the optimization of chlorine extraction from silicate samples by pyrohydrolysis prior to the precise determination of Cl stable-isotope compositions (δ37Cl) by gas-source, dual-inlet Isotope Ratio Mass Spectrometry (IRMS) on CH3Cl(g). The complete method was checked on three international reference materials for Cl content and two laboratory glass standards. Whole-procedure blanks are lower than 0.5 μmol, corresponding to less than 10 wt.% of most of the sample chloride analysed. In the absence of an international chlorine-isotope rock standard, we report here extracted Cl compared to accepted Cl contents, and reproducibilities on Cl and δ37Cl measurements for the standard rocks. After extraction, the Cl contents of the three international references compared within error with the accepted values (mean yield = 94 ± 10%), with reproducibilities better than 12% (1σ). The laboratory glass standards - andesite SO100DS92 and phonolite S9(2) - were used specifically to test the effect of chloride amount on the measurements. They gave Cl extraction yields of 100 ± 6% (1σ; n = 15) and 105 ± 8% (1σ; n = 7), respectively, with δ37Cl values of -0.51 ± 0.14‰ and -0.39 ± 0.17‰ (1σ). In summary, for silicate samples with Cl contents between 39 and 9042 ppm, the pyrohydrolysis/HPLC method leads to overall Cl extraction yields of 100 ± 8%, reproducibilities on Cl contents of 7% and on δ37Cl measurements of 0.12‰ (all 1σ). The method was further applied to ten silicate rocks of various mineralogy and chemistry (meteorite, fresh MORB glasses, altered basalts and serpentinized peridotites) chosen for their large range of Cl contents (70-2156 ppm) and their geological significance. δ37Cl values range between -2.33 and -0.50‰. These strictly negative values contrast with the large range and mainly positive values previously reported for comparable silicate samples, which are shown here to be affected by analytical problems.
Thus we propose a preliminary, revised terrestrial Cl cycle, mainly dominated by negative and zero δ37Cl values. (C) 2007 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Testing of the Integrated Nitrogen model for Catchments (INCA) in a wide range of ecosystem types across Europe has shown that the model underestimates N transformation processes to a large extent in northern catchments of Finland and Norway in winter and spring. It is found, and generally assumed, that microbial activity in soils proceeds at low rates at northern latitudes during winter, even at sub-zero temperatures. The INCA model was modified to improve the simulation of N transformation rates in northern catchments, characterised by cold climates and extensive snow accumulation and insulation in winter, by introducing an empirical function to simulate soil temperatures below the seasonal snow pack, and a degree-day model to calculate the depth of the snow pack. The proposed snow-correction factor improved the simulation of soil temperatures at Finnish and Norwegian field sites in winter, although soil temperature was still underestimated during periods with a thin snow cover. Finally, a comparison between the modified INCA version (v. 1.7) and the former version (v. 1.6) was made at the Simojoki river basin in northern Finland and at Dalelva Brook in northern Norway. The new modules did not imply any significant changes in simulated NO3- concentration levels in the streams but improved the timing of simulated higher concentrations. The inclusion of a modified temperature response function and an empirical snow-correction factor improved the flexibility and applicability of the model for climate effect studies.
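The two added components can be sketched as follows. This is a generic degree-day snow model and a generic exponential damping of air temperature under the pack, with illustrative parameter values, not the actual INCA v1.7 formulations:

```python
import math

def degree_day_snow(air_temp, precip, melt_factor=3.0):
    """Degree-day snow pack (snow water equivalent, mm): precipitation
    accumulates as snow when T <= 0 degC; melt is melt_factor * T
    (mm per degC per day) when T > 0."""
    swe, out = 0.0, []
    for t, p in zip(air_temp, precip):
        if t <= 0.0:
            swe += p
        else:
            swe = max(0.0, swe - melt_factor * t)
        out.append(swe)
    return out

def soil_temp_under_snow(air_temp, swe, damping=0.1):
    """Empirical insulation: the deeper the pack, the less the soil
    feels the air temperature (exponential damping towards 0 degC)."""
    return [t * math.exp(-damping * s) for t, s in zip(air_temp, swe)]

air = [-8.0, -5.0, -2.0, 1.0, 4.0, 6.0]   # daily mean air temperature, degC
prc = [ 5.0, 10.0,  3.0, 2.0, 0.0, 0.0]   # daily precipitation, mm
swe = degree_day_snow(air, prc)
soil = soil_temp_under_snow(air, swe)
```

The damping term captures the effect the abstract describes: under a deep pack the simulated soil temperature stays near 0 °C even when air temperatures are well below zero, so winter N transformation rates are no longer suppressed to the air-temperature floor.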

Relevance: 30.00%

Abstract:

We examine the stability of lamellar stacks in the presence of an electric field, E-0, applied normal to the lamellae. Calculations are performed with self-consistent field theory (SCFT) supplemented by an exact treatment of the electrostatic energy for linear dielectric materials. The calculations identify a critical electric field, E-0*, beyond which the lamellar stack becomes unstable with respect to undulations. This E-0* rapidly decreases towards zero as the number of lamellae in the stack diverges. Our quantitative predictions for E-0* are consistent with previous experimental measurements by Xu and co-workers.

Relevance: 30.00%

Abstract:

Empirical orthogonal functions (EOFs) are widely used in climate research to identify dominant patterns of variability and to reduce the dimensionality of climate data. EOFs, however, can be difficult to interpret. Rotated empirical orthogonal functions (REOFs) have been proposed as more physical entities with simpler patterns than EOFs. This study presents a new approach for finding climate patterns with simple structures that overcomes the problems encountered with rotation. The method achieves simplicity of the patterns by using the main properties of EOFs and REOFs simultaneously. Orthogonal patterns that maximise variance subject to a constraint that induces a form of simplicity are found. The simplified empirical orthogonal function (SEOF) patterns, being more 'local', are constrained to have zero loadings outside the main centre of action. The method is applied to winter Northern Hemisphere (NH) monthly mean sea level pressure (SLP) reanalyses over the period 1948-2000. The 'simplified' leading patterns of variability are identified and compared to the leading patterns obtained from EOFs and REOFs. Copyright (C) 2005 Royal Meteorological Society.

Relevance: 30.00%

Abstract:

Rationalizing non-participation as a resource deficiency in the household, this paper identifies strategies for milk-market development in the Ethiopian highlands. The additional amounts of covariates required for positive marketable surplus ('distances-to-market') are computed from a model in which production and sales are correlated; sales are left-censored at some unobserved thresholds; production efficiencies are heterogeneous; and the data are in the form of a panel. Incorporating these features into the modeling exercise is important because they are fundamental to the data-generating environment. There are four reasons. First, because production and sales decisions are enacted within the same household, both decisions are affected by the same exogenous shocks, and production and sales are therefore likely to be correlated. Second, because selling involves time and time is arguably the most important resource available to a subsistence household, the minimum sales amount is not zero but, rather, some unobserved threshold that lies beyond zero. Third, the potential existence of heterogeneous abilities in management, ones that lie latent from the econometrician's perspective, suggests that production efficiencies should be permitted to vary across households. Fourth, we observe a single set of households during multiple visits in a single production year. The results convey clearly that institutional (and production) innovations alone are insufficient to encourage participation. Market-precipitating innovation requires complementary inputs, especially improvements in human capital and reductions in risk. Copyright (c) 2008 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

A cross-sectional study was conducted in the Tanga and Iringa regions of Tanzania, and a longitudinal study in Tanga, to investigate tick-control methods and other factors influencing tick attachment to the cattle of smallholder dairy farms. Most farmers reported applying acaricides at intervals of 1-2 weeks, most used acaricides that require on-farm dilution, and most farmers incorrectly diluted the acaricides. Rhipicephalus appendiculatus and Boophilus spp. ticks were the most frequently encountered on the cattle, but few cattle carried ticks of any species (only 13% and 4.6% of tick counts of the cattle yielded adult R. appendiculatus and Boophilus spp., respectively). Animals were more likely to carry one or more adult Boophilus spp. ticks if they also carried one or more R. appendiculatus adults (OR = 14.4, CI = 9.2, 22.5). The use of pour-on acaricides was associated with lower odds that animals carried a R. appendiculatus tick (OR = 0.29, CI = 0.18, 0.49) but higher odds that they carried a Boophilus spp. tick (OR = 2.48, CI = 1.55, 3.97). Animals > 4 months old and those with a recent history of grazing had higher odds of carrying either a R. appendiculatus (ORs = 3.41 and 2.58, CIs = 2.34, 4.98 and 1.80, 3.71) or a Boophilus spp. tick (ORs = 5.70 and 2.18, CIs = 2.34, 4.98 and 1.49, 3.25), but zero-grazing management did not prevent ticks attaching to cattle even when combined with high-frequency acaricide treatments. The odds that animals carried ticks varied amongst the agro-ecological zones (AEZs) and administrative districts where the farms were situated, but there was still considerable residual variation in tick infestation at the farm level. (c) 2004 Elsevier B.V. All rights reserved.