965 results for "Maximum displacement"


Relevance: 20.00%

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components, starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
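The accumulation step described above is simple enough to sketch. The following is not the paper's Fortran or REML code; it only illustrates how variance components from a hierarchical ANOVA, one per stage of a nested design, sum from the shortest lag upwards to give a rough variogram. The stage lags and component values are invented.

```python
def accumulated_variogram(lags, components):
    """Semivariance at each lag = running sum of the variance
    components for all stages up to and including that lag
    (shortest separating distance first)."""
    gamma = []
    total = 0.0
    for lag, comp in sorted(zip(lags, components)):
        total += comp
        gamma.append((lag, total))
    return gamma

# Hypothetical four-stage design, distances in geometric progression (x3):
lags = [10.0, 30.0, 90.0, 270.0]     # metres
components = [0.8, 0.5, 0.3, 0.2]    # variance components per stage
rough = accumulated_variogram(lags, components)
```

Each entry of `rough` is a (lag, semivariance) point; the final value approximates the sill, since it accumulates all the components.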

Relevance: 20.00%

Abstract:

In order to gain understanding of the movement of pollutant metals in soil, the chemical mechanisms involved in the transport of zinc were studied. The displacement of zinc through mixtures of sand and cation exchange resin was measured to validate the methods used for soil. With cation exchange capacities of 2.5 and 5.0 cmol(c) kg(-1), 5.6 and 8.4 pore volumes of 10 mM CaCl2, respectively, were required to displace a pulse of ZnCl2. A simple Burns-type model (Wineglass) using an adsorption coefficient (Kd) determined by fitting a straight-line relationship to an adsorption isotherm gave a good fit to the data (Kd = 0.73 and 1.29 ml g(-1), respectively). Surface and subsurface samples of an acidic sandy loam (organic matter 4.7 and 1.0%, cation exchange capacity (CEC) 11.8 and 6.1 cmol(c) kg(-1), respectively) were leached with 10 mM calcium chloride, nitrate and perchlorate. With chloride, the zinc pulse was displaced after 25 and 5 pore volumes, respectively. The Kd values were 6.1 and 2.0 ml g(-1), but are based on linear relationships fitted to isotherms which are both curved and show hysteresis. Thus, a simple model has limited value, although it does give a general indication of the rate of displacement. Leaching with chloride and perchlorate gave similar displacement and Kd values, but slower movement occurred with nitrate in both soil samples (35 and 7 pore volumes, respectively), which reflected higher Kd values when the isotherms were measured using this anion (7.7 and 2.8 ml g(-1), respectively). Although pH values were a little higher with nitrate in the leachates, the differences were insufficient to suggest that this increased the CEC enough to cause the delay. No increases in pH occurred with nitrate in the isotherm experiments. Geochem was used to calculate the proportions of Zn complexed with the three anions and with fulvic acid determined from measurements of dissolved organic matter. In all cases, more than 91% of the Zn was present as Zn2+ and there were only minor differences between the anions. Thus, there is an unexplained factor associated with the greater adsorption of Zn in the presence of nitrate. Because as little as five pore volumes of solution displaced Zn through the subsurface soil, contamination of ground waters may be a hazard where Zn is entering a light-textured soil, particularly where soil salinity is increased. Reductions in organic matter content due to cultivation will increase the hazard. (C) 2004 Elsevier B.V. All rights reserved.
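The link between Kd and pore volumes of displacement can be sketched with the standard linear-sorption retardation relation, R = 1 + (rho_b/theta)*Kd, where a solute pulse is displaced after roughly R pore volumes. This is a generic illustration, not the paper's Burns-type (Wineglass) model; the bulk density and porosity below are assumed values, not reported in the abstract.

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_cm3, porosity):
    """Linear-sorption retardation: R = 1 + (rho_b / theta) * Kd.
    A sorbing pulse breaks through after roughly R pore volumes,
    versus 1 pore volume for a non-sorbing tracer."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_per_g

# Kd values reported for the sand/resin mixtures; rho_b and theta assumed:
r_low = retardation_factor(0.73, bulk_density_g_cm3=1.6, porosity=0.4)
r_high = retardation_factor(1.29, bulk_density_g_cm3=1.6, porosity=0.4)
```

With these assumed packing properties the two Kd values give retardation factors of about 3.9 and 6.2 pore volumes, the same order as the 5.6 and 8.4 observed; the exact agreement depends on the true bulk density and porosity of the columns.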

Relevance: 20.00%

Abstract:

Samples of glacial till deposited since the Little Ice Age (LIA) maximum by two glaciers, North Bogbre at Svartisen and Corneliussen-breen at Okstindan, northern Norway, were obtained from transects running from the current glacier snout to the LIA (c. AD 1750) limit. The samples were analysed to determine their sediment magnetic properties, which display considerable variability. Significant trends in some magnetic parameters are evident with distance from the glacier margin and hence length of subaerial exposure. Magnetic susceptibility (χ) decreases away from the contemporary snout, perhaps due to the weathering of ferrimagnetic minerals into antiferromagnetic forms, although this trend is generally not statistically significant. Statistically significant trends in the ratio of soft IRM to hard IRM support this hypothesis, suggesting that antiferromagnetic minerals increase relative to ferrimagnetic minerals towards the LIA maximum. Backfield ratios (IRM at -100 mT/SIRM) also display a significant and strong trend towards magnetically harder behaviour with proximity to the LIA maximum. Thus, by employing a chronosequence approach, it may be possible to use sediment magnetics data as a tool for reconstructing glacier retreat in areas where more traditional techniques, such as lichenometry, are not applicable.
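The backfield-ratio trend described above can be sketched as a simple least-squares fit of the ratio against distance from the snout. The transect distances and ratio values below are invented for illustration (the abstract reports no numbers); magnetically soft material gives a backfield ratio near -1 and harder material a ratio nearer +1, so a positive fitted slope corresponds to harder behaviour towards the LIA limit.

```python
import numpy as np

def backfield_ratio(irm_minus_100mT, sirm):
    """IRM acquired in a -100 mT backfield, normalised by SIRM."""
    return irm_minus_100mT / sirm

# Hypothetical transect: distance from the contemporary snout (m)
# and backfield ratios becoming less negative (harder) with distance.
distance_m = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
ratio = np.array([-0.95, -0.90, -0.88, -0.82, -0.80])

slope, intercept = np.polyfit(distance_m, ratio, 1)
# slope > 0: magnetically harder behaviour towards the LIA maximum,
# consistent with ferrimagnetic minerals weathering to antiferromagnetic forms.
```

In practice one would also test the slope's significance (e.g. with a t-test on the regression coefficient) before reading it as a chronosequence signal, as the abstract does for susceptibility.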

Relevance: 20.00%

Abstract:

It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
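The MoM (Matheron) estimator discussed above is gamma(h) = (1/2N(h)) * sum of (z_i - z_j)^2 over the N(h) pairs separated by lag h. A minimal one-dimensional sketch, with invented coordinates and clay-content values, might look like this (real surveys are two-dimensional, but the pairing and binning logic is the same):

```python
import numpy as np

def mom_variogram(x, z, lag_width):
    """Method-of-moments variogram for 1-D positions x and values z:
    gamma(h) = (1 / 2N(h)) * sum over pairs in lag class h of (z_i - z_j)^2."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    d = np.abs(x[:, None] - x[None, :])      # pairwise separations
    sq = (z[:, None] - z[None, :]) ** 2      # pairwise squared differences
    i, j = np.triu_indices(len(x), k=1)      # count each pair once
    bins = np.rint(d[i, j] / lag_width).astype(int)
    gamma = {}
    for b in np.unique(bins):
        in_bin = bins == b
        gamma[b * lag_width] = 0.5 * sq[i, j][in_bin].mean()
    return gamma

# Hypothetical transect: 4 sites, alternating clay values.
semivariances = mom_variogram([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0], 1.0)
```

With around 50-100 sites each lag class contains enough pairs for the means to stabilise, which is the sample-size question the paper investigates; REML instead fits covariance parameters directly to the data and so sidesteps the binning.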

Relevance: 20.00%

Abstract:

An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass, and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
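The idea of an unbalanced nested design can be sketched as follows: each stage adds a neighbour at a fixed separating distance, but only for a random subset of the existing points, which is what makes the design unbalanced and economises on samples. This is a generic 1-D illustration; the stage distances, keep fraction and layout are hypothetical and do not reproduce the paper's 108-point, 9 ha design.

```python
import random

def nested_offsets(stage_distances, keep_fraction=0.5, seed=0):
    """Build 1-D sampling positions stage by stage. Stage 1 always
    branches (so every distance class is represented); later stages
    branch only from a random subset of points, giving an unbalanced
    design with fewer samples at the finest spacings."""
    rng = random.Random(seed)
    points = [0.0]
    for dist in stage_distances:
        new = [p + dist for p in points
               if len(points) == 1 or rng.random() < keep_fraction]
        points.extend(new)
    return sorted(points)

# Hypothetical stage distances spanning the scales of interest (metres):
pts = nested_offsets([60.0, 20.0, 6.0, 2.0])
```

Pair separations between these points fall into the stage distance classes (and their sums), so a hierarchical REML analysis of variance can attribute a variance component, and hence a rough variogram value, to each scale.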

Relevance: 20.00%

Abstract:

New data show that island arc rocks have (Pb-210/Ra-226)(o) ratios which range from as low as 0.24 up to 2.88. In contrast, (Ra-228/Th-232) appears always within error of 1, suggesting that the large Ra-226 excesses observed in arc rocks were generated more than 30 years ago. This places a maximum estimate on melt ascent velocities of around 4000 m/year and provides further confidence that the Ra-226 excesses reflect deep (source) processes rather than shallow-level alteration or seawater contamination. Conversely, partial melting must have occurred more than 30 years prior to eruption. The Pb-210 deficits are most readily explained by protracted magma degassing. Using published numerical models, the data suggest that degassing occurred continuously for periods of up to several decades just prior to eruption, but no link with eruption periodicity was found. Longer periods are required if degassing is discontinuous, less than 100% efficient, or if magma is recharged or stored after degassing. The long durations suggest much of this degassing occurs at depth, with implications for the formation of hydrothermal and copper-porphyry systems. A suite of lavas erupted in 1985-1986 from Sangeang Api volcano in the Sunda arc is characterised by deficits of Pb-210 relative to Ra-226, from which 6-8 years of continuous Rn-222 degassing would be inferred from recent numerical models. These data also form a linear (Pb-210)/Pb versus (Ra-226)/Pb array which might be interpreted as a 71-year isochron. However, the array passes through the origin, suggesting displacement downwards from the equiline in response to degassing, and so the slope of the array is inferred not to have any age significance. Simple modelling shows that the range of (Ra-226)/Pb ratios requires thousands of years to develop, consistent with differentiation occurring in response to cooling at the base of the crust. Thus, degassing post-dated, and was not responsible for, magma differentiation. The formation, migration and extraction of gas bubbles must be extremely efficient in mafic magma, whereas the higher viscosity of more siliceous magmas retards the process and can lead to Pb-210 excesses. A possible negative correlation between (Pb-210/Ra-226)(o) and SO2 emission rate requires further testing but may have implications for future eruptions. (C) 2004 Elsevier B.V. All rights reserved.
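An end-member version of the degassing clock above is easy to sketch: if Rn-222 escape is total and continuous, Pb-210 ingrowth stops and the existing Pb-210 (half-life 22.3 years) simply decays, so an observed (Pb-210/Ra-226) deficit gives a minimum degassing duration. The numerical models cited in the abstract are more involved (partial, discontinuous Rn loss, recharge); this is only the limiting case.

```python
import math

PB210_HALF_LIFE_YR = 22.3
LAMBDA_PB210 = math.log(2) / PB210_HALF_LIFE_YR

def min_degassing_years(pb210_ra226_ratio):
    """Years of complete, continuous Rn-222 loss needed to drive
    (Pb-210/Ra-226) from secular equilibrium (1.0) down to the
    observed ratio: t = -ln(ratio) / lambda_Pb210."""
    return -math.log(pb210_ra226_ratio) / LAMBDA_PB210

# Lowest initial ratio reported in the abstract:
t_min = min_degassing_years(0.24)   # roughly 46 years under these assumptions
```

Any inefficiency or interruption in Rn loss lengthens the required duration, which is why the abstract treats its decade-scale estimates as minima.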


Relevance: 20.00%

Abstract:

Heat waves are expected to increase in frequency and magnitude with climate change. The first part of a study to produce projections of the effect of future climate change on heat-related mortality is presented. Separate city-specific empirical statistical models that quantify significant relationships between summer daily maximum temperature (T max) and daily heat-related deaths are constructed from historical data for six cities: Boston, Budapest, Dallas, Lisbon, London, and Sydney. ‘Threshold temperatures’ above which heat-related deaths begin to occur are identified. The results demonstrate significantly lower thresholds in ‘cooler’ cities exhibiting lower mean summer temperatures than in ‘warmer’ cities exhibiting higher mean summer temperatures. Analysis of individual ‘heat waves’ illustrates that a greater proportion of mortality is due to mortality displacement in cities with less sensitive temperature–mortality relationships than in those with more sensitive relationships, and that mortality displacement is no longer a feature more than 12 days after the end of the heat wave. Validation techniques through residual and correlation analyses of modelled and observed values and comparisons with other studies indicate that the observed temperature–mortality relationships are represented well by each of the models. The models can therefore be used with confidence to examine future heat-related deaths under various climate change scenarios for the respective cities (presented in Part 2).
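The threshold relationship described above can be sketched as a "hinge" function: zero expected heat-related deaths below a city-specific threshold temperature, rising with Tmax above it. The linear form, threshold values and slopes below are invented for illustration; the paper fits separate empirical models per city.

```python
def heat_deaths(t_max, threshold, deaths_per_degree):
    """Expected daily heat-related deaths: zero below the city's
    threshold temperature, increasing linearly with Tmax above it."""
    return max(0.0, (t_max - threshold) * deaths_per_degree)

# Hypothetical 'cooler' city (lower threshold) vs 'warmer' city, same day:
cool_city = heat_deaths(28.0, threshold=24.0, deaths_per_degree=3.0)
warm_city = heat_deaths(28.0, threshold=29.0, deaths_per_degree=3.0)
```

The same 28 degC day produces deaths in the low-threshold city and none in the high-threshold one, which is the acclimatisation pattern the abstract reports across its six cities.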

Relevance: 20.00%

Abstract:

We use geomagnetic activity data to study the rise and fall over the past century of the solar wind flow speed VSW, the interplanetary magnetic field strength B, and the open solar flux FS. Our estimates include allowance for the kinematic effect of longitudinal structure in the solar wind flow speed. As well as solar cycle variations, all three parameters show a long-term rise during the first half of the 20th century followed by peaks around 1955 and 1986 and then a recent decline. Cosmogenic isotope data reveal that this constitutes a grand maximum of solar activity which began in 1920, using the definition that such grand maxima are when 25-year averages of the heliospheric modulation potential exceed 600 MV. Extrapolating the linear declines seen in all three parameters since 1985 yields predictions that the grand maximum will end in the years 2013, 2014, or 2027 using VSW, FS, or B, respectively. These estimates are consistent with predictions based on the probability distribution of the durations of past grand solar maxima seen in cosmogenic isotope data. The data contradict any suggestions of a floor to the open solar flux: we show that the solar minimum open solar flux, kinematically corrected to allow for the excess flux effect, has halved over the past two solar cycles.
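The extrapolation step can be sketched generically: fit a straight line to a parameter's yearly values since 1985 and solve for the year it crosses a chosen threshold. The series and threshold below are invented (the paper's VSW, FS and B series and grand-maximum criterion are not reproduced); this only shows the mechanics.

```python
import numpy as np

def crossing_year(years, values, threshold):
    """Fit values = slope * year + intercept by least squares and
    return the year at which the fitted line reaches the threshold."""
    slope, intercept = np.polyfit(years, values, 1)
    return (threshold - intercept) / slope

# Hypothetical declining series, 1985-2009, losing 0.2 units per year:
years = np.arange(1985, 2010)
values = 10.0 - 0.2 * (years - 1985)

end_year = crossing_year(years, values, threshold=5.0)   # ~2010
```

Applying this to each of the three parameters with its own threshold is what yields the spread of end dates (2013, 2014, 2027) quoted in the abstract; the spread itself is a measure of the extrapolation's uncertainty.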