996 results for sampling frequency


Relevance: 20.00%

Abstract:

Providing reliable estimates for mapping soil properties for precision agriculture requires intensive sampling and costly laboratory analyses. If the spatial structure of ancillary data, such as yield, digital information from aerial photographs, and soil electrical conductivity (EC) measurements, relates to that of soil properties, the ancillary data could be used to guide the sampling intensity for soil surveys. Variograms of permanent soil properties at two study sites on different parent materials were compared with each other and with those for ancillary data. The ranges of spatial dependence identified by the variograms of both sets of properties are of similar orders of magnitude for each study site. Maps of the ancillary data appear to show similar patterns of variation, and these seem to relate to those of the permanent properties of the soil. Correlation analysis confirmed these relations. Maps of kriged estimates from sub-sampled data and the original variograms showed that the main patterns of variation were preserved when a sampling interval of less than half the average variogram range of the ancillary data was used. Digital data from aerial photographs for different years and EC appear to show a more consistent relation with the soil properties than does yield. Aerial photographs, in particular those of bare soil, seem to be the most useful ancillary data, and they are often cheaper to obtain than yield and EC data.
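
As a rough illustration of the idea, the sketch below (Python, with hypothetical coordinates and EC-like values; the function name and all figures are illustrative, not from the paper) computes a method-of-moments variogram for ancillary data and derives a sampling interval of half the estimated range.

```python
import numpy as np

def mom_variogram(coords, values, lags, tol):
    """Method-of-moments (Matheron) empirical variogram:
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose
    separation falls within tol of lag h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)
    dist = d[iu]
    sqdiff = 0.5 * (values[iu[0]] - values[iu[1]]) ** 2
    return np.array([sqdiff[np.abs(dist - h) <= tol].mean() for h in lags])

# Hypothetical EC survey over a 200 m x 200 m field.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 200, size=(150, 2))
values = np.sin(coords[:, 0] / 30) + 0.2 * rng.standard_normal(150)
lags = np.arange(5, 100, 5.0)
gamma = mom_variogram(coords, values, lags, tol=2.5)

# Take the range as the lag where the variogram first reaches ~95% of
# its sill, then sample the soil at no more than half that range.
sill = gamma[-5:].mean()
approx_range = lags[np.argmax(gamma >= 0.95 * sill)]
print(f"range ~ {approx_range} m -> sampling interval <= {approx_range / 2} m")
```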

Relevance: 20.00%

Abstract:

It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data, because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites.
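
The following is a minimal sketch of REML estimation of covariance parameters, assuming an exponential model with a nugget and a constant-mean trend; it is not the authors' code, and the data are synthetic stand-ins for clay content.

```python
import numpy as np
from scipy.optimize import minimize

def neg_reml(log_params, dists, y, X):
    """Negative REML log-likelihood for the covariance model
    C(h) = c0 * 1(h == 0) + c1 * exp(-h / a), with the trend in X
    filtered out via the REML projection P."""
    c0, c1, a = np.exp(log_params)                 # keep parameters positive
    C = c1 * np.exp(-dists / a) + c0 * np.eye(len(y))
    _, logdetC = np.linalg.slogdet(C)
    Ci = np.linalg.inv(C)
    XtCiX = X.T @ Ci @ X
    _, logdetX = np.linalg.slogdet(XtCiX)
    P = Ci - Ci @ X @ np.linalg.solve(XtCiX, X.T @ Ci)
    return 0.5 * (logdetC + logdetX + y @ P @ y)

# Hypothetical survey: 50 sites in a 500 m x 500 m field, constant mean.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 500, size=(50, 2))
dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
y = 25 + rng.standard_normal(50)                   # stand-in for clay %
X = np.ones((50, 1))
res = minimize(neg_reml, x0=np.log([0.5, 1.0, 100.0]),
               args=(dists, y, X), method="Nelder-Mead")
c0, c1, a = np.exp(res.x)
print(f"nugget={c0:.3f} sill={c0 + c1:.3f} effective range~{3 * a:.0f} m")
```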

Relevance: 20.00%

Abstract:

Long-term monitoring of forest soils as part of a pan-European network to detect environmental change depends on an accurate determination of the mean of the soil properties at each monitoring event. Forest soil, however, is known to be highly variable spatially. A study was undertaken to explore and quantify this variability at three forest monitoring plots in Britain. Detailed soil sampling was carried out, and the data from the chemical analyses were analysed by classical statistics and geostatistics. An analysis of variance showed that there were no consistent effects from the sample sites in relation to the position of the trees. The variogram analysis showed that there was spatial dependence at each site for several variables, and some varied in an apparently periodic way. An optimal sampling analysis based on the multivariate variogram for each site suggested that a bulked sample from 36 cores would reduce error to an acceptable level. Future sampling should be designed so that it neither targets nor avoids trees and disturbed ground. This is best achieved with a stratified random sampling design, as sketched below.
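
A stratified random design of the kind recommended can be generated as follows (Python; the plot size, grid shape and core count are hypothetical choices matching the 36-core suggestion, not taken from the study).

```python
import numpy as np

def stratified_random_points(x_extent, y_extent, nx, ny, rng):
    """One uniformly random point per cell of an nx-by-ny grid, so the
    cores neither target nor avoid trees or disturbed ground."""
    dx, dy = x_extent / nx, y_extent / ny
    cells = np.array([(i, j) for i in range(nx) for j in range(ny)], dtype=float)
    jitter = rng.uniform(0, 1, size=(nx * ny, 2))
    return (cells + jitter) * [dx, dy]

rng = np.random.default_rng(7)
points = stratified_random_points(60.0, 60.0, 6, 6, rng)  # 36 cores, 60 m plot
print(points[:3])
```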

Relevance: 20.00%

Abstract:

The Representative Soil Sampling Scheme of England and Wales has recorded information on the soil of agricultural land in England and Wales since 1969. It is a valuable source of information about the soil in the context of monitoring for sustainable agricultural development. Changes in soil nutrient status and pH were examined over the period 1971-2001. Several methods of statistical analysis were applied to data from the surveys during this period, with the main focus on the data for 1971, 1981, 1991 and 2001. Overall, the results show that levels of potassium in the soil have increased, magnesium has remained fairly constant, phosphorus has declined, and pH has changed little. Future sampling needs were assessed in the context of monitoring: to determine the mean at a given level of confidence and tolerable error, and to detect change in the mean at these same levels over periods of 5 and 10 years. The results of a non-hierarchical multivariate classification suggest that England and Wales could be stratified to optimize future sampling and analysis. To monitor soil quality and health more generally than for agriculture, more of the country should be sampled and a wider range of properties recorded.
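
The standard sample-size calculations behind such an assessment can be sketched as follows (Python; the normal-approximation formulas are textbook ones, and the standard deviation and tolerable-error figures are hypothetical, not taken from the survey).

```python
from math import ceil
from scipy.stats import norm

def n_for_mean(sigma, tolerable_error, confidence=0.95):
    """Sites needed to estimate a mean to within +/- tolerable_error."""
    z = norm.ppf(0.5 + confidence / 2)
    return ceil((z * sigma / tolerable_error) ** 2)

def n_to_detect_change(sigma, delta, confidence=0.95, power=0.8):
    """Sites per survey to detect a change of delta in the mean
    between two surveys (two-sample normal approximation)."""
    z_a = norm.ppf(0.5 + confidence / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Hypothetical figures for topsoil P (mg/l): sd = 12, tolerable error = 3.
print(n_for_mean(12, 3), n_to_detect_change(12, 3))
```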

Relevance: 20.00%

Abstract:

We investigated diurnal nitrate (NO₃⁻) concentration variability in the San Joaquin River using an in situ optical NO₃⁻ sensor and discrete sampling during a 5-day summer period characterized by high algal productivity. Dual nitrate isotopes (δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻) and dissolved oxygen isotopes (δ¹⁸O-DO) were measured over 2 days to assess NO₃⁻ sources and biogeochemical controls over diurnal time-scales. Concerted temporal patterns of dissolved oxygen (DO) concentrations and δ¹⁸O-DO were consistent with photosynthesis, respiration and atmospheric O₂ exchange, providing evidence of diurnal biological processes independent of river discharge. Surface water NO₃⁻ concentrations varied by up to 22% over a single diurnal cycle and up to 31% over the 5-day study, but did not reveal concerted diurnal patterns at a frequency comparable to DO concentrations. The decoupling of the δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻ isotopes suggests that algal assimilation and denitrification are not major processes controlling diurnal NO₃⁻ variability in the San Joaquin River during the study. The lack of a clear explanation for NO₃⁻ variability likely reflects a combination of riverine biological processes and time-varying physical transport of NO₃⁻ from upstream agricultural drains to the mainstem San Joaquin River. The application of an in situ optical NO₃⁻ sensor along with discrete samples provides a view into the fine temporal structure of hydrochemical data and may allow for greater accuracy in pollution assessment.

Relevance: 20.00%

Abstract:

An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass, and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
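
One common way to build such a design, sketched below under assumptions not spelled out in the abstract (the grid, stage spacings, retention fractions and geometry are all hypothetical), is to offset a subset of points at each stage by a stage-specific separating distance in a random direction; dropping points at the finer stages is what makes the design unbalanced and economical.

```python
import numpy as np

def nested_design(main_centres, spacings, keep, rng):
    """Unbalanced nested design: at each stage, a fraction 'keep' of the
    existing points gains a companion offset by the stage spacing at a
    random angle."""
    pts = list(main_centres)
    for spacing, frac in zip(spacings, keep):
        new = []
        for p in pts:
            if rng.uniform() < frac:
                theta = rng.uniform(0, 2 * np.pi)
                new.append(p + spacing * np.array([np.cos(theta), np.sin(theta)]))
        pts.extend(new)
    return np.array(pts)

rng = np.random.default_rng(3)
grid = [np.array([x, y]) for x in range(0, 300, 60) for y in range(0, 300, 60)]
pts = nested_design(grid, spacings=[20.0, 6.0, 2.0], keep=[1.0, 0.6, 0.3], rng=rng)
print(len(pts), "sampling locations")   # roughly 100, comparable to the study
```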

Relevance: 20.00%

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components, starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
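
The accumulation step can be illustrated in a few lines (Python; the stage spacings and variance components are hypothetical numbers standing in for ANOVA or REML output, not values from the paper's surveys).

```python
import numpy as np

# Stage spacings (m) and variance components per stage, e.g. from a
# hierarchical ANOVA (or REML for an unbalanced design).
spacings   = np.array([600.0, 190.0, 60.0, 19.0])   # coarsest -> finest
components = np.array([0.15, 0.22, 0.31, 0.10])     # sigma^2 per stage

# Accumulate from the finest stage upward: the semivariance at a given
# lag is the sum of the components for that stage and all finer ones.
order = np.argsort(spacings)                         # finest first
gamma = np.cumsum(components[order])
for lag, g in zip(spacings[order], gamma):
    print(f"lag {lag:6.0f} m  gamma ~ {g:.2f}")
```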

Relevance: 20.00%

Abstract:

Experiments have been performed using a simplified, Newtonian-forced global circulation model to investigate how variability of the tropospheric jet can be characterized by examining the combined fluctuations of the two leading modes of annular variability. Eddy forcing of this variability is analyzed in the phase space of the leading modes using the vertically integrated momentum budget. The nature of the annular variability and eddy forcing depends on the time scale. At low frequencies the zonal flow and baroclinic eddies are in quasi-equilibrium and anomalies propagate poleward. The eddies are shown primarily to reinforce the anomalous state and are closely balanced by the linear damping, leaving slow evolution as a residual. At high frequencies the flow is strongly evolving, and anomalies are initiated on the poleward side of the tropospheric jet and propagate equatorward. The eddies are shown to drive this evolution strongly: eddy location and amplitude reflect the past baroclinicity, while eddy feedback on the zonal flow may be interpreted in terms of wave breaking associated with baroclinic life cycles in lateral shear.
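
A minimal sketch of extracting two leading modes of this kind (Python, via SVD of a synthetic time-by-latitude anomaly matrix; a real analysis would use model output and area weighting, neither of which is attempted here) is:

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal((1000, 64))     # stand-in anomalies (time, latitude)
u -= u.mean(axis=0)                     # remove the time mean

# EOF analysis via SVD: rows of Vt are spatial patterns, ordered by variance.
U, s, Vt = np.linalg.svd(u, full_matrices=False)
eofs = Vt[:2]                           # two leading annular-mode patterns
pcs = u @ eofs.T                        # trajectory in (PC1, PC2) phase space
explained = s[:2] ** 2 / np.sum(s ** 2)
print(explained)
```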

Relevance: 20.00%

Abstract:

An isentropic potential vorticity (PV) budget analysis is employed to examine the role of synoptic transients, advection, and nonconservative processes as forcings for the evolution of low-frequency PV anomalies, both locally and for those associated with the North Atlantic Oscillation (NAO) and the Pacific–North American (PNA) pattern. Specifically, the rate of change of the low-frequency PV is expressed as a sum of tendencies due to divergence of eddy transport, advection by the low-frequency flow (hereafter referred to as advection), and residual nonconservative processes. The balance between the variances and covariances of these terms is illustrated using a novel vector representation. It is shown that for most locations, as well as for the PNA pattern, the PV variability is dominantly driven by advection. The eddy forcing explains a small amount of the tendency variance. For the NAO, the role of synoptic eddy fluxes is found to be stronger, explaining on average 15% of the NAO tendency variance. Previous studies have not assessed quantitatively how the various forcings balance the tendency. Thus, such studies may have overestimated the role of eddy fluxes for the evolution of teleconnections by examining, for example, composites and regressions that indicate maintenance, rather than evolution driven by the eddies. The authors confirm this contrasting view by showing that during persistent blocking (negative NAO) episodes the eddy driving is relatively stronger.
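
The covariance-based measure of how much each budget term explains the tendency variance can be sketched as follows (Python, on synthetic series; the paper's exact budget decomposition and vector representation are not reproduced here).

```python
import numpy as np

def fraction_explained(term, tendency):
    """Share of tendency variance explained by one budget term,
    measured as cov(term, tendency) / var(tendency)."""
    return np.cov(term, tendency)[0, 1] / np.var(tendency, ddof=1)

# Synthetic stand-ins: advection tracks the PV tendency closely,
# eddy forcing only weakly, mimicking the qualitative result.
rng = np.random.default_rng(2)
tendency = rng.standard_normal(5000)
advection = 0.8 * tendency + 0.3 * rng.standard_normal(5000)
eddy = 0.15 * tendency + 0.5 * rng.standard_normal(5000)
print(fraction_explained(advection, tendency),
      fraction_explained(eddy, tendency))
```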

Relevance: 20.00%

Abstract:

It is now well established that subthalamic nucleus high-frequency stimulation (STN HFS) alleviates motor problems in Parkinson's disease. However, its efficacy for cognitive function remains a matter of debate. The aim of this study was to assess the effects of STN HFS in rats performing a visual attentional task. Bilateral STN HFS was applied in intact and in bilaterally dopamine (DA)-depleted rats. In all animals, STN HFS had a transient debilitating effect on all the variables measured in the task. In DA-depleted rats, STN HFS did not alleviate the deficits induced by the DA lesion, such as omissions and latency to make correct responses, but induced perseverative approaches to the food magazine, an indicator of enhanced motivation. In sham-operated controls, STN HFS significantly reduced accuracy and induced perseverative behaviour, partially mimicking the effects of bilateral STN lesions in the same task. These results are in line with the hypothesis that STN HFS only partially mimics inactivation of the STN produced by lesioning, and they confirm the motivational exacerbation induced by STN inactivation.

Relevance: 20.00%

Abstract:

Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. This method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations of the spaceborne lidar NASA-LITE mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with an increasing number of transects. Therefore a sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction will have to take into account both the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and lidar missions such as NASA's CALIPSO and CloudSat.
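
A much-simplified version of the Bayesian estimate can be sketched as follows (Python). It assumes independent pixels with a uniform prior, which the paper's method does not; spatial correlation in real cloud fields (longer cloud and clear-sky lengths) makes the true sampling uncertainty larger than this Beta-posterior sketch suggests.

```python
import numpy as np

def cloud_fraction_posterior(cloudy, n):
    """Beta posterior for cloud fraction from n transect pixels with a
    uniform prior: mean and standard deviation of Beta(c+1, n-c+1)."""
    a, b = cloudy + 1, n - cloudy + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, np.sqrt(var)

print(cloud_fraction_posterior(250, 1000))   # e.g. 25% observed cover
```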

Relevance: 20.00%

Abstract:

The goal of this review is to provide a state-of-the-art survey of sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems is presented. We study the concepts and analytical results for several recent sampling and probe methods. We give an introduction to the basic idea behind each method using a simple model problem, and then provide a general formulation in terms of particular configurations to study the range of the arguments used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of applications to different situations.
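
As a toy illustration of one of these methods, the sketch below implements the linear sampling method on a synthetic far-field matrix built from two point scatterers in the Born approximation (Python; the wavenumber, geometry and regularization parameter are arbitrary choices, and no real scattering problem is solved).

```python
import numpy as np

k = 2 * np.pi                                   # wavenumber (hypothetical)
n = 64                                          # incident/observation directions
angles = 2 * np.pi * np.arange(n) / n
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
scatterers = np.array([[0.3, 0.2], [-0.25, -0.1]])

# Born-approximation far-field matrix:
# F[j, m] = sum over scatterers y of exp(i k (d_m - xhat_j) . y).
F = np.zeros((n, n), dtype=complex)
for y in scatterers:
    F += np.exp(1j * k * (dirs @ y))[None, :] * np.exp(-1j * k * (dirs @ y))[:, None]

# Tikhonov-regularized solve of F g = Phi_z for each sampling point z;
# the indicator 1/||g|| is large where z lies near a scatterer.
U, s, Vh = np.linalg.svd(F)
alpha = 1e-3
grid = np.linspace(-1, 1, 81)
indicator = np.zeros((81, 81))
for i, zx in enumerate(grid):
    for j, zy in enumerate(grid):
        rhs = np.exp(-1j * k * (dirs @ np.array([zx, zy])))
        g = Vh.conj().T @ ((s / (s**2 + alpha**2)) * (U.conj().T @ rhs))
        indicator[j, i] = 1.0 / np.linalg.norm(g)
print(indicator.max(), indicator.min())
```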