27 results for Sampling method
in CentAUR: Central Archive University of Reading - UK
Abstract:
The goal of the review is to provide a state-of-the-art survey on sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems will be presented. We study the concepts and analytical results for several recent sampling and probe methods. We will give an introduction to the basic idea behind each method using a simple model problem and then provide some general formulation in terms of particular configurations to study the range of the arguments which are used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of applications to different situations.
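As a minimal numerical sketch of one of the methods named above, the linear sampling method can be illustrated as follows. The sketch assumes a square matrix u_inf of far field data u^inf(x_hat_i, d_j) measured for n observation and incidence directions on the unit circle; the regularisation parameter alpha and the normalisation of the right-hand side are simplifying assumptions, not a transcription of any of the cited papers.

```python
import numpy as np

def linear_sampling_indicator(u_inf, k, grid, alpha=1e-6):
    """Return 1/||g_z|| on a grid of sampling points z; larger values suggest z lies inside the scatterer."""
    n = u_inf.shape[0]
    theta = 2 * np.pi * np.arange(n) / n
    x_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # measurement directions on the unit circle
    F = (2 * np.pi / n) * u_inf                                # quadrature-weighted far field operator
    U, s, Vh = np.linalg.svd(F)
    indicator = np.empty(len(grid))
    for m, z in enumerate(grid):
        rhs = np.exp(-1j * k * x_hat @ np.asarray(z))          # right-hand side associated with the point z
        coeffs = (s / (alpha + s**2)) * (U.conj().T @ rhs)     # Tikhonov-regularised solution of F g_z = rhs
        indicator[m] = 1.0 / np.linalg.norm(Vh.conj().T @ coeffs)
    return indicator
```

Plotting the indicator over a grid covering the region of interest gives a picture of the scatterer's support; the other methods in the survey replace this indicator by different functionals of the data.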
Abstract:
The soil fauna is often a neglected group in many large-scale studies of farmland biodiversity due to difficulties in extracting organisms efficiently from the soil. This study assesses the relative efficiency of the simple and cheap sampling method of handsorting against Berlese-Tullgren funnel and Winkler apparatus extraction. Soil cores were taken from grassy arable field margins and wheat fields in Cambridgeshire, UK, and the efficiencies of the three methods in assessing the abundances and species densities of soil macroinvertebrates were compared. Handsorting in most cases was as efficient at extracting the majority of the soil macrofauna as the Berlese-Tullgren funnel and Winkler bag methods, although it underestimated the species densities of the woodlice and adult beetles. There were no obvious biases among the three methods for the particular vegetation types sampled and no significant differences in the size distributions of the earthworms and beetles. Proportionally fewer damaged earthworms were recorded in larger (25 x 25 cm) soil cores when compared with smaller ones (15 x 15 cm). Handsorting has many benefits, including targeted extraction, minimum disturbance to the habitat and shorter sampling periods and may be the most appropriate method for studies of farmland biodiversity when a high number of soil cores need to be sampled. (C) 2008 Elsevier Masson SAS. All rights reserved.
Abstract:
The goal of this paper is to study and further develop the orthogonality sampling or stationary waves algorithm for the detection of the location and shape of objects from the far field pattern of scattered waves in electromagnetics or acoustics. Orthogonality sampling can be seen as a special beamforming algorithm with some links to the point source method and to the linear sampling method. The basic idea of orthogonality sampling is to sample the space under consideration by calculating scalar products of the measured far field pattern with a test function, for all points y in a subset Q of the space ℝ^m, m = 2, 3. The way in which this is carried out is important to extract the information which the scattered fields contain. The theoretical foundation of orthogonality sampling is only partly resolved, and the goal of this work is to initiate further research by numerical demonstration of the high potential of the approach. We implement the method for a two-dimensional setting for the Helmholtz equation, which represents electromagnetic scattering when the setup is independent of the third coordinate. We show reconstructions of the location and shape of objects from measurements of the scattered field for one or several directions of incidence and one or many frequencies or wave numbers, respectively. In particular, we visualize the indicator function for both the Dirichlet and Neumann boundary conditions and for complicated inhomogeneous media.
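A compact sketch of the sampling idea described above, assuming the far field pattern of a single incident wave is given at n equidistant directions on the unit circle; the plane-wave test function and the sign convention in its phase are assumptions rather than a transcription of the paper.

```python
import numpy as np

def orthogonality_sampling(u_inf, k, grid):
    """|scalar product of the far field pattern with the test function exp(ik x_hat . z)| for each grid point z."""
    n = len(u_inf)
    theta = 2 * np.pi * np.arange(n) / n
    x_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)       # observation directions on the unit circle
    test = np.exp(1j * k * np.asarray(grid) @ x_hat.T)             # test functions e^{ik x_hat . z}, shape (M, n)
    return np.abs(test.conj() @ u_inf) * (2 * np.pi / n)           # trapezoidal rule on the unit circle
```

The returned indicator is large near the scatterer; for several incident directions or frequencies the individual indicators can simply be summed.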
Abstract:
1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies into the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for durations of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples were sufficient for all groups, except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure provided sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers and spiders altogether requires a minimum sampling effort of 110 sub-samples, each of 16 seconds duration. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component in the toolbox of experimental techniques used during both experimental and management sampling regimes within agroecosystems, grasslands or other low-lying vegetation types.
Abstract:
Bee pollinators are currently recorded with many different sampling methods. However, the relative performances of these methods have not been systematically evaluated and compared. In response to the strong need to record ongoing shifts in pollinator diversity and abundance, global and regional pollinator initiatives must adopt standardized sampling protocols when developing large-scale and long-term monitoring schemes. We systematically evaluated the performance of six sampling methods (observation plots, pan traps, standardized and variable transect walks, trap nests with reed internodes or paper tubes) that are commonly used across a wide range of geographical regions in Europe and in two habitat types (agricultural and seminatural). We focused on bees since they represent the most important pollinator group worldwide. Several characteristics of the methods were considered in order to evaluate their performance in assessing bee diversity: sample coverage, observed species richness, species richness estimators, collector biases (identified by subunit-based rarefaction curves), species composition of the samples, and the indication of overall bee species richness (estimated from combined total samples). The most efficient method in all geographical regions, in both the agricultural and seminatural habitats, was the pan trap method. It had the highest sample coverage, collected the highest number of species, showed negligible collector bias, detected similar species as the transect methods, and was the best indicator of overall bee species richness. The transect methods were also relatively efficient, but they had a significant collector bias. The observation plots showed poor performance. As trap nests are restricted to cavity-nesting bee species, they had a naturally low sample coverage. However, both trap nest types detected additional species that were not recorded by any of the other methods. For large-scale and long-term monitoring schemes with surveyors with different experience levels, we recommend pan traps as the most efficient, unbiased, and cost-effective method for sampling bee diversity. Trap nests with reed internodes could be used as a complementary sampling method to maximize the numbers of collected species. Transect walks are the principal method for detailed studies focusing on plant-pollinator associations. Moreover, they can be used in monitoring schemes after training the surveyors to standardize their collection skills.
Abstract:
Accurate differentiation between tropical forest and savannah ecosystems in the fossil pollen record is hampered by the combination of: i) poor taxonomic resolution in pollen identification, and ii) the high species diversity of many lowland tropical families, whose members span many different growth forms living in numerous environmental settings. These barriers to interpreting the fossil record hinder our understanding of the past distributions of different Neotropical ecosystems and consequently cloud our knowledge of past climatic, biodiversity and carbon storage patterns. Modern pollen studies facilitate an improved understanding of how ecosystems are represented by the pollen their plants produce and therefore aid interpretation of fossil pollen records. To understand how to differentiate ecosystems palynologically, it is essential that a consistent sampling method is used across ecosystems. However, to date, modern pollen studies from tropical South America have employed a variety of methodologies (e.g. pollen traps, moss polsters, soil samples). In this paper, we present the first modern pollen study from the Neotropics to examine the modern pollen rain from moist evergreen tropical forest (METF), semi-deciduous dry tropical forest (SDTF) and wooded savannah (cerradão) using a consistent sampling methodology (pollen traps). Pollen rain was sampled annually in September for the years 1999–2001 from within permanent vegetation study plots in, or near, the Noel Kempff Mercado National Park (NKMNP), Bolivia. Comparison of the modern pollen rain within these plots with detailed floristic inventories allowed estimates of the relative pollen productivity and dispersal for individual taxa to be made (% pollen/% vegetation or ‘p/v’). The applicability of these data to interpreting fossil records from lake sediments was then explored by comparison with pollen assemblages obtained from five lake surface samples.
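A toy calculation of the p/v index mentioned above; the taxon names and counts below are hypothetical and serve only to show how the ratio is formed.

```python
# % pollen / % vegetation ('p/v') for each taxon; values > 1 indicate over-representation
# in the pollen rain, values < 1 under-representation. All numbers are invented.
pollen_counts = {"Moraceae": 420, "Poaceae": 300, "Anadenanthera": 30}
vegetation_counts = {"Moraceae": 55, "Poaceae": 10, "Anadenanthera": 85}

pollen_total = sum(pollen_counts.values())
veg_total = sum(vegetation_counts.values())

for taxon in pollen_counts:
    p = 100 * pollen_counts[taxon] / pollen_total       # % of the pollen sum
    v = 100 * vegetation_counts[taxon] / veg_total      # % of the vegetation inventory
    print(f"{taxon}: p/v = {p / v:.2f}")
```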
Abstract:
The use of social network sites (SNS) has become very valuable to educational institutions. Some universities have formally integrated these social media in their educational systems and are using them to improve their service delivery. The main aim of this study was to establish whether African universities have embraced this emerging technology by having an official presence on SNS. A purposive sampling method was used to study 24 universities, from which data were obtained by visiting their official websites and following the official links to the most common SNS.
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data because it is based on generalized increments that filter trend out and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
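For reference, the method-of-moments variogram discussed above is the average of half the squared differences between pairs of observations, grouped by lag distance. The sketch below assumes isotropic variation, sample coordinates in `xy`, clay contents in `z`, and non-empty lag bins; it is a generic implementation, not the authors' code.

```python
import numpy as np

def mom_variogram(xy, z, bin_edges):
    """MoM semivariance gamma(h): mean of 0.5*(z_i - z_j)^2 over all pairs whose separation falls in each lag bin."""
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))   # pairwise distances
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2                        # half squared differences
    iu = np.triu_indices(len(z), k=1)                                # count each pair once
    h, g = d[iu], sq[iu]
    gamma = [g[(h >= lo) & (h < hi)].mean() for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]
    centres = 0.5 * (np.asarray(bin_edges[:-1]) + np.asarray(bin_edges[1:]))
    return centres, np.asarray(gamma)
```

The REML alternative in the abstract instead fits the covariance parameters directly by maximising a restricted likelihood, so it does not require this pairwise binning step.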
Abstract:
Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and Lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. This method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations of the spaceborne Lidar NASA-LITE mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with increasing number of transects. Therefore a sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction will have to take into account both the cloud and clear sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and Lidar missions such as NASA's Calipso and CloudSat.
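The dependence of sampling uncertainty on segment length can be illustrated with a simple Monte Carlo experiment; this is not the paper's Bayesian estimator, and the exponential cloud/clear length distributions and all parameter values below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def transect(mean_cloud_len, mean_clear_len, n_pixels=2000):
    """One synthetic binary transect built from alternating clear/cloud segments of exponential length."""
    field = []
    cloudy = rng.random() < mean_cloud_len / (mean_cloud_len + mean_clear_len)
    while len(field) < n_pixels:
        mean_len = mean_cloud_len if cloudy else mean_clear_len
        field.extend([int(cloudy)] * max(1, int(rng.exponential(mean_len))))
        cloudy = not cloudy
    return np.array(field[:n_pixels])

# Spread of the transect-derived cloud fraction for short versus long cloud segments
for mean_cloud_len in (10, 50):
    estimates = [transect(mean_cloud_len, mean_cloud_len).mean() for _ in range(500)]
    print(f"mean cloud length {mean_cloud_len}: std of estimated fraction = {np.std(estimates):.3f}")
```

Shorter segments give more independent cloud observations per transect, so the estimated fraction spreads less around 0.5, consistent with the qualitative conclusion in the abstract.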
Abstract:
This paper presents a first attempt to estimate mixing parameters from sea level observations using a particle method based on importance sampling. The method is applied to an ensemble of 128 members of model simulations with a global ocean general circulation model of high complexity. Idealized twin experiments demonstrate that the method is able to accurately reconstruct mixing parameters from an observed mean sea level field when mixing is assumed to be spatially homogeneous. An experiment with inhomogeneous eddy coefficients fails because of the limited ensemble size. This is overcome by the introduction of local weighting, which is able to capture spatial variations in mixing qualitatively. As the sensitivity of sea level to variations in mixing is higher for low values of mixing coefficients, the method works relatively well in regions of low eddy activity.
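A generic sketch of the importance-sampling step underlying such a particle method: each ensemble member is weighted by the likelihood of the observed mean sea level field, and the weights are used to form a posterior estimate of the mixing parameters. The Gaussian observation error model and all argument names are assumptions, not the specific implementation in the paper.

```python
import numpy as np

def importance_weights(ensemble_sea_level, observed_sea_level, obs_error_std):
    """Normalised weights for each member given a Gaussian misfit to the observed field.

    ensemble_sea_level: (n_members, n_points); observed_sea_level: (n_points,)
    """
    misfit = ensemble_sea_level - observed_sea_level
    log_w = -0.5 * np.sum((misfit / obs_error_std) ** 2, axis=1)
    log_w -= log_w.max()                      # stabilise before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

def posterior_mixing_estimate(mixing_params, weights):
    """Weighted posterior mean of the mixing parameter(s) carried by each member."""
    return weights @ mixing_params
```

The local weighting mentioned in the abstract would repeat this weight calculation over regional subsets of the sea level field rather than globally.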
Abstract:
The jackknife method is often used for variance estimation in sample surveys but has only been developed for a limited class of sampling designs. We propose a jackknife variance estimator which is defined for any without-replacement unequal probability sampling design. We demonstrate design consistency of this estimator for a broad class of point estimators. A Monte Carlo study shows how the proposed estimator may improve on existing estimators.
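For background, the classical delete-one jackknife is sketched below; this is the generic form for a statistic computed from a sample, not the paper's design-specific estimator for unequal probability sampling.

```python
import numpy as np

def jackknife_variance(sample, estimator):
    """Delete-one jackknife variance of `estimator` (a function of a 1-D array) over `sample`."""
    n = len(sample)
    leave_one_out = np.array([estimator(np.delete(sample, i)) for i in range(n)])
    return (n - 1) / n * np.sum((leave_one_out - leave_one_out.mean()) ** 2)

# Example: jackknife variance of the sample mean of hypothetical survey values
values = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9])
print(jackknife_variance(values, np.mean))
```

Extending this idea to without-replacement unequal probability designs requires modifying how each deleted unit is reweighted, which is the contribution described in the abstract.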
Abstract:
It is common practice to design a survey with a large number of strata. However, in this case the usual techniques for variance estimation can be inaccurate. This paper proposes a variance estimator for estimators of totals. The method proposed can be implemented with standard statistical packages without any specific programming, as it involves simple techniques of estimation, such as regression fitting.
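For context only: the textbook stratified estimator of a total and its usual variance estimator are sketched below. The instability of this standard estimator when there are many small strata (it is undefined when a stratum contains a single sampled unit) is the problem the paper addresses; the paper's regression-based estimator itself is not reproduced here.

```python
import numpy as np

def stratified_total_and_variance(strata):
    """`strata` is a list of (N_h, y_h) pairs: stratum population size and sampled values from that stratum."""
    total, variance = 0.0, 0.0
    for N_h, y_h in strata:
        y_h = np.asarray(y_h, dtype=float)
        n_h = len(y_h)
        total += N_h * y_h.mean()
        variance += N_h**2 * (1 - n_h / N_h) * y_h.var(ddof=1) / n_h   # requires n_h >= 2 in every stratum
    return total, variance
```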
Abstract:
We show that the Hájek (Ann. Math. Statist. (1964) 1491) variance estimator can be used to estimate the variance of the Horvitz–Thompson estimator when the Chao sampling scheme (Chao, Biometrika 69 (1982) 653) is implemented. This estimator is simple and can be implemented with any statistical package. We consider a numerical and an analytic method to show that this estimator can be used. A series of simulations supports our findings.
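A sketch of a Hájek-type variance approximation for the Horvitz–Thompson estimator, in its commonly used single-sum form; the exact constant and weighting should be treated as assumptions drawn from the general literature rather than a transcription of the paper.

```python
import numpy as np

def hajek_variance(y, pi):
    """Approximate variance of the Horvitz-Thompson total sum(y_i / pi_i), using only first-order probabilities."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    n = len(y)
    c = (n / (n - 1)) * pi * (1 - pi)
    B = np.sum(c * y / pi) / np.sum(c)            # weighted mean of the expanded values y_i / pi_i
    return np.sum(c * (y / pi - B) ** 2)
```

Its appeal, as the abstract notes, is that no joint inclusion probabilities are needed, so it can be computed with standard software.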
Abstract:
Imputation is commonly used to compensate for item non-response in sample surveys. If we treat the imputed values as if they are true values, and then compute the variance estimates by using standard methods, such as the jackknife, we can seriously underestimate the true variances. We propose a modified jackknife variance estimator which is defined for any without-replacement unequal probability sampling design in the presence of imputation and non-negligible sampling fraction. Mean, ratio and random-imputation methods will be considered. The practical advantage of the method proposed is its breadth of applicability.
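The underestimation described above can be seen in a toy simulation: mean imputation shrinks the apparent variability of the data, so a naive variance estimate that treats imputed values as real observations is too small. All numbers below are hypothetical and the comparison is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(50, 10, size=200)                 # hypothetical item values for the full sample
missing = rng.random(200) < 0.3                  # roughly 30% item non-response
observed = np.where(missing, np.nan, y)

imputed = np.where(missing, np.nanmean(observed), observed)      # mean imputation

naive = np.var(imputed, ddof=1) / len(imputed)                   # imputed values treated as real observations
respondents = observed[~missing]
reference = np.var(respondents, ddof=1) / len(respondents)       # variance of the same estimator from respondents only

print(f"naive variance estimate:    {naive:.3f}")
print(f"respondent-based reference: {reference:.3f}")
```

The naive estimate is roughly half the respondent-based value here, which is the kind of bias the modified jackknife in the abstract is designed to avoid.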
Abstract:
Until recently, there has been little investigation into poor indoor air quality (IAQ) in classrooms. Despite evidence that the educational building systems in many UK institutions have significant defects that may degrade IAQ, systematic assessments of IAQ measurements have rarely been undertaken. IAQ measurement involves the difficult task of representing and characterizing the indoor environment parameters. Although technologies exist to measure these parameters, direct measurements, especially in naturally ventilated spaces, are often difficult. This paper presents a methodology for characterizing indoor environment flow parameters as well as carbon dioxide (CO2) concentrations. The measured CO2 concentration level can be influenced by differences in the selection of sampling points and heights. Because this research focuses on natural ventilation in classrooms, air exchange is provided mainly by air infiltration. It is hoped that the methodology developed and evaluated in this research can effectively simplify the process of estimating the parameters for a systematic assessment of IAQ in naturally ventilated classrooms.
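As general background on one such parameter, the air change rate of a naturally ventilated room is often estimated from the exponential decay of indoor CO2 towards the outdoor level; the tracer-decay relation below is a standard textbook formula and is not presented as the specific methodology developed in the paper. The example values are hypothetical.

```python
import math

def air_change_rate(c_start, c_end, c_outdoor, hours):
    """Air changes per hour from an exponential CO2 decay between two measurements (all concentrations in ppm)."""
    return math.log((c_start - c_outdoor) / (c_end - c_outdoor)) / hours

# e.g. CO2 falling from 1500 ppm to 800 ppm over 2 h with 400 ppm outdoors
print(f"{air_change_rate(1500, 800, 400, 2):.2f} air changes per hour")
```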