943 results for Orthogonal sampling


Relevance: 20.00%

Abstract:

Ocean gliders constitute an important advance in the highly demanding ocean monitoring scenario. Their efficiency, endurance and increasing robustness make these vehicles an ideal observing platform for many long-term oceanographic applications. However, they have also proved useful in the opportunistic short-term characterization of dynamic structures. Among these, mesoscale eddies are of particular interest due to their relevance to many oceanographic processes.

Relevance: 20.00%

Abstract:

This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that the statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective analysis of 3-way data arrays (users × keywords × time), known as high-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users × keywords × time × servers) and analyze them to show the impact of server and window-size selections on the results.
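
For illustration, the distributional comparison described in the first contribution can be run with SciPy's two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic per-minute message counts, not the chatroom data; all names and parameters are assumptions.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic per-minute message counts sampled in the time dimension:
# one series pooled from multiple servers, one from a single server.
collective_sample = rng.poisson(lam=12, size=500)
single_server_sample = rng.poisson(lam=12, size=500)

# Two-sample KS goodness-of-fit test: a large p-value means the
# hypothesis that both samples share a distribution is not rejected.
stat, p_value = ks_2samp(collective_sample, single_server_sample)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")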

Relevance: 20.00%

Abstract:

This book describes mortality from all causes of death and the trends in the major causes of death since the 1970s in Shandong Province, China.

Relevance: 20.00%

Abstract:

Historically, a significant gap between male and female wages has existed in the Australian labour market. Indeed, this wage differential was institutionalised in the 1912 arbitration decision, which determined that the basic female wage would be set at between 54 and 66 per cent of the male wage. More recently, however, the 1969 and 1972 Equal Pay Cases determined that male/female wage relativities should be based upon the premise of equal pay for work of equal value. It is important to note that the mere observation that average wages differ between males and females is not, in itself, evidence of sex discrimination. Economists restrict the definition of wage discrimination to cases where two distinct groups receive different average remuneration for reasons unrelated to differences in productivity characteristics. This paper extends previous studies of wage discrimination in Australia (Chapman and Mulvey, 1986; Haig, 1982) by correcting the estimated male/female wage differential for the existence of non-random sampling. Previous Australian estimates of male/female human-capital-based wage specifications, together with estimates of the corresponding wage differential, all suffer from a failure to address this issue. If the sample of females observed to be working is not a random sample, then estimates of the male/female wage differential will be both biased and inconsistent.
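
The correction referred to here is commonly implemented as the Heckman two-step estimator: a probit participation equation yields an inverse Mills ratio, which enters the wage regression as an additional regressor. A minimal sketch on synthetic data (variable names and coefficients are illustrative assumptions, not estimates from the paper):

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000

# Synthetic female labour-market sample.
educ = rng.normal(12, 2, n)                      # years of education
kids = rng.integers(0, 3, n)                     # young children
u, v = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T

# Participation is non-random: it depends on kids and on v, which is
# correlated with the wage error u, so working women are not a
# random sample of all women.
works = (0.1 * educ - 0.8 * kids + v > 0).astype(int)
log_wage = 0.5 + 0.08 * educ + u                 # observed if works == 1

# Step 1: probit participation equation on the full sample.
Z = sm.add_constant(np.column_stack([educ, kids]))
probit = sm.Probit(works, Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)                # inverse Mills ratio

# Step 2: wage equation on workers, augmented with the IMR.
X = sm.add_constant(educ[works == 1])
ols = sm.OLS(log_wage[works == 1],
             np.column_stack([X, imr[works == 1]])).fit()
print(ols.params)  # selection-corrected returns to education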

Relevance: 20.00%

Abstract:

Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to a substantial improvement in detection probabilities within highly variable systems such as grain storage.
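
As a point of reference (this is not the compound model from the paper), under a negative binomial distribution with mean density m and clumping parameter k, the probability that one sample contains at least one insect is 1 - (1 + m/k)^(-k), which converts directly into the number of samples needed for a target detection probability. A sketch with assumed parameter values:

import math

def samples_needed(m, k, target=0.95):
    """Samples needed to detect insects with probability `target`,
    assuming negative binomial counts with mean m and clumping k."""
    p_one = 1.0 - (1.0 + m / k) ** (-k)   # P(a sample holds >= 1 insect)
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_one))

# Assumed densities (insects per sample); smaller k means more clumping,
# which raises the number of samples required at the same mean density.
for m in (0.05, 0.5, 5.0):
    print(f"m={m}: k=0.5 -> {samples_needed(m, 0.5)} samples, "
          f"k=5 -> {samples_needed(m, 5.0)} samples")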

Relevance: 20.00%

Abstract:

Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods, including random, stratified and biologically informed. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of total species from 120 one-minute samples, compared with 34% of total species detected by traditional area search methods. Our results demonstrate that targeted sampling methods can provide an effective means of analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data.
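
A minimal sketch of the selection scheme that performed best, drawing 120 one-minute samples at random from the three hours after dawn across five days of recordings (day labels are placeholders):

import random

random.seed(42)
days = ["day1", "day2", "day3", "day4", "day5"]

# Each day contributes 180 one-minute segments (the 3 h after dawn).
pool = [(day, minute) for day in days for minute in range(180)]

# Randomly select 120 one-minute samples from the pooled dawn windows;
# each selected segment is then analyzed manually for species.
samples = sorted(random.sample(pool, 120))
for day, minute in samples[:5]:
    print(day, f"dawn + {minute} min")   # first few, for illustration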

Relevance: 20.00%

Abstract:

Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose the locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream-network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of values at locations or estimation of parameters. We propose an algorithmic approach to design, with the mean utility of a design estimated using Monte Carlo techniques and an exchange algorithm used to search for optimal sampling designs. In particular, we focus on the problems of finding an optimal design from a set of fixed designs and of finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but that, given the utility function, designs are relatively robust to the type of response variable.
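
A generic sketch of the Monte Carlo exchange idea for choosing an optimal subset of sampling locations; the utility below (expected minimum spacing between sites under random jitter) is a hypothetical stand-in for the paper's prediction- or estimation-based utilities:

import itertools
import random

random.seed(0)
candidate_sites = list(range(20))        # hypothetical sampling locations
design_size = 5

def mc_utility(design, draws=200):
    """Monte Carlo estimate of a stand-in utility: the expected minimum
    spacing between selected sites after random jitter."""
    total = 0.0
    for _ in range(draws):
        jitter = [s + random.gauss(0, 0.1) for s in design]
        total += min(abs(a - b)
                     for a, b in itertools.combinations(jitter, 2))
    return total / draws

# Exchange algorithm: swap one site at a time, keeping any swap that
# improves the estimated mean utility, until no swap helps.
design = random.sample(candidate_sites, design_size)
best_u = mc_utility(design)
for _ in range(50):                      # bounded passes; guards MC noise
    improved = False
    for i in range(design_size):
        for new in set(candidate_sites) - set(design):
            trial = design[:i] + [new] + design[i + 1:]
            u = mc_utility(trial)
            if u > best_u:
                design, best_u, improved = trial, u, True
    if not improved:
        break
print(sorted(design), round(best_u, 3))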

Relevance: 20.00%

Abstract:

The deposition of biological material (biofouling) onto polymeric contact lenses is thought to be a major contributor to lens discomfort and hence discontinuation of wear. We describe a method to characterize lipid deposits directly from worn contact lenses utilizing liquid extraction surface analysis coupled to tandem mass spectrometry (LESA-MS/MS). This technique effected facile and reproducible extraction of lipids from the contact lens surfaces and identified lipid molecular species representing all major classes present in human tear film. Our data show that LESA-MS/MS is a rapid and comprehensive technique for the characterization of lipid-related biofouling on polymer surfaces.

Relevance: 20.00%

Abstract:

Mammographic density (MD) adjusted for age and body mass index (BMI) is a strong heritable breast cancer risk factor; however, its biological basis remains elusive. Previous studies assessed MD-associated histology using random sampling approaches, despite evidence that high and low MD areas exist within a breast and are negatively correlated with one another. We have used an image-guided approach to sample high and low MD tissues from within individual breasts to examine the relationship between histology and degree of MD. Image-guided sampling was performed using two different methodologies on mastectomy tissues (n = 12): (1) sampling of high and low MD regions within a slice, guided by bright (high MD) and dark (low MD) areas in a slice X-ray film; (2) sampling of high and low MD regions within a whole breast using a stereotactically guided vacuum-assisted core biopsy technique. Pairwise analysis accounting for potential confounders (e.g. age, BMI and menopausal status) provides appropriate power for analysis despite the small sample size. High MD tissues had higher stromal (P = 0.002) and lower fat (P = 0.002) compositions, but showed no evidence of a difference in glandular area (P = 0.084), compared with low MD tissues from the same breast. High MD regions had higher relative gland counts (P = 0.023), and a preponderance of Type I lobules in high MD compared with low MD regions was observed in 58% of subjects (n = 7), although this did not achieve significance. These findings clarify the histologic nature of high MD tissue and support hypotheses regarding the biophysical impact of dense connective tissue on mammary malignancy. They also provide important terms of reference for ongoing analyses of the underlying genetics of MD.

Relevance: 20.00%

Abstract:

As part of a wider study to develop an ecosystem-health monitoring program for wadeable streams of south-eastern Queensland, Australia, comparisons were made regarding the accuracy, precision and relative efficiency of single-pass backpack electrofishing and multiple-pass electrofishing plus supplementary seine netting to quantify fish assemblage attributes at two spatial scales (within discrete mesohabitat units and within stream reaches consisting of multiple mesohabitat units). The results demonstrate that multiple-pass electrofishing plus seine netting provides more accurate and precise estimates of fish species richness, assemblage composition and species relative abundances than single-pass electrofishing alone, and that intensive sampling of three mesohabitat units (equivalent to a riffle-run-pool sequence) is a more efficient sampling strategy to estimate reach-scale assemblage attributes than less intensive sampling over larger spatial scales. This intensive sampling protocol was sufficiently sensitive that relatively small differences in assemblage attributes (<20%) could be detected with high statistical power (1-β > 0.95), and that relatively few stream reaches (<4) need be sampled to accurately estimate assemblage attributes close to the true population means. The merits and potential drawbacks of the intensive sampling strategy are discussed, and it is deemed to be suitable for a range of monitoring and bioassessment objectives.
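
The power statement can be reproduced in outline with a standard two-sample power calculation; the effect size below is an assumption chosen for illustration, not a value from the study:

from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# Assumed standardized effect size for a ~20% difference in an
# assemblage attribute relative to its between-reach variability.
d = 2.0

print(power_calc.power(effect_size=d, nobs1=4, alpha=0.05))
# Reaches per group needed for power 0.95 at this effect size:
print(power_calc.solve_power(effect_size=d, power=0.95, alpha=0.05))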

Relevance: 20.00%

Abstract:

We examine some variations of standard probability designs that preferentially sample sites according to how easy they are to access. Preferential sampling designs deliver unbiased estimates of the mean and sampling variance and ease the burden of data collection, but at what cost to design efficiency? Preferential sampling has the potential to either increase or decrease the sampling variance, depending on the application. We carry out a simulation study to gauge its effect when sampling Soil Organic Carbon (SOC) values in a large agricultural region in south-eastern Australia. Preferential sampling in this region can reduce the distance to travel by up to 16%. Our study is based on a dataset of predicted SOC values produced from a data-mining exercise. We consider three designs and two ways to determine ease of access. The overall conclusion is that sampling performance deteriorates as the strength of preferential sampling increases, because the regions of high SOC are harder to access; our designs therefore inadvertently target regions of low SOC value. The good news, however, is that Generalised Random Tessellation Stratified (GRTS) sampling designs are not as badly affected as others, and GRTS remains an efficient design compared with its competitors.
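
A small sketch of why preferential designs remain unbiased even when accessibility and the response are correlated (synthetic values, not the SOC dataset): sample with inclusion probabilities proportional to an accessibility score and estimate the mean with the Horvitz-Thompson estimator, which reweights by those probabilities.

import numpy as np

rng = np.random.default_rng(7)
N = 1000
soc = rng.gamma(shape=2.0, scale=10.0, size=N)    # synthetic SOC surface

# Accessibility negatively correlated with SOC, as in the study region.
access = 1.0 / (1.0 + soc / soc.mean()) + rng.uniform(0, 0.2, N)
n = 100
incl_prob = n * access / access.sum()             # preferential inclusion

# Poisson sampling: include each site independently with its probability;
# the Horvitz-Thompson estimator divides by the inclusion probability.
estimates = []
for _ in range(2000):
    chosen = rng.uniform(size=N) < incl_prob
    estimates.append(np.sum(soc[chosen] / incl_prob[chosen]) / N)

print(f"true mean {soc.mean():.2f}  HT mean {np.mean(estimates):.2f}  "
      f"HT sd {np.std(estimates):.2f}")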

Relevance: 20.00%

Abstract:

The paper provides a systematic approach to designing the laboratory phase of a multiphase experiment, taking into account previous phases. General principles are outlined for experiments in which orthogonal designs can be employed. Multiphase experiments occur widely, although their multiphase nature is often not recognized. The need to randomize, in the laboratory phase, the material produced in the first phase is emphasized. Factor-allocation diagrams are used to depict the randomizations in a design, and the use of skeleton analysis-of-variance (ANOVA) tables to evaluate their properties is discussed. The methods are illustrated using a scenario and a case study. A basis for categorizing designs is suggested. This article has supplementary material online.
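
The randomization being emphasized can be shown in a few lines: material produced in the field phase is allocated to laboratory processing order by a fresh permutation, rather than processed in field order (plot labels here are placeholders):

import random

random.seed(3)
# First-phase units: field plots carrying the field-phase treatments.
field_plots = [f"plot{i:02d}" for i in range(1, 13)]

# Laboratory phase: randomize the allocation of plots to processing
# positions, so lab-order effects are not confounded with treatments.
lab_order = field_plots[:]
random.shuffle(lab_order)
for position, plot in enumerate(lab_order, start=1):
    print(f"lab position {position:2d} <- {plot}")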

Relevance: 20.00%

Abstract:

This work investigates the design of ideal threshold secret sharing in the context of cheating prevention. We show that every orthogonal array is exactly a defining matrix of an ideal threshold scheme. To prevent cheating, defining matrices should be nonlinear, so that cheaters and honest participants have the same chance of guessing the valid secret. The last part of the work shows how to construct nonlinear secret sharing based on orthogonal arrays.
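
As background to the orthogonal-array correspondence, Shamir's (t, n) scheme is a classic ideal threshold scheme whose table of share vectors forms an orthogonal array; note that it is linear, so the sketch below illustrates the correspondence rather than the nonlinear constructions this work proposes:

import random

P = 2_147_483_647                        # prime modulus; work over GF(P)

def make_shares(secret, t, n):
    """Shamir (t, n) threshold scheme: shares are points on a random
    degree-(t-1) polynomial whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    t shares; fewer than t shares reveal nothing about it."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))           # any 3 of the 5 shares suffice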

Relevance: 20.00%

Abstract:

A laboratory experiment was set up in small chambers to monitor greenhouse gas emissions and determine the most suitable time for sampling. A six-treatment experiment was conducted, comprising a one-week pre-incubation period and a one-week incubation period. Samples were taken 1, 2, 3, 6 and 24 hours after closing the lids of the incubation chambers. Variation in greenhouse gas fluxes with the time of sampling was high. Emission rates increased over the first three hours and decreased afterwards. The emission rates measured 3 hours after closing the lids were close to the mean rate for the 24-h period.
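
The closing comparison amounts to checking a single reading against a time-weighted average over the chamber-closure period; a sketch with hypothetical flux readings (the paper's measurements are not reproduced here):

import numpy as np

hours = np.array([0, 1, 2, 3, 6, 24], dtype=float)
# Hypothetical fluxes: rising over the first three hours, declining
# later; the t = 0 value repeats the 1-h reading for the integration.
flux = np.array([5.0, 5.0, 7.0, 8.0, 8.0, 7.0])

# Trapezoidal time-weighted mean flux over the 24-h closure.
mean_24h = np.sum(np.diff(hours) * (flux[1:] + flux[:-1]) / 2) / 24.0
print(f"3-h reading: {flux[3]:.1f}, 24-h mean: {mean_24h:.1f}")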