888 results for stratified random sampling


Relevance: 100.00%

Abstract:

Historically, a significant gap between male and female wages has existed in the Australian labour market. Indeed, this wage differential was institutionalised in the 1912 arbitration decision, which determined that the basic female wage would be set at between 54 and 66 per cent of the male wage. More recently, however, the 1969 and 1972 Equal Pay Cases determined that male/female wage relativities should be based upon the premise of equal pay for work of equal value. It is important to note that the mere observation that average wages differ between males and females is not in itself evidence of sex discrimination. Economists restrict the definition of wage discrimination to cases where two distinct groups receive different average remuneration for reasons unrelated to differences in productivity characteristics. This paper extends previous studies of wage discrimination in Australia (Chapman and Mulvey, 1986; Haig, 1982) by correcting the estimated male/female wage differential for the existence of non-random sampling. Previous Australian estimates of male/female human capital based wage specifications, together with estimates of the corresponding wage differential, all suffer from a failure to address this issue. If the sample of females observed to be working does not represent a random sample, then the estimates of the male/female wage differential will be both biased and inconsistent.
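The selection-correction issue described here is typically handled with a Heckman-style two-step estimator. The sketch below, on simulated data with hypothetical variables (schooling and a child-at-home indicator driving participation), illustrates how a naive wage regression on the selected sample is biased and how adding the inverse Mills ratio from a first-stage probit corrects it; it is not the paper's actual specification.

```python
# A minimal sketch of a Heckman two-step selection correction (not the paper's
# actual specification). Variable names and parameter values are hypothetical.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

educ = rng.normal(12, 2, n)          # years of schooling
kids = rng.binomial(1, 0.4, n)       # selection-equation instrument

# Correlated errors: u drives participation, e drives wages
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T

# Participation (selection) equation: work if latent index > 0
work = (0.5 + 0.08 * educ - 0.9 * kids + u > 0).astype(int)

# Wage equation (true return to education = 0.10), observed only for workers
log_wage = 1.0 + 0.10 * educ + e

# Naive OLS on the selected sample is biased because E[e | work=1] != 0
X_sel = sm.add_constant(educ[work == 1])
naive = sm.OLS(log_wage[work == 1], X_sel).fit()

# Step 1: probit for participation, then the inverse Mills ratio
Z = sm.add_constant(np.column_stack([educ, kids]))
probit = sm.Probit(work, Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: wage regression on the selected sample with the IMR as a regressor
X_corr = sm.add_constant(np.column_stack([educ[work == 1], imr[work == 1]]))
corrected = sm.OLS(log_wage[work == 1], X_corr).fit()

print("naive return to education:    ", round(naive.params[1], 3))
print("corrected return to education:", round(corrected.params[1], 3))
```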

Relevance: 100.00%

Abstract:

As an extension to an activity introducing Year 5 students to the practice of statistics, the software TinkerPlots made it possible to collect repeated random samples from a finite population to informally explore students’ capacity to begin reasoning with a distribution of sample statistics. This article provides background for the sampling process and reports on the success of students in making predictions for the population from the collection of simulated samples and in explaining their strategies. The activity provided an application of the numeracy skill of using percentages, the numerical summary of the data, rather than graphing data in the analysis of samples to make decisions on a statistical question. About 70% of students made what were considered at least moderately good predictions of the population percentages for five yes–no questions, and the correlation between predictions and explanations was 0.78.
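For readers who want to reproduce the flavour of the activity outside TinkerPlots, the sketch below repeatedly draws random samples from a small finite yes/no population and summarises each sample as a percentage; the population size, sample size and true percentage are assumptions for illustration, not the study's values.

```python
# A minimal sketch of repeated random sampling from a finite yes/no population,
# in the spirit of the TinkerPlots activity. Population size, sample size and
# the true "yes" percentage are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

pop_size = 200                 # finite population, e.g. a year-level cohort
true_yes = 0.65                # true proportion answering "yes"
n_yes = int(pop_size * true_yes)
population = rng.permutation(np.r_[np.ones(n_yes), np.zeros(pop_size - n_yes)])

sample_size = 20
n_samples = 50

# Draw repeated samples without replacement and record each sample percentage
sample_pcts = [100 * rng.choice(population, sample_size, replace=False).mean()
               for _ in range(n_samples)]

print("population percentage:         ", 100 * population.mean())
print("mean of sample percentages:    ", np.mean(sample_pcts))
print("spread (SD) of sample percents:", round(np.std(sample_pcts), 1))
```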

Relevance: 100.00%

Abstract:

The total abundance of small pelagic species in Mozambican waters was estimated to be 100 (plus or minus 45) thousand tonnes of which scad and mackerel constituted 57 (plus or minus 39) thousand tonnes. These abundance estimates must be considered as minimum estimates of biomass as the efficiency of the trawl was assumed to be equal to 1.0.

Relevance: 100.00%

Abstract:

Milkfish and prawn pond operation in the Philippines is often associated with lab-lab culture. Lab-lab is a biological complex of blue-green algae, diatoms, bacteria and various animals which forms a mat at the bottom of nursery ponds or floating patches along the margins of ponds. This complex is considered the most favorable food of milkfish in brackishwater ponds. Variations in the quantity and quality of lab-lab between and within areas of a 1,000 sq. m. pond were determined over two culture periods (of 6 months' duration), and the applicability and suitability of stratified random sampling as a method of sampling lab-lab were evaluated.
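As a concrete illustration of the estimator being evaluated, the sketch below computes a stratified random sampling estimate of mean lab-lab biomass from a pond divided into margin and centre strata; the strata, quadrat numbers and biomass values are hypothetical, chosen purely to show the calculation.

```python
# A minimal sketch of a stratified random sampling estimate of mean lab-lab
# biomass in a pond. The strata, quadrat counts and biomass values are
# hypothetical, purely to illustrate the estimator.
import numpy as np

rng = np.random.default_rng(2)

# Stratum definitions: number of quadrats in stratum (N) and quadrats sampled (n)
strata = {"margin": {"N": 400, "n": 8},
          "centre": {"N": 600, "n": 12}}

# Simulated biomass per quadrat (g dry weight), denser mats along the margins
truth = {"margin": rng.normal(35, 8, 400), "centre": rng.normal(20, 5, 600)}

est_mean, est_var = 0.0, 0.0
N_total = sum(s["N"] for s in strata.values())
for name, s in strata.items():
    sample = rng.choice(truth[name], s["n"], replace=False)
    w = s["N"] / N_total                       # stratum weight
    est_mean += w * sample.mean()
    # variance of the stratified mean with finite population correction
    fpc = 1 - s["n"] / s["N"]
    est_var += w ** 2 * fpc * sample.var(ddof=1) / s["n"]

print("stratified mean biomass: %.1f g, SE %.1f g" % (est_mean, est_var ** 0.5))
```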

Relevance: 100.00%

Abstract:

Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and only relies on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
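The core trick can be illustrated in a few lines: selecting the sensor whose Voronoi cell contains a random location favours nodes with large cells, and von Neumann rejection cancels that area bias. The sketch below simulates this centrally with synthetic node positions; in the actual protocol the location would be reached by geographic routing and the cell areas computed in a distributed fashion.

```python
# A minimal, centralised sketch of the rejection idea behind approximately
# uniform sensor sampling: a random location selects the node that "owns" it
# with probability proportional to its Voronoi cell area, and von Neumann
# rejection then cancels that area bias. Node positions and areas are synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

nodes = rng.uniform(0, 1, size=(50, 2))       # sensor positions in a unit square
tree = cKDTree(nodes)

# Approximate each node's Voronoi cell area by Monte Carlo (a stand-in for the
# distributed computation used in the actual protocol).
probes = rng.uniform(0, 1, size=(200_000, 2))
_, owner = tree.query(probes)
areas = np.bincount(owner, minlength=len(nodes)) / len(probes)
a_min = areas.min()

def sample_node():
    """Draw one node approximately uniformly via random location + rejection."""
    while True:
        loc = rng.uniform(0, 1, size=2)        # random geographic location
        _, i = tree.query(loc)                 # node owning that location
        if rng.uniform() < a_min / areas[i]:   # undo the area bias
            return i

draws = np.array([sample_node() for _ in range(5000)])
counts = np.bincount(draws, minlength=len(nodes))
print("min/max selection counts over 50 nodes:", counts.min(), counts.max())
```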

Relevance: 100.00%

Abstract:

The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory ‘tail’ DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current methods deployed in such an acquisition are expected to be both objective and random. In this paper we examine the ‘randomness’ of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. In order to achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be applied either manually or in automated form. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled variation experiment showed that the SRS technique attained a lower variability than the traditional approach. The single-user repetition experiment showed greater individual variances while not being detrimental to overall averages. This would suggest that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
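A minimal sketch of the systematic random sampling idea is given below: a single random offset fixes a regular grid of microscope fields across the scorable area of the slide, so field selection is unbiased yet fully reproducible. The slide dimensions, field size and grid spacing are illustrative assumptions, not the paper's protocol values.

```python
# A minimal sketch of systematic random sampling (SRS) of microscope fields on
# a comet slide: one random offset fixes a regular grid of fields.
# Slide dimensions, field size and sampling fraction are illustrative.
import numpy as np

rng = np.random.default_rng(4)

slide_w, slide_h = 20_000, 20_000     # scorable area in micrometres
field = 500                           # field-of-view edge length (um)
step = 2_000                          # grid spacing: one field per 4 x 4 block

# A single random start in the first grid cell determines every sampled field
x0 = rng.uniform(0, step)
y0 = rng.uniform(0, step)

fields = [(x, y)
          for x in np.arange(x0, slide_w - field, step)
          for y in np.arange(y0, slide_h - field, step)]

print(f"{len(fields)} fields selected; first three stage positions:")
print(fields[:3])
```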

Relevance: 100.00%

Abstract:

Long-term monitoring of forest soils as part of a pan-European network to detect environmental change depends on an accurate determination of the mean of the soil properties at each monitoring event. Forest soil is known to be very variable spatially, however. A study was undertaken to explore and quantify this variability at three forest monitoring plots in Britain. Detailed soil sampling was carried out, and the data from the chemical analyses were analysed by classical statistics and geostatistics. An analysis of variance showed that there were no consistent effects from the sample sites in relation to the position of the trees. The variogram analysis showed that there was spatial dependence at each site for several variables and some varied in an apparently periodic way. An optimal sampling analysis based on the multivariate variogram for each site suggested that a bulked sample from 36 cores would reduce error to an acceptable level. Future sampling should be designed so that it neither targets nor avoids trees and disturbed ground. This can be achieved best by using a stratified random sampling design.
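One way to implement the recommended design is to stratify the plot into a regular grid and place one core at a random position within each cell, so that core locations neither target nor avoid trees. The sketch below, with an assumed 30 m plot and a 6 x 6 grid giving the 36 cores, is purely illustrative and is not the plots' actual layout.

```python
# A minimal sketch of a stratified random layout for 36 soil cores: the plot is
# divided into a 6 x 6 grid and one core is placed at a uniformly random
# position within each cell. The 30 m plot size is an assumption.
import numpy as np

rng = np.random.default_rng(5)

plot_side = 30.0            # metres (assumed)
cells = 6                   # 6 x 6 grid -> 36 cores
cell = plot_side / cells

cores = [(cx * cell + rng.uniform(0, cell), cy * cell + rng.uniform(0, cell))
         for cx in range(cells) for cy in range(cells)]

print(f"{len(cores)} core positions; first three: "
      + ", ".join(f"({x:.1f} m, {y:.1f} m)" for x, y in cores[:3]))
```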

Relevance: 100.00%

Abstract:

The Prospective and Retrospective Memory Questionnaire (PRMQ) has been shown to have acceptable reliability and factorial, predictive, and concurrent validity. However, the PRMQ has never been administered to a probability sample survey representative of all ages in adulthood, nor have previous studies controlled for factors that are known to influence metamemory, such as affective status. Here, the PRMQ was applied in a survey adopting a probabilistic three-stage cluster sample representative of the population of Sao Paulo, Brazil, according to gender, age (20-80 years), and economic status (n=1042). After excluding participants who had conditions that impair memory (depression, anxiety, psychotropic medication use, and/or neurological/psychiatric disorders), in the remaining 664 individuals we (a) used confirmatory factor analyses to test competing models of the latent structure of the PRMQ, and (b) studied effects of gender, age, schooling, and economic status on prospective and retrospective memory complaints. The model with the best fit confirmed the same tripartite structure (a general memory factor and two orthogonal prospective and retrospective memory factors) previously reported. Women complained more of general memory slips, especially those in the first 5 years after menopause, and there were more complaints of prospective than retrospective memory, except in participants with lower family income.

Relevance: 100.00%

Abstract:

Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, ‘hidden’ trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and this may affect our inferences about population structure and abundance. I conclude with a discussion of ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
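The mechanism can be illustrated with a toy simulation in which capture probability increases with growth rate: the sampled mean growth rate then overestimates the population mean. The population size, trait distribution and capture model below are illustrative assumptions, not the study's data.

```python
# A minimal sketch of trait-correlated catchability bias: capture probability
# rises with growth rate, so the sample mean overestimates the population mean.
# All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)

pop = 3000
growth = rng.normal(1.0, 0.25, pop)            # relative growth rate per fish

# Faster growers are more catchable: logistic capture probability in growth rate
p_capture = 1 / (1 + np.exp(-3 * (growth - 1.0)))
caught = rng.uniform(size=pop) < p_capture

print("population mean growth rate:", round(growth.mean(), 3))
print("sampled mean growth rate:   ", round(growth[caught].mean(), 3))
print("capture prob., fastest vs slowest quartile: "
      f"{p_capture[growth > np.quantile(growth, .75)].mean():.2f} vs "
      f"{p_capture[growth < np.quantile(growth, .25)].mean():.2f}")
```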

Relevance: 100.00%

Abstract:

Demographic characteristics associated with gambling participation and problem gambling severity were investigated in a stratified random survey in Tasmania, Australia. Computer-assisted telephone interviews were conducted in March 2011, resulting in a representative sample of 4,303 Tasmanian residents aged 18 years or older. Overall, 64.8 % of Tasmanian adults reported participating in some form of gambling in the previous 12 months. The most common forms of gambling were lotteries (46.5 %), keno (24.3 %), instant scratch tickets (24.3 %), and electronic gaming machines (20.5 %). Gambling severity rates were estimated at non-gambling (34.8 %), non-problem gambling (57.4 %), low risk gambling (5.3 %), moderate risk (1.8 %), and problem gambling (0.7 %). Compared to Tasmanian gamblers as a whole, significantly higher annual participation rates were reported by couples with no children, those in full time paid employment, and people who did not complete secondary school. Compared to Tasmanian gamblers as a whole, significantly higher gambling frequencies were reported by males, people aged 65 or older, and people who were on pensions or were unable to work. Compared to Tasmanian gamblers as a whole, significantly higher gambling expenditure was reported by males. The highest average expenditure was for horse and greyhound racing ($AUD 1,556), double that of the next highest gambling activity, electronic gaming machines ($AUD 767). Compared to Tasmanian gamblers as a whole, problem gamblers were significantly younger, in paid employment, reported lower incomes, and were born in Australia. Although gambling participation rates appear to be falling, problem gambling severity rates remain stable. These changes appear to reflect a maturing gambling market and the need for population-specific harm minimisation strategies.

Relevance: 100.00%

Abstract:

Classical sampling methods can be used to estimate the mean of a finite or infinite population. Block kriging also estimates the mean, but of an infinite population in a continuous spatial domain. In this paper, I consider a finite population version of block kriging (FPBK) for plot-based sampling. The data are assumed to come from a spatial stochastic process. Minimizing mean-squared-prediction errors yields best linear unbiased predictions that are a finite population version of block kriging. FPBK has versions comparable to simple random sampling and stratified sampling, and includes the general linear model. This method has been tested for several years for moose surveys in Alaska, and an example is given where results are compared to stratified random sampling. In general, assuming a spatial model gives three main advantages over classical sampling: (1) FPBK is usually more precise than simple or stratified random sampling, (2) FPBK allows small area estimation, and (3) FPBK allows nonrandom sampling designs.
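The sketch below illustrates the FPBK prediction of a population total on a simulated grid of plots: a GLS estimate of the mean plus a kriging adjustment for the unobserved plots, compared with the classical simple-random-sampling expansion estimator. The grid, covariance parameters and simulated counts are assumptions for illustration, not the Alaskan survey's model.

```python
# A minimal sketch of finite population block kriging (FPBK): predict the total
# over all N plots from n observed plots, using a GLS mean and a kriging
# adjustment for the unobserved plots. All model inputs are illustrative.
import numpy as np

gen = np.random.default_rng(7)

# Population: a 20 x 20 grid of plots with an exponential spatial covariance
coords = np.array([(i, j) for i in range(20) for j in range(20)], float)
N = len(coords)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sigma2, range_par, nugget = 4.0, 5.0, 0.5
Sigma = sigma2 * np.exp(-d / range_par) + nugget * np.eye(N)

# Simulate plot counts (e.g. animals per plot) with constant mean 10
y = gen.multivariate_normal(np.full(N, 10.0), Sigma)

# Observe a simple random sample of n plots
n = 80
obs = gen.choice(N, n, replace=False)
uns = np.setdiff1d(np.arange(N), obs)

X = np.ones((N, 1))                       # constant-mean design matrix
Soo = Sigma[np.ix_(obs, obs)]
Suo = Sigma[np.ix_(uns, obs)]
Soo_inv = np.linalg.inv(Soo)

# GLS estimate of the mean, then kriging prediction of the unobserved plots
beta = np.linalg.solve(X[obs].T @ Soo_inv @ X[obs], X[obs].T @ Soo_inv @ y[obs])
resid = y[obs] - X[obs] @ beta
y_pred = X[uns] @ beta + Suo @ Soo_inv @ resid

total_fpbk = y[obs].sum() + y_pred.sum()
total_srs = N * y[obs].mean()             # classical expansion estimator

print("true total:         ", round(y.sum(), 1))
print("FPBK total:         ", round(total_fpbk, 1))
print("SRS expansion total:", round(total_srs, 1))
```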

Relevance: 100.00%

Abstract:

In all European Union countries, chemical residues are required to be routinely monitored in meat. Good farming and veterinary practice can prevent the contamination of meat with pharmaceutical substances, resulting in a low detection of drug residues through random sampling. An alternative approach is to target-monitor farms suspected of treating their animals with antimicrobials. The objective of this project was to assess, using a stochastic model, the efficiency of these two sampling strategies. The model integrated data on Swiss livestock as well as expert opinion and results from studies conducted in Switzerland. Risk-based sampling showed an increase in detection efficiency of up to 100% depending on the prevalence of contaminated herds. Sensitivity analysis of this model showed the importance of the accuracy of prior assumptions for conducting risk-based sampling. The resources gained by changing from random to risk-based sampling should be transferred to improving the quality of prior information.
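The comparison between the two strategies can be sketched with a small Monte Carlo experiment: for a fixed sampling budget, concentrating samples on herds classified as high risk detects more contaminated herds than simple random sampling, provided the prior risk classification is reasonably accurate. The herd counts and prevalences below are illustrative assumptions, not the inputs of the Swiss model.

```python
# A minimal sketch of random versus risk-based sampling: the same budget
# detects more contaminated herds when concentrated on high-risk herds.
# Herd counts, prevalences and risk-classification accuracy are illustrative.
import numpy as np

rng = np.random.default_rng(8)

n_herds = 10_000
prev_high, prev_low = 0.05, 0.002          # contamination prevalence by risk group
p_high = 0.10                              # 10% of herds classified high-risk
budget = 500                               # herds sampled per year

high_risk = rng.uniform(size=n_herds) < p_high
contaminated = np.where(high_risk,
                        rng.uniform(size=n_herds) < prev_high,
                        rng.uniform(size=n_herds) < prev_low)

def detections(sampled_idx):
    return contaminated[sampled_idx].sum()

# Strategy 1: simple random sampling of herds
random_idx = rng.choice(n_herds, budget, replace=False)

# Strategy 2: risk-based sampling, spending the whole budget on high-risk herds
risk_idx = rng.choice(np.flatnonzero(high_risk), budget, replace=False)

print("detections, random sampling:    ", detections(random_idx))
print("detections, risk-based sampling:", detections(risk_idx))
```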

Relevance: 100.00%

Abstract:

Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies deal with this phenomenon’s implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike’s preceding ISI. As we show, the EIF’s exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron’s ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
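To make the sampling interpretation concrete, the sketch below simulates an EIF neuron driven by a noisy current that holds the membrane potential just below threshold and records the inter-spike intervals, each spike contributing one "sample" whose value is the preceding ISI. All parameter values are generic textbook choices, not those used in the paper.

```python
# A minimal sketch of ISI-based sampling with an exponential integrate-and-fire
# (EIF) neuron: a noisy drive keeps the voltage near threshold, each spike is a
# "sample" whose value is the preceding ISI. Parameters are generic, not fitted.
import numpy as np

rng = np.random.default_rng(9)

# EIF parameters (mV, ms)
tau, E_L, V_T, Delta_T = 20.0, -65.0, -50.0, 2.0
V_reset, V_spike = -60.0, -30.0
dt, T = 0.05, 20_000.0               # time step and total duration (ms)
mu, sigma = 12.0, 3.0                # mean drive and noise amplitude (mV)

V = E_L
last_spike, isis = 0.0, []
for step in range(int(T / dt)):
    t = step * dt
    dV = (-(V - E_L) + Delta_T * np.exp((V - V_T) / Delta_T) + mu) / tau
    V += dV * dt + sigma * np.sqrt(dt / tau) * rng.standard_normal()
    if V >= V_spike:                 # spike: record the preceding ISI and reset
        isis.append(t - last_spike)
        last_spike, V = t, V_reset

isis = np.array(isis)
print(f"{len(isis)} spikes; mean ISI {isis.mean():.1f} ms, "
      f"CV {isis.std() / isis.mean():.2f}")
```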

Relevance: 100.00%

Abstract:

Attitudes and practices towards older workers were surveyed in Brisbane with 525 employees randomly sampled from the electoral roll and executives of 104 companies obtained by stratified random sampling from the Register of Workplaces (response rates, 60% and 80% respectively). The results indicated that “older workers” are young in terms of contemporary life expectancy, and younger for employers than for employees; they have some desirable personal qualities (e.g., loyalty), but are not perceived as adaptable; workers aged 25–39 were preferred on qualities held to be important in the workplace; and there was minimal interest in recruiting anyone over 45 years of age.

Relevance: 100.00%

Abstract:

This paper details the processes and challenges involved in collecting inventory data from smallholder and community woodlots on Leyte Island, Philippines. Over the period from 2005 through to 2012, 253 woodlots at 170 sites were sampled as part of a large multidisciplinary project, resulting in a substantial timber inventory database. The inventory was undertaken to provide information for three separate but interrelated studies, namely (1) tree growth, performance and timber availability from private smallholder woodlots on Leyte Island; (2) tree growth and performance of mixed-species plantings of native species; and (3) the assessment of reforestation outcomes from various forms of reforestation. A common procedure for establishing plots within each site was developed and applied in each study, although the basis of site selection varied. A two-stage probability proportional to size sampling framework was developed to select smallholder woodlots for inclusion in the inventory. In contrast, community-based forestry woodlots were selected using stratified random sampling. Challenges encountered in undertaking the inventory were mostly associated with the need to consult widely before the commencement of the inventory and problems in identifying woodlots for inclusion. Most smallholder woodlots were capable of producing merchantable volumes of less than 44 % of the site potential, owing to a lack of appropriate silviculture. There was a clear bimodal distribution in the proportion of the total smallholding area that the woodlots comprised. This bimodality reflects two major motivations for smallholders to establish woodlots, namely timber production and securing land tenure.
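The sketch below illustrates a simplified version of the two-stage selection: villages are drawn with probability proportional to their total woodlot area, and woodlots within each selected village are then drawn proportional to their own area (successive weighted draws are used as a simple stand-in for exact PPS without replacement). The size measures and counts are hypothetical, not the project's sampling frame.

```python
# A minimal sketch of two-stage probability-proportional-to-size (PPS)
# selection of smallholder woodlots. Size measures and counts are hypothetical;
# successive weighted draws approximate PPS without replacement.
import numpy as np

rng = np.random.default_rng(10)

# First stage: villages with differing total woodlot area (ha)
village_area = rng.gamma(shape=2.0, scale=15.0, size=40)
n_villages = 8
villages = rng.choice(len(village_area), size=n_villages, replace=False,
                      p=village_area / village_area.sum())

# Second stage: within each selected village, draw woodlots PPS by area
selected = []
for v in villages:
    woodlot_area = rng.gamma(shape=1.5, scale=0.8, size=rng.integers(10, 30))
    picks = rng.choice(len(woodlot_area), size=3, replace=False,
                       p=woodlot_area / woodlot_area.sum())
    selected.extend((v, w, round(woodlot_area[w], 2)) for w in picks)

print(f"{len(selected)} woodlots selected; first three (village, woodlot, ha):")
print(selected[:3])
```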