208 results for Sampling rates
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice to determine whether RSS is beneficial and to obtain the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not so small. Using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), we demonstrate that taking two or more observations from each set, even with the optimal set size from the RSS design, can be more beneficial.
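The efficiency gain that motivates RSS can be sketched in a few lines. This is an illustrative Monte Carlo check, not the authors' cost analysis: the function name `ranked_set_sample` is mine, and ranking is done on the measured value itself as a stand-in for a cheap ranking variable (perfect ranking).

```python
import random
import statistics

def ranked_set_sample(population, set_size, rng):
    """One cycle of balanced RSS: draw `set_size` sets of `set_size`
    units, rank each set (here by the value itself, standing in for a
    cheap ranking variable), and measure the i-th ranked unit of set i."""
    sample = []
    for i in range(set_size):
        ranked = sorted(rng.sample(population, set_size))
        sample.append(ranked[i])
    return sample

rng = random.Random(1)
population = [rng.gauss(10, 2) for _ in range(10_000)]

# Compare the variance of the sample mean under SRS and RSS with k = 4,
# over many replications.
k, reps = 4, 2000
srs_means = [statistics.mean(rng.sample(population, k)) for _ in range(reps)]
rss_means = [statistics.mean(ranked_set_sample(population, k, rng))
             for _ in range(reps)]
print(statistics.pvariance(srs_means) > statistics.pvariance(rss_means))
```

For a normal population with set size 4, the RSS mean is roughly twice as efficient as the SRS mean, which is the kind of gain the cost trade-off above weighs against the cost of ranking.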
Abstract:
A new technique called the reef resource inventory (RRI) was developed to map the distribution and abundance of benthos and substratum on reefs. The rapid field sampling technique uses divers to visually estimate the percentage cover of categories of benthos and substratum along 2 x 20 m plotless strip-transects positioned randomly over the tops, and systematically along the edge, of reefs. The purpose of this study was to compare the relative sampling accuracy of the RRI against the line intercept transect technique (LIT), an international standard for sampling reef benthos and substratum. Analysis of paired sampling with LIT and RRI at 51 sites indicated sampling accuracy was not different (P > 0.05) for 8 of the 12 benthos and substratum categories used in the study. Significant differences were attributed to small-scale patchiness and cryptic coloration of some benthos; effects associated with sampling a sparsely distributed animal along a line versus an area; difficulties in discriminating some of the benthos and substratum categories; and differences due to visual acuity, since LIT measurements were taken by divers close to the seabed whereas RRI measurements were taken by divers higher in the water column. The relative cost efficiency of the RRI technique was at least three times that of LIT for all benthos and substratum categories and as much as 10 times higher for two categories. These results suggest that the RRI can be used to obtain reliable and accurate estimates of relative abundance of broad categories of reef benthos and substratum.
Abstract:
This article is motivated by a lung cancer study where a regression model is involved and the response variable is too expensive to measure but the predictor variable can be measured easily with relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. In this article, by using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase efficiency of the regression analysis for the above-mentioned situation. The developed method is applied retrospectively to a lung cancer study. In the lung cancer study, the interest is to investigate the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria such as A-, D-, and integrated mean square error (IMSE)-optimality are considered in the application. With set size 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, by using the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred by using SRS.
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
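The species-recovery pattern described above can be explored with a toy simulation: a skewed community in which rare species dominate the species list but not the individuals. All numbers and names here are invented for illustration; this is not the trawl data or the paper's 10 kg partitioning design.

```python
import random

def species_detected(catch, fraction, rng):
    """Fraction of the catch's species found when only `fraction` of the
    individuals are sorted (simple random subsample of individuals)."""
    n = int(len(catch) * fraction)
    subsample = rng.sample(catch, n)
    return len(set(subsample)) / len(set(catch))

rng = random.Random(42)
# Skewed community: 20 abundant species (50 individuals each) and
# 80 rare species (1 individual each), 100 species in total.
catch = [s for s in range(100) for _ in range(50 if s < 20 else 1)]

small = species_detected(catch, 0.10, rng)
large = species_detected(catch, 0.50, rng)
print(small < large)  # larger subsamples recover more of the species list
```

Because each rare species has only one individual, a 10% subsample finds a rare species only about 10% of the time, while the abundant species are almost always found, mirroring the paper's finding that small subsamples recover roughly half the species list.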
Abstract:
The efficiency with which a small beam trawl (1 x 0.5 m mouth) sampled postlarvae and juveniles of the tiger prawns Penaeus esculentus and P. semisulcatus at night was estimated in 3 tropical seagrass communities (dominated by Thalassia hemprichii, Syringodium isoetifolium and Enhalus acoroides, respectively) in the shallow waters of the Gulf of Carpentaria in northern Australia. An area of seagrass (40 x 3 m) was enclosed by a net and the beam trawl was repeatedly hand-hauled over the substrate. Net efficiency (q) was calculated using 4 methods: the unweighted Leslie, weighted Leslie, DeLury and maximum-likelihood (ML) methods. The maximum-likelihood method is preferred for estimating efficiency because it makes the fewest assumptions and is not affected by zero catches. The major difference in net efficiencies was between postlarvae (mean ML q +/- 95% confidence limits = 0.66 +/- 0.16) and juveniles of both species (mean q for juveniles in water less than or equal to 1.0 m deep = 0.47 +/- 0.05), i.e. the beam trawl was more efficient at capturing postlarvae than juveniles. There was little difference in net efficiency for P. esculentus between seagrass types (T. hemprichii versus S. isoetifolium), even though the biomass and morphologies of seagrass in these communities differed greatly (biomasses were 54 and 204 g m(-2), respectively). The efficiency of the net appeared to be the same for juveniles of the 2 species in shallow water, but was lower for juvenile P. semisulcatus at high tide when the water was deeper (1.6 to 1.9 m) (0.35 +/- 0.08). The lower efficiency near the time of high tide is possibly because the prawns are more active at high tide than at low tide, and can also escape above the net. Factors affecting net efficiency and alternative methods of estimating net efficiency are discussed.
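The Leslie depletion method mentioned above regresses each haul's catch against the cumulative catch removed before that haul: the slope is -q (the per-haul efficiency) and the x-intercept estimates the initial population. A minimal unweighted sketch with made-up catches follows; the paper's weighted and maximum-likelihood fits are not reproduced, and the helper names are mine.

```python
import statistics

def _ols(x, y):
    """Ordinary least-squares slope and intercept of y on x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def leslie_estimate(catches):
    """Unweighted Leslie method: catch in haul i = q * (N0 - K_i), where
    K_i is the cumulative catch before haul i. Returns (q, N0)."""
    cum_prior = [sum(catches[:i]) for i in range(len(catches))]
    slope, intercept = _ols(cum_prior, catches)
    q = -slope
    return q, intercept / q

# Idealised depletion of 100 prawns in an enclosure with q = 0.5 per haul.
catches = [50, 25, 12.5, 6.25]
q, n0 = leslie_estimate(catches)
print(round(q, 2), round(n0, 1))  # 0.5 100.0
```

With noise-free catches the regression recovers the true q and N0 exactly; real hauls scatter around the line, which is why the paper compares several estimators.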
Abstract:
Traditional comparisons between the capture efficiency of sampling devices have generally looked at the absolute differences between devices. We recommend that the signal-to-noise ratio be used when comparing the capture efficiency of benthic sampling devices. Using the signal-to-noise ratio rather than the absolute difference has the advantages that the variance is taken into account when determining how important the difference is, the hypothesis and minimum detectable difference can be made identical for all taxa, it is independent of the units used for measurement, and the sample-size calculation is independent of the variance. This new technique is illustrated by comparing the capture efficiency of a 0.05 m(2) van Veen grab and an airlift suction device, using samples taken from Heron and One Tree lagoons, Australia.
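One common formulation of the signal-to-noise ratio for paired device comparisons is the mean paired difference divided by the standard deviation of the differences. The sketch below uses that formulation with invented counts; the paper's exact definition and data are not reproduced.

```python
import statistics

def signal_to_noise(paired_a, paired_b):
    """Signal-to-noise ratio of the paired difference: mean difference
    divided by the standard deviation of the differences. A value near
    zero means the device difference is small relative to the noise."""
    diffs = [a - b for a, b in zip(paired_a, paired_b)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

grab    = [12, 15, 11, 14, 13, 16, 12, 15]  # invented counts, van Veen grab
airlift = [11, 14, 12, 13, 12, 15, 11, 14]  # invented counts, airlift device
snr = signal_to_noise(grab, airlift)
print(round(snr, 2))  # 1.06
```

Unlike the raw mean difference, this quantity is unitless, so the same minimum detectable difference can be specified for every taxon regardless of how abundant it is.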
Abstract:
Criminological theories of cross-national studies of homicide have underestimated the effects of quality governance of liberal democracy and region. Data sets from several sources are combined and a comprehensive model of homicide is proposed. Results of the spatial regression model, which controls for the effect of spatial autocorrelation, show that quality governance, human development, economic inequality, and ethnic heterogeneity are statistically significant in predicting homicide. In addition, regions of Latin America and non-Muslim Sub-Saharan Africa have significantly higher rates of homicides ceteris paribus while the effects of East Asian countries and Islamic societies are not statistically significant. These findings are consistent with the expectation of the new modernization and regional theories.
Abstract:
Between-subject and within-subject variability is ubiquitous in biology and physiology and understanding and dealing with this is one of the biggest challenges in medicine. At the same time it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within a high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block through an in-depth investigation via the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
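Latin hypercube sampling, the comparator method above, can be sketched generically: each axis of the unit cube is cut into n equal strata and each stratum is used exactly once, with the strata shuffled independently per axis. This is a generic implementation; coupling it to the Beeler-Reuter model, as the paper does, is not reproduced here.

```python
import random

def latin_hypercube(n, dims, rng):
    """Latin hypercube sample of n points in [0, 1)^dims."""
    pts = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)          # independent stratum order per axis
        for i, s in enumerate(strata):
            pts[i][d] = (s + rng.random()) / n  # uniform within stratum
    return pts

pts = latin_hypercube(8, 3, random.Random(0))
print(len(pts), len(pts[0]))  # 8 3
```

The stratification guarantees that every marginal distribution is evenly covered with only n model evaluations, which is why LHS is a popular default for building populations of models.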
Abstract:
This report describes the development and simulation of a variable-rate controller for a 6-degree-of-freedom nonlinear model. The variable-rate simulation model represents an off-the-shelf autopilot. Flight experiments involve risk and can be expensive, so a dynamic model is important for understanding the performance characteristics of the UAS in mission simulation before actual flight tests, and for obtaining parameters needed for the flight. The control and guidance are implemented in Simulink. The report tests the use of the model for air-search and air-sampling path planning. A GUI is presented for a set of mission scenarios in which two experts (a mission expert, i.e. air sampling or air search, and a UAV expert) interact, showing the benefits of the method.
Abstract:
Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, then pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves analysis highly susceptible to ‘naughty-noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-staged approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. Then SDMs can be fit separately within each envelope, and for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve treatment of naughty noughts in SDM. 
We retain an overall measure of model performance, the area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curve, but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species previously published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence falling in the same range (0.00-0.20) for both true absences and presences. In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet visible in FOR, FNR, and the comparison of the predicted probability of presence distributions for presences and absences.
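The four error rates discussed above follow directly from a 2x2 confusion matrix of predicted versus observed presence/absence. A small sketch with invented counts:

```python
def confusion_rates(tp, fp, fn, tn):
    """Error rates from a 2x2 confusion matrix (tp = observed presences
    predicted present, fp = observed absences predicted present, etc.)."""
    fnr = fn / (fn + tp)      # share of observed presences predicted absent
    fpr = fp / (fp + tn)      # share of observed absences predicted present
    fo_rate = fn / (fn + tn)  # FOR: predicted absences that were presences
    fdr = fp / (fp + tp)      # FDR: predicted presences that were absences
    return fnr, fpr, fo_rate, fdr

# Invented counts for illustration only.
fnr, fpr, fo_rate, fdr = confusion_rates(tp=80, fp=40, fn=20, tn=860)
print(round(fnr, 3), round(fpr, 3), round(fo_rate, 3), round(fdr, 3))
```

Note that FNR and FPR condition on the observed class, while FOR and FDR condition on the predicted class, which is why the latter pair speaks more directly to a user acting on the predictions.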
Assessing police classifications of sexual assault reports: A meta-analysis of false reporting rates
Abstract:
The objective of the study was to determine, through meta-analysis, the rate of confirmed false reports of sexual assault to police. The meta-analysis initially involved a search for relevant articles. The search revealed seven studies where researchers or their trained helpers evaluated reported sexual assault cases to determine the rate of confirmed false reports. The meta-analysis calculated an overall rate and tested for possible moderators of effect size. The meta-analytic rate of false reports of sexual assault was .052 (95% CI .030, .089). The rates for the individual studies were heterogeneous, suggesting the possibility of moderators of rate. However, none of the four possible moderators examined (year of publication, whether the data set included information beyond police reports, whether the study was conducted in the U.S. or elsewhere, and whether inter-rater reliabilities were reported) was significant. The meta-analysis of seven relevant studies shows that confirmed false allegations of sexual assault made to police occur at a significant rate. The total false reporting rate, including both confirmed and equivocal cases, would be greater than the 5 percent rate found here.
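Meta-analytic pooling of proportions is often done on the logit scale with inverse-variance weights, which yields the kind of asymmetric confidence interval reported above. The sketch below is a fixed-effect version with hypothetical counts; the study's actual data, random-effects details, and heterogeneity tests are not reproduced.

```python
import math

def pooled_logit_rate(events, totals):
    """Inverse-variance pooling of logit-transformed proportions
    (fixed-effect). Returns the pooled rate back-transformed to [0, 1]."""
    num = den = 0.0
    for e, n in zip(events, totals):
        p = e / n
        logit = math.log(p / (1 - p))
        var = 1 / e + 1 / (n - e)  # approximate variance of the logit
        w = 1 / var                # inverse-variance weight
        num += w * logit
        den += w
    pooled = num / den
    return math.exp(pooled) / (1 + math.exp(pooled))

# Hypothetical study counts for illustration only (not the seven studies).
events = [8, 10, 45, 6]
totals = [850, 140, 550, 255]
rate = pooled_logit_rate(events, totals)
print(round(rate, 3))
```

Working on the logit scale keeps the pooled estimate and its interval inside (0, 1), which matters for rates this close to zero.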
Abstract:
OBJECTIVES To explore factors associated with postpartum glucose screening among women with Gestational Diabetes Mellitus (GDM). METHODS A retrospective study using linked records from women with GDM who gave birth at Cairns Hospital in Far North Queensland, Australia, from 1 January 2004 to 31 December 2010. RESULTS The rates of postpartum Oral Glucose Tolerance Test (OGTT) screening, while having increased significantly among both Indigenous and non-Indigenous women from 2004 to 2010 (HR 1.15 per year, 95%CI 1.08-1.22, p<0.0001), remain low, particularly among Indigenous women (10% versus 27%, respectively, at six months postpartum). Indigenous women in Cairns had a longer time to postpartum OGTT than Indigenous women in remote areas (HR 0.58, 0.38-0.71, p=0.01). Non-Indigenous women had a longer time to postpartum OGTT if they: were born in Australia (HR 0.76, 0.59-1.00, p=0.05); were aged <25 years (HR 0.45, 0.23-0.89, p=0.02); had parity >5 (HR 0.33, 0.12-0.90, p=0.03); smoked (HR 0.48, 0.31-0.76, p=0.001); and did not breastfeed (HR 0.09, 0.01-0.64, p=0.02). CONCLUSIONS Postpartum diabetes screening rates following GDM in Far North Queensland are low, particularly among Indigenous women, with lower rates seen in the regional centre and among non-Indigenous women with indicators of low socioeconomic status. IMPLICATIONS Strategies are urgently needed to improve postpartum diabetes screening after GDM that reach the women most at risk.
Abstract:
Background: Falls among hospitalised patients impose a considerable burden on health systems globally and prevention is a priority. Some patient-level interventions have been effective in reducing falls, but others have not. An alternative and promising approach to reducing inpatient falls is through the modification of the hospital physical environment, and the night lighting of hospital wards is a leading candidate for investigation. In this pilot trial, we will determine the feasibility of conducting a main trial to evaluate the effects of modified night lighting on inpatient ward-level fall rates. We will also test the feasibility of collecting novel forms of patient-level data through a concurrent observational sub-study. Methods/design: A stepped wedge, cluster randomised controlled trial will be conducted in six inpatient wards over 14 months in a metropolitan teaching hospital in Brisbane (Australia). The intervention will consist of supplementary night lighting installed across all patient rooms within study wards. The planned placement of luminaires, configurations and spectral characteristics are based on prior published research and pre-trial testing and modification. We will collect data on rates of falls on study wards (falls per 1000 patient days), the proportion of patients who fall once or more, and average length of stay. We will recruit two patients per ward per month to a concurrent observational sub-study aimed at understanding potential impacts on a range of patient sleep and mobility behaviour. The effect on the environment will be monitored with sensors to detect variation in light levels and night-time room activity. We will also collect data on possible patient-level confounders including demographics, pre-admission sleep quality, reported vision, hearing impairment and functional status.
Discussion: This pragmatic pilot trial will assess the feasibility of conducting a main trial to investigate the effects of modified night lighting on inpatient fall rates using several new methods previously untested in the context of environmental modifications and patient safety. Pilot data collected through both parts of the trial will be utilised to inform sample size calculations, trial design and final data collection methods for a subsequent main trial.