218 results for stratified random sampling
Abstract:
• In December 1986, funds were approved to double the intensity of random breath testing (RBT) and to provide publicity support for police efforts. These changes were considered necessary to make RBT effective.
• RBT methods were changed in the metropolitan area to enable block testing (pulling over a block of traffic rather than one or two cars), deployment of police to cut off escape routes, and testing by traffic patrols in all police subdivisions. Additional operators were trained for country RBT.
• A publicity campaign was developed, aimed mainly at male drivers aged 18-50. The campaign consisted of the "cardsharp" television commercials, radio commercials, newspaper articles, posters and pamphlets.
• Increased testing and the publicity campaign were launched on 10 April 1987.
• Police tests increased by 92.5% in May-December 1987 compared with the same period in the previous four years.
• The detection rate for drinking drivers picked up by police cutting off escape routes was comparatively high, indicating that drivers were attempting to avoid RBT and that this police method was effective at detecting them.
• A telephone survey indicated that drivers were aware of the messages of the publicity campaign.
• The telephone survey also indicated that the target group had been exposed to high levels of RBT, as planned, and that fear of apprehension was the major factor deterring them from drink driving.
• A roadside survey of driver blood alcohol concentrations (BACs) by the University of Adelaide's Road Accident Research Unit (RARU) showed that, between 10 p.m. and 3 a.m., the proportion of drivers in Adelaide with a BAC greater than or equal to 0.08 decreased by 42%.
• Drivers under 21 were identified as a possible problem area.
• Fatalities in the twelve-month period commencing May 1987 decreased by 18% compared with the previous twelve-month period, and by 13% compared with the average of the previous two twelve-month periods (commencing May 1985 and May 1986). There are indications that this trend is continuing.
• It is concluded that the increase in RBT, together with publicity, was successful in achieving its aims of reducing drink driving and accidents.
Abstract:
Random breath testing (RBT) was introduced in South Australia in 1981 with the intention of reducing the incidence of accidents involving alcohol. In April 1985, a Select Committee of the Upper House which had been established to “review the operation of random breath testing in this State and any other associated matters and report accordingly” presented its report. After consideration of this report, the Government introduced extensive amendments to those sections of the Motor Vehicles Act (MVA) and Road Traffic Act (RTA) which deal with RBT and drink driving penalties. The amended section 47da of the RTA requires that: “(5) The Minister shall cause a report to be prepared within three months after the end of each calendar year on the operation and effectiveness of this section and related sections during that calendar year. (6) The Minister shall, within 12 sitting days after receipt of a report under subsection (5), cause copies of the report to be laid before each House of Parliament.” This is the first such report. Whilst it deals with RBT over a full year, the changed procedures and improved flexibility allowed by the revision to the RTA were only introduced late in 1985 and then only to the extent that the existing resources would allow.
Abstract:
We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelized and which therefore has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, type Ia supernovae, and weak cosmological lensing, and compare the results with those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time drops from days for MCMC to hours for PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using it, are analyzed and discussed.
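As a rough illustration of the PMC idea, the sketch below adapts a single Gaussian proposal to a stand-in two-dimensional target by iterated importance weighting and moment matching; the target, the proposal family and all tuning values are illustrative assumptions, not the paper's cosmological setup or mixture proposals. The per-iteration draws and weight evaluations are mutually independent, which is where the parallel speed-up comes from.

```python
# A minimal population Monte Carlo (adaptive importance sampling) sketch.
# The Gaussian target and single-Gaussian proposal are illustrative
# stand-ins for a real cosmological likelihood and mixture proposal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_target(x):
    # Stand-in unnormalised log-posterior: correlated 2-D Gaussian.
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return stats.multivariate_normal(mean=[0.0, 0.0], cov=cov).logpdf(x)

mean, cov, n = np.zeros(2), 4.0 * np.eye(2), 5000
for it in range(10):
    # These draws and weight evaluations are independent, hence parallelisable.
    xs = rng.multivariate_normal(mean, cov, size=n)
    log_w = log_target(xs) - stats.multivariate_normal(mean, cov).logpdf(xs)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Adapt the proposal to the weighted sample (moment matching).
    mean = w @ xs
    cov = (xs - mean).T @ ((xs - mean) * w[:, None]) + 1e-6 * np.eye(2)
    ess = 1.0 / np.sum(w ** 2)  # effective sample size tracks proposal quality

print("posterior mean estimate:", mean, "final ESS:", round(ess))
```

A rising effective sample size across iterations signals that the adapted proposal is approaching the target, at which point the weighted samples can be used for parameter estimation.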
Abstract:
The rapid uptake of transcriptomic approaches in freshwater ecology has produced a wealth of data on how organisms interact with their environment at the molecular level. Typically, such studies either focus at the community level, and so do not require species identifications, or use laboratory strains of known species identity or natural populations of large, easily identifiable taxa. For chironomids, impediments still exist to applying these technologies to natural populations because they are small-bodied and often require time-consuming secondary sorting of stream material and morphological voucher preparation to confirm species diagnosis. These procedures make it difficult to preserve RNA quantity and quality, because RNA degrades rapidly and gene expression can change quickly after collection, which has limited the inclusion of such taxa in transcriptomic studies. Here, we demonstrate that these limitations can be overcome and outline an optimised protocol for collecting, sorting and preserving chironomid larvae that enables retention of both morphological vouchers and RNA for subsequent transcriptomic purposes. By ensuring that sorting and voucher preparation were completed within four hours of collection and that samples were kept cold at all times, we successfully retained both RNA and morphological vouchers from all specimens. Although not prescriptive in its methodology, we anticipate that this paper will help promote transcriptomic investigations of the sublethal impacts of changes to aquatic environments on chironomid gene expression.
Abstract:
A spatial sampling design based on pair-copulas is presented that aims to reduce prediction uncertainty by selecting additional sampling locations using both the spatial configuration of existing locations and the values observed at those locations. The novelty of the approach lies in the use of pair-copulas to estimate uncertainty at unsampled locations. Spatial pair-copulas capture spatial dependence more accurately than other types of spatial copula models. Additionally, unlike the traditional kriging variance, uncertainty estimates from the pair-copula account for the influence of the measured values and not just the configuration of observations. This feature is beneficial, for example, for more accurate identification of soil contamination zones where high contamination measurements lie near measurements of varying contamination. The proposed design methodology is applied to a soil contamination example from the Swiss Jura region. A partial redesign of the original sampling configuration demonstrates the potential of the proposed methodology.
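To make the design loop concrete, here is a minimal sketch of greedily selecting additional sampling locations. The `uncertainty` score below, built from near-neighbour value spread plus distance so that it depends on both the measured values and the configuration, is a crude placeholder for the pair-copula predictive uncertainty used in the paper; the data and all parameters are invented for illustration.

```python
# A minimal sketch of the greedy spatial design loop. The simple distance-
# and value-based score below is a placeholder for pair-copula uncertainty.
import numpy as np

rng = np.random.default_rng(1)
obs_xy = rng.uniform(0, 10, size=(20, 2))               # existing locations
obs_z = rng.lognormal(mean=0.0, sigma=1.0, size=20)     # e.g. contamination

def uncertainty(pt, xy, z, k=5):
    # Placeholder score: near-neighbour value spread, plus mean distance,
    # so uncertainty reflects both observed values and configuration.
    d = np.linalg.norm(xy - pt, axis=1)
    idx = np.argsort(d)[:k]
    return z[idx].std() + d[idx].mean()

gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
candidates = np.stack([gx, gy], axis=-1).reshape(-1, 2)

design = []
for _ in range(5):  # choose five additional sampling locations greedily
    scores = np.array([uncertainty(c, obs_xy, obs_z) for c in candidates])
    best = candidates[scores.argmax()]
    design.append(best)
    # Treat the new point as sampled (stand-in value = local mean) and repeat.
    obs_xy = np.vstack([obs_xy, best])
    obs_z = np.append(obs_z, obs_z.mean())

print(np.round(design, 2))
```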
Abstract:
Quantifying fluxes of nitrous oxide (N₂O), a potent greenhouse gas, from soils is necessary to improve our knowledge of terrestrial N₂O losses. Developing universal sampling frequencies for calculating annual N₂O fluxes is difficult, as the fluxes are notorious for their high temporal variability. We demonstrate that daily sampling was generally required to achieve annual N₂O flux estimates within 10% of the best estimate for 28 annual datasets collected on three continents (Australia, Europe and Asia). Decreasing the regularity of measurements either under- or overestimated annual N₂O fluxes, with a maximum overestimation of 935%. Measurement frequency could be lowered using a sampling strategy based on environmental factors known to drive temporal variability, but sampling more than once a week was still required. Consequently, uncertainty in current global terrestrial N₂O budgets associated with the upscaling of field-based datasets can be decreased significantly by using adequate sampling frequencies.
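The effect of sampling frequency can be illustrated with a toy simulation (not the paper's 28 datasets): a synthetic year of fluxes with episodic emission pulses is subsampled at several intervals, and each annual estimate is compared against the daily "best estimate". All distributions and pulse settings are invented for illustration.

```python
# An illustrative simulation of how sampling frequency biases annual N2O
# flux estimates when fluxes are episodic. All parameters are invented.
import numpy as np

rng = np.random.default_rng(2)
base = rng.lognormal(mean=-1.0, sigma=0.3, size=365)   # background flux
pulses = np.zeros(365)
for start in rng.choice(365 - 10, size=8, replace=False):
    pulses[start:start + 5] += rng.uniform(5, 20)      # rain/fertiliser events
flux = base + pulses
best = flux.sum()                                      # daily "best estimate"

for step in (1, 7, 14, 28):
    # Common upscaling: mean of the sampled days scaled to a full year.
    annual = flux[::step].mean() * 365.0
    print(f"every {step:2d} days: {100 * (annual - best) / best:+6.1f}% bias")
```

Because the annual total is dominated by a handful of short pulses, infrequent sampling either misses them entirely (underestimation) or happens to land on one and scales it across the whole year (large overestimation), matching the asymmetry the abstract describes.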
Abstract:
Accurately quantifying total greenhouse gas emissions (e.g. methane) from natural systems such as lakes, reservoirs and wetlands requires spatiotemporal measurement of both diffusive and ebullitive (bubbling) emissions. Traditional manual measurement techniques provide only a limited, localised assessment of methane flux, often introducing significant errors when extrapolated to the whole system. In this paper, we directly address these sampling limitations and present a novel multiple-robotic-boat system configured to measure the spatiotemporal release of methane to the atmosphere across inland waterways. The system, consisting of multiple networked Autonomous Surface Vehicles (ASVs) and capable of persistent operation, enables scientists to remotely evaluate the performance of sampling and modelling algorithms for real-world process quantification over extended periods of time. This paper provides an overview of the multi-robot sampling system, including the vehicle and gas sampling unit design. Experimental results demonstrate the system's ability to autonomously navigate and implement an exploratory sampling algorithm to measure methane emissions on two inland reservoirs.
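The abstract does not specify the exploratory sampling algorithm, so the following is only a hedged sketch of one policy such a fleet could run: each simulated boat repeatedly moves to the reachable point farthest from all previous samples, a simple space-filling heuristic. The geometry, travel budget and fleet size are invented.

```python
# A sketch of a space-filling exploratory policy for a small ASV fleet.
# This is one plausible heuristic, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(3)
gx, gy = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
grid = np.stack([gx, gy], axis=-1).reshape(-1, 2)   # reservoir as a unit square
boats = rng.uniform(0, 1, size=(3, 2))              # three ASV start positions
samples = [b.copy() for b in boats]                 # sampled flux locations

for step in range(20):
    for b in range(len(boats)):
        d_self = np.linalg.norm(grid - boats[b], axis=1)
        reachable = grid[d_self < 0.2]              # per-step travel budget
        # Distance from each reachable point to its nearest existing sample.
        d_samp = np.linalg.norm(
            reachable[:, None, :] - np.array(samples)[None, :, :], axis=2
        ).min(axis=1)
        boats[b] = reachable[d_samp.argmax()]       # go where coverage is worst
        samples.append(boats[b].copy())

print(f"collected {len(samples)} methane samples")
```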
Abstract:
Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning. One advantage of DCNNs is that their representations are robust to object location, which is useful for object recognition tasks. However, this also discards spatial information, which is useful when dealing with the topological information of an image (e.g. scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture to tackle the scene labeling task. The depth is achieved by incorporating predictions from multiple early layers of the DCNN. The width is achieved by combining multiple outputs of the network. We then further refine the parsing by adopting graphical models (GMs) as a post-processing step to incorporate spatial and contextual information. This strategy of a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.
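A minimal PyTorch sketch of the deeper-and-wider idea is given below: classifier heads attached to several stages of a small CNN are upsampled and averaged into a single per-pixel prediction. The layer sizes are invented and the graphical-model refinement step is omitted; this illustrates the multi-stage fusion pattern, not the authors' exact architecture.

```python
# A minimal multi-stage fusion sketch for scene labeling (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStageLabeler(nn.Module):
    def __init__(self, num_classes=21):  # 21 stand-in classes
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # "Deeper": a classifier head on each stage, so early layers predict too.
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, num_classes, 1) for c in (16, 32, 64)])

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for stage in (self.stage1, self.stage2, self.stage3):
            x = stage(x)
            feats.append(x)
        # "Wider": upsample each head's map to full resolution and average.
        logits = [F.interpolate(head(f), size=(h, w), mode="bilinear",
                                align_corners=False)
                  for head, f in zip(self.heads, feats)]
        return torch.stack(logits).mean(0)

img = torch.randn(1, 3, 64, 64)
print(MultiStageLabeler()(img).shape)  # torch.Size([1, 21, 64, 64])
```

In the paper's pipeline the fused per-pixel scores would then be refined by a graphical model; here that post-processing step is left out for brevity.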