82 results for MCMC sampling
Abstract:
This paper reports the feasibility and methodological considerations of using the Short Message System Experience Sampling (SMS-ES) Method, an experience sampling research method developed to assist researchers in collecting repeated measures of consumers’ affective experiences. The method combines SMS with web-based technology in a simple yet effective way. It is described using a practical implementation study that collected consumers’ emotions in response to using mobile phones in everyday situations. The method is further evaluated in terms of the quality of data collected in the study, as well as against the methodological considerations for experience sampling studies. These two evaluations suggest that the SMS-ES Method is both a valid and reliable approach for collecting consumers’ affective experiences. Moreover, the method can be applied across a range of for-profit and not-for-profit contexts where researchers want to capture repeated measures of consumers’ affective experiences occurring over a period of time. The benefits of the method are discussed to assist researchers who wish to apply the SMS-ES Method in their own research designs.
Abstract:
Purpose of review: This review provides an overview on the importance of characterising and considering insect distribution information for designing stored commodity sampling protocols. Findings: Sampling protocols are influenced by a number of factors including government regulations, management practices, new technology and current perceptions of the status of insect pest damage. The spatial distribution of insects in stored commodities influences the efficiency of sampling protocols; these can vary in response to season, treatment and other factors. It is important to use sampling designs based on robust statistics suitable for the purpose. Future research: The development of sampling protocols based on flexible, robust statistics allows for accuracy across a range of spatial distributions. Additionally, power can be added to sampling protocols through the integration of external information such as treatment history and climate. Bayesian analysis provides a coherent and well understood means to achieve this.
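The closing point about folding external information into a sampling protocol via Bayesian analysis can be made concrete with a minimal sketch; the Beta prior, detection counts and sample sizes below are hypothetical assumptions for illustration, not figures from the review.

```python
from scipy import stats

# Hypothetical illustration: fold external information (e.g. treatment
# history) into a sampling protocol as a Beta prior on the probability
# that any single grain sample contains an insect.

# Prior informed by treatment history: a recently fumigated silo might
# justify a prior centred on a low infestation probability.
prior = stats.beta(a=1, b=19)          # prior mean 0.05

# Observed sampling data: insects found in 3 of 60 samples.
detections, samples = 3, 60

# Conjugate Beta-Binomial update gives the posterior directly.
posterior = stats.beta(a=1 + detections, b=19 + samples - detections)

print("Posterior mean infestation probability:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```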
Abstract:
Vernier acuity, a form of visual hyperacuity, is amongst the most precise forms of spatial vision. Under optimal conditions Vernier thresholds are much finer than the inter-photoreceptor distance. Achievement of such high precision is based substantially on cortical computations, most likely in the primary visual cortex. Using stimuli with added positional noise, we show that Vernier processing is reduced with advancing age across a wide range of noise levels. Using an ideal observer model, we are able to characterize the mechanisms underlying age-related loss, and show that the reduction in Vernier acuity can be mainly attributed to the reduction in efficiency of sampling, with no significant change in the level of internal position noise, or spatial distortion, in the visual system.
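One common way to formalise the ideal-observer decomposition described above is an equivalent-noise fit, in which squared thresholds grow with squared external position noise, scaled by sampling efficiency. The sketch below assumes that formulation; the threshold data are made up for illustration and are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Equivalent-noise style model often used with ideal-observer analyses of
# positional acuity: threshold^2 = (internal_noise^2 + external_noise^2) / efficiency.
def equivalent_noise(sigma_ext, sigma_int, efficiency):
    """Predicted Vernier threshold for a given external noise level."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / efficiency)

# Illustrative data only (arcmin); not taken from the paper.
noise_levels = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
thresholds   = np.array([0.48, 0.93, 1.65, 3.20, 6.30])

params, _ = curve_fit(equivalent_noise, noise_levels, thresholds,
                      p0=[0.3, 0.5], bounds=(1e-6, np.inf))
sigma_int, efficiency = params
print(f"internal noise ~ {sigma_int:.2f} arcmin, sampling efficiency ~ {efficiency:.2f}")
```

An age-related drop in efficiency, with internal noise held fixed, would show up in such a fit as a smaller fitted efficiency value across the whole noise range.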
Abstract:
Effective, statistically robust sampling and surveillance strategies form an integral component of large agricultural industries such as the grains industry. Intensive in-storage sampling is essential for pest detection, Integrated Pest Management (IPM), determining grain quality and satisfying importing nations’ biosecurity concerns, while surveillance over broad geographic regions ensures that biosecurity risks can be excluded, monitored, eradicated or contained within an area. In the grains industry, a number of qualitative and quantitative methodologies for surveillance and in-storage sampling have been considered. Primarily, research has focussed on developing statistical methodologies for in-storage sampling strategies concentrating on the detection of pest insects within a grain bulk; however, the need for effective and statistically defensible surveillance strategies has also been recognised. Interestingly, although surveillance and in-storage sampling have typically been considered independently, many techniques and concepts are common to the two fields of research. This review aims to consider the development of statistically based in-storage sampling and surveillance strategies and to identify methods that may be useful for both. We discuss the utility of new quantitative and qualitative approaches, such as Bayesian statistics, fault trees and more traditional probabilistic methods, and show how these methods may be used in both surveillance and in-storage sampling systems.
Abstract:
Acoustic sensors provide an effective means of monitoring biodiversity at large spatial and temporal scales. They can continuously and passively record large volumes of data over extended periods; however, these data must be analysed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced users can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. Our research examined the use of sampling methods to reduce the cost of analysing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilising five days of manually analysed acoustic sensor data from four sites, we examined a range of sampling rates and methods, including random, stratified and biologically informed. Our findings indicate that randomly selecting 120 one-minute samples from the three hours immediately following dawn provided the most effective sampling method. This method detected, on average, 62% of the total species after 120 one-minute samples were analysed, compared to 34% of the total species from traditional point counts. Our results demonstrate that targeted sampling methods can provide an effective means of analysing large volumes of acoustic sensor data efficiently and accurately.
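A minimal sketch of the dawn-window sampling strategy reported above; the data structure, species entries and dawn time are hypothetical placeholders standing in for manually analysed sensor output.

```python
import random

# Draw 120 one-minute segments at random from the three hours after dawn
# and count the species detected in those segments.

# minute_species[m] = set of species identified in minute m of a day's
# recording; in practice this comes from manual analysis of the audio.
minute_species = {m: set() for m in range(24 * 60)}
minute_species[305] = {"Lewin's Honeyeater"}                 # illustrative entries
minute_species[322] = {"Eastern Whipbird", "Noisy Miner"}

dawn = 5 * 60                                                # assume dawn at 05:00
dawn_window = list(range(dawn, dawn + 180))                  # three hours after dawn

sampled_minutes = random.sample(dawn_window, 120)            # 120 one-minute samples
detected = set().union(*(minute_species[m] for m in sampled_minutes))
print(f"{len(detected)} species detected in 120 sampled minutes")
```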
Abstract:
Background
The onsite treatment of sewage and effluent disposal within the premises is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the seemingly low technology of the systems, failure is common, in many cases leading to adverse public health and environmental consequences. Therefore it is important that careful consideration is given to the design and location of onsite sewage treatment systems, which requires an understanding of the factors that influence treatment performance. The use of subsurface effluent absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Additionally, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process. Therefore location-specific factors will play a key role in this context.
The project
The primary aims of the research project are:
• to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site;
• to identify important areas where there is currently a lack of relevant research knowledge and further investigation is needed.
These tasks were undertaken with the objective of facilitating the development of performance-based planning and management strategies for onsite sewage treatment. The primary focus of the research project has been on septic tanks; by implication, the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form part of the investigation. In the evaluation to be undertaken, the treatment performance of soil absorption systems will be related to the physico-chemical characteristics of the soil. Five broad categories of soil types have been considered for this purpose. The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each soil type. In the initial phase of the investigation, though the majority of the systems evaluated were septic tanks, a small number of aerobic wastewater treatment systems (AWTS) were also included, primarily to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of different types of systems investigated was relatively small and as such does not permit a statistical analysis of the results obtained. This is an important issue considering the large number of parameters that can influence treatment performance and their wide variability.
The report
This report is the second in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The work undertaken included site investigation and testing of sewage effluent and soil samples taken at distances of 1 and 3 m from the effluent disposal area. The project component discussed in the current report formed the basis for the more detailed investigation undertaken subsequently. The outcomes from the initial studies have been discussed, enabling the identification of factors to be investigated further. Primarily, this report contains the results of the field monitoring program, the initial analysis undertaken and preliminary conclusions.
Field study and outcomes
Initially commencing with a list of 252 locations in 17 different suburbs, a total of 22 sites in 21 different locations were monitored. These sites were selected based on predetermined criteria. Obtaining house owner agreement to participate in the monitoring study was not an easy task, and six of the sites had to be abandoned subsequently for various reasons. The remaining sites included eight septic systems with subsurface effluent disposal treating blackwater or combined black and greywater, two sites treating greywater only and six sites with AWTS. In addition to collecting effluent and soil samples from each site, a detailed field investigation, including a series of house owner interviews, was also undertaken. Significant observations were made during the field investigations. In addition to site-specific observations, the general observations include the following:
• Most house owners are unaware of the need for regular maintenance. Sludge removal has not been undertaken in any of the septic tanks monitored. Even in the case of aerated wastewater treatment systems, the regular inspections by the supplier are confined to the treatment system and do not include the effluent disposal system. This is not a satisfactory situation, as the investigations revealed.
• In the case of separate greywater systems, only one site had a suitably functioning disposal arrangement. The general practice is to employ a garden hose to siphon the greywater for use in surface irrigation of the garden.
• In most sites, the soil profile showed significant lateral percolation of effluent. As such, the flow of effluent to surface water bodies is a distinct possibility.
• The need to investigate the subsurface condition to a depth greater than that required for the standard percolation test was clearly evident. On occasion, seemingly permeable soil was found to have an underlying impermeable soil layer, or vice versa.
The important outcomes from the testing program include the following:
• Though effluent treatment is influenced by the physico-chemical characteristics of the soil, it was not possible to distinguish between the treatment performance of different soil types. This leads to the hypothesis that effluent renovation is significantly influenced by the combination of various physico-chemical parameters rather than by single parameters, which would make the processes involved strongly site specific.
• Generally, the improvement in effluent quality appears to take place only within the initial 1 m of travel, without any appreciable improvement thereafter. This relates only to the degree of improvement obtained and does not imply that this quality is satisfactory. It calls into question the value of adopting setback distances from sensitive water bodies.
• Use of AWTS for sewage treatment may provide effluent of higher quality suitable for surface disposal. However, on the whole, after 1-3 m of travel through the subsurface, it was not possible to distinguish any significant differences in quality between effluent originating from septic tanks and from AWTS.
• In comparison with effluent quality from a conventional wastewater treatment plant, most systems were found to perform satisfactorily with regard to Total Nitrogen. The success rate was much lower in the case of faecal coliforms. However, it is important to note that five of the systems exhibited problems with effluent disposal, resulting in surface flow. This could lead to possible contamination of surface water courses.
• The ratio of TDS to EC is about 0.42, whilst the optimum recommended value for use of treated effluent for irrigation should be about 0.64. This would mean a higher salt content in the effluent than is advisable for use in irrigation. A consequence of this would be the accumulation of salts to a concentration harmful to crops or the landscape unless adequate leaching is present. These relatively high EC values are present even in the case of AWTS where surface irrigation of effluent is being undertaken. However, it is important to note that this is not an artefact of the treatment process but rather an indication of the quality of the wastewater generated in the household. This clearly indicates the need for further research to evaluate the suitability of various soil types for the surface irrigation of effluent where the TDS/EC ratio is less than 0.64.
• Effluent percolating through the subsurface absorption field may travel in the form of dilute pulses. As such, the effluent will move through the soil profile forming fronts of elevated parameter levels.
• The downward flow of effluent and leaching of the soil profile is evident in the case of podsolic, lithosol and krasnozem soils. Lateral flow of effluent is evident in the case of prairie soils. Gleyed podsolic soils indicate poor drainage and ponding of effluent.
In the current phase of the research project, a number of chemical indicators such as EC, pH and chloride concentration were employed to investigate the extent of effluent flow and to understand how soil renovates effluent. The soil profile, especially its texture, structure and moisture regime, was examined primarily in an engineering sense to determine the effect of the movement of water into and through the soil. However, it is not only the physical characteristics but also the chemical characteristics of the soil that play a key role in the effluent renovation process. Therefore, in order to understand the complex processes taking place in a subsurface effluent disposal area, it is important that the identified influential parameters are evaluated using soil chemical concepts. Consequently, the primary focus of the next phase of the research project will be to identify linkages between various important parameters. The research thus envisaged will help to develop robust criteria for evaluating the performance of subsurface disposal systems.
Abstract:
The use of Bayesian methodologies for solving optimal experimental design problems has increased. Many of these methods have been found to be computationally intensive for design problems that require a large number of design points. A simulation-based approach that can be used to solve optimal design problems in which one is interested in finding a large number of (near) optimal design points for a small number of design variables is presented. The approach involves the use of lower dimensional parameterisations that consist of a few design variables, which generate multiple design points. Using this approach, one simply has to search over a few design variables, rather than searching over a large number of optimal design points, thus providing substantial computational savings. The methodologies are demonstrated on four applications, including the selection of sampling times for pharmacokinetic and heat transfer studies, and involve nonlinear models. Several Bayesian design criteria are also compared and contrasted, as well as several different lower dimensional parameterisation schemes for generating the many design points.
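A hedged sketch of the lower-dimensional parameterisation idea described above, assuming a one-compartment pharmacokinetic stand-in model, a lognormal prior and a pseudo-Bayesian D-optimality utility; none of these specifics are taken from the paper, and the numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

# Rather than optimising, say, 15 sampling times directly, generate them
# from two design variables (the first time and a geometric spacing
# factor) and search over those two variables only.
rng = np.random.default_rng(0)
prior_draws = rng.lognormal(mean=[0.0, -1.0], sigma=0.2, size=(200, 2))  # assumed prior on (ka, ke)

def sampling_times(first, ratio, n=15):
    """Generate n geometrically spaced sampling times from 2 design variables."""
    return first * ratio ** np.arange(n)

def pk_model(t, ka, ke):
    """One-compartment PK response, used purely as a stand-in model."""
    return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

def neg_utility(design_vars):
    """Negative pseudo-Bayesian D-optimality: average log-det of the
    (numerically approximated) Fisher information over the prior draws."""
    first, ratio = design_vars
    if first <= 0 or ratio <= 1:
        return np.inf                      # keep times positive and increasing
    t = sampling_times(first, ratio)
    eps = 1e-5                             # finite-difference step
    total = 0.0
    for ka, ke in prior_draws:
        d_ka = (pk_model(t, ka + eps, ke) - pk_model(t, ka, ke)) / eps
        d_ke = (pk_model(t, ka, ke + eps) - pk_model(t, ka, ke)) / eps
        J = np.column_stack([d_ka, d_ke])
        total += np.linalg.slogdet(J.T @ J)[1]
    return -total / len(prior_draws)

result = minimize(neg_utility, x0=[0.25, 1.4], method="Nelder-Mead")
print("first time and spacing ratio:", result.x)
print("generated sampling times:", sampling_times(*result.x).round(2))
```

The search is over two variables regardless of how many sampling times are generated, which is the source of the computational savings the abstract describes.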
Abstract:
Ocean gliders constitute an important advance in the highly demanding ocean monitoring scenario. Their efficiency, endurance and increasing robustness make these vehicles an ideal observing platform for many long-term oceanographic applications. However, they have also proved to be useful in the opportunistic short-term characterization of dynamic structures. Among these, mesoscale eddies are of particular interest due to the relevance they have in many oceanographic processes.
Abstract:
This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling, and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective analysis of 3-way data arrays (users x keywords x time), known as high-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
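The first contribution can be illustrated with a small sketch of the two-sample Kolmogorov-Smirnov comparison; the inter-arrival data below are simulated stand-ins, not the chatroom data used in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

# Compare a sample collected at a single server with a collective sample
# pooled from several servers using a two-sample KS test.
rng = np.random.default_rng(1)

single_server = rng.exponential(scale=2.0, size=5000)
multi_server = np.concatenate(
    [rng.exponential(scale=2.0, size=1250) for _ in range(4)]
)

stat, p_value = ks_2samp(single_server, multi_server)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")
# A large p-value means the test finds no significant distributional
# difference between the collective and single-server samples.
```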
Abstract:
This book describes mortality from all causes of death and the trend in major causes of death since the 1970s in Shandong Province, China.
Abstract:
Historically, a significant gap between male and female wages has existed in the Australian labour market. Indeed, this wage differential was institutionalised in the 1912 arbitration decision, which determined that the basic female wage would be set at between 54 and 66 per cent of the male wage. More recently, however, the 1969 and 1972 Equal Pay Cases determined that male/female wage relativities should be based upon the premise of equal pay for work of equal value. It is important to note that the mere observation that average wages differ between males and females is not in itself evidence of sex discrimination. Economists restrict the definition of wage discrimination to cases where two distinct groups receive different average remuneration for reasons unrelated to differences in productivity characteristics. This paper extends previous studies of wage discrimination in Australia (Chapman and Mulvey, 1986; Haig, 1982) by correcting the estimated male/female wage differential for the existence of non-random sampling. Previous Australian estimates of male/female human capital based wage specifications, together with estimates of the corresponding wage differential, all suffer from a failure to address this issue. If the sample of females observed to be working does not represent a random sample, then estimates of the male/female wage differential will be both biased and inconsistent.
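The abstract does not name the estimator, but a Heckman-style two-step correction is one standard way to adjust a wage equation for non-random selection into work; the sketch below is purely illustrative, with synthetic data and hypothetical variable names, and is not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Two-step selection correction on synthetic data.
rng = np.random.default_rng(0)
n = 2000

educ = rng.normal(12, 2, n)                      # years of education (hypothetical)
kids = rng.integers(0, 3, n)                     # children under 5 (hypothetical)
u, v = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T

# Step 1: probit for labour-force participation (selection equation).
works = (0.3 * educ - 0.8 * kids - 2.5 + v > 0).astype(int)
Z = sm.add_constant(np.column_stack([educ, kids]))
probit = sm.Probit(works, Z).fit(disp=False)
zb = Z @ probit.params
imr = norm.pdf(zb) / norm.cdf(zb)                # inverse Mills ratio

# Step 2: wage equation on the selected sample, augmented with the IMR.
log_wage = 1.0 + 0.08 * educ + u                 # observed only when works == 1
sel = works == 1
X = sm.add_constant(np.column_stack([educ[sel], imr[sel]]))
ols = sm.OLS(log_wage[sel], X).fit()
print(ols.params)   # [intercept, return to education, selection term]
```

Ignoring the selection term in the second step is what produces the biased and inconsistent estimates the abstract warns about.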
Abstract:
Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
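A minimal sketch of how two of these probability-of-detection models translate into sample-size reasoning, assuming independent one-kilogram samples; the densities and dispersion value are illustrative, not taken from the paper.

```python
import numpy as np

# For a mean density of m insects per kilogram and n one-kilogram samples,
# the chance of detecting at least one insect is
#   Poisson:            1 - exp(-n * m)
#   Negative binomial:  1 - (1 + m / k) ** (-n * k)
# where k is the dispersion parameter; small k means highly clumped insects.

def p_detect_poisson(n, m):
    return 1 - np.exp(-n * m)

def p_detect_negbin(n, m, k):
    return 1 - (1 + m / k) ** (-n * k)

m = 0.05                      # insects per kg (illustrative)
for n in (5, 10, 30, 60):
    print(f"n={n:3d}  Poisson: {p_detect_poisson(n, m):.3f}  "
          f"NegBin (k=0.5): {p_detect_negbin(n, m, 0.5):.3f}")
# Aggregation (small k) lowers the detection probability at the same mean
# density, so more samples are needed when insects are clumped.
```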
Abstract:
Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods, including random, stratified, and biologically informed. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of the total species from 120 one-minute samples, compared to 34% of the total species detected from traditional area search methods. Our results demonstrate that targeted sampling methods can provide an effective means of analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data.
Abstract:
Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm is used to search for optimal sampling designs. In particular, we focus on the problem of finding an optimal design from a set of fixed designs and finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but given the utility function, designs are relatively robust to the type of response variable.
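A hedged sketch of the exchange-algorithm search for an optimal subset of sampling locations, with the mean utility estimated by Monte Carlo over prior draws; the Euclidean-distance covariance and maximum-entropy style utility below are stand-ins for the paper's stream-network distance and moving average models.

```python
import numpy as np

# Select a subset of candidate sites that maximises a Monte Carlo
# estimate of a utility averaged over prior draws of a range parameter.
rng = np.random.default_rng(2)
candidates = rng.uniform(0, 10, size=(40, 2))        # 40 candidate sites (illustrative)
subset_size = 8
phi_draws = rng.gamma(2.0, 1.0, size=50)             # prior draws of the correlation range

def mc_utility(idx):
    """Average log-determinant of the site covariance over prior draws."""
    pts = candidates[list(idx)]
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    total = 0.0
    for phi in phi_draws:
        C = np.exp(-d / phi) + 1e-6 * np.eye(len(idx))
        total += np.linalg.slogdet(C)[1]
    return total / len(phi_draws)

# Exchange algorithm: start from a random subset, then repeatedly swap a
# chosen site for an unchosen one whenever the swap improves the utility.
current = list(rng.choice(len(candidates), subset_size, replace=False))
improved = True
while improved:
    improved = False
    for i in range(subset_size):
        for cand in range(len(candidates)):
            if cand in current:
                continue
            trial = current.copy()
            trial[i] = cand
            if mc_utility(trial) > mc_utility(current):
                current, improved = trial, True

print("selected site indices:", sorted(current))
```

Swapping in a different utility function (for example, one targeting prediction error rather than spread) is what moves the selected designs between the space-filling and clustered solutions the abstract describes.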