864 results for Hierarchical sampling


Relevance: 20.00%

Abstract:

Spinosad, diatomaceous earth, and cyfluthrin were assessed on two broiler farms at Gleneagle and Gatton in southeastern Queensland, Australia, in 2004-2005 and 2007-2009, respectively, to determine their effectiveness in controlling the lesser mealworm, Alphitobius diaperinus (Panzer) (Coleoptera: Tenebrionidae). Insecticide treatments were applied mostly to earth or 'hard' cement floors of broiler houses before the placement of new bedding. The efficacy of each agent was assessed by regular sampling of litter, counting of immature stages and adult beetles, and comparison of insect counts in treated houses with counts in untreated houses. Generally, the lowest numbers of lesser mealworm were recorded in the house with hard floors, and these numbers equalled those from the most effective spinosad applications. The most effective treatment was a strategic application of spinosad under feed supply lines on a hard floor. In compacted-earth-floor houses, mean numbers of lesser mealworms for two under-feed-line spinosad treatments (a 2-m-wide application at 0.18 g of active ingredient (AI) in 100 ml water/m² and a 1-m-wide application at 0.11 g (AI) in 33 ml water/m²) and an entire-floor spinosad treatment (0.07 g (AI) in 86 ml water/m²) were significantly lower (i.e., better control) than those for cyfluthrin and for the untreated controls. The 1-m-wide under-feed-line treatment was the most cost-effective dose, providing control similar to the other two most effective spinosad treatments while using less than half the active ingredient per broiler house. No efficacy was demonstrated when spinosad was applied to the surface of bedding in relatively large volumes of water. All applications of diatomaceous earth, with or without spinosad, and of cyfluthrin at the label rate of 0.02 g (AI) in 100 ml water/m² showed no effect, with insect counts not significantly different from those of untreated controls.
Overall, the results of this field assessment indicate that cyfluthrin (the Australian industry standard) and diatomaceous earth were ineffective on these two farms and that spinosad can be a viable alternative for broiler house use.

Relevance: 20.00%

Abstract:

We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and which thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, type Ia supernovae, and weak cosmological lensing, and compare the results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time is reduced from days for MCMC to hours for PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using it, are analyzed and discussed.
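The core PMC loop described above (draw from a proposal, weight against the target, adapt the proposal to the weighted sample) can be sketched in a few lines. This is a toy illustration with a one-dimensional Gaussian stand-in for the posterior, not the paper's cosmological implementation; all function and parameter names are ours.

```python
# Minimal sketch of adaptive importance sampling / population Monte Carlo.
import math, random

def log_target(x):
    # Stand-in for an expensive posterior (real applications would plug in
    # a cosmological likelihood); here a Gaussian with mean 3.0 and sd 0.5.
    return -0.5 * ((x - 3.0) / 0.5) ** 2

def pmc(n_iter=20, n_samples=2000, seed=1):
    rng = random.Random(seed)
    mu, sigma = 0.0, 5.0  # deliberately poor initial proposal
    for _ in range(n_iter):
        # These draws and weight evaluations are mutually independent,
        # which is what makes the workload easy to parallelise.
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        logw = [log_target(x)
                - (-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma))
                for x in xs]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        total = sum(w)
        w = [wi / total for wi in w]
        # Adapt the proposal by matching the weighted sample's moments.
        mu = sum(wi * x for wi, x in zip(w, xs))
        sigma = max(math.sqrt(sum(wi * (x - mu) ** 2
                                  for wi, x in zip(w, xs))), 1e-6)
    return mu, sigma

mu, sigma = pmc()
# mu and sigma should approach the target's mean 3.0 and sd 0.5
```

Each iteration's weight evaluations touch only one sample, so in a real application they can be farmed out across a cluster, which is the source of the wall-clock savings discussed above.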

Relevance: 20.00%

Abstract:

An algorithm is described for developing a hierarchy among a set of elements having certain precedence relations. This algorithm, which is based on tracing a path through the graph, is easily implemented by a computer.
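The abstract does not reproduce the algorithm itself; as a generic illustration of deriving a hierarchy from precedence relations, the sketch below peels off, level by level, the elements with no remaining predecessors. It does not implement the paper's path-tracing method.

```python
# Illustrative sketch: assign hierarchy levels to elements given a set of
# precedence relations (a, b) meaning "a must precede b".

def hierarchy_levels(elements, precedes):
    remaining = set(elements)
    levels = []
    while remaining:
        # Elements with no predecessor still remaining form the next level.
        level = {e for e in remaining
                 if not any(a in remaining and b == e for a, b in precedes)}
        if not level:
            raise ValueError("precedence relations contain a cycle")
        levels.append(sorted(level))
        remaining -= level
    return levels

levels = hierarchy_levels(
    ["a", "b", "c", "d"],
    {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")})
# levels == [["a"], ["b", "c"], ["d"]]
```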

Relevance: 20.00%

Abstract:

Pathogens and pests of stored grains move through complex dynamic networks linking fields, farms, and bulk storage facilities. Human transport and other forms of dispersal link the components of this network. A network model for pathogen and pest movement through stored grain systems is a first step toward new sampling and mitigation strategies that utilize information about the network structure. An understanding of network structure can be applied to identifying the key network components for pathogen or pest movement through the system. For example, it may be useful to identify a network node, such as a local grain storage facility, in which grain from a large number of fields is accumulated before moving onward through the network. Such a node may be particularly important for sampling and mitigation. In some cases, more detailed information about network structure can identify key nodes that link two large sections of the network, such that management at those nodes will greatly reduce the risk of spread between the two sections. In addition to the spread of particular species of pathogens and pests, we also evaluate the spread of problematic subpopulations, such as those with pesticide resistance. We present an analysis of stored grain pathogen and pest networks for Australia and the United States.
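As a toy illustration of the key-node idea, the sketch below finds nodes whose removal disconnects a small, entirely hypothetical grain-movement network. Such cut vertices correspond to the nodes that link two sections of the network, where management would limit spread between the sections.

```python
# Find cut vertices (articulation points) of a small undirected network.
# Node names and edges are hypothetical, purely for illustration.

def connected(nodes, edges):
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def cut_vertices(nodes, edges):
    # A node is "key" here if deleting it disconnects the network.
    return [n for n in nodes
            if not connected(set(nodes) - {n},
                             [(a, b) for a, b in edges if n not in (a, b)])]

nodes = {"field1", "field2", "silo", "port1", "port2"}
edges = [("field1", "silo"), ("field2", "silo"),
         ("silo", "port1"), ("port1", "port2")]
# Removing "silo" or "port1" splits this toy network into two sections.
```

Real stored-grain networks would also weight nodes by grain throughput, matching the abstract's point that accumulation nodes matter for sampling as well.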


Relevance: 20.00%

Abstract:

Invasive and noxious weeds are a pervasive problem, imposing significant economic burdens on all areas of agriculture. Whilst there are multiple possible pathways of weed dispersal in this industry, of particular interest to this discussion is the unintended dispersal of weed seeds within fodder. During periods of drought, or following natural disasters such as wildfire or flood, there arises an urgent need for 'relief' fodder to ensure the survival and recovery of livestock. In emergency situations, relief fodder may be sourced from widely dispersed geographic regions, some of which may be invaded by an extensive variety of weeds that are both exotic and detrimental to the fodder's intended destination. Pasture hay is a common source of relief fodder and typically consists of a mixture of grassy and broadleaf species that may include noxious weeds. When required urgently, pasture hay for relief fodder can be cut, baled, and transported over long distances in a short period of time, with little opportunity for pre-baling inspection. At present, there appears to have been little effort towards rapid post-baling testing of bales for the presence of noxious weeds as a measure to prevent seed dispersal. Published studies have relied on the destructive analysis of relatively small numbers of bales to reveal seed species for identification and enumeration. The development of faster, more reliable, and non-destructive sampling methods is essential to increase the fodder industry's capacity to prevent the dispersal of noxious weeds to previously unaffected locales.
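A simple calculation illustrates why destructive testing of small numbers of bales is unreliable for detecting contamination. The figures below are hypothetical, not taken from the published studies discussed above.

```python
# Probability of detecting contamination when sampling bales without
# replacement (hypergeometric). All numbers are illustrative.
from math import comb

def detection_probability(total_bales, contaminated, sampled):
    """P(at least one contaminated bale appears in a random sample)."""
    clean = total_bales - contaminated
    if sampled > clean:
        return 1.0  # every possible sample must include a contaminated bale
    return 1 - comb(clean, sampled) / comb(total_bales, sampled)

# Hypothetical consignment: 1000 bales, of which 20 carry weed seed.
# Destructively testing 10 bales detects the problem less than 1 time in 5.
p = detection_probability(1000, 20, 10)
```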

Relevance: 20.00%

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference on the likelihood function alone, may be a fundamental question in theory, but in practice it may well matter little if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modelling to problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two apply hierarchical Bayesian modelling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning also applies under sampling from a finite population. The main emphasis is on probability-based inference under incomplete observation due to study design, illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates together with a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is also feasible in this case.
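The remark above that maximum likelihood can be viewed through the posterior mode under a flat prior is easy to illustrate numerically; the binomial data here (7 successes in 10 trials) are invented for the example.

```python
# With a flat prior, the posterior mode coincides with the MLE.
import math

def binom_loglik(p, k=7, n=10):
    # Log-likelihood of success probability p (constant terms dropped).
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Under a flat prior the posterior is proportional to the likelihood, so
# the posterior mode on a grid is just the likelihood-maximising point.
grid = [i / 1000 for i in range(1, 1000)]
posterior_mode = max(grid, key=binom_loglik)
# posterior_mode equals the MLE k/n = 0.7
```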

Relevance: 20.00%

Abstract:

The rapid uptake of transcriptomic approaches in freshwater ecology has produced a wealth of data on how organisms interact with their environment at the molecular level. Typically, such studies focus either at the community level, where species identifications are not required, or on laboratory strains of known identity or natural populations of large, easily identifiable taxa. For chironomids, impediments remain to applying these technologies to natural populations: the animals are small-bodied and often require time-consuming secondary sorting of stream material and preparation of morphological vouchers to confirm species diagnosis. These procedures make it difficult to maintain RNA quantity and quality, because RNA degrades rapidly and gene expression can change quickly, thereby limiting the inclusion of such taxa in transcriptomic studies. Here, we demonstrate that these limitations can be overcome and outline an optimised protocol for collecting, sorting and preserving chironomid larvae that retains both morphological vouchers and RNA for subsequent transcriptomic purposes. By ensuring that sorting and voucher preparation are completed within 4 hours of collection and that samples are kept cold at all times, we successfully retained both RNA and morphological vouchers from all specimens. Although not prescriptive in specific methodology, we anticipate that this paper will promote transcriptomic investigations of the sublethal impacts of changes to aquatic environments on chironomid gene expression.

Relevance: 20.00%

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. Both problems belong to the broad area of bioinformatics and computational biology. The solutions presented are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem in which the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. The problem is important because direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem: HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes, and it can therefore be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability.
BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem in which one asks whether the local similarities in two sequences are due to the sequences being related or merely to chance. Similarity is measured by the best local alignment score, from which a p-value is computed: the probability that two sequences drawn from the null model have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
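For intuition, the sampling-based baseline that such a framework avoids can be sketched as follows: estimate the p-value by scoring many random sequence pairs drawn from a null model and counting how often their best local alignment score reaches the observed score. The scoring parameters and the uniform-DNA null model below are arbitrary illustrative choices, not the thesis's method.

```python
# Monte Carlo estimate of a local alignment p-value (baseline sketch).
import random

def smith_waterman(s, t, match=2, mismatch=-1, gap=-2):
    """Best local alignment score by dynamic programming."""
    best = 0
    prev = [0] * (len(t) + 1)
    for a in s:
        cur = [0]
        for j, b in enumerate(t, 1):
            score = max(0,
                        prev[j - 1] + (match if a == b else mismatch),
                        prev[j] + gap,       # gap in t
                        cur[j - 1] + gap)    # gap in s
            cur.append(score)
            best = max(best, score)
        prev = cur
    return best

def empirical_p_value(observed_score, length, trials=200, seed=0):
    # Fraction of null-model pairs scoring at least the observed score.
    rng = random.Random(seed)
    alphabet = "ACGT"
    hits = 0
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(length))
        t = "".join(rng.choice(alphabet) for _ in range(length))
        if smith_waterman(s, t) >= observed_score:
            hits += 1
    return hits / trials

p = empirical_p_value(observed_score=20, length=50)
```

The sampling noise and the curve fitting needed to extrapolate such estimates to very small p-values are exactly the troubles the thesis's analytical upper bound sidesteps.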

Relevance: 20.00%

Abstract:

We present a spatial sampling design based on pair-copulas that aims to reduce prediction uncertainty by selecting additional sampling locations using both the spatial configuration of existing locations and the values of the observations at those locations. The novelty of the approach lies in the use of pair-copulas to estimate uncertainty at unsampled locations. Spatial pair-copulas capture spatial dependence more accurately than other types of spatial copula models. Additionally, unlike the traditional kriging variance, uncertainty estimates from the pair-copula account for the influence of measurement values and not just the configuration of observations. This feature is beneficial, for example, for more accurately identifying soil contamination zones where high contamination measurements lie near measurements of varying contamination. The proposed design methodology is applied to a soil contamination example from the Swiss Jura region. A partial redesign of the original sampling configuration demonstrates the potential of the proposed methodology.
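The design idea of uncertainty that depends on observed values, not only on sample configuration, can be caricatured without any copula machinery. The sketch below is a deliberate simplification (inverse-distance weighting with made-up coordinates and values), not the pair-copula model of the paper.

```python
# Toy value-aware sequential design: sample next where a crude uncertainty
# surrogate (distance to data, inflated by local value disagreement) peaks.
import math

def uncertainty(x, y, samples):
    """samples: list of (sx, sy, value) triples."""
    d = [math.hypot(x - sx, y - sy) for sx, sy, _ in samples]
    w = [1.0 / (1e-9 + di) for di in d]
    total = sum(w)
    mean = sum(wi * v for wi, (_, _, v) in zip(w, samples)) / total
    spread = sum(wi * (v - mean) ** 2
                 for wi, (_, _, v) in zip(w, samples)) / total
    # Distance to the nearest observation, inflated where nearby measured
    # values disagree -- mimicking value-dependent uncertainty.
    return min(d) * (1.0 + spread)

def next_location(candidates, samples):
    return max(candidates, key=lambda c: uncertainty(c[0], c[1], samples))

samples = [(0, 0, 1.0), (1, 0, 5.0), (0, 1, 1.1), (3, 3, 1.0)]
grid = [(x / 2, y / 2) for x in range(7) for y in range(7)]
site = next_location(grid, samples)
```

Unlike a kriging variance, this surrogate changes when the measured values change, which is the property the pair-copula design exploits far more rigorously.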

Relevance: 20.00%

Abstract:

The term acclimation has been used with several connotations in the field of acclimatory physiology. In this paper, an attempt is made to define the term "acclimation" precisely, for effective modelling of acclimatory processes. Acclimation is defined, with respect to a specific variable, as the cumulative experience gained by the organism when subjected to a step change in the environment. Experimental observations on a large number of variables in animals exposed to sustained stress show that, after an initial deviation from the basal value (defined as "growth"), the variables tend to return to basal levels (defined as "decay"). This forms the basis for modelling biological responses in terms of their growth and decay. Hierarchical systems theory, as presented by Mesarovic, Macko & Takahara (1970), facilitates the modelling of complex and partially characterized systems. This theory, in conjunction with "growth-decay" analysis of biological variables, is used to model the temperature-regulating system in animals exposed to cold. The approach appears to be applicable at all levels of biological organization. The regulation of hormonal activity, which forms part of the temperature-regulating system, and the relationship of the latter to the "energy" system of the animal, of which it in turn forms a part, are also effectively modelled by this approach. It is believed that this systematic approach would eliminate much of the current circular thinking in the area of acclimatory physiology.
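The growth-decay pattern described above, a deviation from the basal value followed by a return toward it, can be illustrated with a double-exponential response; the functional form and time constants below are our illustrative assumptions, not the paper's model.

```python
# Illustrative growth-decay response to a step change in the environment:
# the deviation rises from zero ("growth"), peaks, then returns toward
# the basal level ("decay"). Parameters are arbitrary.
import math

def response(t, amplitude=1.0, tau_growth=1.0, tau_decay=10.0):
    """Deviation from the basal level at time t after a step change."""
    return amplitude * (math.exp(-t / tau_decay) - math.exp(-t / tau_growth))

# Early deviation is small, it peaks near t ~ 2.6, and it has almost
# fully decayed back to basal by t = 60.
early, peak, late = response(0.1), response(2.6), response(60.0)
```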

Relevance: 20.00%

Abstract:

Quantifying fluxes of nitrous oxide (N2O), a potent greenhouse gas, from soils is necessary to improve our knowledge of terrestrial N2O losses. Developing universal sampling frequencies for calculating annual N2O fluxes is difficult, as the fluxes are renowned for their high temporal variability. We demonstrate that daily sampling was largely required to achieve annual N2O fluxes within 10% of the best estimate for 28 annual datasets collected from three continents: Australia, Europe and Asia. Decreasing the regularity of measurements either under- or overestimated annual N2O fluxes, with a maximum overestimation of 935%. Measurement frequency could be lowered using a sampling strategy based on environmental factors known to affect temporal variability, but sampling more than once a week was still required. Consequently, uncertainty in current global terrestrial N2O budgets associated with the upscaling of field-based datasets can be decreased significantly by using adequate sampling frequencies.
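The effect of sampling frequency on annual flux estimates can be illustrated with synthetic data (not the study's measurements): a mostly low daily flux punctuated by a few large pulses is subsampled every k days and scaled up to an annual total.

```python
# Synthetic illustration of why sparse sampling biases annual flux totals:
# episodic pulses are missed or over-weighted. Numbers are made up.
import random

rng = random.Random(42)
# Low background flux with 12 large emission events during the year.
daily = [1.0 + 0.2 * rng.random() for _ in range(365)]
for day in rng.sample(range(365), 12):
    daily[day] += 30.0 * rng.random()

true_annual = sum(daily)

def annual_estimate(fluxes, every_k_days):
    # Scale the mean of the sampled days up to an annual total.
    sampled = fluxes[::every_k_days]
    return sum(sampled) / len(sampled) * len(fluxes)

errors = {k: abs(annual_estimate(daily, k) - true_annual) / true_annual
          for k in (1, 7, 30)}
# Daily sampling (k = 1) recovers the annual flux exactly; weekly and
# monthly sampling typically mis-estimate it, sometimes badly.
```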

Relevance: 20.00%

Abstract:

Accurately quantifying total greenhouse gas emissions (e.g. methane) from natural systems such as lakes, reservoirs and wetlands requires spatiotemporal measurement of both diffusive and ebullitive (bubbling) emissions. Traditional manual measurement techniques provide only limited, localised assessment of methane flux, often introducing significant errors when extrapolated to the whole system. In this paper, we directly address these sampling limitations and present a novel multiple robotic boat system configured to measure the spatiotemporal release of methane to the atmosphere across inland waterways. The system, consisting of multiple networked Autonomous Surface Vehicles (ASVs) and capable of persistent operation, enables scientists to remotely evaluate the performance of sampling and modelling algorithms for real-world process quantification over extended periods. This paper provides an overview of the multi-robot sampling system, including the vehicle and gas sampling unit design. Experimental results demonstrate the system's ability to autonomously navigate and execute an exploratory sampling algorithm to measure methane emissions on two inland reservoirs.
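The abstract does not specify the exploratory sampling algorithm used; as a generic illustration, one of the simplest exploratory patterns a surface vehicle could follow is a lawnmower (boustrophedon) sweep over the survey area.

```python
# Generic lawnmower waypoint generator for a rectangular survey area.
# Purely illustrative; not the algorithm used on the ASVs in the paper.

def lawnmower_waypoints(width, height, lane_spacing):
    """Back-and-forth waypoints covering a width x height rectangle."""
    waypoints = []
    y, left_to_right = 0.0, True
    while y <= height:
        xs = (0.0, width) if left_to_right else (width, 0.0)
        waypoints.extend((x, y) for x in xs)
        left_to_right = not left_to_right  # reverse direction each lane
        y += lane_spacing
    return waypoints

path = lawnmower_waypoints(width=100.0, height=40.0, lane_spacing=10.0)
# 5 lanes (y = 0, 10, 20, 30, 40), two waypoints per lane
```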